Sample records for previously presented methods

  1. A New Lagrangian Relaxation Method Considering Previous Hour Scheduling for Unit Commitment Problem

    NASA Astrophysics Data System (ADS)

    Khorasani, H.; Rashidinejad, M.; Purakbari-Kasmaie, M.; Abdollahi, A.

    2009-08-01

    Generation scheduling is a crucial challenge in power systems, especially in the new environment of a liberalized electricity industry. A new Lagrangian relaxation method for the unit commitment (UC) problem is presented for solving the generation scheduling problem. This paper focuses on the economic aspect of the UC problem, with particular attention to previous hour scheduling as a very important issue: generation scheduling for the present hour is conducted by taking the previous hour's schedule into account. The impacts of hot/cold start-up costs are also taken into account. Case studies and numerical analysis present significant outcomes and demonstrate the effectiveness of the proposed method.
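
    The abstract does not give the paper's cost model, but the dependence of the present hour's commitment on the previous hour's schedule can be sketched with a hot/cold start-up cost rule of the kind the abstract mentions. All numbers and the threshold below are illustrative assumptions, not values from the paper.

```python
def startup_cost(hours_down, hot_cost=100.0, cold_cost=350.0, cold_start_hours=4):
    """Hot start if the unit has been down only briefly, cold start otherwise.

    Illustrative values only; the paper's actual cost data are not given
    in the abstract.
    """
    if hours_down == 0:
        return 0.0  # unit was on in the previous hour: no start-up cost
    return hot_cost if hours_down <= cold_start_hours else cold_cost

# Committing a unit in the present hour depends on the previous hour's
# schedule: restarting a recently stopped unit incurs only the hot cost.
assert startup_cost(0) == 0.0
assert startup_cost(2) == 100.0   # hot start
assert startup_cost(8) == 350.0   # cold start
```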

  2. Spoof Detection for Finger-Vein Recognition System Using NIR Camera.

    PubMed

    Nguyen, Dat Tien; Yoon, Hyo Sik; Pham, Tuyen Danh; Park, Kang Ryoung

    2017-10-01

    Finger-vein recognition, a new and advanced biometric recognition method, is attracting the attention of researchers because of its advantages, such as high recognition performance and a low likelihood of theft or of inaccuracies arising from skin-condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, termed presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractors), based on the researchers' observations of the differences between real (live) and presentation attack finger-vein images; consequently, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision, delivering superior results compared to traditional handcrafted methods on various applications such as image-based face recognition, gender recognition, and image classification. In this paper, we propose a PAD method for a near-infrared (NIR) camera-based finger-vein recognition system using a convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a feature extractor better suited for PAD than handcrafted methods through a training procedure. We further process the extracted image features to enhance detection ability, using principal component analysis (PCA) for dimensionality reduction of the feature space and a support vector machine (SVM) for classification. Through extensive experiments, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and delivers superior detection results compared to CNN-based methods and other previous handcrafted methods.
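
    The pipeline this record describes (learned features, PCA dimensionality reduction, then a classifier) can be sketched end to end. The sketch below is a stand-in, not the paper's method: the "CNN features" are synthetic toy vectors, PCA is reduced to a single principal component found by power iteration, and a nearest-centroid rule stands in for the SVM.

```python
import random

def top_principal_component(X, iters=200):
    """Leading eigenvector of the covariance of X (rows = feature vectors),
    found by power iteration. Stands in for the PCA step of the pipeline."""
    n, d = len(X), len(X[0])
    mean = [sum(col) / n for col in zip(*X)]
    C = [[sum((X[a][i] - mean[i]) * (X[a][j] - mean[j]) for a in range(n)) / n
          for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

def project(x, mean, v):
    return sum((xi - mi) * vi for xi, mi, vi in zip(x, mean, v))

# Toy "CNN features": real images cluster near +1, attacks near -1 on a
# hidden axis (pure illustration; no real finger-vein data involved).
random.seed(0)
real = [[1.0 + random.gauss(0, .1), random.gauss(0, .1)] for _ in range(20)]
fake = [[-1.0 + random.gauss(0, .1), random.gauss(0, .1)] for _ in range(20)]
mean, v = top_principal_component(real + fake)

# Nearest-centroid rule in the reduced space, standing in for the SVM.
c_real = sum(project(x, mean, v) for x in real) / len(real)
c_fake = sum(project(x, mean, v) for x in fake) / len(fake)

def classify(x):
    p = project(x, mean, v)
    return "real" if abs(p - c_real) < abs(p - c_fake) else "attack"

assert classify([0.9, 0.0]) == "real"
assert classify([-1.1, 0.0]) == "attack"
```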

  3. Spoof Detection for Finger-Vein Recognition System Using NIR Camera

    PubMed Central

    Nguyen, Dat Tien; Yoon, Hyo Sik; Pham, Tuyen Danh; Park, Kang Ryoung

    2017-01-01

    Finger-vein recognition, a new and advanced biometrics recognition method, is attracting the attention of researchers because of its advantages such as high recognition performance and lesser likelihood of theft and inaccuracies occurring on account of skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, named as presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractor) based on the observations of the researchers about the difference between real (live) and presentation attack finger-vein images. Therefore, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision and delivered superior results compared to traditional handcrafted methods on various computer vision applications such as image-based face recognition, gender recognition and image classification. In this paper, we propose a PAD method for near-infrared (NIR) camera-based finger-vein recognition system using convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than the other handcrafted methods using a training procedure. We further process the extracted image features to enhance the presentation attack finger-vein image detection ability of the CNN method using principal component analysis method (PCA) for dimensionality reduction of feature space and support vector machine (SVM) for classification. 
Through extensive experimental results, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and it can deliver superior detection results compared to CNN-based methods and other previous handcrafted methods. PMID:28974031

  4. Comparison of Past, Present, and Future Volume Estimation Methods for Tennessee

    Treesearch

    Stanley J. Zarnoch; Alexander Clark; Ray A. Souter

    2003-01-01

    Forest Inventory and Analysis 1999 survey data for Tennessee were used to compare stem-volume estimates obtained using a previous method, the current method, and newly developed taper models that will be used in the future. Compared to the current method, individual tree volumes were consistently underestimated with the previous method, especially for the hardwoods....
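
    The taper-model approach mentioned here amounts to describing stem radius as a function of height and integrating cross-sectional area along the bole. The paper's actual taper equations are not given in this summary; the sketch below uses a simple cone as a hypothetical taper function purely to show the volume-by-integration idea.

```python
import math

def stem_volume(radius_at, height, n=10000):
    """Stem volume by numerically integrating cross-sectional area
    pi * r(h)^2 along the bole, the role taper models play in the newer
    estimation approach (the actual taper equations are assumptions here)."""
    dh = height / n
    return sum(math.pi * radius_at(i * dh) ** 2 * dh for i in range(n))

# Sanity check against a cone, whose exact volume is pi * r0^2 * H / 3.
r0, H = 0.3, 20.0
v = stem_volume(lambda h: r0 * (1 - h / H), H)
assert abs(v - math.pi * r0 ** 2 * H / 3) < 1e-3
```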

  5. Method to improve commercial bonded SOI material

    DOEpatents

    Maris, Humphrey John; Sadana, Devendra Kumar

    2000-07-11

    A method of improving the bonding characteristics of a previously bonded silicon-on-insulator (SOI) structure is provided. The improvement in the bonding characteristics is achieved in the present invention by, optionally, forming an oxide cap layer on the silicon surface of the bonded SOI structure and then annealing either the uncapped or oxide-capped structure in a slightly oxidizing ambient at temperatures greater than 1200°C. Also provided herein is a method for detecting the bonding characteristics of previously bonded SOI structures. According to this aspect of the present invention, a picosecond laser pulse technique is employed to determine the bonding imperfections of previously bonded SOI structures.

  6. Study of EEG during Sternberg Tasks with Different Direction of Arrangement for Letters

    NASA Astrophysics Data System (ADS)

    Kamihoriuchi, Kenji; Nuruki, Atsuo; Matae, Tadashi; Kurono, Asutsugu; Yunokuchi, Kazutomo

    In a previous study, we recorded electroencephalograms (EEG) of patients with dementia and of healthy subjects during a Sternberg task. However, only one presentation method for the Sternberg task was considered in that study. Therefore, in the present study we examined whether the EEG differed between two presentation methods: letters arranged horizontally and letters arranged vertically. We recorded the EEG of six healthy subjects during Sternberg tasks using the two presentation methods. The EEG topography did not differ across subjects between the two methods. In all subjects, the correct-answer rate increased when the letters were arranged vertically.

  7. Method and apparatus for modeling interactions

    DOEpatents

    Xavier, Patrick G.

    2002-01-01

    The present invention provides a method and apparatus for modeling interactions that overcomes drawbacks of previous approaches. The method of the present invention comprises representing two bodies undergoing translations by two swept-volume representations. Interactions such as nearest approach and collision can then be modeled based on the swept-body representations. The present invention is more robust and allows faster modeling than previous methods.

  8. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    PubMed Central

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-01-01

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most previously proposed PAD methods for face recognition systems have focused on handcrafted image features designed using the expert knowledge of their designers, such as the Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method developed in the computer vision research community has proven suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from images captured by a visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images, in order to discriminate between real and presentation attack face images. By combining the two types of image features, we form a new type of image feature, called hybrid features, which has stronger discrimination ability than either type alone. Finally, we use the support vector machine (SVM) method to classify the image features into the real or the presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417

  9. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors.

    PubMed

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-02-26

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most previously proposed PAD methods for face recognition systems have focused on handcrafted image features designed using the expert knowledge of their designers, such as the Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method developed in the computer vision research community has proven suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from images captured by a visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images, in order to discriminate between real and presentation attack face images. By combining the two types of image features, we form a new type of image feature, called hybrid features, which has stronger discrimination ability than either type alone. Finally, we use the support vector machine (SVM) method to classify the image features into the real or the presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.

  10. Improved Method for Prediction of Attainable Wing Leading-Edge Thrust

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; McElroy, Marcus O.; Lessard, Wendy B.; McCullers, L. Arnold

    1996-01-01

    Prediction of the loss of wing leading-edge thrust and the accompanying increase in drag due to lift, when flow is not completely attached, presents a difficult but commonly encountered problem. A method (called the previous method) for the prediction of attainable leading-edge thrust and the resultant effect on airplane aerodynamic performance has been in use for more than a decade. Recently, the method has been revised to enhance its applicability to current airplane design and evaluation problems. The improved method (called the present method) provides for a greater range of airfoil shapes from very sharp to very blunt leading edges. It is also based on a wider range of Reynolds numbers than was available for the previous method. The present method, when employed in computer codes for aerodynamic analysis, generally results in improved correlation with experimental wing-body axial-force data and provides reasonable estimates of the measured drag.

  11. Who Needs Replication?

    ERIC Educational Resources Information Center

    Porte, Graeme

    2013-01-01

    In this paper, the editor of a recent Cambridge University Press book on research methods discusses replicating previous key studies to throw more light on their reliability and generalizability. Replication research is presented as an accepted method of validating previous research by providing comparability between the original and replicated…

  12. The red supergiant population in the Perseus arm

    NASA Astrophysics Data System (ADS)

    Dorda, R.; Negueruela, I.; González-Fernández, C.

    2018-04-01

    We present a new catalogue of cool supergiants in a section of the Perseus arm, most of which had not been previously identified. To generate it, we have used a set of well-defined photometric criteria to select a large number of candidates (637) that were later observed at intermediate resolution in the infrared calcium triplet spectral range, using a long-slit spectrograph. To separate red supergiants from luminous red giants, we used a statistical method, developed in previous works and improved in the present paper. We present a method to assign probabilities of being a red supergiant to a given spectrum and use the properties of a population to generate clean samples, without contamination from lower luminosity stars. We compare our identification with a classification done using classical criteria and discuss their respective efficiencies and contaminations as identification methods. We confirm that our method is as efficient at finding supergiants as the best classical methods, but with a far lower contamination by red giants than any other method. The result is a catalogue with 197 cool supergiants, 191 of which did not appear in previous lists of red supergiants. This is the largest coherent catalogue of cool supergiants in the Galaxy.
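
    The record describes assigning each spectrum a probability of being a red supergiant rather than a hard yes/no label. The catalogue's actual statistical model and spectral features are not given in the abstract; the sketch below illustrates the general idea with a hypothetical one-dimensional feature and two Gaussian class models combined by Bayes' rule.

```python
import math

def p_supergiant(x, mu_sg, sd_sg, mu_rg, sd_rg, prior_sg=0.5):
    """Posterior probability that a star with feature value x is a supergiant,
    given Gaussian models for the supergiant (sg) and red giant (rg) classes.
    Purely illustrative: the paper's classifier and features are assumptions."""
    def gauss(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    l_sg = gauss(x, mu_sg, sd_sg) * prior_sg
    l_rg = gauss(x, mu_rg, sd_rg) * (1.0 - prior_sg)
    return l_sg / (l_sg + l_rg)

# A star near the supergiant class centre gets a high probability...
assert p_supergiant(1.0, mu_sg=1.0, sd_sg=0.2, mu_rg=0.0, sd_rg=0.2) > 0.95
# ...and one near the giant centre a low one.
assert p_supergiant(0.0, mu_sg=1.0, sd_sg=0.2, mu_rg=0.0, sd_rg=0.2) < 0.05
```

Thresholding such probabilities is what lets a catalogue trade efficiency (how many true supergiants are kept) against contamination (how many giants slip in), the comparison the record discusses.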

  13. Simulation optimization of PSA-threshold based prostate cancer screening policies

    PubMed Central

    Zhang, Jingyu; Denton, Brian T.; Shah, Nilay D.; Inman, Brant A.

    2013-01-01

    We describe a simulation optimization method to design PSA screening policies based on expected quality adjusted life years (QALYs). Our method integrates a simulation model in a genetic algorithm which uses a probabilistic method for selection of the best policy. We present computational results about the efficiency of our algorithm. The best policy generated by our algorithm is compared to previously recommended screening policies. Using the policies determined by our model, we present evidence that patients should be screened more aggressively but for a shorter length of time than previously published guidelines recommend. PMID:22302420
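
    The loop this record describes, a genetic algorithm with probabilistic selection wrapped around a simulation that scores each candidate policy, can be sketched generically. Everything below is a stand-in: the "fitness" is a toy function in place of the QALY-estimating simulation, and the policy is reduced to a single threshold value.

```python
import random

def genetic_search(fitness, bounds, pop_size=30, gens=60, seed=1):
    """Minimal genetic algorithm with fitness-proportional (probabilistic)
    selection, standing in for the paper's simulation-optimization loop.
    In the paper, `fitness` would be the simulated expected QALYs of a
    PSA-threshold screening policy."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        scores = [fitness(x) for x in pop]
        total = sum(scores)
        weights = [s / total for s in scores]
        # Probabilistic selection of parents, then blend crossover + mutation.
        new_pop = []
        for _ in range(pop_size):
            a, b = rng.choices(pop, weights=weights, k=2)
            child = 0.5 * (a + b) + rng.gauss(0, 0.05 * (hi - lo))
            new_pop.append(min(hi, max(lo, child)))
        pop = new_pop
    return max(pop, key=fitness)

# Toy objective with its optimum at a "threshold" of 4.0 (illustrative only).
best = genetic_search(lambda x: 1.0 / (1.0 + (x - 4.0) ** 2), (0.0, 10.0))
assert abs(best - 4.0) < 0.8
```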

  14. A method of extracting speed-dependent vector correlations from 2 + 1 REMPI ion images.

    PubMed

    Wei, Wei; Wallace, Colin J; Grubb, Michael P; North, Simon W

    2017-07-07

    We present analytical expressions for extracting Dixon's bipolar moments in the semi-classical limit from experimental anisotropy parameters of sliced or reconstructed non-sliced images. The current method focuses on images generated by 2 + 1 REMPI (Resonance Enhanced Multi-photon Ionization) and is a necessary extension of our previously published 1 + 1 REMPI equations. Two approaches for applying the new equations, direct inversion and forward convolution, are presented. As demonstration of the new method, bipolar moments were extracted from images of carbonyl sulfide (OCS) photodissociation at 230 nm and NO2 photodissociation at 355 nm, and the results are consistent with previous publications.

  15. An extended Lagrangian method

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing

    1992-01-01

    A unique formulation for describing fluid motion is presented. The method, referred to as the 'extended Lagrangian method', is interesting from both theoretical and numerical points of view. The formulation offers accuracy in the numerical solution by avoiding the numerical diffusion that results from the mixing of fluxes in the Eulerian description. It also avoids the inaccuracy incurred by the geometry and variable interpolations used in previous Lagrangian methods. Unlike previously proposed Lagrangian methods, which are valid only for supersonic flows, the present method is general and capable of treating subsonic as well as supersonic flows. The method proposed in this paper is robust and stable. It automatically adapts to flow features without resorting to clustering, thereby maintaining rather uniform grid spacing throughout and a large time step. Moreover, the method is shown to resolve multi-dimensional discontinuities with a high level of accuracy, similar to that found in one-dimensional problems.

  16. Modified conjugate gradient method for diagonalizing large matrices.

    PubMed

    Jie, Quanlin; Liu, Dunhuan

    2003-11-01

    We present an iterative method to diagonalize large matrices. The basic idea is the same as the conjugate gradient (CG) method, i.e., minimizing the Rayleigh quotient via its gradient while avoiding reintroducing errors along the directions of previous gradients. Each iteration step finds the lowest eigenvector of the matrix in a subspace spanned by the current trial vector, the corresponding gradient of the Rayleigh quotient, and some previous trial vectors. The gradient, together with the previous trial vectors, plays a role similar to that of the conjugate gradient in the original CG algorithm. Our numerical tests indicate that this method converges significantly faster than the original CG method, while the computational cost of one iteration step is about the same. It is suitable for first-principles calculations.
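
    The core quantity in this record, the Rayleigh quotient r(x) = x·Ax / x·x and its gradient, can be shown in a minimal form. The sketch below does plain gradient descent on r(x); it deliberately omits the paper's key additions (conjugate directions and the subspace of previous trial vectors), so it illustrates only the starting point the abstract describes.

```python
def lowest_eigenvalue(A, steps=2000, lr=0.1):
    """Minimise the Rayleigh quotient r(x) = x.Ax / x.x by moving against
    its gradient grad r = 2 (Ax - r x) / (x.x). Plain gradient descent:
    the paper's method augments this with previous gradients and trial
    vectors, which this sketch omits."""
    d = len(A)
    x = [1.0] * d
    for _ in range(steps):
        Ax = [sum(A[i][j] * x[j] for j in range(d)) for i in range(d)]
        xx = sum(v * v for v in x)
        r = sum(v * w for v, w in zip(x, Ax)) / xx
        g = [2.0 * (Ax[i] - r * x[i]) / xx for i in range(d)]  # grad r(x)
        x = [x[i] - lr * g[i] for i in range(d)]
        n = sum(v * v for v in x) ** 0.5
        x = [v / n for v in x]  # re-normalise the trial vector
    return r

# Diagonal test matrix with known lowest eigenvalue 2.
A = [[2.0, 0.0, 0.0],
     [0.0, 5.0, 0.0],
     [0.0, 0.0, 9.0]]
assert abs(lowest_eigenvalue(A) - 2.0) < 1e-6
```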

  17. Comparison study of two procedures for the determination of emamectin benzoate in medicated fish feed.

    PubMed

    Farer, Leslie J; Hayes, John M

    2005-01-01

    A new method has been developed for the determination of emamectin benzoate in fish feed. The method uses a wet extraction, cleanup by solid-phase extraction, and quantitation and separation by liquid chromatography (LC). In this paper, we compare the performance of this method with that of a previously reported LC assay for the determination of emamectin benzoate in fish feed. Although similar to the previous method, the new procedure uses a different sample pretreatment, wet extraction, and quantitation method. The performance of the new method was compared with that of the previously reported method by analyses of 22 medicated feed samples from various commercial sources. A comparison of the results presented here reveals slightly lower assay values obtained with the new method. Although a paired sample t-test indicates the difference in results is significant, this difference is within the method precision of either procedure.

  18. The application of contraction theory to an iterative formulation of electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Brand, J. C.; Kauffman, J. F.

    1985-01-01

    Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures, and a computational method for ensuring convergence is developed. A short history of the spectral (or k-space) formulation is presented, with an emphasis on application to periodic surfaces. To ensure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory, and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems, including an infinite grating of thin wires, with the solution data compared to previous works.

  19. RESEARCH ASSOCIATED WITH THE DEVELOPMENT OF EPA METHOD 552.2

    EPA Science Inventory

    The work presented in this paper entails the development of a method for haloacetic acid (HAA) analysis, Environmental Protection Agency (EPA) method 552.2, that improves the safety and efficiency of previous methods and incorporates three additional trihalogenated acetic acids: b...

  20. Time series modeling of human operator dynamics in manual control tasks

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.

    1984-01-01

    A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency responses of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that has not been previously modeled to demonstrate the strengths of the method.
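
    The multi-channel, multi-input identification technique itself is not specified in this summary, but its simplest single-channel instance, fitting an autoregressive coefficient to a short record by least squares, shows the flavor of time-series modeling from experimental data. The AR(1) model and data below are illustrative assumptions, not the paper's operator model.

```python
def fit_ar1(y):
    """Least-squares estimate of a in the model y[t] = a * y[t-1] + e[t],
    the simplest instance of the ARMA-type time-series modeling the paper
    applies (multi-channel and multi-input in the original)."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

# Noise-free AR(1) data with a = 0.8 is recovered exactly from a short record.
a_true = 0.8
y = [1.0]
for _ in range(50):
    y.append(a_true * y[-1])
assert abs(fit_ar1(y) - a_true) < 1e-12
```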

  1. Time Series Modeling of Human Operator Dynamics in Manual Control Tasks

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.

    1984-01-01

    A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency responses of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that had not been previously modeled to demonstrate the strengths of the method.

  2. Methodes entropiques appliquees au probleme inverse en magnetoencephalographie

    NASA Astrophysics Data System (ADS)

    Lapalme, Ervig

    2005-07-01

    This thesis is devoted to biomagnetic source localization using magnetoencephalography. This problem is known to have an infinite number of solutions, so methods are required to take into account anatomical and functional information on the solution. The work presented in this thesis uses the maximum entropy on the mean method to constrain the solution; this method originates from statistical mechanics and information theory. The thesis is divided into two main parts containing three chapters each. The first part reviews the magnetoencephalographic inverse problem: the theory needed to understand its context and the hypotheses for simplifying the problem. In the last chapter of this first part, the maximum entropy on the mean method is presented: its origins are explained, as well as how it is applied to our problem. The second part is the original work of this thesis, presenting three articles: one already published and two others submitted for publication. In the first article, a biomagnetic source model is developed and applied in a theoretical context, demonstrating the efficiency of the method. In the second article, we go one step further towards realistic modeling of cerebral activation: the main priors are estimated using the magnetoencephalographic data. This method proved to be very efficient in realistic simulations. In the third article, the previous method is extended to deal with time signals, thus exploiting the excellent time resolution offered by magnetoencephalography. Compared with our previous work, the temporal method is applied to real magnetoencephalographic data coming from a somatotopy experiment, and the results agree with previous physiological knowledge about this kind of cognitive process.
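
    The entropic machinery this record refers to can be illustrated in its simplest textbook form: among all distributions on a finite set that satisfy a mean constraint, the maximum-entropy one has the Gibbs form p_i proportional to exp(lambda * x_i). This is only a toy analogue of the thesis's maximum entropy on the mean method, which operates on source distributions rather than on a scalar; the values and constraint below are invented.

```python
import math

def max_entropy_dist(values, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution over `values` with a prescribed mean.
    The solution has the Gibbs form p_i ~ exp(lam * x_i); lam is found by
    bisection so the mean constraint holds. A toy version of the entropic
    constraint machinery the thesis applies to source localization."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:  # mean is increasing in lam
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

p = max_entropy_dist([0.0, 1.0, 2.0], target_mean=1.5)
assert abs(sum(p) - 1.0) < 1e-9                 # a proper distribution
assert abs(sum(x * pi for x, pi in zip([0.0, 1.0, 2.0], p)) - 1.5) < 1e-6
```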

  3. A Rapid Dialysis Method for Analysis of Artificial Sweeteners in Foods (2nd Report).

    PubMed

    Tahara, Shoichi; Yamamoto, Sumiyo; Yamajima, Yukiko; Miyakawa, Hiroyuki; Uematsu, Yoko; Monma, Kimio

    2017-01-01

    Following the previous report, a rapid dialysis method was developed for the extraction and purification of four artificial sweeteners, namely sodium saccharin (Sa), acesulfame potassium (AK), aspartame (APM), and dulcin (Du), which are present in various foods. The method was evaluated by the addition of 0.02 g/kg of these sweeteners to a cookie sample, in the same manner as in the previous report. Revisions from the previous method were: reduction of the total dialysis volume from 200 to 100 mL, change of tube length from 55 to 50 cm, change of dialysate from a 0.01 mol/L hydrochloric acid aqueous solution containing 10% sodium chloride to a 30% methanol solution, and change of dialysis conditions from ambient temperature with occasional shaking to 50°C with shaking at 160 rpm. As a result of these revisions, the recovery reached 99.3-103.8% with one hour of dialysis. The obtained recovery yields were comparable to those of the previous method with four hours of dialysis.

  4. Integrated control/structure optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Gilbert, Michael G.

    1990-01-01

    A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The present paper fully decomposes the system into structural and control subsystem designs and produces an improved design. Theory, implementation, and results for the method are presented and compared with the benchmark example.

  5. A second-order accurate immersed boundary-lattice Boltzmann method for particle-laden flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Qiang; Fan, Liang-Shih, E-mail: fan.1@osu.edu

    A new immersed boundary-lattice Boltzmann method (IB-LBM) is presented for fully resolved simulations of incompressible viscous flows laden with rigid particles. The immersed boundary method (IBM) recently developed by Breugem (2012) [19] is adopted in the present method, with developments including the retraction technique, the multi-direct forcing method and the direct account of the inertia of the fluid contained within the particles. The present IB-LBM is, however, formulated with further improvement through the implementation of high-order Runge–Kutta schemes in the coupled fluid–particle interaction. The major challenge in implementing high-order Runge–Kutta schemes in the LBM is that flow information such as density and velocity cannot be directly obtained at a fractional time step, since the LBM only provides the flow information at integer time steps. This challenge is overcome in the present IB-LBM by extrapolating the flow field around particles from the known flow field at the previous integer time step. The newly calculated fluid–particle interactions from the previous fractional time steps of the current integer time step are also accounted for in the extrapolation. The IB-LBM with high-order Runge–Kutta schemes developed in this study is validated by several benchmark applications. It is demonstrated, for the first time, that the IB-LBM has the capacity to resolve the translational and rotational motion of particles with second-order accuracy. The optimal retraction distances for spheres and tubes that help the method achieve second-order accuracy are found to be around 0.30 and −0.47 times the lattice spacing, respectively. Simulations of the Stokes flow through a simple cubic lattice of rotational spheres indicate that the lift force produced by the Magnus effect can be very significant relative to the magnitude of the drag force when practical rotating speeds of the spheres are encountered. This finding may lead to more comprehensive studies of the effect of particle rotation on fluid–solid drag laws. It is also demonstrated that, when the third-order or the fourth-order Runge–Kutta scheme is used, the numerical stability of the present IB-LBM is better than that of all methods in the literature, including the previous IB-LBMs and also methods combining the IBM with a traditional incompressible Navier–Stokes solver.
    Highlights: • The IBM is embedded in the LBM using Runge–Kutta time schemes. • The effectiveness of the present IB-LBM is validated by benchmark applications. • For the first time, the IB-LBM achieves second-order accuracy. • The numerical stability of the present IB-LBM is better than that of previous methods.
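
    The fractional-time-step difficulty this record describes is easy to see in the classical fourth-order Runge–Kutta scheme: two of its stages are evaluated at t + h/2, a time at which the LBM has no flow data, hence the paper's extrapolation. The sketch below is only generic RK4 on a scalar ODE, not the coupled IB-LBM update.

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step. Note the stages at
    t + h/2: in the IB-LBM these fractional-time evaluations are the
    difficulty, since the LBM supplies the flow field only at integer
    steps and the needed values must be extrapolated."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# Accuracy check on dy/dt = -y, y(0) = 1, whose solution is exp(-t).
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, v: -v, t, y, h)
    t += h
assert abs(y - math.exp(-1.0)) < 1e-6
```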

  6. Comparison of Methods for Determining Boundary Layer Edge Conditions for Transition Correlations

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.; Berry, Scott A.; Hollis, Brian R.; Horvath, Thomas J.

    2003-01-01

    Data previously obtained for the X-33 in the NASA Langley Research Center 20-Inch Mach 6 Air Tunnel have been reanalyzed to compare methods for determining boundary layer edge conditions for use in transition correlations. The experimental results were previously obtained utilizing the phosphor thermography technique to monitor the status of the boundary layer downstream of discrete roughness elements via global heat transfer images of the X-33 windward surface. A boundary layer transition correlation was previously developed for this data set using boundary layer edge conditions calculated using an inviscid/integral boundary layer approach. An algorithm was written in the present study to extract boundary layer edge quantities from higher fidelity viscous computational fluid dynamic solutions to develop transition correlations that account for viscous effects on vehicles of arbitrary complexity. The boundary layer transition correlation developed for the X-33 from the viscous solutions are compared to the previous boundary layer transition correlations. It is shown that the boundary layer edge conditions calculated using an inviscid/integral boundary layer approach are significantly different than those extracted from viscous computational fluid dynamic solutions. The present results demonstrate the differences obtained in correlating transition data using different computational methods.

  7. The complexity of classical music networks

    NASA Astrophysics Data System (ADS)

    Rolla, Vitor; Kestenberg, Juliano; Velho, Luiz

    2018-02-01

Previous works suggest that musical networks often present the scale-free and the small-world properties. From a musician's perspective, the most important aspect missing in those studies was harmony. In addition, the previous works made use of outdated statistical methods. Traditionally, least-squares linear regression is utilised to fit a power law to a given data set. However, according to Clauset et al., such a traditional method can produce inaccurate estimates of the power-law exponent. In this paper, we present an analysis of musical networks which considers the existence of chords (an essential element of harmony). Here we show that only 52.5% of the music in our database presents the scale-free property, while 62.5% of those pieces present the small-world property. Previous works argue that music is highly scale-free and that, consequently, it sounds appealing and coherent. In contrast, our results show that not all pieces of music present the scale-free and the small-world properties. In summary, this research is focused on the relationship between musical notes (Do, Re, Mi, Fa, Sol, La, Si, and their sharps) and accompaniment in classical music compositions. More information about this research project is available at https://eden.dei.uc.pt/~vitorgr/MS.html.
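The maximum-likelihood estimator of Clauset et al., which the abstract favors over least-squares regression, can be sketched in a few lines. This is a minimal illustration of the continuous-data form of the estimator; the degree sequence below is hypothetical, not drawn from the paper's music database.

```python
import math

def power_law_alpha_mle(data, x_min):
    """Clauset-style MLE: alpha = 1 + n / sum(ln(x_i / x_min)) over x_i >= x_min."""
    tail = [x for x in data if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

degrees = [1, 1, 2, 2, 3, 4, 5, 8, 13, 30]   # hypothetical node degrees
alpha = power_law_alpha_mle(degrees, x_min=1)
```

Unlike a least-squares fit to a log-log histogram, this estimator needs no binning, which is one reason it produces less biased exponent estimates.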

  8. Survey Study of Trunk Materials for Direct ATRP Grafting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saito, Tomonori; Chatterjee, Sabornie; Johnson, Joseph C.

    2015-02-01

In a previous study, we demonstrated a new method to prepare polymeric fiber adsorbents via a chemical-grafting method, namely atom-transfer radical polymerization (ATRP), and identified parameters affecting their uranium adsorption capacity. However, the ATRP chemical grafting in the previous study still utilized conventional radiation-induced graft polymerization (RIGP) to introduce initiation sites on the fibers. Therefore, the objective of the present study is to perform a survey study of trunk fiber materials for a direct ATRP chemical-grafting method, without RIGP, for the preparation of fiber adsorbents for uranium recovery from seawater.

  9. An Improved Method for Studying the Enzyme-Catalyzed Oxidation of Glucose Using Luminescent Probes

    ERIC Educational Resources Information Center

    Bare, William D.; Pham, Chi V.; Cuber, Matthew; Demas, J. N.

    2007-01-01

    A new method is presented for measuring the rate of the oxidation of glucose in the presence of glucose oxidase. The improved method employs luminescence measurements to directly determine the concentration of oxygen in real time, thus obviating complicated reaction schemes employed in previous methods. Our method has been used to determine…

  10. A numerical method for the stress analysis of stiffened-shell structures under nonuniform temperature distributions

    NASA Technical Reports Server (NTRS)

    Heldenfels, Richard R

    1951-01-01

    A numerical method is presented for the stress analysis of stiffened-shell structures of arbitrary cross section under nonuniform temperature distributions. The method is based on a previously published procedure that is extended to include temperature effects and multicell construction. The application of the method to practical problems is discussed and an illustrative analysis is presented of a two-cell box beam under the combined action of vertical loads and a nonuniform temperature distribution.

  11. On finite element methods for the Helmholtz equation

    NASA Technical Reports Server (NTRS)

    Aziz, A. K.; Werschulz, A. G.

    1979-01-01

The numerical solution of the Helmholtz equation is considered via finite element methods. A two-stage method which gives the same accuracy in the computed gradient as in the computed solution is discussed. Error estimates for the method, using a newly developed proof, are given, and computational considerations showing this method to be superior to previous methods are presented.

  12. Molecular methods for diagnosis of odontogenic infections.

    PubMed

    Flynn, Thomas R; Paster, Bruce J; Stokes, Lauren N; Susarla, Srinivas M; Shanti, Rabie M

    2012-08-01

Historically, the identification of microorganisms has been limited to species that could be cultured in the microbiology laboratory. The purpose of the present study was to apply molecular techniques to identify microorganisms in orofacial odontogenic infections (OIs). Specimens were obtained from subjects with clinical evidence of OI. To identify the microorganisms involved, 16S rRNA sequencing methods were used on clinical specimens. The name and number of the clones of each species identified and the combinations of species present were recorded for each subject. Descriptive statistics were computed for the study variables. Specimens of pus or wound fluid were obtained from 9 subjects. A mean of 7.4 ± 3.7 (standard deviation) species per case was identified. The predominant species detected in the present study that have previously been associated with OIs were Fusobacterium spp, Parvimonas micra, Porphyromonas endodontalis, and Prevotella oris. The predominant species detected in our study that have not been previously associated with OIs were Dialister pneumosintes and Eubacterium brachy. Unculturable phylotypes accounted for 24% of the species identified in our study. All species detected were obligate or facultative anaerobes. Streptococci were not detected. Molecular methods have enabled us to detect previously cultivated and not-yet-cultivated species in OIs; these methods could change our understanding of the pathogenic flora of orofacial OIs. Copyright © 2012 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  13. OSCILLATOR STRENGTHS OF VIBRONIC EXCITATIONS OF NITROGEN DETERMINED BY THE DIPOLE (γ, γ) METHOD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ya-Wei; Kang, Xu; Xu, Long-Quan

    2016-03-10

The oscillator strengths of the valence-shell excitations of molecular nitrogen are of significant value in studies of the Earth's atmosphere and interstellar gases. In this work, the absolute oscillator strengths of the valence-shell excitations of molecular nitrogen in the 12.3–13.4 eV region were measured by the novel dipole (γ, γ) method, in which high-resolution inelastic X-ray scattering is performed at a negligibly small momentum transfer and can simulate the photoabsorption process. Because the experimental technique used in the present work is distinctly different from those used previously, the present experimental results provide an independent cross-check of previous experimental and theoretical data. The excellent agreement of the present results with the dipole (e, e) data and with the extrapolated values indicates that the present oscillator strengths can serve as benchmark data.

  14. Development of a new flux splitting scheme

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Christopher J., Jr.

    1991-01-01

    The use of a new splitting scheme, the advection upstream splitting method, for model aerodynamic problems where Van Leer and Roe schemes had failed previously is discussed. The present scheme is based on splitting in which the convective and pressure terms are separated and treated differently depending on the underlying physical conditions. The present method is found to be both simple and accurate.
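The separation of convective and pressure terms that the abstract describes can be illustrated with the standard AUSM split polynomials for the Mach number and pressure. This is a sketch of the splitting functions only, with illustrative subsonic interface states; the values are not from the paper.

```python
# Standard Liou-Steffen (AUSM) splittings: the convective flux is upwinded
# through split Mach numbers, and the pressure term is split separately.

def mach_plus(M):
    """Positive split Mach number (left contribution)."""
    return 0.25 * (M + 1.0) ** 2 if abs(M) <= 1.0 else 0.5 * (M + abs(M))

def mach_minus(M):
    """Negative split Mach number (right contribution)."""
    return -0.25 * (M - 1.0) ** 2 if abs(M) <= 1.0 else 0.5 * (M - abs(M))

def pressure_plus(M, p):
    """Pressure carried across the interface by the left state."""
    return 0.25 * p * (M + 1.0) ** 2 * (2.0 - M) if abs(M) <= 1.0 else p * (M + abs(M)) / (2.0 * M)

def pressure_minus(M, p):
    """Pressure carried across the interface by the right state."""
    return 0.25 * p * (M - 1.0) ** 2 * (2.0 + M) if abs(M) <= 1.0 else p * (M - abs(M)) / (2.0 * M)

# Interface quantities from illustrative left/right states (subsonic demo):
M_L, M_R = 0.3, 0.1
p_L, p_R = 1.2, 1.0
M_half = mach_plus(M_L) + mach_minus(M_R)                    # drives convection
p_half = pressure_plus(M_L, p_L) + pressure_minus(M_R, p_R)  # pressure flux term
```

The splittings are consistent by construction: for a common Mach number M, the plus and minus parts sum back to M, and the split pressures sum back to p; for supersonic flow the scheme reduces to full upwinding.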

  16. Self-Tuning Threshold Method for Real-Time Gait Phase Detection Based on Ground Contact Forces Using FSRs.

    PubMed

    Tang, Jing; Zheng, Jianbin; Wang, Yang; Yu, Lie; Zhan, Enqi; Song, Qiuzhi

    2018-02-06

This paper presents a novel methodology for detecting the gait phase of human walking on level ground. The previous threshold method (TM) sets a threshold to divide the ground contact forces (GCFs) into on-ground and off-ground states. However, previous methods for gait phase detection demonstrate no adaptability to different people and different walking speeds. Therefore, this paper presents a self-tuning triple threshold algorithm (STTTA) that calculates adjustable thresholds to adapt to human walking. Two force sensitive resistors (FSRs) were placed on the ball and heel to measure GCFs. Three thresholds (i.e., high-threshold, middle-threshold, and low-threshold) were used to search out the maximum and minimum GCFs for the self-adjustment of the thresholds. The high-threshold was the main threshold used to divide the GCFs into on-ground and off-ground statuses. Then, the gait phases were obtained through the gait phase detection algorithm (GPDA), which provides the rules that govern the STTTA calculations. Finally, the reliability of the STTTA is determined by comparing its results with those of the Mariani method, referenced as the timing analysis module (TAM), and the Lopez-Meyer method. Experimental results show that the proposed method can detect gait phases in real time and achieves high reliability compared with previous methods in the literature. In addition, the proposed method exhibits strong adaptability to different wearers walking at different walking speeds.
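The core idea, thresholding two GCF channels and mapping the contact pattern to a gait phase, can be sketched compactly. This is a minimal illustration, not the paper's tuned STTTA: the threshold fraction, the window of recent samples, and the force values are all assumptions.

```python
# Threshold-based on/off-ground detection from two FSR channels (ball, heel),
# with a self-tuning threshold taken as a fixed fraction of the recently
# observed GCF range (illustrative stand-in for the paper's three thresholds).

def on_ground(gcf, recent, fraction=0.5):
    """True if the ground contact force exceeds an adaptive threshold."""
    lo, hi = min(recent), max(recent)
    threshold = lo + fraction * (hi - lo)
    return gcf > threshold

def gait_phase(ball_on, heel_on):
    """Map the (ball, heel) contact pattern to one of four gait phases."""
    if heel_on and not ball_on:
        return "heel-strike"
    if heel_on and ball_on:
        return "flat-foot"
    if ball_on and not heel_on:
        return "heel-off"
    return "swing"

recent = [0.0, 0.2, 1.0, 4.8, 5.0]   # recent GCF samples (N), illustrative
phase = gait_phase(on_ground(4.0, recent), on_ground(0.1, recent))
# phase == "heel-off": the ball sensor is loaded, the heel is not
```

Because the threshold is recomputed from the recent force range rather than fixed in advance, the same logic adapts to wearers with different weights and walking speeds, which is the adaptability the abstract emphasizes.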

  17. Precipitating Condensation Clouds in Substellar Atmospheres

    NASA Technical Reports Server (NTRS)

    Ackerman, Andrew S.; Marley, Mark S.; Gore, Warren J. (Technical Monitor)

    2000-01-01

    We present a method to calculate vertical profiles of particle size distributions in condensation clouds of giant planets and brown dwarfs. The method assumes a balance between turbulent diffusion and precipitation in horizontally uniform cloud decks. Calculations for the Jovian ammonia cloud are compared with previous methods. An adjustable parameter describing the efficiency of precipitation allows the new model to span the range of predictions from previous models. Calculations for the Jovian ammonia cloud are found to be consistent with observational constraints. Example calculations are provided for water, silicate, and iron clouds on brown dwarfs and on a cool extrasolar giant planet.

  18. Indispensable finite time corrections for Fokker-Planck equations from time series data.

    PubMed

    Ragwitz, M; Kantz, H

    2001-12-17

    The reconstruction of Fokker-Planck equations from observed time series data suffers strongly from finite sampling rates. We show that previously published results are degraded considerably by such effects. We present correction terms which yield a robust estimation of the diffusion terms, together with a novel method for one-dimensional problems. We apply these methods to time series data of local surface wind velocities, where the dependence of the diffusion constant on the state variable shows a different behavior than previously suggested.
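The naive estimators whose finite-sampling bias the paper corrects are the first two Kramers-Moyal coefficients, drift D1 ≈ ⟨Δx⟩/Δt and diffusion D2 ≈ ⟨Δx²⟩/(2Δt). A minimal sketch using a synthetic Ornstein-Uhlenbeck series (the process parameters are illustrative, not the paper's wind data):

```python
import math
import random

def drift_diffusion(series, dt):
    """Global Kramers-Moyal estimates: D1 ~ <dx>/dt, D2 ~ <dx^2>/(2 dt)."""
    dx = [b - a for a, b in zip(series, series[1:])]
    d1 = sum(dx) / (len(dx) * dt)
    d2 = sum(d * d for d in dx) / (2.0 * len(dx) * dt)
    return d1, d2

# Euler-Maruyama simulation of dx = -gamma*x*dt + sqrt(2*D*dt)*xi:
random.seed(1)
gamma, D, dt = 1.0, 0.5, 0.01
x, series = 0.0, []
for _ in range(20000):
    series.append(x)
    x += -gamma * x * dt + math.sqrt(2.0 * D * dt) * random.gauss(0.0, 1.0)

d1, d2 = drift_diffusion(series, dt)   # d2 should roughly recover D = 0.5
```

At a finite sampling interval dt the drift leaks into the quadratic term (a bias of order dt), which is exactly the kind of finite-time effect the paper's correction terms address.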

  19. Integrated control/structure optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Gilbert, Michael G.

    1990-01-01

    A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The system is fully decomposed into structural and control subsystem designs and an improved design is produced. Theory, implementation, and results for the method are presented and compared with the benchmark example.

  20. Multi-instrumental characterization of carbon nanotubes dispersed in aqueous solutions

    EPA Science Inventory

    Previous studies showed that the dispersion extent and physicochemical properties of carbon nanotubes are highly dependent upon the preparation methods (e.g., dispersion methods and dispersants). In the present work, multiwalled carbon nanotubes (MWNTs) are dispersed in aqueous s...

  1. An Investigation of Agility Issues in Scrum Teams Using Agility Indicators

    NASA Astrophysics Data System (ADS)

    Pikkarainen, Minna; Wang, Xiaofeng

Agile software development methods have emerged and become increasingly popular in recent years; yet the issues encountered by software development teams that strive to achieve agility using agile methods have yet to be explored systematically. Built upon a previous study that established a set of indicators of agility, this study investigates what issues are manifested in software development teams using agile methods. It focuses particularly on Scrum teams. In other words, the goal of the chapter is to evaluate Scrum teams using agility indicators and thereby further validate the previously presented agility indicators with additional cases. A multiple case study research method is employed. The findings of the study reveal that teams using Scrum do not necessarily achieve agility in terms of team autonomy, sharing, stability, and embraced uncertainty. Possible reasons include a previous organizational plan-driven culture, resistance towards the Scrum roles, and changing resources.

  2. Hypothesis testing on the fractal structure of behavioral sequences: the Bayesian assessment of scaling methodology.

    PubMed

    Moscoso del Prado Martín, Fermín

    2013-12-01

    I introduce the Bayesian assessment of scaling (BAS), a simple but powerful Bayesian hypothesis contrast methodology that can be used to test hypotheses on the scaling regime exhibited by a sequence of behavioral data. Rather than comparing parametric models, as typically done in previous approaches, the BAS offers a direct, nonparametric way to test whether a time series exhibits fractal scaling. The BAS provides a simpler and faster test than do previous methods, and the code for making the required computations is provided. The method also enables testing of finely specified hypotheses on the scaling indices, something that was not possible with the previously available methods. I then present 4 simulation studies showing that the BAS methodology outperforms the other methods used in the psychological literature. I conclude with a discussion of methodological issues on fractal analyses in experimental psychology. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  3. Interspeaker Variation in Habitual Speaking Rate: Additional Evidence

    ERIC Educational Resources Information Center

    Tsao, Ying-Chiao; Weismer, Gary; Iqbal, Kamran

    2006-01-01

    Purpose: The purpose of the present study was to test the hypothesis that talkers previously classified by Y.-C. Tsao and G. Weismer (1997) as habitually fast versus habitually slow would show differences in the way they manipulated articulation rate across the rate continuum. Method: Thirty talkers previously classified by Tsao and Weismer (1997)…

  4. Counteracting moment device for reduction of earthquake-induced excursions of multi-level buildings.

    PubMed

    Nagaya, K; Fukushima, T; Kosugi, Y

    1999-05-01

A vibration-control mechanism for beams and columns was presented in our previous report, in which the earthquake force was transformed into a vibration-control force by using a gear train mechanism. In our previous report, however, only the principle of transforming the earthquake force into the control force was presented; a discussion of real structures and the design method was not. The present article provides a theoretical analysis of the column as used in multi-level buildings. Experimental tests were carried out on a model of a multi-level building in the frequency range of a principal earthquake wave. Theoretical results are compared to the experimental data. The optimal design of the control mechanism, which is of importance in column design, is presented. Numerical calculations are carried out for the optimal design. It is shown that vibrations of a column incorporating the mechanism are suppressed remarkably. The optimal design method and the analytical results are applicable to the design of the column.

  5. A Cash Management Model.

    ERIC Educational Resources Information Center

    Boyles, William W.

    1975-01-01

In 1973, Ronald G. Lykins presented a model for cash management and analysed its benefits for Ohio University. This paper attempts to expand on the previous method by answering questions raised by the Lykins method through a series of simple algebraic formulas. Both methods are based on two premises: (1) all cash over which the business…

  6. Nested PCR and RFLP analysis based on the 16S rRNA gene

    USDA-ARS?s Scientific Manuscript database

The current phytoplasma detection and identification method is primarily based on nested PCR followed by restriction fragment length polymorphism (RFLP) analysis and gel electrophoresis. This method can potentially detect and differentiate all phytoplasmas, including those not previously described. The present ...

  7. A case report: using SNOMED CT for grouping Adverse Drug Reactions Terms

    PubMed Central

    Alecu, Iulian; Bousquet, Cedric; Jaulent, Marie-Christine

    2008-01-01

    Background WHO-ART and MedDRA are medical terminologies used for the coding of adverse drug reactions in pharmacovigilance databases. MedDRA proposes 13 Special Search Categories (SSC) grouping terms associated to specific medical conditions. For instance, the SSC "Haemorrhage" includes 346 MedDRA terms among which 55 are also WHO-ART terms. WHO-ART itself does not provide such groupings. Our main contention is the possibility of classifying WHO-ART terms in semantic categories by using knowledge extracted from SNOMED CT. A previous paper presents the way WHO-ART term definitions have been automatically generated in a description logics formalism by using their corresponding SNOMED CT synonyms. Based on synonymy and relative position of WHO-ART terms in SNOMED CT, specialization or generalization relationships could be inferred. This strategy is successful for grouping the WHO-ART terms present in most MedDRA SSCs. However the strategy failed when SSC were organized on other basis than taxonomy. Methods We propose a new method that improves the previous WHO-ART structure by integrating the associative relationships included in SNOMED CT. Results The new method improves the groupings. For example, none of the 55 WHO-ART terms in the Haemorrhage SSC were matched using the previous method. With the new method, we improve the groupings and obtain 87% coverage of the Haemorrhage SSC. Conclusion SNOMED CT's terminological structure can be used to perform automated groupings in WHO-ART. This work proves that groupings already present in the MedDRA SSCs (e.g. the haemorrhage SSC) may be retrieved using classification in SNOMED CT. PMID:19007441

  8. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1992-01-01

    Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.

  9. Rapid determination of the isomeric truxillines in illicit cocaine via capillary gas chromatography/flame ionization detection and their use and implication in the determination of cocaine origin and trafficking routes.

    PubMed

    Mallette, Jennifer R; Casale, John F

    2014-10-17

The isomeric truxillines are a group of minor alkaloids present in all illicit cocaine samples. The relative amount of truxillines in cocaine is indicative of the variety of coca used for cocaine processing, and thus, is useful in source determination. Previously, the determination of isomeric truxillines in cocaine was performed with a gas chromatography/electron capture detection method. However, due to the tedious sample preparation as well as the expense and maintenance required of electron capture detectors, the protocol was converted to a gas chromatography/flame-ionization detection method. Ten truxilline isomers (alpha-, beta-, delta-, epsilon-, gamma-, omega-, zeta-, peri-, neo-, and epi-) were quantified relative to a structurally related internal standard, 4',4″-dimethyl-α-truxillic acid dimethyl ester. The method was shown to have a linear response from 0.001 to 1.00 mg/mL and a lower detection limit of 0.001 mg/mL. In this method, the truxillines are directly reduced with lithium aluminum hydride and then acylated with heptafluorobutyric anhydride prior to analysis. The analysis of more than 100 cocaine hydrochloride samples is presented and compared to data obtained by the previous methodology. Authentic cocaine samples obtained from the source countries of Colombia, Bolivia, and Peru were also analyzed, and comparative data on more than 23,000 samples analyzed over the past 10 years with the previous methodology is presented. Published by Elsevier B.V.

  10. High energy PIXE: A tool to characterize multi-layer thick samples

    NASA Astrophysics Data System (ADS)

    Subercaze, A.; Koumeir, C.; Métivier, V.; Servagent, N.; Guertin, A.; Haddad, F.

    2018-02-01

    High energy PIXE is a useful and non-destructive tool to characterize multi-layer thick samples such as cultural heritage objects. In a previous work, we demonstrated the possibility to perform quantitative analysis of simple multi-layer samples using high energy PIXE, without any assumption on their composition. In this work an in-depth study of the parameters involved in the method previously published is proposed. Its extension to more complex samples with a repeated layer is also presented. Experiments have been performed at the ARRONAX cyclotron using 68 MeV protons. The thicknesses and sequences of a multi-layer sample including two different layers of the same element have been determined. Performances and limits of this method are presented and discussed.

  11. D-Wave Electron-H, -He+, and -Li2+ Elastic Scattering and Photoabsorption in P States of Two-Electron Systems

    NASA Technical Reports Server (NTRS)

    Bhatia, A. K.

    2014-01-01

In previous papers [A. K. Bhatia, Phys. Rev. A 85, 052708 (2012); 86, 032709 (2012); 87, 042705 (2013)] electron-H, -He+, and -Li2+ P-wave scattering phase shifts were calculated using the variational polarized orbital theory. This method is now extended to the singlet and triplet D-wave scattering in the elastic region. The long-range correlations are included in the Schrodinger equation by using the method of polarized orbitals variationally. Phase shifts are compared to those obtained by other methods. The present calculation provides results which are rigorous lower bounds to the exact phase shifts. Using the presently calculated D-wave and previously calculated S-wave continuum functions, photoionization of singlet and triplet P states of He and Li+ are also calculated, along with the radiative recombination rate coefficients at various electron temperatures.

  12. On the convergence of an iterative formulation of the electromagnetic scattering from an infinite grating of thin wires

    NASA Technical Reports Server (NTRS)

    Brand, J. C.

    1985-01-01

Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures, and a computational method for ensuring convergence is developed. A short history of spectral (or k-space) formulation is presented with an emphasis on application to periodic surfaces. The mathematical background for formulating an iterative equation is covered using straightforward single-variable examples, including an extension to vector spaces. To ensure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory, and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems including an infinite grating of thin wires, with the solution data compared to previous works.
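The contraction property underlying the abstract can be illustrated with a scalar fixed-point iteration: if |f′(x)| ≤ k < 1 near the fixed point, the iteration x_{n+1} = f(x_n) converges from any nearby starting guess. A minimal sketch (cos x is a standard textbook contraction, used here as a stand-in for the paper's scattering operator):

```python
import math

def fixed_point(f, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{n+1} = f(x_n) until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence: f may not be a contraction here")

# cos is a contraction on [0, 1] (|sin x| < 1 there), so the iteration
# converges to the unique solution of x = cos(x), roughly 0.739085.
root = fixed_point(math.cos, 0.0)
```

A "contraction corrector" in this spirit would modify an iteration whose map is not contractive so that the modified map satisfies the |f′| < 1 condition, restoring guaranteed convergence.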

  13. How Seductive Are Decorative Elements in Learning Materials?

    ERIC Educational Resources Information Center

    Rey, Gunter Daniel

    2012-01-01

    The seductive detail effect arises when people learn more deeply from a multimedia presentation when interesting but irrelevant adjuncts are excluded. However, previous studies about this effect are rather inconclusive and contained various methodical problems. The recent experiment attempted to overcome these methodical problems. Undergraduate…

  14. Solving Fluid Structure Interaction Problems with an Immersed Boundary Method

    NASA Technical Reports Server (NTRS)

    Barad, Michael F.; Brehm, Christoph; Kiris, Cetin C.

    2016-01-01

An immersed boundary method for the compressible Navier-Stokes equations that can be used for moving boundary problems as well as fully coupled fluid-structure interaction is presented. The underlying Cartesian immersed boundary method of the Launch Ascent and Vehicle Aerodynamics (LAVA) framework, based on the locally stabilized immersed boundary method previously presented by the authors, is extended to account for unsteady boundary motion and coupled to linear and geometrically nonlinear structural finite element solvers. The approach is validated for moving boundary problems with prescribed body motion and fully coupled fluid structure interaction problems. Keywords: Immersed Boundary Method, Higher-Order Finite Difference Method, Fluid Structure Interaction.

  15. Telomerecat: A ploidy-agnostic method for estimating telomere length from whole genome sequencing data.

    PubMed

    Farmery, James H R; Smith, Mike L; Lynch, Andy G

    2018-01-22

Telomere length is a risk factor in disease and the dynamics of telomere length are crucial to our understanding of cell replication and vitality. The proliferation of whole genome sequencing represents an unprecedented opportunity to glean new insights into telomere biology on a previously unimaginable scale. To this end, a number of approaches for estimating telomere length from whole-genome sequencing data have been proposed. Here we present Telomerecat, a novel approach to the estimation of telomere length. Previous methods have been dependent on the number of telomeres present in a cell being known, which may be problematic when analysing aneuploid cancer data and non-human samples. Telomerecat is designed to be agnostic to the number of telomeres present, making it suited for the purpose of estimating telomere length in cancer studies. Telomerecat also accounts for interstitial telomeric reads and presents a novel approach to dealing with sequencing errors. We show that Telomerecat performs well at telomere length estimation when compared to leading experimental and computational methods. Furthermore, we show that it detects expected patterns in longitudinal data, repeated measurements, and cross-species comparisons. We also apply the method to cancer cell data, uncovering an interesting relationship with the underlying telomerase genotype.

  16. The Fentanyl Patch Boil-Up - A Novel Method of Opioid Abuse.

    PubMed

    Schauer, Cameron K M W; Shand, James A D; Reynolds, Thomas M

    2015-11-01

    Fentanyl is a potent opioid analgesic used in the treatment of pain. Transdermal fentanyl patches are now widely utilized as an acceptable and efficacious method of medication delivery. Unfortunately, the potential for their abuse is well recognized. Previous case reports have documented deaths after intravenous (IV) misuse of fentanyl which had been extracted from Duragesic (liquid reservoir type) patches. We present a case of IV fentanyl abuse after the extraction from a Mylan (matrix type) patch. This method of abuse has not previously been described in the literature. © 2015 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).

  17. Transition-density-fragment interaction combined with transfer integral approach for excitation-energy transfer via charge-transfer states

    NASA Astrophysics Data System (ADS)

    Fujimoto, Kazuhiro J.

    2012-07-01

    A transition-density-fragment interaction (TDFI) combined with a transfer integral (TI) method is proposed. The TDFI method was previously developed for describing electronic Coulomb interaction, which was applied to excitation-energy transfer (EET) [K. J. Fujimoto and S. Hayashi, J. Am. Chem. Soc. 131, 14152 (2009)] and exciton-coupled circular dichroism spectra [K. J. Fujimoto, J. Chem. Phys. 133, 124101 (2010)]. In the present study, the TDFI method is extended to the exchange interaction, and hence it is combined with the TI method for applying to the EET via charge-transfer (CT) states. In this scheme, the overlap correction is also taken into account. To check the TDFI-TI accuracy, several test calculations are performed to an ethylene dimer. As a result, the TDFI-TI method gives a much improved description of the electronic coupling, compared with the previous TDFI method. Based on the successful description of the electronic coupling, the decomposition analysis is also performed with the TDFI-TI method. The present analysis clearly shows a large contribution from the Coulomb interaction in most of the cases, and a significant influence of the CT states at the small separation. In addition, the exchange interaction is found to be small in this system. The present approach is useful for analyzing and understanding the mechanism of EET.

  18. New decision criteria for selecting delta check methods based on the ratio of the delta difference to the width of the reference range can be generally applicable for each clinical chemistry test item.

    PubMed

    Park, Sang Hyuk; Kim, So-Young; Lee, Woochang; Chun, Sail; Min, Won-Ki

    2012-09-01

    Many laboratories use 4 delta check methods: delta difference, delta percent change, rate difference, and rate percent change. However, guidelines regarding decision criteria for selecting delta check methods have not yet been provided. We present new decision criteria for selecting delta check methods for each clinical chemistry test item. We collected 811,920 and 669,750 paired (present and previous) test results for 27 clinical chemistry test items from inpatients and outpatients, respectively. We devised new decision criteria for the selection of delta check methods based on the ratio of the delta difference to the width of the reference range (DD/RR). Delta check methods based on these criteria were compared with those based on the CV% of the absolute delta difference (ADD) as well as those reported in 2 previous studies. The delta check methods suggested by new decision criteria based on the DD/RR ratio corresponded well with those based on the CV% of the ADD except for only 2 items each in inpatients and outpatients. Delta check methods based on the DD/RR ratio also corresponded with those suggested in the 2 previous studies, except for 1 and 7 items in inpatients and outpatients, respectively. The DD/RR method appears to yield more feasible and intuitive selection criteria and can easily explain changes in the results by reflecting both the biological variation of the test item and the clinical characteristics of patients in each laboratory. We suggest this as a measure to determine delta check methods.
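The four delta check quantities named in the abstract, and the DD/RR ratio the study proposes, can be written down directly. A sketch with an illustrative glucose example; the reference range and paired values are assumptions, not the study's data:

```python
def delta_checks(present, previous, days_between, ref_low, ref_high):
    """Return the four common delta check quantities plus the DD/RR ratio."""
    dd = present - previous                    # delta difference
    dpc = 100.0 * dd / previous                # delta percent change (%)
    rd = dd / days_between                     # rate difference (per day)
    rpc = dpc / days_between                   # rate percent change (%/day)
    dd_rr = dd / (ref_high - ref_low)          # delta difference / reference-range width
    return dd, dpc, rd, rpc, dd_rr

# Illustrative glucose pair (mg/dL), reference range 70-110, 2 days apart:
dd, dpc, rd, rpc, dd_rr = delta_checks(140.0, 100.0, 2.0, 70.0, 110.0)
# dd = 40.0, dpc = 40.0, rd = 20.0, rpc = 20.0, dd_rr = 1.0
```

Normalizing the delta difference by the reference-range width (here, a change equal to the full width, DD/RR = 1.0) is what lets the criterion reflect the biological variation of each analyte rather than its raw units.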

  19. Methods for Automating Analysis of Glacier Morphology for Regional Modelling: Centerlines, Extensions, and Elevation Bands

    NASA Astrophysics Data System (ADS)

    Viger, R. J.; Van Beusekom, A. E.

    2016-12-01

    The treatment of glaciers in modeling requires information about their shape and extent. This presentation discusses new methods and their application in a new glacier-capable variant of the USGS PRMS model, a physically-based, spatially distributed daily time-step model designed to simulate the runoff and evolution of glaciers through time. In addition to developing parameters describing PRMS land surfaces (hydrologic response units, HRUs), several of the analyses and products are likely of interest to the cryospheric science community in general. The first method is a fully automated variation of logic previously presented in the literature for definition of the glacier centerline. Given that the surface of a glacier might be convex, using traditional topographic analyses based on a DEM to trace a path down the glacier is not reliable. Instead, a path is derived based on a cost function. Although only a single path is presented in our results, the method can be easily modified to delineate a branched network of centerlines for each glacier. The second method extends the glacier terminus downslope by an arbitrary distance, according to local surface topography. This product can be used to explore possible, if unlikely, scenarios under which glacier area grows. More usefully, this method can be used to approximate glacier extents from previous years without needing historical imagery. The final method presents an approach for segmenting the glacier into altitude-based HRUs. Successful integration of this information with traditional approaches for discretizing the non-glacierized portions of a basin requires several additional steps. These include synthesizing the glacier centerline network with one developed with a traditional DEM analysis, ensuring that flow can be routed under and beyond glaciers to a basin outlet. Results are presented based on analysis of the Copper River Basin, Alaska.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadid, John Nicolas; Fish, Jacob; Waisman, Haim

    Two heuristic strategies intended to enhance the performance of the generalized global basis (GGB) method [H. Waisman, J. Fish, R.S. Tuminaro, J. Shadid, The Generalized Global Basis (GGB) method, International Journal for Numerical Methods in Engineering 61(8), 1243-1269] applied to nonlinear systems are presented. The standard GGB accelerates a multigrid scheme by an additional coarse grid correction that filters out slowly converging modes. This correction requires a potentially costly eigen calculation. This paper considers reusing previously computed eigenspace information. The GGB? scheme enriches the prolongation operator with new eigenvectors while the modified method (MGGB) selectively reuses the same prolongation. Both methods use the criteria of principal angles between subspaces spanned between the previous and current prolongation operators. Numerical examples clearly indicate significant time savings, in particular for the MGGB scheme.

  1. The study on the parallel processing based time series correlation analysis of RBC membrane flickering in quantitative phase imaging

    NASA Astrophysics Data System (ADS)

    Lee, Minsuk; Won, Youngjae; Park, Byungjun; Lee, Seungrag

    2017-02-01

    Not only the static but also the dynamic characteristics of the red blood cell (RBC) contain useful information for blood diagnosis. Quantitative phase imaging (QPI) can capture sample images with subnanometer-scale depth resolution and millisecond-scale temporal resolution. Various studies have used QPI for RBC diagnosis, and recently much work has aimed to reduce the processing time of RBC information extraction from QPI using parallel computing algorithms; however, previous studies focused on static parameters such as cell morphology or simple dynamic parameters such as the root mean square (RMS) of the membrane fluctuations. Previously, we presented a practical blood test method using time series correlation analysis of RBC membrane flickering with QPI. However, this method has shown a limit to clinical application because of its long computation time. In this study, we present an accelerated time series correlation analysis of RBC membrane flickering using a parallel computing algorithm. This method showed fractal scaling exponent results for the surrounding medium and normal RBCs consistent with our previous research.

  2. A Sub-filter Scale Noise Equation for Hybrid LES Simulations

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.

    2006-01-01

    Hybrid LES/subscale modeling approaches have an important advantage over current noise prediction methods in that they only involve modeling of the relatively universal subscale motion and not the configuration-dependent larger scale turbulence. Previous hybrid approaches use approximate statistical techniques or extrapolation methods to obtain the requisite information about the sub-filter scale motion. An alternative approach would be to adopt the modeling techniques used in the current noise prediction methods and determine the unknown stresses from experimental data. The present paper derives an equation for predicting the sub-filter scale sound from information that can be obtained with currently available experimental procedures. The resulting prediction method would then be intermediate between the current noise prediction codes and previously proposed hybrid techniques.

  3. The rank correlated FSK model for prediction of gas radiation in non-uniform media, and its relationship to the rank correlated SLW model

    NASA Astrophysics Data System (ADS)

    Solovjov, Vladimir P.; Webb, Brent W.; Andre, Frederic

    2018-07-01

    Following previous theoretical development based on the assumption of a rank correlated spectrum, the Rank Correlated Full Spectrum k-distribution (RC-FSK) method is proposed. The method proves advantageous in modeling radiation transfer in high temperature gases in non-uniform media in two important ways. First, and perhaps most importantly, the method requires no specification of a reference gas thermodynamic state. Second, the spectral construction of the RC-FSK model is simpler than original correlated FSK models, requiring only two cumulative k-distributions. Further, although not exhaustive, example problems presented here suggest that the method may also yield improved accuracy relative to prior methods, and may exhibit less sensitivity to the blackbody source temperature used in the model predictions. This paper outlines the theoretical development of the RC-FSK method, comparing the spectral construction with prior correlated spectrum FSK method formulations. Further, the RC-FSK model's relationship to the Rank Correlated Spectral Line Weighted-sum-of-gray-gases (RC-SLW) model is defined. The work presents predictions using the Rank Correlated FSK method and previous FSK methods in three different example problems. Line-by-line benchmark predictions are used to assess the accuracy.

  4. Supercritical wing sections 2, volume 108

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Garabedian, P.; Korn, D.; Jameson, A.; Beckmann, M. (Editor); Kuenzi, H. P. (Editor)

    1975-01-01

    A mathematical theory for the design and analysis of supercritical wing sections was previously presented. Examples and computer programs showing how this method works were included. The work on transonics is presented here in a more definitive form. For design, a better model of the trailing edge is introduced, which should eliminate the loss of fifteen or twenty percent in lift experienced with previous heavily aft-loaded models and attributed to boundary layer separation. How drag creep can be reduced at off-design conditions is indicated. A rotated finite difference scheme is presented that enables the application of Murman's method of analysis in more or less arbitrary curvilinear coordinate systems. This allows the use of supersonic as well as subsonic free-stream Mach numbers and the capture of shock waves as far back on an airfoil as desired. Moreover, it leads to an effective three-dimensional program for the computation of transonic flow past an oblique wing. In the case of two-dimensional flow, the method is extended to take into account the displacement thickness computed by a semi-empirical turbulent boundary layer correction.

  5. The Importance of Training and Previous Contact in University Students' Opinion about Persons with Mental Disorder

    ERIC Educational Resources Information Center

    Barroso-Hurtado, Domingo; Mendo-Lázaro, Santiago

    2016-01-01

    Introduction: The present study analyzes differences in university students' opinions towards persons with mental disorder, as a function of whether they have had previous contact with them and whether they have received training about them. Method: The Opinions about Mental Illness Scale for Spanish population (OMI-S) was applied to a sample of…

  6. Researcher’s Perspective of Substitution Method on Text Steganography

    NASA Astrophysics Data System (ADS)

    Zamir Mansor, Fawwaz; Mustapha, Aida; Azah Samsudin, Noor

    2017-08-01

    Linguistic steganography studies are still in the stage of development and empowerment of practice. This paper presents several substitution-based text steganography methods from the researchers' perspective; the relevant scholarly papers are analysed and compared. The objective of this paper is to give basic information on the substitution method of text-domain steganography as applied by previous researchers. The typical ways this method is applied are also identified in this paper in order to reveal the most effective method in text-domain steganography. Finally, the advantages and drawbacks of these techniques in general are also presented.

  7. Computer analysis of multicircuit shells of revolution by the field method

    NASA Technical Reports Server (NTRS)

    Cohen, G. A.

    1975-01-01

    The field method, presented previously for the solution of even-order linear boundary value problems defined on one-dimensional open branch domains, is extended to boundary value problems defined on one-dimensional domains containing circuits. This method converts the boundary value problem into two successive numerically stable initial value problems, which may be solved by standard forward integration techniques. In addition, a new method for the treatment of singular boundary conditions is presented. This method, which amounts to a partial interchange of the roles of force and displacement variables, is problem independent with respect to both accuracy and speed of execution. This method was implemented in a computer program to calculate the static response of ring stiffened orthotropic multicircuit shells of revolution to asymmetric loads. Solutions are presented for sample problems which illustrate the accuracy and efficiency of the method.

  8. Qualitative Data Analysis: A Methods Sourcebook. Third Edition

    ERIC Educational Resources Information Center

    Miles, Matthew B.; Huberman, A. Michael; Saldana, Johnny

    2014-01-01

    The Third Edition of Miles & Huberman's classic research methods text is updated and streamlined by Johnny Saldaña, author of "The Coding Manual for Qualitative Researchers." Several of the data display strategies from previous editions are now presented in re-envisioned and reorganized formats to enhance reader accessibility and…

  9. Measurement of electron density using reactance cutoff probe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, K. H.; Seo, B. H.; Kim, J. H.

    2016-05-15

    This paper proposes a new measurement method of electron density using the reactance spectrum of the plasma in the cutoff probe system instead of the transmission spectrum. The highly accurate reactance spectrum of the plasma-cutoff probe system, as expected from previous circuit simulations [Kim et al., Appl. Phys. Lett. 99, 131502 (2011)], was measured using the full two-port error correction and automatic port extension methods of the network analyzer. The electron density can be obtained from the analysis of the measured reactance spectrum, based on circuit modeling. According to the circuit simulation results, the reactance cutoff probe can measure the electron density more precisely than the previous cutoff probe at low densities or at higher pressure. The obtained results for the electron density are presented and discussed for a wide range of experimental conditions, and this method is compared with previous methods (a cutoff probe using the transmission spectrum and a single Langmuir probe).
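    For context, all cutoff-probe variants ultimately invert the textbook plasma-frequency relation f_pe = (1/2π)·sqrt(n_e·e²/(ε0·m_e)). The sketch below applies only this standard relation to a measured cutoff frequency; it is not the authors' reactance-spectrum circuit analysis, and the 2.8 GHz value is an illustrative assumption.

```python
import math

# Physical constants (SI, CODATA values)
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
M_E = 9.1093837015e-31    # electron mass, kg
Q_E = 1.602176634e-19     # elementary charge, C

def electron_density_from_cutoff(f_cutoff_hz):
    """Electron density (m^-3) from the plasma cutoff frequency f_pe,
    inverting f_pe = (1/2*pi) * sqrt(n_e * e^2 / (eps0 * m_e))."""
    omega = 2.0 * math.pi * f_cutoff_hz
    return EPS0 * M_E * omega ** 2 / Q_E ** 2

# A 2.8 GHz cutoff corresponds to roughly 1e11 cm^-3
n_e = electron_density_from_cutoff(2.8e9)
print(f"{n_e / 1e6:.3g} electrons per cm^3")
```

    This reproduces the familiar rule of thumb n_e[cm^-3] ≈ (f_pe[Hz]/8980)².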

  10. Solution and reasoning reuse in space planning and scheduling applications

    NASA Technical Reports Server (NTRS)

    Verfaillie, Gerard; Schiex, Thomas

    1994-01-01

    In the space domain, as in other domains, CSP (Constraint Satisfaction Problem) techniques are increasingly used to represent and solve planning and scheduling problems. But these techniques have been developed to solve CSPs composed of fixed sets of variables and constraints, whereas many planning and scheduling problems are dynamic. It is therefore important to develop methods which allow a new solution to be rapidly found, as close as possible to the previous one, when some variables or constraints are added or removed. After presenting some existing approaches, this paper proposes a simple and efficient method, developed on the basis of the dynamic backtracking algorithm. This method allows previous solutions and reasoning to be reused in the framework of a CSP which is close to the previous one. Some experimental results on general random CSPs and on operation scheduling problems for remote sensing satellites are given.

  11. Degree of Approximation by a General Cλ -Summability Method

    NASA Astrophysics Data System (ADS)

    Sonker, S.; Munjal, A.

    2018-03-01

    In the present study, two theorems explaining the degree of approximation of signals belonging to the class Lip(α, p, w) by a more general Cλ-method (summability method) have been formulated. Improved estimations have been obtained in terms of λ(n), where (λ(n))^(-α) ≤ n^(-α) for 0 < α ≤ 1, as compared to previous studies whose estimates were given in terms of n. These estimations of infinite matrices are applicable in solid state physics, which further motivates an investigation of perturbations of matrix-valued functions.

  12. Examination of the Equivalence of Self-Report Survey-Based Paper-and-Pencil and Internet Data Collection Methods

    ERIC Educational Resources Information Center

    Weigold, Arne; Weigold, Ingrid K.; Russell, Elizabeth J.

    2013-01-01

    Self-report survey-based data collection is increasingly carried out using the Internet, as opposed to the traditional paper-and-pencil method. However, previous research on the equivalence of these methods has yielded inconsistent findings. This may be due to methodological and statistical issues present in much of the literature, such as…

  13. A Novel WA-BPM Based on the Generalized Multistep Scheme in the Propagation Direction in the Waveguide

    NASA Astrophysics Data System (ADS)

    Ji, Yang; Chen, Hong; Tang, Hongwu

    2017-06-01

    A highly accurate wide-angle scheme, based on the generalized multistep scheme in the propagation direction, is developed for the finite difference beam propagation method (FD-BPM). Compared with the previously presented method, simulations show that our method yields a more accurate solution and allows a much larger step size.

  14. Separation and quantification of monothiols and phytochelatins from a wide variety of cell cultures and tissues of trees and other plants using high performance liquid chromatography

    Treesearch

    Rakesh Minocha; P. Thangavel; Om Parkash Dhankher; Stephanie Long

    2008-01-01

    The HPLC method presented here for the quantification of metal-binding thiols is considerably shorter than most previously published methods. It is a sensitive and highly reproducible method that separates monobromobimane tagged monothiols (cysteine, glutathione, γ-glutamylcysteine) along with polythiols (PC2, PC3...

  15. A case report: using SNOMED CT for grouping Adverse Drug Reactions Terms.

    PubMed

    Alecu, Iulian; Bousquet, Cedric; Jaulent, Marie-Christine

    2008-10-27

    WHO-ART and MedDRA are medical terminologies used for the coding of adverse drug reactions in pharmacovigilance databases. MedDRA proposes 13 Special Search Categories (SSC) grouping terms associated to specific medical conditions. For instance, the SSC "Haemorrhage" includes 346 MedDRA terms among which 55 are also WHO-ART terms. WHO-ART itself does not provide such groupings. Our main contention is the possibility of classifying WHO-ART terms in semantic categories by using knowledge extracted from SNOMED CT. A previous paper presents the way WHO-ART term definitions have been automatically generated in a description logics formalism by using their corresponding SNOMED CT synonyms. Based on synonymy and relative position of WHO-ART terms in SNOMED CT, specialization or generalization relationships could be inferred. This strategy is successful for grouping the WHO-ART terms present in most MedDRA SSCs. However the strategy failed when SSC were organized on other basis than taxonomy. We propose a new method that improves the previous WHO-ART structure by integrating the associative relationships included in SNOMED CT. The new method improves the groupings. For example, none of the 55 WHO-ART terms in the Haemorrhage SSC were matched using the previous method. With the new method, we improve the groupings and obtain 87% coverage of the Haemorrhage SSC. SNOMED CT's terminological structure can be used to perform automated groupings in WHO-ART. This work proves that groupings already present in the MedDRA SSCs (e.g. the haemorrhage SSC) may be retrieved using classification in SNOMED CT.

  16. Nanoparticle-assisted laser desorption/ionization mass spectrometry: Novel sample preparation methods and nanoparticle screening for plant metabolite imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yagnik, Gargey B.

    The main goal of the presented research is development of nanoparticle based matrix-assisted laser desorption ionization-mass spectrometry (MALDI-MS). This dissertation includes the application of previously developed data acquisition methods, development of novel sample preparation methods, application and comparison of novel nanoparticle matrices, and comparison of two nanoparticle matrix application methods for MALDI-MS and MALDI-MS imaging.

  17. GStream: Improving SNP and CNV Coverage on Genome-Wide Association Studies

    PubMed Central

    Alonso, Arnald; Marsal, Sara; Tortosa, Raül; Canela-Xandri, Oriol; Julià, Antonio

    2013-01-01

    We present GStream, a method that combines genome-wide SNP and CNV genotyping in the Illumina microarray platform with unprecedented accuracy. This new method outperforms previous well-established SNP genotyping software. More importantly, the CNV calling algorithm of GStream dramatically improves the results obtained by previous state-of-the-art methods and yields an accuracy that is close to that obtained by purely CNV-oriented technologies like Comparative Genomic Hybridization (CGH). We demonstrate the superior performance of GStream using microarray data generated from HapMap samples. Using the reference CNV calls generated by the 1000 Genomes Project (1KGP) and well-known studies on whole genome CNV characterization based either on CGH or genotyping microarray technologies, we show that GStream can increase the number of reliably detected variants up to 25% compared to previously developed methods. Furthermore, the increased genome coverage provided by GStream allows the discovery of CNVs in close linkage disequilibrium with SNPs, previously associated with disease risk in published Genome-Wide Association Studies (GWAS). These results could provide important insights into the biological mechanism underlying the detected disease risk association. With GStream, large-scale GWAS will not only benefit from the combined genotyping of SNPs and CNVs at an unprecedented accuracy, but will also take advantage of the computational efficiency of the method. PMID:23844243

  18. Macroscopic relationship in primal-dual portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2018-02-01

    In the present paper, using a replica analysis, we examine the portfolio optimization problem handled in previous work and discuss the minimization of investment risk under constraints of budget and expected return for the case in which the distribution of the hyperparameters of the mean and variance of the return rate of each asset is not limited to a specific probability family. Findings derived using our proposed method are compared with those in previous work to verify the effectiveness of our proposed method. Further, we derive a Pythagorean theorem of the Sharpe ratio and macroscopic relations of opportunity loss. Using numerical experiments, the effectiveness of our proposed method is demonstrated for a specific situation.

  19. Wet Scrubber System Study. Volume I. Scrubber Handbook.

    ERIC Educational Resources Information Center

    Calvert, Seymour; And Others

    This handbook brings together previously scattered materials and clarifies their applicability to scrubber technology. The various aspects of scrubber use and present engineering design methods are reviewed, and actual experience on hundreds of scrubber installations in various industries is presented in a condensed form. Many related topics such…

  20. A Resource-Allocation Theory of Classroom Management.

    ERIC Educational Resources Information Center

    McDonald, Frederick J.

    A fresh approach to classroom management, which responds both to the present body of knowledge in this area and extends to beginning teachers a practical, flexible, and simple method of maintaining classroom control, is presented. Shortcomings of previous management theories (in particular, the Direct Instruction Model) are discussed, and the need…

  1. A Delphi forecast of technology in education

    NASA Technical Reports Server (NTRS)

    Robinson, B. E.

    1973-01-01

    The results are reported of a Delphi forecast of the utilization and social impacts of large-scale educational telecommunications technology. The focus is on both forecasting methodology and educational technology. The various methods of forecasting used by futurists are analyzed from the perspective of the most appropriate method for a prognosticator of educational technology, and a review and critical analysis of previous forecasts and studies are presented. Graphic responses, summarized comments, and a scenario of education in 1990 are presented.

  2. TLE uncertainty estimation using robust weighted differencing

    NASA Astrophysics Data System (ADS)

    Geul, Jacco; Mooij, Erwin; Noomen, Ron

    2017-05-01

    Accurate knowledge of satellite orbit errors is essential for many types of analyses. Unfortunately, for two-line elements (TLEs) this is not available. This paper presents a weighted differencing method using robust least-squares regression for estimating many important error characteristics. The method is applied to both classic and enhanced TLEs, compared to previous implementations, and validated using Global Positioning System (GPS) solutions for the GOCE satellite in Low-Earth Orbit (LEO), prior to its re-entry. The method is found to be more accurate than previous TLE differencing efforts in estimating initial uncertainty, as well as error growth. The method also proves more reliable and requires no data filtering (such as outlier removal). Sensitivity analysis shows a strong relationship between argument of latitude and covariance (standard deviations and correlations), which the method is able to approximate. Overall, the method proves accurate, computationally fast, and robust, and is applicable to any object in the satellite catalogue (SATCAT).
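    The "robust least-squares regression" this abstract relies on can be illustrated generically with a Huber-weighted iteratively reweighted least-squares fit of a linearly growing error against time. This is a sketch of the statistical technique only, not the authors' TLE differencing pipeline; the function name, noise levels, and outliers below are assumptions for illustration.

```python
import math
import random

def irls_huber_line(x, y, c=1.345, iters=25):
    """Fit y = a + b*x with Huber weights via iteratively reweighted least squares."""
    a = b = 0.0
    for _ in range(iters):
        r = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        mad = sorted(abs(v) for v in r)[len(r) // 2]   # median absolute residual
        s = mad / 0.6745 or 1.0                        # robust scale estimate
        w = [1.0 if abs(v) <= c * s else c * s / abs(v) for v in r]
        sw = sum(w)
        sx = sum(wi * xi for wi, xi in zip(w, x))
        sy = sum(wi * yi for wi, yi in zip(w, y))
        sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        b = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
        a = (sy - b * sx) / sw
    return a, b

random.seed(0)
t = [i * 0.1 for i in range(100)]                           # time since epoch (illustrative)
err = [0.5 + 0.3 * ti + random.gauss(0, 0.05) for ti in t]  # growing position error
err[10] += 5.0                                              # two gross outliers
err[50] -= 4.0
a, b = irls_huber_line(t, err)
print(a, b)
```

    The Huber weights leave small residuals untouched and downweight the gross outliers, so the fitted intercept and growth rate stay close to the clean trend without any manual data filtering, mirroring the "no outlier removal" property claimed above.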

  3. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    USGS Publications Warehouse

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2016-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
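    The core step described above, fitting candidate probability density functions to raw habitat-use data and selecting among them quantitatively, can be sketched with maximum-likelihood fits compared by AIC. The two-candidate set, the synthetic depth data, and the use of AIC are assumptions for illustration, not the authors' exact procedure (which also quantifies estimation uncertainty, omitted here).

```python
import math
import random

random.seed(1)
# Synthetic "depth used by juvenile fish" data, drawn from a lognormal for illustration
depths = [math.exp(random.gauss(0.5, 0.4)) for _ in range(200)]

def fit_normal(x):
    """Closed-form maximum-likelihood estimates for a normal distribution."""
    mu = sum(x) / len(x)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in x) / len(x))
    return mu, sigma

def normal_loglik(x, mu, sigma):
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (v - mu) ** 2 / (2 * sigma ** 2) for v in x)

# Candidate 1: normal density on the raw data
mu_n, sd_n = fit_normal(depths)
ll_normal = normal_loglik(depths, mu_n, sd_n)

# Candidate 2: lognormal density (normal fit on log-data, with the Jacobian term)
logs = [math.log(v) for v in depths]
mu_l, sd_l = fit_normal(logs)
ll_lognormal = normal_loglik(logs, mu_l, sd_l) - sum(logs)

# AIC = 2k - 2 ln L; both candidates have k = 2 parameters here
aic = {"normal": 2 * 2 - 2 * ll_normal,
       "lognormal": 2 * 2 - 2 * ll_lognormal}
best = min(aic, key=aic.get)
print(best)
```

    The selected density then serves directly as the HSC curve, which is what lets parameter estimates and model choice come straight from the raw data rather than from smoothed frequencies.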

  4. A Computer Game-Based Method for Studying Bullying and Cyberbullying

    ERIC Educational Resources Information Center

    Mancilla-Caceres, Juan F.; Espelage, Dorothy; Amir, Eyal

    2015-01-01

    Even though previous studies have addressed the relation between face-to-face bullying and cyberbullying, none have studied both phenomena simultaneously. In this article, we present a computer game-based method to study both types of peer aggression among youth. Study participants included fifth graders (N = 93) in two U.S. Midwestern middle…

  5. Reasoning Maps: A Generally Applicable Method for Characterizing Hypothesis-Testing Behaviour. Research Report

    ERIC Educational Resources Information Center

    White, Brian

    2004-01-01

    This paper presents a generally applicable method for characterizing subjects' hypothesis-testing behaviour based on a synthesis that extends previous work. Beginning with a transcript of subjects' speech and videotape of their actions, a Reasoning Map is created that depicts the flow of their hypotheses, tests, predictions, results, and…

  6. Computing Earthquake Probabilities on Global Scales

    NASA Astrophysics Data System (ADS)

    Holliday, James R.; Graves, William R.; Rundle, John B.; Turcotte, Donald L.

    2016-03-01

    Large events in systems such as earthquakes, typhoons, market crashes, electricity grid blackouts, floods, droughts, wars and conflicts, and landslides can be unexpected and devastating. Events in many of these systems display frequency-size statistics that are power laws. Previously, we presented a new method for calculating probabilities for large events in systems such as these. This method counts the number of small events since the last large event and then converts this count into a probability by using a Weibull probability law. We applied this method to the calculation of large earthquake probabilities in California-Nevada, USA. In that study, we considered a fixed geographic region and assumed that all earthquakes within that region, large magnitudes as well as small, were perfectly correlated. In the present article, we extend this model to systems in which the events have a finite correlation length. We modify our previous results by employing the correlation function for near mean field systems having long-range interactions, an example of which is earthquakes and elastic interactions. We then construct an application of the method and show examples of computed earthquake probabilities.
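    The count-to-probability step described above can be sketched with a conditional Weibull law in "natural time" (event counts): given k small events since the last large one, the chance of a large event within the next Δ small events follows from the ratio of Weibull survival functions. The scale and shape values below are illustrative placeholders, not calibrated parameters from the study.

```python
import math

def conditional_weibull_probability(count, horizon, scale, shape):
    """P(large event within the next `horizon` small events), given `count`
    small events since the last large event, under a Weibull law in
    event-count (natural) time: S(x) = exp(-(x/scale)**shape)."""
    s_now = math.exp(-((count / scale) ** shape))
    s_later = math.exp(-(((count + horizon) / scale) ** shape))
    return 1.0 - s_later / s_now

# Illustrative numbers only: scale/shape would be fitted to an earthquake catalogue
p = conditional_weibull_probability(count=120, horizon=30, scale=150.0, shape=1.4)
print(f"{p:.2f}")
```

    Because the probability is conditioned on the current count, it rises as small events accumulate without a large one, which is the hazard-like behaviour the method exploits.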

  7. Center index method-an alternative for wear measurements with radiostereometry (RSA).

    PubMed

    Dahl, Jon; Figved, Wender; Snorrason, Finnur; Nordsletten, Lars; Röhrl, Stephan M

    2013-03-01

    Radiostereometry (RSA) is considered to be the most precise and accurate method for wear measurements in total hip replacement. Post-operative stereoradiographs have so far been necessary for wear measurement; hence, the use of RSA has been limited to studies planned for RSA measurements. We compared a new RSA method for wear measurements that does not require previous radiographs with conventional RSA. Instead of comparing present stereoradiographs with post-operative ones, we developed a method for calculating the post-operative position of the center of the femoral head on the present examination and using this as the index measurement. We compared this alternative method to conventional RSA in 27 hips in an ongoing RSA study. We found a high degree of agreement between the methods for both mean proximal (1.19 mm vs. 1.14 mm) and mean 3D wear (1.52 mm vs. 1.44 mm) after 10 years. Intraclass correlation coefficients (ICC) were 0.958 and 0.955, respectively (p<0.001 for both ICCs). The results were also within the limits of agreement when plotted subject-by-subject in a Bland-Altman plot. Our alternative method for wear measurements with RSA offers results comparable to conventional RSA measurements. It allows precise wear measurements without previous radiological examinations. Copyright © 2012 Orthopaedic Research Society.

  8. Shock capturing finite difference algorithms for supersonic flow past fighter and missile type configurations

    NASA Technical Reports Server (NTRS)

    Osher, S.

    1984-01-01

    The construction of a reliable, shock capturing finite difference method to solve the Euler equations for inviscid, supersonic flow past fighter and missile type configurations is highly desirable. The numerical method must have a firm theoretical foundation and must be robust and efficient. It should be able to treat subsonic pockets in a predominantly supersonic flow. The method must also be easily applicable to the complex topologies of the aerodynamic configuration under consideration. The ongoing approach to this task is described, and a scheme for steady supersonic flows is presented. This scheme is the basic numerical method. Results of work obtained during previous years are presented.

  9. Zone plate method for electronic holographic display using resolution redistribution technique.

    PubMed

    Takaki, Yasuhiro; Nakamura, Junya

    2011-07-18

    The resolution redistribution (RR) technique can increase the horizontal viewing-zone angle and screen size of electronic holographic display. The present study developed a zone plate method that would reduce hologram calculation time for the RR technique. This method enables calculation of an image displayed on a spatial light modulator by performing additions of the zone plates, while the previous calculation method required performing the Fourier transform twice. The derivation and modeling of the zone plate are shown. In addition, the look-up table approach was introduced for further reduction in computation time. Experimental verification using a holographic display module based on the RR technique is presented.
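    The central idea reported above, building the displayed image by adding one Fresnel zone plate per object point instead of performing two Fourier transforms, can be sketched generically. The on-axis cosine zone-plate form, the point positions, wavelength, and grid size below are simplifying assumptions; the authors' RR-specific derivation and look-up table optimization are not reproduced here.

```python
import math

def zone_plate(x, y, px, py, depth, wavelength):
    """Fresnel zone-plate contribution of one object point at (px, py, depth),
    using the paraxial phase pi * r^2 / (wavelength * depth)."""
    r2 = (x - px) ** 2 + (y - py) ** 2
    return math.cos(math.pi * r2 / (wavelength * depth))

def hologram(points, size, pitch, wavelength):
    """Hologram pattern as a plain sum of zone plates, one per object point."""
    return [[sum(zone_plate(i * pitch, j * pitch, px, py, z, wavelength)
                 for (px, py, z) in points)
             for i in range(size)]
            for j in range(size)]

# Two hypothetical object points (meters) in front of a 32 x 32, 1-micron-pitch SLM
pts = [(8e-6, 8e-6, 0.1), (24e-6, 24e-6, 0.12)]
H = hologram(pts, size=32, pitch=1e-6, wavelength=633e-9)
print(len(H), len(H[0]))
```

    Each pixel is just a sum of precomputable cosines, which is why tabulating zone plates in a look-up table, as the paper does, removes the per-frame transform cost.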

  10. A transient laboratory method for determining the hydraulic properties of 'tight' rocks-I. Theory

    USGS Publications Warehouse

    Hsieh, P.A.; Tracy, J.V.; Neuzil, C.E.; Bredehoeft, J.D.; Silliman, Stephen E.

    1981-01-01

    Transient pulse testing has been employed increasingly in the laboratory to measure the hydraulic properties of rock samples with low permeability. Several investigators have proposed a mathematical model in terms of an initial-boundary value problem to describe fluid flow in a transient pulse test. However, the solution of this problem has not been available. In analyzing data from the transient pulse test, previous investigators have either employed analytical solutions that are derived with the use of additional, restrictive assumptions, or have resorted to numerical methods. In Part I of this paper, a general, analytical solution for the transient pulse test is presented. This solution is graphically illustrated by plots of dimensionless variables for several cases of interest. The solution is shown to contain, as limiting cases, the more restrictive analytical solutions that the previous investigators have derived. A method of computing both the permeability and specific storage of the test sample from experimental data will be presented in Part II. © 1981.

  11. A New Principle of Sound Frequency Analysis

    NASA Technical Reports Server (NTRS)

    Theodorsen, Theodore

    1932-01-01

    In connection with the study of aircraft and propeller noises, the National Advisory Committee for Aeronautics has developed an instrument for sound-frequency analysis which differs fundamentally from previous types, and which, owing to its simplicity of principle, construction, and operation, has proved to be of value in this investigation. The method is based on the well-known fact that the Ohmic loss in an electrical resistance is equal to the sum of the losses of the harmonic components of a complex wave, except for the case in which any two components approach or attain vectorial identity, in which case the Ohmic loss is increased by a definite amount. The principle of frequency analysis has been presented mathematically and a number of distinct advantages relative to previous methods have been pointed out. An automatic recording instrument embodying this principle is described in detail. It employs a beat-frequency oscillator as a source of variable frequency. A large number of experiments have verified the predicted superiority of the method. A number of representative records are presented.
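The stated principle, that the Ohmic loss of a complex wave equals the sum of its components' losses except when two components approach vectorial identity, can be checked numerically; the two-tone signal below is an illustrative example, not NACA data:

```python
import numpy as np

# Sample one second of a two-tone wave on a uniform grid.
t = np.linspace(0.0, 1.0, 10_000, endpoint=False)
a = np.sin(2 * np.pi * 5 * t)
b = 0.5 * np.sin(2 * np.pi * 9 * t)

# Distinct frequencies: the mean-square (Ohmic) loss of the sum equals the
# sum of the individual losses, because the cross-term averages to zero.
loss_separate = np.mean(a**2) + np.mean(b**2)
loss_combined = np.mean((a + b) ** 2)

# Vectorially identical components: the cross-term no longer vanishes and
# the loss increases by a definite amount (here, to 4x the single-tone loss).
c = np.sin(2 * np.pi * 5 * t)
loss_coincident = np.mean((a + c) ** 2)
```

This is the effect the analyzer exploits: sweeping a beat-frequency oscillator through the spectrum and registering the definite increase in dissipated power whenever the oscillator frequency coincides with a harmonic component.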

  12. Mathematic models for a ray tracing method and its applications in wireless optical communications.

    PubMed

    Zhang, Minglun; Zhang, Yangan; Yuan, Xueguang; Zhang, Jinnan

    2010-08-16

    This paper presents a new ray tracing method, which contains a whole set of mathematic models, and its validity is verified by simulations. In addition, both theoretical analysis and simulation results show that the computational complexity of the method is much lower than that of previous ones. Therefore, the method can be used to rapidly calculate the impulse response of wireless optical channels for complicated systems.

  13. Percolation in real multiplex networks

    NASA Astrophysics Data System (ADS)

    Bianconi, Ginestra; Radicchi, Filippo

    2016-12-01

    We present an exact mathematical framework able to describe site-percolation transitions in real multiplex networks. Specifically, we consider the average percolation diagram valid over an infinite number of random configurations where nodes are present in the system with given probability. The approach relies on the locally treelike ansatz, so that it is expected to accurately reproduce the true percolation diagram of sparse multiplex networks with negligible number of short loops. The performance of our theory is tested in social, biological, and transportation multiplex graphs. When compared against previously introduced methods, we observe improvements in the prediction of the percolation diagrams in all networks analyzed. Results from our method confirm previous claims about the robustness of real multiplex networks, in the sense that the average connectedness of the system does not exhibit any significant abrupt change as its individual components are randomly destroyed.

  14. A gas-kinetic BGK scheme for the compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Xu, Kun

    2000-01-01

    This paper presents an improved gas-kinetic scheme based on the Bhatnagar-Gross-Krook (BGK) model for the compressible Navier-Stokes equations. The current method extends the previous gas-kinetic Navier-Stokes solver developed by Xu and Prendergast by implementing a general nonequilibrium state to represent the gas distribution function at the beginning of each time step. As a result, the requirement in the previous scheme, such as the particle collision time being less than the time step for the validity of the BGK Navier-Stokes solution, is removed. Therefore, the applicable regime of the current method is much enlarged and the Navier-Stokes solution can be obtained accurately regardless of the ratio between the collision time and the time step. The gas-kinetic Navier-Stokes solver developed by Chou and Baganoff is the limiting case of the current method, and it is valid only under such a limiting condition. Also, in this paper, the appropriate implementation of boundary condition for the kinetic scheme, different kinetic limiting cases, and the Prandtl number fix are presented. The connection among artificial dissipative central schemes, Godunov-type schemes, and the gas-kinetic BGK method is discussed. Many numerical tests are included to validate the current method.

  15. Previous Open Rotor Research in the US

    NASA Technical Reports Server (NTRS)

    VanZante, Dale

    2011-01-01

    Previous Open Rotor noise experience in the United States, current Open Rotor noise research in the United States and current NASA prediction methods activities were presented at a European Union (EU) X-Noise seminar. The invited attendees from EU industries, research establishments and universities discussed prospects for reducing Open Rotor noise and reviewed all technology programs, past and present, dedicated to Open Rotor engine concepts. This workshop was particularly timely because the Committee on Aviation Environmental Protection (CAEP) plans to involve Independent Experts in late 2011 in assessing the noise of future low-carbon technologies including the open rotor.

  16. Using Behavioural Skills Training to Treat Aggression in Adults with Mild Intellectual Disability in a Forensic Setting

    ERIC Educational Resources Information Center

    Travis, Robert W.; Sturmey, Peter

    2013-01-01

    Background: Previous studies of anger management in people with intellectual disability failed to control for the effects of the number of provocative stimuli presented and lacked direct measures of behaviour and treatment integrity data. Methods: This experiment systematically assessed and presented discriminative stimuli for aggressive…

  17. Richardson-Lucy/maximum likelihood image restoration algorithm for fluorescence microscopy: further testing.

    PubMed

    Holmes, T J; Liu, Y H

    1989-11-15

    A maximum likelihood based iterative algorithm adapted from nuclear medicine imaging for noncoherent optical imaging was presented in a previous publication with some initial computer-simulation testing. This algorithm is identical in form to that previously derived in a different way by W. H. Richardson, "Bayesian-Based Iterative Method of Image Restoration," J. Opt. Soc. Am. 62, 55-59 (1972), and L. B. Lucy, "An Iterative Technique for the Rectification of Observed Distributions," Astron. J. 79, 745-765 (1974). Foreseen applications include superresolution and 3-D fluorescence microscopy. This paper presents further simulation testing of this algorithm and a preliminary experiment with a defocused camera. The simulations show quantified resolution improvement as a function of iteration number, and they show qualitatively the trend in limitations on restored resolution when noise is present in the data. Also shown are results of a simulation in restoring missing-cone information for 3-D imaging. Conclusions are in support of the feasibility of using these methods with real systems, while computational cost and timing estimates indicate that it should be realistic to implement these methods. It is suggested in the Appendix that future extensions to the maximum likelihood based derivation of this algorithm will address some of the limitations that are experienced with the nonextended form of the algorithm presented here.
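A minimal 1-D sketch of the Richardson-Lucy iteration discussed here, using an illustrative two-point signal and Gaussian PSF rather than microscopy data:

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=100):
    """Richardson-Lucy update: estimate <- estimate * (flipped PSF conv (observed / (PSF conv estimate)))."""
    psf = psf / psf.sum()
    psf_flipped = psf[::-1]
    estimate = np.full_like(observed, observed.mean())   # flat positive start
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)    # guard against division by zero
        estimate = estimate * np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# Blur two point sources with a Gaussian PSF, then restore.
truth = np.zeros(64)
truth[20], truth[28] = 1.0, 0.5
psf = np.exp(-0.5 * (np.arange(-6, 7) / 2.0) ** 2)
observed = np.convolve(truth, psf / psf.sum(), mode="same")
restored = richardson_lucy(observed, psf, n_iter=200)
```

The multiplicative update keeps the estimate nonnegative, one of the properties that makes the maximum likelihood formulation attractive for photon-limited fluorescence data.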

  18. A Simple and Efficient Computational Approach to Chafed Cable Time-Domain Reflectometry Signature Prediction

    NASA Technical Reports Server (NTRS)

    Kowalski, Marc Edward

    2009-01-01

    A method for the prediction of time-domain signatures of chafed coaxial cables is presented. The method is quasi-static in nature, and is thus efficient enough to be included in inference and inversion routines. Unlike previous models proposed, no restriction on the geometry or size of the chafe is required in the present approach. The model is validated and its speed is illustrated via comparison to simulations from a commercial, three-dimensional electromagnetic simulator.

  19. Dosimetry and microdosimetry using COTS ICs: A comparative study

    NASA Technical Reports Server (NTRS)

    Scheick, L.; Swift, G.; Guertin, S.; Roth, D.; McNulty, P.; Nguyen, D.

    2002-01-01

    A new method using an array of MOS transistors for measuring dose absorbed from ionizing radiation is compared to previous dosimetric methods. The accuracy and precision of dosimetry based on COTS SRAMs, DRAMs, and WPROMs are compared and contrasted. Applications of these devices in various space missions will be discussed. TID results are presented for this summary and microdosimetric results will be added to the full paper. Finally, an analysis of the optimal condition for a digital dosimeter will be presented.

  20. Reference Values for Spirometry Derived Using Lambda, Mu, Sigma (LMS) Method in Korean Adults: in Comparison with Previous References.

    PubMed

    Jo, Bum Seak; Myong, Jun Pyo; Rhee, Chin Kook; Yoon, Hyoung Kyu; Koo, Jung Wan; Kim, Hyoung Ryoul

    2018-01-15

    The present study aimed to update the prediction equations for spirometry and their lower limits of normal (LLN) by using the lambda, mu, sigma (LMS) method and to compare the outcomes with the values of previous spirometric reference equations. Spirometric data of 10,249 healthy non-smokers (8,776 females) were extracted from the fourth and fifth versions of the Korea National Health and Nutrition Examination Survey (KNHANES IV, 2007-2009; V, 2010-2012). Reference equations were derived using the LMS method which allows modeling skewness (lambda [L]), mean (mu [M]), and coefficient of variation (sigma [S]). The outcome equations were compared with previous reference values. Prediction equations were presented in the following form: predicted value = e{a + b × ln(height) + c × ln(age) + M - spline}. The new predicted values for spirometry and their LLN derived using the LMS method were shown to more accurately reflect transitions in pulmonary function in young adults than previous prediction equations derived using conventional regression analysis in 2013. There were partial discrepancies between the new reference values and the reference values from the Global Lung Function Initiative in 2012. The results should be interpreted with caution for young adults and elderly males, particularly in terms of the LLN for forced expiratory volume in one second/forced vital capacity in elderly males. Serial spirometry follow-up, together with correlations with other clinical findings, should be emphasized in evaluating the pulmonary function of individuals. Future studies are needed to improve the accuracy of reference data and to develop continuous reference values for spirometry across all ages. © 2018 The Korean Academy of Medical Sciences.
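The quoted prediction form and its LLN can be sketched as follows; the coefficients and LMS parameters below are hypothetical placeholders, not the published Korean reference values:

```python
import math

def predicted_value(height_cm, age_yr, a=-10.0, b=2.2, c=-0.1, m_spline=0.0):
    """Predicted value = exp(a + b*ln(height) + c*ln(age) + M-spline).
    Coefficients here are illustrative placeholders only."""
    return math.exp(a + b * math.log(height_cm) + c * math.log(age_yr) + m_spline)

def lower_limit_of_normal(mu, sigma, lam, z=-1.645):
    """5th-percentile LLN from LMS parameters: M * (1 + L*S*z)**(1/L),
    reducing to M * exp(S*z) as L -> 0 (lognormal limit)."""
    if abs(lam) < 1e-9:
        return mu * math.exp(sigma * z)
    return mu * (1.0 + lam * sigma * z) ** (1.0 / lam)

pred = predicted_value(170.0, 40.0)                 # hypothetical subject
lln = lower_limit_of_normal(pred, sigma=0.12, lam=0.5)
```

Because L and S vary smoothly with age in an LMS model, the LLN tracks the skewness and spread of the healthy population rather than sitting a fixed fraction below the predicted mean, which is why such references handle young adults better than conventional regression.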

  1. Affine Projection Algorithm with Improved Data-Selective Method Using the Condition Number

    NASA Astrophysics Data System (ADS)

    Ban, Sung Jun; Lee, Chang Woo; Kim, Sang Woo

    Recently, a data-selective method has been proposed to achieve low misalignment in affine projection algorithm (APA) by keeping the condition number of an input data matrix small. We present an improved method, and a complexity reduction algorithm for the APA with the data-selective method. Experimental results show that the proposed algorithm has lower misalignment and a lower condition number for an input data matrix than both the conventional APA and the APA with the previous data-selective method.
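The data-selection idea, accepting an input data matrix for the APA update only when its condition number is small, can be sketched with NumPy; the matrix sizes, threshold, and test matrices are illustrative assumptions, not the paper's values:

```python
import numpy as np

def accept_input(X, threshold=100.0):
    """Accept an input data matrix for the APA update only if it is well
    conditioned: cond(X) = s_max / s_min below the threshold."""
    s = np.linalg.svd(X, compute_uv=False)   # singular values, descending
    return bool(s[0] / s[-1] <= threshold)

rng = np.random.default_rng(0)
well = rng.standard_normal((8, 4))           # generic matrix: well conditioned
ill = well.copy()
ill[:, 3] = ill[:, 2] + 1e-6 * rng.standard_normal(8)   # nearly dependent columns
```

Rejecting ill-conditioned input matrices avoids the large, noise-amplifying updates they would otherwise produce, which is the mechanism behind the lower misalignment reported above.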

  2. Combined gradient projection/single component artificial force induced reaction (GP/SC-AFIR) method for an efficient search of minimum energy conical intersection (MECI) geometries

    NASA Astrophysics Data System (ADS)

    Harabuchi, Yu; Taketsugu, Tetsuya; Maeda, Satoshi

    2017-04-01

    We report a new approach for searching automatically for minimum energy conical intersection (MECI) structures. The gradient projection (GP) method and the single-component artificial force induced reaction (SC-AFIR) method were combined in the present approach. As case studies, MECIs of benzene and naphthalene between their ground and first excited singlet electronic states (S0/S1-MECIs) were explored. All S0/S1-MECIs reported previously were obtained automatically. Furthermore, the number of force calculations was reduced compared to that required in the previous search. Improved convergence in a step in which various geometrical displacements are induced by SC-AFIR would contribute to the cost reduction.

  3. 2D photonic crystal complete band gap search using a cyclic cellular automaton refinement

    NASA Astrophysics Data System (ADS)

    González-García, R.; Castañón, G.; Hernández-Figueroa, H. E.

    2014-11-01

    We present a refinement method based on a cyclic cellular automaton (CCA) that simulates a crystallization-like process, aided by a heuristic evolutionary method called differential evolution (DE), used to perform an ordered search of full photonic band gaps (FPBGs) in a 2D photonic crystal (PC). The solution is proposed as a combinatorial optimization of the elements in a binary array. These elements represent the existence or absence of a dielectric material surrounded by air, thus representing a general geometry whose search space is defined by the number of elements in such an array. A block-iterative frequency-domain method was used to compute the FPBGs on a PC, when present. DE has proved to be useful in combinatorial problems, and we also present an implementation feature that takes advantage of the periodic nature of PCs to enhance the convergence of this algorithm. Finally, we used this methodology to find a PC structure with a 19% bandgap-to-midgap ratio without requiring previous information about suboptimal configurations, and we made a statistical study of how it is affected by disorder at the borders of the structure, compared with a previous work that uses a genetic algorithm.

  4. Presentation a New Model to Measure National Power of the Countries

    NASA Astrophysics Data System (ADS)

    Hafeznia, Mohammad Reza; Hadi Zarghani, Seyed; Ahmadipor, Zahra; Roknoddin Eftekhari, Abdelreza

    In this research, based on an assessment of previous models for the evaluation of national power, a new model is presented to measure national power that improves substantially on its predecessors. Its advantages include attention to all aspects of national power (economic, social, cultural, political, military, astro-space, territorial, scientific-technological, and transnational), the use of 87 factors, and an emphasis on new, strategically relevant variables appropriate to the current time. In addition, by using the Delphi method and drawing on expert opinion to determine the role and weight of the variables affecting national power, the model offers the option of mapping the global power structure, a further advantage over previous models.

  5. Digital multishaker modal testing

    NASA Technical Reports Server (NTRS)

    Blair, M.; Craig, R. R., Jr.

    1983-01-01

    A review of several modal testing techniques is made, along with brief discussions of their advantages and limitations. A new technique is presented which overcomes many of the previous limitations. Several simulated experiments are included to verify the validity and accuracy of the new method. Conclusions are drawn from the simulation studies and recommendations for further work are presented. The complete computer code configured for the simulation study is presented.

  6. Bulk Enthalpy Calculations in the Arc Jet Facility at NASA ARC

    NASA Technical Reports Server (NTRS)

    Thompson, Corinna S.; Prabhu, Dinesh; Terrazas-Salinas, Imelda; Mach, Jeffrey J.

    2011-01-01

    The Arc Jet Facilities at NASA Ames Research Center generate test streams with enthalpies ranging from 5 MJ/kg to 25 MJ/kg. The present work describes a rigorous method, based on equilibrium thermodynamics, for calculating the bulk enthalpy of the flow produced in two of these facilities. The motivation for this work is to determine a dimensionally-correct formula for calculating the bulk enthalpy that is at least as accurate as the conventional formulas that are currently used. Unlike previous methods, the new method accounts for the amount of argon that is present in the flow. Comparisons are made with bulk enthalpies computed from an energy balance method. An analysis of primary facility operating parameters and their associated uncertainties is presented in order to further validate the enthalpy calculations reported herein.

  7. A second-order accurate immersed boundary-lattice Boltzmann method for particle-laden flows

    NASA Astrophysics Data System (ADS)

    Zhou, Qiang; Fan, Liang-Shih

    2014-07-01

    A new immersed boundary-lattice Boltzmann method (IB-LBM) is presented for fully resolved simulations of incompressible viscous flows laden with rigid particles. The immersed boundary method (IBM) recently developed by Breugem (2012) [19] is adopted in the present method, including the retraction technique, the multi-direct forcing method and the direct account of the inertia of the fluid contained within the particles. The present IB-LBM is, however, formulated with further improvement through the implementation of high-order Runge-Kutta schemes in the coupled fluid-particle interaction. The major challenge in implementing high-order Runge-Kutta schemes in the LBM is that flow information such as density and velocity cannot be directly obtained at a fractional time step, since the LBM only provides the flow information at integer time steps. This challenge is overcome in the present IB-LBM by extrapolating the flow field around particles from the known flow field at the previous integer time step. The newly calculated fluid-particle interactions from the previous fractional time steps of the current integer time step are also accounted for in the extrapolation. The IB-LBM with high-order Runge-Kutta schemes developed in this study is validated by several benchmark applications. It is demonstrated, for the first time, that the IB-LBM has the capacity to resolve the translational and rotational motion of particles with second-order accuracy. The optimal retraction distances for spheres and tubes that help the method achieve second-order accuracy are found to be around 0.30 and -0.47 times the lattice spacing, respectively. Simulations of the Stokes flow through a simple cubic lattice of rotational spheres indicate that the lift force produced by the Magnus effect can be very significant relative to the magnitude of the drag force when practical rotating speeds of the spheres are encountered. This finding may lead to more comprehensive studies of the effect of particle rotation on fluid-solid drag laws. It is also demonstrated that, when the third-order or the fourth-order Runge-Kutta scheme is used, the numerical stability of the present IB-LBM is better than that of all methods in the literature, including the previous IB-LBMs and also methods combining the IBM with a traditional incompressible Navier-Stokes solver.

  8. Automated cloud screening of AVHRR imagery using split-and-merge clustering

    NASA Technical Reports Server (NTRS)

    Gallaudet, Timothy C.; Simpson, James J.

    1991-01-01

    Previous methods to segment clouds from ocean in AVHRR imagery have shown varying degrees of success, with nighttime approaches being the most limited. An improved method of automatic image segmentation, the principal component transformation split-and-merge clustering (PCTSMC) algorithm, is presented and applied to cloud screening of both nighttime and daytime AVHRR data. The method combines spectral differencing, the principal component transformation, and split-and-merge clustering to sample objectively the natural classes in the data. This segmentation method is then augmented by supervised classification techniques to screen clouds from the imagery. Comparisons with other nighttime methods demonstrate its improved capability in this application. The sensitivity of the method to clustering parameters is presented; the results show that the method is insensitive to the split-and-merge thresholds.

  9. An extended Lagrangian method

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing

    1993-01-01

    A unique formulation for describing fluid motion is presented. The method, referred to as the 'extended Lagrangian method', is interesting from both theoretical and numerical points of view. The formulation offers accuracy in numerical solution by avoiding the numerical diffusion that results from the mixing of fluxes in the Eulerian description. Meanwhile, it also avoids the inaccuracy incurred due to the geometry and variable interpolations used by previous Lagrangian methods. The present method is general and capable of treating subsonic flows as well as supersonic flows. The method proposed in this paper is robust and stable. It automatically adapts to flow features without resorting to clustering, thereby maintaining rather uniform grid spacing throughout and a large time step. Moreover, the method is shown to resolve multidimensional discontinuities with a high level of accuracy, similar to that found in 1D problems.

  10. Structural characterization of product ions of regulated veterinary drugs by electrospray ionization and quadrupole time-of-flight mass spectrometry (part 3) Anthelmintics, thyreostats, and flukicides

    USDA-ARS?s Scientific Manuscript database

    RATIONALE: Previously we have reported a liquid chromatography tandem mass spectrometry method for the identification and quantification of regulated veterinary drugs. The methods used three selected transition ions but most of these ions lacked structural characterization. The work presented here ...

  11. Use of a New Set of Linguistic Features to Improve Automatic Assessment of Text Readability

    ERIC Educational Resources Information Center

    Yoshimi, Takehiko; Kotani, Katsunori; Isahara, Hitoshi

    2012-01-01

    The present paper proposes and evaluates a readability assessment method designed for Japanese learners of EFL (English as a foreign language). The proposed readability assessment method is constructed by a regression algorithm using a new set of linguistic features that were employed separately in previous studies. The results showed that the…

  12. The exact solution of the monoenergetic transport equation for critical cylinders

    NASA Technical Reports Server (NTRS)

    Westfall, R. M.; Metcalf, D. R.

    1972-01-01

    An analytic solution for the critical, monoenergetic, bare, infinite cylinder is presented. The solution is obtained by modifying a previous development based on a neutron density transform and Case's singular eigenfunction method. Numerical results for critical radii and the neutron density as a function of position are included and compared with the results of other methods.

  13. An Efficient Method for the Retrieval of Objects by Topological Relations in Spatial Database Systems.

    ERIC Educational Resources Information Center

    Lin, P. L.; Tan, W. H.

    2003-01-01

    Presents a new method to improve the performance of query processing in a spatial database. Experiments demonstrated that performance of database systems can be improved because both the number of objects accessed and number of objects requiring detailed inspection are much less than those in the previous approach. (AEF)

  14. Boundary conditions for simulating large SAW devices using ANSYS.

    PubMed

    Peng, Dasong; Yu, Fengqi; Hu, Jian; Li, Peng

    2010-08-01

    In this report, we propose improved substrate left and right boundary conditions for simulating SAW devices using ANSYS. Compared with previous methods, the proposed method can greatly reduce computation time. Furthermore, the longer the distance from the first reflector to the last one, the greater the reduction in computation time. To verify the proposed method, a design example is presented with a device center frequency of 971.14 MHz.

  15. MEMS/ECD Method for Making Bi(2-x)Sb(x)Te3 Thermoelectric Devices

    NASA Technical Reports Server (NTRS)

    Lim, James; Huang, Chen-Kuo; Ryan, Margaret; Snyder, G. Jeffrey; Herman, Jennifer; Fleurial, Jean-Pierre

    2008-01-01

    A method of fabricating Bi(2-x)Sb(x)Te3-based thermoelectric microdevices involves a combination of (1) techniques used previously in the fabrication of integrated circuits and of microelectromechanical systems (MEMS) and (2) a relatively inexpensive MEMS-oriented electrochemical-deposition (ECD) technique. The present method overcomes the limitations of prior MEMS fabrication techniques and makes it possible to satisfy requirements.

  16. Rapid screening of tannase producing microbes by using natural tannin

    PubMed Central

    Jana, Arijit; Maity, Chiranjit; Halder, Suman Kumar; Pati, Bikas Ranjan; Mondal, Keshab Chandra; Mohapatra, Pradeep Kumar Das

    2012-01-01

    Use of natural tannin in the screening of tannase-producing microbes is promising. The present work describes the feasibility and advantages of the newly formulated method over previously reported methods. Tannin isolated from Terminalia belerica Roxb. (Bahera) was used to differentiate between tanninolytic and nontanninolytic microbes. The method is simple, sensitive and superior for the rapid screening and isolation of tannase-producing microbes. PMID:24031931

  18. FLARE STARS—A FAVORABLE OBJECT FOR STUDYING MECHANISMS OF NONTHERMAL ASTROPHYSICAL PHENOMENA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oks, E.; Gershberg, R. E.

    2016-03-01

    We present a spectroscopic method for diagnosing a low-frequency electrostatic plasma turbulence (LEPT) in plasmas of flare stars. This method had been previously developed by one of us and successfully applied to diagnosing the LEPT in solar flares. In distinction to our previous applications of the method, here we use the latest advances in the theory of the Stark broadening of hydrogen spectral lines. By analyzing observed emission Balmer lines, we show that it is very likely that the LEPT was developed in several flares of AD Leo, as well as in one flare of EV Lac. We found the LEPT (though of different field strengths) both in the explosive/impulsive phase and at the phase of the maximum, as well as at the gradual phase of the stellar flares. While for solar flares our method allows diagnosing the LEPT only in the most powerful flares, for the flare stars it seems that the method allows revealing the LEPT practically in every flare. It should be important to obtain new and better spectrograms of stellar flares, allowing their analysis by the method outlined in the present paper. This can be the most favorable way to the detailed understanding of the nature of nonthermal astrophysical phenomena.

  19. The Schwinger Variational Method

    NASA Technical Reports Server (NTRS)

    Huo, Winifred M.

    1995-01-01

    Variational methods have proven invaluable in theoretical physics and chemistry, both for bound state problems and for the study of collision phenomena. For collisional problems they can be grouped into two types: those based on the Schroedinger equation and those based on the Lippmann-Schwinger equation. The application of the Schwinger variational (SV) method to e-molecule collisions and photoionization has been reviewed previously. The present chapter discusses the implementation of the SV method as applied to e-molecule collisions.

  20. Evolving bipartite authentication graph partitions

    DOE PAGES

    Pope, Aaron Scott; Tauritz, Daniel Remy; Kent, Alexander D.

    2017-01-16

    As large scale enterprise computer networks become more ubiquitous, finding the appropriate balance between user convenience and user access control is an increasingly challenging proposition. Suboptimal partitioning of users’ access and available services contributes to the vulnerability of enterprise networks. Previous edge-cut partitioning methods unduly restrict users’ access to network resources. This paper introduces a novel method of network partitioning superior to the current state-of-the-art which minimizes user impact by providing alternate avenues for access that reduce vulnerability. Networks are modeled as bipartite authentication access graphs and a multi-objective evolutionary algorithm is used to simultaneously minimize the size of large connected components while minimizing overall restrictions on network users. Lastly, results are presented on a real world data set that demonstrate the effectiveness of the introduced method compared to previous naive methods.

  2. Three novel approaches to structural identifiability analysis in mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2016-05-06

    Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. 
As method development of structural identifiability techniques for mixed-effects models has received very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a previously unavailable way of handling structural identifiability in mixed-effects models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. Immersed boundary-simplified lattice Boltzmann method for incompressible viscous flows

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Shu, C.; Tan, D.

    2018-05-01

    An immersed boundary-simplified lattice Boltzmann method is developed in this paper for simulations of two-dimensional incompressible viscous flows with immersed objects. Assisted by the fractional step technique, the problem is resolved in a predictor-corrector scheme. The predictor step solves the flow field without considering immersed objects, and the corrector step imposes the effect of immersed boundaries on the velocity field. Different from the previous immersed boundary-lattice Boltzmann method which adopts the standard lattice Boltzmann method (LBM) as the flow solver in the predictor step, a recently developed simplified lattice Boltzmann method (SLBM) is applied in the present method to evaluate intermediate flow variables. Compared to the standard LBM, SLBM requires lower virtual memories, facilitates the implementation of physical boundary conditions, and shows better numerical stability. The boundary condition-enforced immersed boundary method, which accurately ensures no-slip boundary conditions, is implemented as the boundary solver in the corrector step. Four typical numerical examples are presented to demonstrate the stability, the flexibility, and the accuracy of the present method.

  4. Scene-based nonuniformity correction with reduced ghosting using a gated LMS algorithm.

    PubMed

    Hardie, Russell C; Baxley, Frank; Brys, Brandon; Hytla, Patrick

    2009-08-17

    In this paper, we present a scene-based nonuniformity correction (NUC) method using a modified adaptive least mean square (LMS) algorithm with a novel gating operation on the updates. The gating is designed to significantly reduce the ghosting artifacts produced by many scene-based NUC algorithms by halting updates when temporal variation is lacking. We define the algorithm and present a number of experimental results to demonstrate the efficacy of the proposed method in comparison to several previously published methods, including other LMS- and constant-statistics-based methods. The experimental results include simulated imagery and a real infrared image sequence. We show that the proposed method significantly reduces ghosting artifacts but has a slightly longer convergence time. (c) 2009 Optical Society of America
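The gating idea can be pictured with a toy sketch. This is an illustrative stand-in, not the authors' algorithm: a per-pixel offset is adapted by LMS toward a local spatial mean, and the update is halted (gated) whenever a pixel shows too little frame-to-frame variation; the neighborhood size, step size `mu`, and `gate_threshold` are all assumptions of the sketch.

```python
def gated_lms_nuc(frames, mu=0.05, gate_threshold=1.0):
    """Adapt a per-pixel offset toward the local spatial mean (LMS),
    gating the update off when a pixel shows little temporal change,
    so that static scenes do not burn in as ghosting artifacts."""
    n = len(frames[0])
    offset = [0.0] * n              # per-pixel offset estimate
    prev = list(frames[0])
    corrected_frames = []
    for frame in frames:
        corrected = [frame[i] - offset[i] for i in range(n)]
        corrected_frames.append(corrected)
        for i in range(n):
            # desired signal: mean of the corrected spatial neighborhood
            lo, hi = max(0, i - 1), min(n, i + 2)
            desired = sum(corrected[lo:hi]) / (hi - lo)
            error = corrected[i] - desired
            # gate: only update when the scene actually changed at this pixel
            if abs(frame[i] - prev[i]) > gate_threshold:
                offset[i] += mu * error
        prev = list(frame)
    return corrected_frames, offset
```

Running this on frames with a fixed-pattern bias on one pixel and a time-varying scene drives that pixel's offset estimate toward the bias, while a static scene would leave all offsets frozen.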

  5. A proof of the DBRF-MEGN method, an algorithm for deducing minimum equivalent gene networks

    PubMed Central

    2011-01-01

    Background We previously developed the DBRF-MEGN (difference-based regulation finding-minimum equivalent gene network) method, which deduces the most parsimonious signed directed graphs (SDGs) consistent with expression profiles of single-gene deletion mutants. However, until the present study, we have not presented the details of the method's algorithm or a proof of the algorithm. Results We describe in detail the algorithm of the DBRF-MEGN method and prove that the algorithm deduces all of the exact solutions of the most parsimonious SDGs consistent with expression profiles of gene deletion mutants. Conclusions The DBRF-MEGN method provides all of the exact solutions of the most parsimonious SDGs consistent with expression profiles of gene deletion mutants. PMID:21699737

  6. A Mapmark method of standard setting as implemented for the National Assessment Governing Board.

    PubMed

    Schulz, E Matthew; Mitzel, Howard C

    2011-01-01

    This article describes a Mapmark standard setting procedure, developed under contract with the National Assessment Governing Board (NAGB). The procedure enhances the bookmark method with spatially representative item maps, holistic feedback, and an emphasis on independent judgment. A rationale for these enhancements, and the bookmark method, is presented, followed by a detailed description of the materials and procedures used in a meeting to set standards for the 2005 National Assessment of Educational Progress (NAEP) in Grade 12 mathematics. The use of difficulty-ordered content domains to provide holistic feedback is a particularly novel feature of the method. Process evaluation results comparing Mapmark to Angoff-based methods previously used for NAEP standard setting are also presented.

  7. Further optimization of culture method for rat keratinocytes: titration of glucose and sodium chloride.

    PubMed

    Oku, H; Yamashita, M; Iwasaki, H; Chinen, I

    1999-02-01

    The present study further improved the serum-free method of culturing rat keratinocytes. To obtain the best growth of rat keratinocytes, we modified our previous serum-free medium (MCDB153-based medium), particularly the amounts of glucose and sodium chloride (NaCl). Titration experiments showed the optimal concentrations to be 0.8 mM for glucose and 100 mM for NaCl. This modification eliminated the requirement for albumin, which had been essential for colony formation when our previous medium was used. Titration of glucose and NaCl, followed by adjustment of essential amino acids and growth factors, produced a new formulation. More satisfactory growth was achieved with the new medium than with the previous medium. Accumulation of monoalkyldiacylglycerol (MADAG), an unusual lipid profile, was consistently noted in this study. A tendency toward normalization was, however, noted in the neutral lipid profile of keratinocytes cultivated in the new medium: lower production of MADAG was obtained with the new formulation than with the previous one.

  8. Direct application of Padé approximant for solving nonlinear differential equations.

    PubMed

    Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Garcia-Gervacio, Jose Luis; Huerta-Chua, Jesus; Morales-Mendoza, Luis Javier; Gonzalez-Lee, Mario

    2014-01-01

    This work presents a direct procedure for applying the Padé method to find approximate solutions of nonlinear differential equations. Moreover, we present some case studies showing the strength of the method in generating highly accurate rational approximate solutions compared to other semi-analytical methods. The types of nonlinear equations tested are: a highly nonlinear boundary value problem, a differential-algebraic oscillator problem, and an asymptotic problem. The highly accurate, handy approximations obtained by the direct application of the Padé method show the high potential of the proposed scheme to approximate a wide variety of problems. What is more, direct application of the Padé approximant avoids the prior application of an approximative method such as the Taylor series method, homotopy perturbation method, Adomian decomposition method, homotopy analysis method, or variational iteration method, among others, as a tool for obtaining a power series solution to post-treat with the Padé approximant.
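For readers unfamiliar with the mechanics, the [L/M] Padé approximant of a power series can be computed by solving a small linear system for the denominator coefficients and back-substituting for the numerator. The following is a minimal exact-arithmetic sketch of that standard construction, not the paper's procedure:

```python
from fractions import Fraction

def gauss_solve(A, b):
    """Tiny exact Gauss-Jordan elimination with partial pivoting."""
    n = len(b)
    M_ = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M_[r][col]))
        M_[col], M_[piv] = M_[piv], M_[col]
        for r in range(n):
            if r != col and M_[r][col] != 0:
                f = M_[r][col] / M_[col][col]
                M_[r] = [a - f * p for a, p in zip(M_[r], M_[col])]
    return [M_[i][n] / M_[i][i] for i in range(n)]

def pade(c, L, M):
    """[L/M] Pade approximant p(x)/q(x) (normalized so q0 = 1) from the
    Taylor coefficients c[0..L+M]: solve
    sum_{j=1..M} q_j * c[L+i-j] = -c[L+i],  i = 1..M,
    then p_i = sum_{j=0..min(i,M)} q_j * c[i-j]."""
    c = [Fraction(x) for x in c]
    A = [[c[L + i - j] if 0 <= L + i - j else Fraction(0)
          for j in range(1, M + 1)] for i in range(1, M + 1)]
    b = [-c[L + i] for i in range(1, M + 1)]
    q = [Fraction(1)] + gauss_solve(A, b)
    p = [sum(q[j] * c[i - j] for j in range(min(i, M) + 1)) for i in range(L + 1)]
    return p, q
```

For example, the [2/2] approximant of exp(x), built from the coefficients 1, 1, 1/2, 1/6, 1/24, is (1 + x/2 + x²/12)/(1 − x/2 + x²/12), a classical result.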

  9. Method of production of pure hydrogen near room temperature from aluminum-based hydride materials

    DOEpatents

    Pecharsky, Vitalij K.; Balema, Viktor P.

    2004-08-10

    The present invention provides a cost-effective method of producing pure hydrogen gas from hydride-based solid materials. The hydride-based solid material is mechanically processed in the presence of a catalyst to obtain pure gaseous hydrogen. Unlike previous methods, hydrogen may be obtained from the solid material without heating, and without the addition of a solvent during processing. The described method of hydrogen production is useful for energy conversion and production technologies that consume pure gaseous hydrogen as a fuel.

  10. Some practical observations on the predictor jump method for solving the Laplace equation

    NASA Astrophysics Data System (ADS)

    Duque-Carrillo, J. F.; Vega-Fernández, J. M.; Peña-Bernal, J. J.; Rossell-Bueno, M. A.

    1986-01-01

    The best conditions for the application of the predictor jump (PJ) method in the solution of the Laplace equation are discussed and some practical considerations for applying this new iterative technique are presented. The PJ method was remarked on in a previous article entitled ``A new way for solving Laplace's problem (the predictor jump method)'' [J. M. Vega-Fernández, J. F. Duque-Carrillo, and J. J. Peña-Bernal, J. Math. Phys. 26, 416 (1985)].

  11. Generating human-like movements on an anthropomorphic robot using an interior point method

    NASA Astrophysics Data System (ADS)

    Costa e Silva, E.; Araújo, J. P.; Machado, D.; Costa, M. F.; Erlhagen, W.; Bicho, E.

    2013-10-01

    In previous work we have presented a model for generating human-like arm and hand movements on an anthropomorphic robot involved in human-robot collaboration tasks. This model was inspired by the Posture-Based Motion-Planning Model of human movements. Numerical results and simulations for reach-to-grasp movements with two different grip types have been presented previously. In this paper we extend our model in order to address the generation of more complex movement sequences which are challenged by scenarios cluttered with obstacles. The numerical results were obtained using the IPOPT solver, which was integrated in our MATLAB simulator of an anthropomorphic robot.

  12. Term Cancellations in Computing Floating-Point Gröbner Bases

    NASA Astrophysics Data System (ADS)

    Sasaki, Tateaki; Kako, Fujio

    We discuss the term cancellation that makes floating-point Gröbner basis computation unstable, and show that error accumulation is never negligible in our previous method. We then present a new method, which removes accumulated errors as far as possible by reducing matrices constructed from coefficient vectors via Gaussian elimination. The method reveals the amount of term cancellation caused by approximately linearly dependent relations among the input polynomials.
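The connection between approximate linear dependence and large term cancellation can be illustrated with a generic sketch (not the authors' error-removal procedure): row-reduce the coefficient vectors, and flag any vector whose entries collapse after elimination against the earlier rows, since such a vector is nearly a linear combination of them.

```python
def near_dependence(vectors, tol=1e-8):
    """Reduce each coefficient vector against previously kept pivot rows;
    a vector whose residual shrinks below tol * (its largest original
    entry) is approximately linearly dependent on the rows before it."""
    pivots = []      # list of (pivot_column, reduced_row)
    dependent = []
    for idx, v in enumerate(vectors):
        row = v[:]
        scale = max(abs(x) for x in row)
        for pc, pr in pivots:
            # zero out this row's entry in the pivot column of pr
            f = row[pc] / pr[pc]
            row = [a - f * b for a, b in zip(row, pr)]
        if max(abs(x) for x in row) <= tol * scale:
            dependent.append(idx)
        else:
            pc = max(range(len(row)), key=lambda k: abs(row[k]))
            pivots.append((pc, row))
    return dependent
```

Subtracting two nearly dependent rows cancels their leading digits, which is exactly the mechanism by which floating-point Gröbner basis steps lose accuracy.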

  13. Simultaneous separation and quantitation of amino acids and polyamines of forest tree tissues and cell cultures within a single high-performance liquid chromatography run using dansyl derivatization

    Treesearch

    Rakesh Minocha; Stephanie Long

    2004-01-01

    The objective of the present study was to develop a rapid HPLC method for simultaneous separation and quantitation of dansylated amino acids and common polyamines in the same matrix for analyzing forest tree tissues and cell cultures. The major modifications incorporated into this method as compared to previously published HPLC methods for separation of only dansyl...

  14. Jet production in the CoLoRFulNNLO method: Event shapes in electron-positron collisions

    NASA Astrophysics Data System (ADS)

    Del Duca, Vittorio; Duhr, Claude; Kardos, Adam; Somogyi, Gábor; Szőr, Zoltán; Trócsányi, Zoltán; Tulipánt, Zoltán

    2016-10-01

    We present the CoLoRFulNNLO method to compute higher order radiative corrections to jet cross sections in perturbative QCD. We apply our method to the computation of event shape observables in electron-positron collisions at NNLO accuracy and validate our code by comparing our predictions to previous results in the literature. We also calculate for the first time jet cone energy fraction at NNLO.

  15. Efficient path-based computations on pedigree graphs with compact encodings

    PubMed Central

    2012-01-01

    A pedigree is a diagram of family relationships, and it is often used to determine the mode of inheritance (dominant, recessive, etc.) of genetic diseases. Along with rapidly growing knowledge of genetics and accumulation of genealogy information, pedigree data is becoming increasingly important. In large pedigree graphs, path-based methods for efficiently computing genealogical measurements, such as inbreeding and kinship coefficients of individuals, depend on efficient identification and processing of paths. In this paper, we propose a new compact path encoding scheme on large pedigrees, accompanied by an efficient algorithm for identifying paths. We demonstrate the utilization of our proposed method by applying it to the inbreeding coefficient computation. We present time and space complexity analysis, and also demonstrate the efficiency of our method for evaluating inbreeding coefficients, as compared to previous methods, through experimental results using pedigree graphs with real and synthetic data. Both theoretical and experimental results demonstrate that our method is more scalable and efficient than previous methods in terms of time and space requirements. PMID:22536898
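As context for the inbreeding-coefficient application, here is the textbook recursive kinship computation that path-based methods aim to accelerate; it is not the paper's compact path-encoding scheme, and the assumption that individuals are numbered so parents precede children is ours.

```python
from functools import lru_cache

def make_kinship(parents):
    """parents: dict id -> (father, mother), or None for founders.
    Assumes ids form a topological order (parents have smaller ids)."""
    @lru_cache(maxsize=None)
    def phi(a, b):
        if a == b:
            # self-kinship: (1 + F_a) / 2
            if parents.get(a) is None:
                return 0.5
            f, m = parents[a]
            return 0.5 * (1.0 + phi(f, m))
        if a > b:                 # recurse on the younger individual
            a, b = b, a
        if parents.get(b) is None:
            return 0.0            # founders are assumed unrelated
        f, m = parents[b]
        return 0.5 * (phi(a, f) + phi(a, m))
    return phi

def inbreeding(parents, x):
    """Inbreeding coefficient of x = kinship of its parents."""
    f, m = parents[x]
    return make_kinship(parents)(f, m)
```

For a full-sib mating (parents 1 and 2, children 3 and 4, offspring 5 of 3 × 4), this gives the classical value F(5) = 1/4.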

  16. A Multi-Faceted Analysis of a New Therapeutic Model of Linking Appraisals to Affective Experiences.

    ERIC Educational Resources Information Center

    McCarthy, Christopher; And Others

    I. Roseman, M. Spindel, and P. Jose (1990) had previously demonstrated that specific appraisals of events led to discrete emotional responses, but this model has not been widely tested by other research teams using alternative research methods. The present study utilized four qualitative research methods, taught by Patti Lather at the 1994…

  17. Internal wave energy flux from density perturbations in nonlinear stratifications

    NASA Astrophysics Data System (ADS)

    Lee, Frank M.; Allshouse, Michael R.; Swinney, Harry L.; Morrison, P. J.

    2017-11-01

    Tidal flow over the topography at the bottom of the ocean, whose density varies with depth, generates internal gravity waves that have a significant impact on the energy budget of the ocean. Thus, understanding the energy flux (J = p v) is important, but it is difficult to measure simultaneously the pressure and velocity perturbation fields, p and v . In a previous work, a Green's-function-based method was developed to calculate the instantaneous p, v , and thus J , given a density perturbation field for a constant buoyancy frequency N. Here we extend the previous analytic Green's function work to include nonuniform N profiles, namely the tanh-shaped and linear cases, because background density stratifications that occur in the ocean and some experiments are nonlinear. In addition, we present a finite-difference method for the general case where N has an arbitrary profile. Each method is validated against numerical simulations. The methods we present can be applied to measured density perturbation data by using our MATLAB graphical user interface EnergyFlux. PJM was supported by the U.S. Department of Energy Contract DE-FG05-80ET-53088. HLS and MRA were supported by ONR Grant No. N000141110701.

  18. Measuring global monopole velocities, one by one

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez-Eiguren, Asier; Urrestilla, Jon; Achúcarro, Ana, E-mail: asier.lopez@ehu.eus, E-mail: jon.urrestilla@ehu.eus, E-mail: achucar@lorentz.leidenuniv.nl

    We present an estimation of the average velocity of a network of global monopoles in a cosmological setting using large numerical simulations. In order to obtain the value of the velocity, we improve some already known methods and present a new one. This new method estimates individual global monopole velocities in a network by detecting each monopole's position in the lattice and following the path described by each one of them. Using our new estimate we can settle an open question previously posed in the literature: velocity-dependent one-scale (VOS) models for global monopoles predict two branches of scaling solutions, one with monopoles moving at subluminal speeds and one with monopoles moving at luminal speeds. Previous attempts to estimate monopole velocities had large uncertainties and were not able to settle that question. Our simulations find no evidence of a luminal branch. We also estimate the values of the parameters of the VOS model. With our new method we can also study the microphysics of the complicated dynamics of individual monopoles. Finally, we use our large simulation volume to compare the results from the different estimator methods, as well as to assess the validity of the numerical approximations made.

  19. An Alternate Set of Basis Functions for the Electromagnetic Solution of Arbitrarily-Shaped, Three-Dimensional, Closed, Conducting Bodies Using Method of Moments

    NASA Technical Reports Server (NTRS)

    Mackenzie, Anne I.; Baginski, Michael E.; Rao, Sadasiva M.

    2008-01-01

    In this work, we present an alternate set of basis functions, each defined over a pair of planar triangular patches, for the method of moments solution of electromagnetic scattering and radiation problems associated with arbitrarily-shaped, closed, conducting surfaces. The present basis functions are point-wise orthogonal to the pulse basis functions previously defined. The prime motivation for developing the present set of basis functions is to utilize them for the electromagnetic solution of dielectric bodies using a surface integral equation formulation which involves both electric and magnetic currents. However, in the present work, only the conducting body solution is presented and compared with other data.

  20. Model-Mapped RPA for Determining the Effective Coulomb Interaction

    NASA Astrophysics Data System (ADS)

    Sakakibara, Hirofumi; Jang, Seung Woo; Kino, Hiori; Han, Myung Joon; Kuroki, Kazuhiko; Kotani, Takao

    2017-04-01

    We present a new method to obtain a model Hamiltonian from first-principles calculations. The effective interaction contained in the model is determined on the basis of random phase approximation (RPA). In contrast to previous methods such as projected RPA and constrained RPA (cRPA), the new method named "model-mapped RPA" takes into account the long-range part of the polarization effect to determine the effective interaction in the model. After discussing the problems of cRPA, we present the formulation of the model-mapped RPA, together with a numerical test for the single-band Hubbard model of HgBa2CuO4.

  1. Back to the Agora: Marketing Foreign Admissions

    ERIC Educational Resources Information Center

    Armenio, Joseph A.

    1978-01-01

    In the face of declining full-time enrollments, colleges and universities are investigating markets other than those previously cultivated. The international student market presents a viable resource which, with integrated marketing methods, can offset the declining domestic student population. (Author)

  2. MBA: Is the Traditional Model Doomed?

    ERIC Educational Resources Information Center

    Lataif, Louis E.; And Others

    1992-01-01

    Presents 13 commentaries on a previously published case study about the value of a Master's of Business Administration to employers today. Critiques center on the case study method, theory-practice gap, and value of practical experience and include international perspectives. (SK)

  3. Estimation of the Operating Characteristics when the Test Information of the Old Test is not Constant II: Simple Sum Procedure of the Conditional P.D.F. Approach/Normal Approach Method Using Three Subtests of the Old Test. Research Report 80-4.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    The rationale behind the method of estimating the operating characteristics of discrete item responses when the test information of the Old Test is not constant was presented previously. In the present study, two subtests of the Old Test, i.e., Subtests 1 and 2, each of which has a different non-constant test information function, are used in…

  4. Distributed synchronization of networked drive-response systems: A nonlinear fixed-time protocol.

    PubMed

    Zhao, Wen; Liu, Gang; Ma, Xi; He, Bing; Dong, Yunfeng

    2017-11-01

    The distributed synchronization of networked drive-response systems is investigated in this paper. A novel nonlinear protocol is proposed to ensure that the tracking errors converge to zero in a fixed time. In comparison with previous synchronization methods, the present method considers more practical conditions, and the synchronization time does not depend on the arbitrary initial conditions but can be pre-assigned offline according to the task requirements. Finally, the feasibility and validity of the presented protocol are illustrated by a numerical simulation. Copyright © 2017. Published by Elsevier Ltd.
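The fixed-time property can be illustrated on a scalar error governed by a protocol of the common form de/dt = −α·sig(e)^p − β·sig(e)^q with 0 < p < 1 < q, whose settling time is bounded independently of the initial error. The gains, exponents, and Euler integration below are illustrative assumptions, not the paper's protocol:

```python
def simulate_fixed_time(e0, alpha=2.0, beta=2.0, p=0.5, q=1.5, dt=1e-3, t_end=5.0):
    """Euler-integrate de/dt = -alpha*sig(e)^p - beta*sig(e)^q, where
    sig(x)^r = |x|^r * sign(x). The q-term dominates far from zero and
    the p-term near zero, bounding the settling time regardless of e0."""
    sig = lambda x, r: (abs(x) ** r) * (1 if x >= 0 else -1)
    e, t = e0, 0.0
    while t < t_end:
        e -= dt * (alpha * sig(e, p) + beta * sig(e, q))
        t += dt
    return e
```

Both a large and a small initial error end up near zero well within the same horizon, which is the qualitative signature of fixed-time (as opposed to merely finite-time) convergence.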

  5. Optical method for continuous monitoring of dust deposition in mine's entry / Optyczna metoda ciągłego pomiaru intensywności osiadania pyłu węglowego w wyrobisku górniczym

    NASA Astrophysics Data System (ADS)

    2012-12-01

    The paper presents factors determining the dust explosion hazards occurring in underground hard coal mines. The authors describe the mechanism of transport and deposition of dust in mine entries and previous research on this topic. The paper presents a method for determining the distribution of depositing dust during mining and shows how to use it to assess coal dust explosion risk. The presented method of calculating the intensity of coal dust deposition is based on continuous monitoring of coal dust concentrations with optical sensors. A mathematical model of the distribution of the average coal dust concentration was created. The presented method allows the intensity of coal dust deposition to be calculated in a continuous manner. Additionally, the authors present the PŁ-2 stationary optical dust sampler, used in the study, connected to the monitoring system in the mine. The article features the results of studies conducted in the return air courses of the active longwalls, and the results of calculations of dust deposition intensity carried out with the use of the presented method.
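As a back-of-the-envelope illustration of turning continuously monitored concentrations into a deposition intensity, one can integrate concentration samples against an assumed constant settling velocity. This constant-velocity model and the numbers below are our simplification, not the paper's mathematical model:

```python
def deposition_intensity(concentrations, dt_s, settling_velocity=0.01):
    """Approximate deposited dust mass per unit floor area (g/m^2):
    sum of c_i (g/m^3) * v_s (m/s) * dt (s) over all samples."""
    return sum(c * settling_velocity * dt_s for c in concentrations)
```

For example, a constant 2 mg/m^3 sampled every second for one hour with v_s = 0.01 m/s accumulates 0.002 * 0.01 * 3600 = 0.072 g/m^2.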

  6. Evaluation of three methods for the concentration of poliovirus from oysters.

    PubMed

    Bouchriti, N; Goyal, S M

    1992-10-01

    Three methods for the concentration of poliovirus from oyster homogenates were compared. The adsorption-elution-precipitation method gave the lowest average virus recovery (24.1%), while the beef extract elution-acid precipitation method and the non-fat dry milk elution-acid precipitation method gave recoveries of 47.2% and 39.6%, respectively. Although the overall recovery rates with these methods were lower than those reported in previous studies, recoveries of 40-47% obtained with the elution-precipitation methods used in the present study are considered to be above average in terms of recovery efficiency.

  7. Improved analytical methods for microarray-based genome-composition analysis

    PubMed Central

    Kim, Charles C; Joyce, Elizabeth A; Chan, Kaman; Falkow, Stanley

    2002-01-01

    Background Whereas genome sequencing has given us high-resolution pictures of many different species of bacteria, microarrays provide a means of obtaining information on genome composition for many strains of a given species. Genome-composition analysis using microarrays, or 'genomotyping', can be used to categorize genes into 'present' and 'divergent' categories based on the level of hybridization signal. This typically involves selecting a signal value that is used as a cutoff to discriminate present (high signal) and divergent (low signal) genes. Current methodology uses empirical determination of cutoffs for classification into these categories, but this methodology is subject to several problems that can result in the misclassification of many genes. Results We describe a method that depends on the shape of the signal-ratio distribution and does not require empirical determination of a cutoff. Moreover, the cutoff is determined on an array-to-array basis, accounting for variation in strain composition and hybridization quality. The algorithm also provides an estimate of the probability that any given gene is present, which provides a measure of confidence in the categorical assignments. Conclusions Many genes previously classified as present using static methods are in fact divergent on the basis of microarray signal; this is corrected by our algorithm. We have reassigned hundreds of genes from previous genomotyping studies of Helicobacter pylori and Campylobacter jejuni strains, and expect that the algorithm should be widely applicable to genomotyping data. PMID:12429064
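A data-driven, per-array cutoff can be pictured with an Otsu-style split of the signal ratios: choose the threshold that maximizes the between-class variance of the two resulting groups. This is only a stand-in for the distribution-shape algorithm described in the paper, which additionally yields per-gene presence probabilities:

```python
def otsu_cutoff(values):
    """Array-specific present/divergent cutoff: scan all splits of the
    sorted signal ratios and return the threshold maximizing the
    between-class variance w0*w1*(m0 - m1)^2."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    best_t, best_var = xs[0], -1.0
    csum = 0.0
    for k in range(1, n):
        csum += xs[k - 1]
        w0, w1 = k / n, (n - k) / n
        m0, m1 = csum / k, (total - csum) / (n - k)
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, (xs[k - 1] + xs[k]) / 2
    return best_t
```

Because the threshold is recomputed from each array's own ratio distribution, it adapts to array-to-array variation in strain composition and hybridization quality rather than relying on one fixed empirical cutoff.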

  8. The effects of list-method directed forgetting on recognition memory.

    PubMed

    Benjamin, Aaron S

    2006-10-01

    It is an almost universally accepted claim that the list-method procedure of inducing directed forgetting does not affect recognition. However, previous studies have omitted a critical comparison in reaching this conclusion. This article reports evidence that recognition of material learned after cue presentation is superior for conditions in which the material that preceded cue presentation was designated as to-be-forgotten. Because the absence of an effect of directed-forgetting instructions on recognition is the linchpin of the theoretical claim that retrieval inhibition and not selective rehearsal underlies that effect, the present results call into question the need to postulate a role for inhibition in directed forgetting.

  9. Improvement of Automated Identification of the Heart Wall in Echocardiography by Suppressing Clutter Component

    NASA Astrophysics Data System (ADS)

    Takahashi, Hiroki; Hasegawa, Hideyuki; Kanai, Hiroshi

    2013-07-01

    For the facilitation of analysis and elimination of operator dependence in estimating the myocardial function in echocardiography, we previously developed a method for automated identification of the heart wall. However, there are misclassified regions because the magnitude-squared coherence (MSC) function of echo signals, one of the features used in the previous method, is strongly affected by clutter components such as multiple reflections and off-axis echoes from external tissue or the nearby myocardium. The objective of the present study is to improve the performance of automated identification of the heart wall. For this purpose, we propose a method to suppress the effect of the clutter components on the MSC of echo signals by applying an adaptive moving target indicator (MTI) filter to the echo signals. In vivo experimental results showed that the misclassified regions were significantly reduced using our proposed method in the longitudinal axis view of the heart.
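The clutter-suppression step can be pictured with the simplest (zeroth-order) MTI filter, which removes the stationary component of each sample along slow time; the paper's filter is adaptive, so this fixed version is only a conceptual sketch:

```python
def mti_filter(slow_time_signals):
    """Zeroth-order MTI: subtract the per-sample mean across frames so
    that stationary clutter (e.g., multiple reflections) is suppressed
    while echoes that move from frame to frame are preserved."""
    n_frames = len(slow_time_signals)
    n_samples = len(slow_time_signals[0])
    means = [sum(frame[i] for frame in slow_time_signals) / n_frames
             for i in range(n_samples)]
    return [[frame[i] - means[i] for i in range(n_samples)]
            for frame in slow_time_signals]
```

A sample carrying constant clutter plus a frame-to-frame oscillation keeps only the oscillation after filtering, while a purely static sample is zeroed out.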

  10. Health Research Participants’ Preferences for Receiving Research Results

    PubMed Central

    Long, Christopher R.; Stewart, M. Kathryn; Cunningham, Thomas V.; Warmack, T. Scott; McElfish, Pearl A.

    2017-01-01

    Background Participants in health research studies typically express interest in receiving results from the studies in which they participate. However, participants’ preferences and experiences related to receiving results are not well understood. In general, existing studies have had relatively small sample sizes and typically address specific and often sensitive issues within targeted populations. Methods The present study used an online survey to explore attitudes and experiences of registrants in ResearchMatch, a large database of past, present, and potential health research participants. Survey respondents provided information related to whether or not they received research results from studies in which they participated, the methods used to communicate results, their satisfaction with results, and when and how they would like to receive research results from future studies. 70,699 ResearchMatch registrants were notified of the study’s topic. Of the 5,207 registrants who requested full information about the study, 3,381 respondents completed the survey. Results Approximately 33% of respondents with previous health research participation reported receiving results. Approximately half of respondents with previous research participation reported no opportunity to request results. However, almost all respondents said researchers should always or sometimes offer results to participants. Respondents expressed particular interest in results related to their (or a loved one's) health, as well as information about studies’ purposes and any medical advances based on the results. In general, respondents’ most preferred dissemination methods for results were email and website postings. The least desirable dissemination methods for results included Twitter, conference calls, and text messages. Across all results, we compare the responses of respondents with and without previous research participation experience, and those who have worked in research organizations vs. 
those who have not. Compared to respondents who have previous participation experience, a greater proportion of respondents with no participation experience indicated that results should always be shared with participants. Likewise, respondents with no participation experience placed higher importance on the receipt of each type of results information included in the survey. Conclusions We present findings from a survey assessing attitudes and experiences of a broad sample of respondents that addresses gaps in knowledge related to participants’ preferences for receiving results. The study’s findings highlight the potential for inconsistency between respondents’ expressed preferences to receive specific types of results via specific methods and researchers’ unwillingness or inability to provide them. We present specific recommendations to shift the approach of new studies to investigate participants’ preferences for receiving research results. PMID:27562368

  11. A simple distillation method to extract bromine from natural water and salt samples for isotope analysis by multi-collector inductively coupled plasma mass spectrometry.

    PubMed

    Eggenkamp, H G M; Louvat, P

    2018-04-30

    In natural samples bromine is present in trace amounts, and measurement of stable Br isotopes necessitates its separation from the matrix. Most methods described previously need large samples or samples with high Br/Cl ratios. The use of metals as reagents, proposed in previous Br distillation methods, must be avoided for multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) analyses, because of risk of cross-contamination, since the instrument is also used to measure stable isotopes of metals. Dedicated to water and evaporite samples with low Br/Cl ratios, the proposed method is a simple distillation that separates bromide from chloride for isotopic analyses by MC-ICP-MS. It is based on the difference in oxidation potential between chloride and bromide in the presence of nitric acid. The sample is mixed with dilute (1:5) nitric acid in a distillation flask and heated over a candle flame for 10 min. The distillate (bromine) is trapped in an ammonia solution and reduced to bromide. Chloride is only distilled to a very small extent. The obtained solution can be measured directly by MC-ICP-MS for stable Br isotopes. The method was tested for a variety of volumes, ammonia concentrations, pH values and distillation times and compared with the classic ion-exchange chromatography method. The method more efficiently separates Br from Cl, so that samples with lower Br/Cl ratios can be analysed, with Br isotope data in agreement with those obtained by previous methods. Unlike other Br extraction methods based on oxidation, the distillation method presented here does not use any metallic ion for redox reactions that could contaminate the mass spectrometer. It is efficient in separating Br from samples with low Br/Cl ratios. The method ensures reproducible recovery yields and a long-term reproducibility of ±0.11‰ (1 standard deviation). The distillation method was successfully applied to samples with low Br/Cl ratios and low Br amounts (down to 20 μg). 
Copyright © 2018 John Wiley & Sons, Ltd.

  12. Dependence of future mortality changes on global CO2 concentrations: A review.

    PubMed

    Lee, Jae Young; Choi, Hayoung; Kim, Ho

    2018-05-01

    The heterogeneity among previous studies of future mortality projections due to climate change has often hindered comparisons and syntheses of the resulting impacts. To address this challenge, the present study introduces a novel method to normalize the results of projection studies across different baseline periods, projection periods, and climate scenarios, thereby facilitating comparison and synthesis. This study reviewed 15 previous studies that projected climate change-related mortality under the Representative Concentration Pathways. To synthesize their results, we first reviewed the important study design elements that affected the reported results of previous studies. We then normalized the reported results by CO2 concentration in order to eliminate the effects of baseline period, projection period, and climate scenario choices. For twenty-five locations worldwide, the normalized percentage changes in temperature-attributable mortality per 100 ppm increase in global CO2 concentration ranged between 41.9% and 330%, whereas those of total mortality ranged between 0.3% and 4.8%. The normalization method presented in this work will guide future studies in providing their results in a normalized format and will facilitate research syntheses that reinforce our understanding of the risks of climate change. Copyright © 2018 Elsevier Ltd. All rights reserved.
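
    The core of the normalization described above — rescaling a study's reported percentage change by the CO2 increase between its baseline and projection periods — can be sketched as follows. The linear per-100-ppm rescaling shown is an illustrative assumption, not the paper's exact procedure:

```python
def change_per_100ppm(pct_change, co2_baseline_ppm, co2_projection_ppm):
    """Rescale a reported % mortality change to a per-100-ppm-CO2 basis.

    Illustrative linear normalization: a 30% change reported for a scenario
    in which CO2 rises by 150 ppm becomes 20% per 100 ppm.
    """
    delta = co2_projection_ppm - co2_baseline_ppm
    if delta <= 0:
        raise ValueError("projection-period CO2 must exceed baseline CO2")
    return pct_change * 100.0 / delta
```

    With results expressed on this common per-100-ppm scale, studies that made different baseline, projection-period, and scenario choices become directly comparable.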

  13. Data analysis of response interruption and redirection as a treatment for vocal stereotypy.

    PubMed

    Wunderlich, Kara L; Vollmer, Timothy R

    2015-12-01

    Vocal stereotypy, or repetitive, noncontextual vocalizations, is a problematic form of behavior exhibited by many individuals with autism spectrum disorder (ASD). Recent research has evaluated the efficacy of response interruption and redirection (RIRD) in the reduction of vocal stereotypy. Research has indicated that RIRD often results in reductions in the level of vocal stereotypy; however, many previous studies have only presented data on vocal stereotypy that occurred outside RIRD implementation. The current study replicated the procedures of previous studies that have evaluated the efficacy of RIRD and compared 2 data-presentation methods: inclusion of only data collected outside RIRD implementation and inclusion of all vocal stereotypy data from the entirety of each session. Subjects were 7 children who had been diagnosed with ASD. Results indicated that RIRD appeared to be effective when we evaluated the level of vocal stereotypy outside RIRD implementation, but either no reductions or more modest reductions in the level of vocal stereotypy during the entirety of sessions were obtained for all subjects. Results suggest that data-analysis methods used in previous research may overestimate the efficacy of RIRD. © Society for the Experimental Analysis of Behavior.

  14. Capacity limits in list item recognition: evidence from proactive interference.

    PubMed

    Cowan, Nelson; Johnson, Troy D; Saults, J Scott

    2005-01-01

    Capacity limits in short-term recall were investigated using proactive interference (PI) from previous lists in a speeded-recognition task. PI was taken to indicate that the target list length surpassed working memory capacity. Unlike previous studies, words were presented either concurrently or sequentially and a new method was introduced to increase the amount of PI. On average, participants retrieved about four items without PI. We suggest an activation-based account of capacity limits.

  15. Extending the solvent-free MALDI sample preparation method.

    PubMed

    Hanton, Scott D; Parees, David M

    2005-01-01

    Matrix-assisted laser desorption/ionization (MALDI) mass spectrometry is an important technique for characterizing many different materials, including synthetic polymers. MALDI mass spectral data can be used to determine polymer average molecular weights, repeat units, and end groups. One of the key issues in traditional MALDI sample preparation is making good solutions of the analyte and the matrix. Solvent-free sample preparation methods have been developed to address these issues, and previous results from solvent-free (dry) prepared samples show some advantages over traditional wet sample preparation methods. Although the published solvent-free sample preparation methods produce excellent mass spectra, we found them to be very time-consuming, requiring extensive tool cleaning that presents a substantial possibility of cross-contamination. To address these issues, we developed an extension of the solvent-free method that replaces mortar-and-pestle grinding with ball milling the sample in a glass vial with two small steel balls. This new method generates mass spectra of equal quality to the previous methods, but it has significant advantages in productivity, eliminates cross-contamination, and is applicable to liquid and soft or waxy analytes.

  16. Forced Suffocation of Infants with Baby Wipes: A Previously Undescribed Form of Child Abuse

    ERIC Educational Resources Information Center

    Krugman, Scott D.; Lantz, Patrick E.; Sinal, Sara; De Jong, Allan R.; Coffman, Kathryn

    2007-01-01

    Background: Foreign body aspiration in children is commonly seen in emergency departments and carries a significant mortality. Abusive foreign body suffocation is not well described. Methods: We present a case-series of four infants who presented with aspiration of a baby wipe. Results: Each child was found to be a victim of child physical abuse…

  17. The Uses of Autobiography. Gender & Society: Feminist Perspectives on the Past and Present.

    ERIC Educational Resources Information Center

    Swindells, Julia, Ed.

    This collection explores the range of uses of autobiography from the 19th century to the present and from Africa, the United States, Middle East, France, New Zealand, and Britain. The chapters draw on a number of approaches, including historical and literary methods. They are frequently about the retrieval and reclamation of previously hidden or…

  18. Analytical and Numerical Results for an Adhesively Bonded Joint Subjected to Pure Bending

    NASA Technical Reports Server (NTRS)

    Smeltzer, Stanley S., III; Lundgren, Eric

    2006-01-01

    A one-dimensional, semi-analytical methodology that was previously developed for evaluating adhesively bonded joints composed of anisotropic adherends and adhesives that exhibit inelastic material behavior is further verified in the present paper. A summary of the first-order differential equations and the applied joint loading used to determine the adhesive response from the methodology is also presented. The method was previously verified against a variety of single-lap joint configurations from the literature that subjected the joints to cases of axial tension and pure bending. Using the same joint configuration and applied bending load presented in a study by Yang, the finite element analysis software ABAQUS was used to further verify the semi-analytical method. Linear static ABAQUS results are presented for two models, one with a coarse and one with a fine element mesh, that were used to verify convergence of the finite element analyses. Close agreement between the finite element results and the semi-analytical methodology was found for both the shear and normal stress responses of the adhesive bondline. Thus, the semi-analytical methodology was successfully verified using the ABAQUS finite element software and a single-lap joint configuration subjected to pure bending.

  19. Solutions to Kuessner's integral equation in unsteady flow using local basis functions

    NASA Technical Reports Server (NTRS)

    Fromme, J. A.; Halstead, D. W.

    1975-01-01

    The computational procedure and numerical results are presented for a new method to solve Kuessner's integral equation in the case of subsonic compressible flow about harmonically oscillating planar surfaces with controls. Kuessner's equation is a linear transformation from pressure to normalwash. The unknown pressure is expanded in terms of prescribed basis functions and the unknown basis function coefficients are determined in the usual manner by satisfying the given normalwash distribution either collocationally or in the complex least squares sense. The present method of solution differs from previous ones in that the basis functions are defined in a continuous fashion over a relatively small portion of the aerodynamic surface and are zero elsewhere. This method, termed the local basis function method, combines the smoothness and accuracy of distribution methods with the simplicity and versatility of panel methods. Predictions by the local basis function method for unsteady flow are shown to be in excellent agreement with other methods. Also, potential improvements to the present method and extensions to more general classes of solutions are discussed.

  20. High-resolution melting (HRM) re-analysis of a polyposis patients cohort reveals previously undetected heterozygous and mosaic APC gene mutations.

    PubMed

    Out, Astrid A; van Minderhout, Ivonne J H M; van der Stoep, Nienke; van Bommel, Lysette S R; Kluijt, Irma; Aalfs, Cora; Voorendt, Marsha; Vossen, Rolf H A M; Nielsen, Maartje; Vasen, Hans F A; Morreau, Hans; Devilee, Peter; Tops, Carli M J; Hes, Frederik J

    2015-06-01

    Familial adenomatous polyposis is most frequently caused by pathogenic variants in either the APC gene or the MUTYH gene. The detection rate of pathogenic variants depends on the severity of the phenotype and sensitivity of the screening method, including sensitivity for mosaic variants. For 171 patients with multiple colorectal polyps without previously detectable pathogenic variant, APC was reanalyzed in leukocyte DNA by one uniform technique: high-resolution melting (HRM) analysis. Serial dilution of heterozygous DNA resulted in a lowest detectable allelic fraction of 6% for the majority of variants. HRM analysis and subsequent sequencing detected pathogenic fully heterozygous APC variants in 10 (6%) of the patients and pathogenic mosaic variants in 2 (1%). All these variants were previously missed by various conventional scanning methods. In parallel, HRM APC scanning was applied to DNA isolated from polyp tissue of two additional patients with apparently sporadic polyposis and without detectable pathogenic APC variant in leukocyte DNA. In both patients a pathogenic mosaic APC variant was present in multiple polyps. The detection of pathogenic APC variants in 7% of the patients, including mosaics, illustrates the usefulness of a complete APC gene reanalysis of previously tested patients, by a supplementary scanning method. HRM is a sensitive and fast pre-screening method for reliable detection of heterozygous and mosaic variants, which can be applied to leukocyte and polyp derived DNA.

  1. Retrieval of constituent mixing ratios from limb thermal emission spectra

    NASA Technical Reports Server (NTRS)

    Shaffer, William A.; Kunde, Virgil G.; Conrath, Barney J.

    1988-01-01

    An onion-peeling iterative, least-squares relaxation method to retrieve mixing ratio profiles from limb thermal emission spectra is presented. The method has been tested on synthetic data, containing various amounts of added random noise for O3, HNO3, and N2O. The retrieval method is used to obtain O3 and HNO3 mixing ratio profiles from high-resolution thermal emission spectra. Results of the retrievals compare favorably with those obtained previously.
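
    The onion-peeling idea — retrieving each layer from the top of the atmosphere downward, holding the already-retrieved overlying layers fixed — can be sketched as below. The 1-D Newton relaxation per layer and the forward-model interface are illustrative assumptions, not the authors' radiative-transfer code:

```python
def onion_peel(radiances, forward, x0=1.0, tol=1e-8, max_iter=50):
    """Retrieve a mixing-ratio profile layer by layer (top of atmosphere down).

    radiances : measured limb radiance per tangent layer, topmost first.
    forward(layer, x, overlying) : modeled radiance for candidate value x,
        given the already-retrieved overlying profile (hypothetical interface).
    """
    profile = []
    for layer, y in enumerate(radiances):
        x = x0
        for _ in range(max_iter):
            resid = forward(layer, x, profile) - y
            h = 1e-6 * max(abs(x), 1.0)          # finite-difference step
            deriv = (forward(layer, x + h, profile) - forward(layer, x, profile)) / h
            step = resid / deriv                 # Newton relaxation update
            x -= step
            if abs(step) <= tol * max(abs(x), 1.0):
                break
        profile.append(x)
    return profile
```

    Because each tangent path only samples the current layer plus the layers above it, solving top-down turns the retrieval into a sequence of one-unknown problems.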

  2. Laterally constrained inversion for CSAMT data interpretation

    NASA Astrophysics Data System (ADS)

    Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun

    2015-10-01

    Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM, and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this improves the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method recovers the true model better than traditional single-station inversion; for the noisy data, the true model is recovered even at a noise level of 8%, indicating that LCI inversions are to some extent insensitive to noise. We then re-invert two CSAMT datasets collected, respectively, in a watershed and in a coal mine area in Northern China, and compare our results with those from previous inversions. The comparison for the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global search algorithm, simulated annealing (SA), in the watershed shows that although both methods deliver very similar, good results, the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey reveal a conductive water-bearing zone that was not identified by the previous inversions. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
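
    The Jacobian preconditioning step described above can be sketched as follows. Column-norm scaling is one common choice of weighting matrix and is an assumption here, not necessarily the authors' exact weights:

```python
import numpy as np

def precondition_jacobian(J):
    """Balance parameter sensitivities by scaling each Jacobian column.

    Returns the scaled Jacobian J @ W and the diagonal weighting matrix W;
    an update solved for the scaled parameters p' maps back via p = W @ p'.
    """
    norms = np.linalg.norm(J, axis=0)           # per-parameter sensitivity
    norms = np.where(norms == 0.0, 1.0, norms)  # leave dead columns untouched
    W = np.diag(1.0 / norms)
    return J @ W, W
```

    After the scaling, every model parameter contributes a unit-norm column, so the least-squares system no longer favors the parameters the data are most sensitive to.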

  3. Extensive numerical study of a D-brane, anti-D-brane system in AdS5/CFT4

    NASA Astrophysics Data System (ADS)

    Hegedűs, Árpád

    2015-04-01

    In this paper the hybrid-NLIE approach of [38] is extended to the ground state of a D-brane anti-D-brane system in AdS/CFT. The hybrid-NLIE equations presented in the paper are finite-component alternatives to the previously proposed TBA equations, and they provide an appropriate framework for the numerical investigation of the ground state of the problem. Straightforward numerical iterative methods fail to converge, so new numerical methods are worked out to solve the equations. Our numerical data confirm the previous TBA data. In view of the numerical results, the mysterious L = 1 case is also commented on in the paper.

  4. Determination of Carbonyl Functional Groups in Bio-oils by Potentiometric Titration: The Faix Method.

    PubMed

    Black, Stuart; Ferrell, Jack R

    2017-02-07

    Carbonyl compounds present in bio-oils are known to be responsible for bio-oil property changes upon storage and during upgrading. Specifically, carbonyls cause an increase in viscosity (often referred to as 'aging') during storage of bio-oils. As such, carbonyl content has previously been used to track bio-oil aging and condensation reactions with less variability than viscosity measurements. Additionally, carbonyls are responsible for coke formation in bio-oil upgrading processes. Given the importance of carbonyls in bio-oils, accurate analytical methods for their quantification are very important for the bio-oil community. Potentiometric titration methods based on carbonyl oximation have long been used for the determination of carbonyl content in pyrolysis bio-oils. Here, we present a modification of the traditional carbonyl oximation procedure that results in shorter reaction times, smaller sample sizes, higher precision, and more accurate carbonyl determinations. While traditional carbonyl oximation methods operate at room temperature, the Faix method presented here operates at an elevated temperature of 80 °C.

  5. The Schwinger Variational Method

    NASA Technical Reports Server (NTRS)

    Huo, Winifred M.

    1995-01-01

    Variational methods have proven invaluable in theoretical physics and chemistry, both for bound state problems and for the study of collision phenomena. The application of the Schwinger variational (SV) method to e-molecule collisions and molecular photoionization has been reviewed previously. The present chapter discusses the implementation of the SV method as applied to e-molecule collisions. Since this is not a review of cross section data, cross sections are presented only to serve as illustrative examples. In the SV method, the correct boundary condition is automatically incorporated through the use of the Green's function. Thus SV calculations can employ basis functions with arbitrary boundary conditions. The iterative Schwinger method has been used extensively to study molecular photoionization. For e-molecule collisions, it is used at the static exchange level to study elastic scattering and coupled with the distorted wave approximation to study electronically inelastic scattering.

  6. Resolved-particle simulation by the Physalis method: Enhancements and new capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierakowski, Adam J., E-mail: sierakowski@jhu.edu; Prosperetti, Andrea; Faculty of Science and Technology and J.M. Burgers Centre for Fluid Dynamics, University of Twente, P.O. Box 217, 7500 AE Enschede

    2016-03-15

    We present enhancements and new capabilities of the Physalis method for simulating disperse multiphase flows using particle-resolved simulation. The current work enhances the previous method by incorporating a new type of pressure-Poisson solver that couples with a new Physalis particle pressure boundary condition scheme and a new particle interior treatment to significantly improve overall numerical efficiency. Further, we implement a more efficient method of calculating the Physalis scalar products and incorporate short-range particle interaction models. We provide validation and benchmarking of the Physalis method against experiments of a sedimenting particle and of normal wall collisions. We conclude with an illustrative simulation of 2048 particles sedimenting in a duct. In the appendix, we present a complete and self-consistent description of the analytical development and numerical methods.

  7. Degradation of learned skills. Static practice effectiveness for visual approach and landing skill retention

    NASA Technical Reports Server (NTRS)

    Sitterley, T. E.

    1974-01-01

    The effectiveness of an improved static retraining method was evaluated for a simulated space vehicle approach and landing under instrument and visual flight conditions. Experienced pilots were trained and then tested after 4 months without flying to compare their performance using the improved method with three methods previously evaluated. Use of the improved static retraining method resulted in no practical or significant skill degradation and was found to be even more effective than methods using a dynamic presentation of visual cues. The results suggest that properly structured open-loop methods of flight control task retraining are feasible.

  8. A Diagnostic Marker to Discriminate Childhood Apraxia of Speech from Speech Delay: IV. the Pause Marker Index

    ERIC Educational Resources Information Center

    Shriberg, Lawrence D.; Strand, Edythe A.; Fourakis, Marios; Jakielski, Kathy J.; Hall, Sheryl D.; Karlsson, Heather B.; Mabie, Heather L.; McSweeny, Jane L.; Tilkens, Christie M.; Wilson, David L.

    2017-01-01

    Purpose: Three previous articles provided rationale, methods, and several forms of validity support for a diagnostic marker of childhood apraxia of speech (CAS), termed the pause marker (PM). Goals of the present article were to assess the validity and stability of the PM Index (PMI) to scale CAS severity. Method: PM scores and speech, prosody,…

  9. Total Quality Management: Statistics and Graphics III - Experimental Design and Taguchi Methods. AIR 1993 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Schwabe, Robert A.

    Interest in Total Quality Management (TQM) at institutions of higher education has been stressed in recent years as an important area of activity for institutional researchers. Two previous AIR Forum papers have presented some of the statistical and graphical methods used for TQM. This paper, the third in the series, first discusses some of the…

  10. New method for qualitative simulations of water resources systems. 2. Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antunes, M.P.; Seixas, M.J.; Camara, A.S.

    1987-11-01

    SLIN (Simulacao Linguistica) is a new method for qualitative dynamic simulation. As was presented previously, SLIN relies upon a categorical representation of variables which are manipulated by logical rules. Two applications to water resources systems are included to illustrate SLIN's potential usefulness: the environmental impact evaluation of a hydropower plant and the assessment of oil dispersion in the sea after a tanker wreck.

  11. Blood transport method for chromosome analysis of residents living near Semipalatinsk nuclear test site.

    PubMed

    Rodzi, Mohd; Ihda, Shozo; Yokozeki, Masako; Takeichi, Nobuo; Tanaka, Kimio; Hoshi, Masaharu

    2009-12-01

    A study was conducted to compare the storage conditions and transportation period for blood samples collected from residents living in areas near the Semipalatinsk nuclear test site (SNTS). Experiments were performed to simulate storage and shipping environments. Phytohaemagglutinin (PHA)-stimulated blood was stored either in 15-ml tubes without medium (condition A: current transport method) or in 50-ml flasks with RPMI-1640 and 20% fetal bovine serum (FBS) (condition B: previous transport method). Samples were kept refrigerated at 4 °C, and cell viability was assessed after 3, 8, 12 and 14 days of storage. RPMI-1640, 20% FBS and further PHA were added to blood samples under condition A in 50-ml flasks for culture. Whole-blood samples under condition B were incubated directly, without further sub-culturing and without added media or PHA, to match the protocol employed in the previous transport method. Samples under both conditions were incubated for 48 hr at 37 °C and their mitotic index was determined. The results showed that lymphocyte viability was consistent under both storage conditions, but the mitotic index was higher under condition A than under condition B. Although further confirmation studies have to be carried out, previous chromosomal studies and the present experiment have shown that PHA-stimulated blood can be stored without culture medium for up to 8 days under condition A. The present results will be useful for cytogenetic analysis of blood samples that have been transported long distances wherever a radiation accident has occurred.

  12. High-frequency techniques for RCS prediction of plate geometries

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; Polka, Lesley A.

    1992-01-01

    The principal-plane scattering from perfectly conducting and coated strips and rectangular plates is examined. Previous reports have detailed Geometrical Theory of Diffraction/Uniform Theory of Diffraction (GTD/UTD) solutions for these geometries. The GTD/UTD solution for the perfectly conducting plate yields monostatic radar cross section (RCS) results that are nearly identical to measurements and results obtained using the Moment Method (MM) and the Extended Physical Theory of Diffraction (EPTD). This was demonstrated in previous reports. The previous analysis is extended to bistatic cases. GTD/UTD results for the principal-plane scattering from a perfectly conducting, infinite strip are compared to MM and EPTD data. A comprehensive overview of the advantages and disadvantages of the GTD/UTD and of the EPTD and a detailed analysis of the results from both methods are provided. Several previous reports also presented preliminary discussions and results for a GTD/UTD model of the RCS of a coated, rectangular plate. Several approximations for accounting for the finite coating thickness, plane-wave incidence, and far-field observation were discussed. Here, these approximations are replaced by a revised wedge diffraction coefficient that implicitly accounts for a coating on a perfect conductor, plane-wave incidence, and far-field observation. This coefficient is computationally more efficient than the previous diffraction coefficient because the number of Maliuzhinets functions that must be calculated using numerical integration is reduced by a factor of 2. The derivation and the revised coefficient are presented in detail for the hard polarization case. Computations and experimental data are also included. The soft polarization case is currently under investigation.

  13. Mixed convection flow of viscoelastic fluid by a stretching cylinder with heat transfer.

    PubMed

    Hayat, Tasawar; Anwar, Muhammad Shoaib; Farooq, Muhammad; Alsaedi, Ahmad

    2015-01-01

    Flow of viscoelastic fluid due to an impermeable stretching cylinder is discussed. Effects of mixed convection and variable thermal conductivity are present. Thermal conductivity is taken temperature dependent. Nonlinear partial differential system is reduced into the nonlinear ordinary differential system. Resulting nonlinear system is computed for the convergent series solutions. Numerical values of skin friction coefficient and Nusselt number are computed and discussed. The results obtained with the current method are in agreement with previous studies using other methods as well as theoretical ideas. Physical interpretation reflecting the contribution of influential parameters in the present flow is presented. It is hoped that present study serves as a stimulus for modeling further stretching flows especially in polymeric and paper production processes.

  14. Mixed Convection Flow of Viscoelastic Fluid by a Stretching Cylinder with Heat Transfer

    PubMed Central

    Hayat, Tasawar; Anwar, Muhammad Shoaib; Farooq, Muhammad; Alsaedi, Ahmad

    2015-01-01

    Flow of viscoelastic fluid due to an impermeable stretching cylinder is discussed. Effects of mixed convection and variable thermal conductivity are present. Thermal conductivity is taken temperature dependent. Nonlinear partial differential system is reduced into the nonlinear ordinary differential system. Resulting nonlinear system is computed for the convergent series solutions. Numerical values of skin friction coefficient and Nusselt number are computed and discussed. The results obtained with the current method are in agreement with previous studies using other methods as well as theoretical ideas. Physical interpretation reflecting the contribution of influential parameters in the present flow is presented. It is hoped that present study serves as a stimulus for modeling further stretching flows especially in polymeric and paper production processes. PMID:25775032

  15. Learning-based controller for biotechnology processing, and method of using

    DOEpatents

    Johnson, John A.; Stoner, Daphne L.; Larsen, Eric D.; Miller, Karen S.; Tolle, Charles R.

    2004-09-14

    The present invention relates to process control where some of the controllable parameters are difficult or impossible to characterize, including, but not limited to, process control of such systems in biotechnology. Additionally, the present invention relates to process control in biotechnology minerals processing. In the inventive method, an application of the present invention manipulates a minerals bioprocess to find local extrema (maxima or minima) for selected output variables/process goals by using a learning-based controller for bioprocess oxidation of minerals during hydrometallurgical processing. The learning-based controller operates with or without human supervision and works to find process optima without previously defined optima, owing to the non-characterized nature of the process being manipulated.

  16. International Issues in Education

    ERIC Educational Resources Information Center

    Ruggeri, Kai; Diaz, Carmen; Kelley, Karl; Papousek, Ilona; Dempster, Martin; Hanna, Donncha

    2008-01-01

    Anxiety, negative attitudes, and attrition are all issues presented in the teaching of statistics to undergraduates in research-based degrees regardless of location. Previous works have looked at these obstacles, but none have consolidated a multilingual, multinational effort using a consistent method. Over 400 Spanish-, English-, and…

  17. How to Experience the Unitive Life.

    ERIC Educational Resources Information Center

    Maslow, Abraham H.

    1991-01-01

    Presents previously unpublished paper written by Abraham Maslow during the mid-1960s in which he outlined dozens of methods for overcoming ennui and "burnout" thereby enriching peoples' lives. Discusses the realm of honesty, pride, and dignity and entering the being-realm. (Author/ABL)

  18. Skeletal Mechanism Generation of Surrogate Jet Fuels for Aeropropulsion Modeling

    NASA Astrophysics Data System (ADS)

    Sung, Chih-Jen; Niemeyer, Kyle E.

    2010-05-01

    A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with skeletal reductions of two important hydrocarbon components, n-heptane and n-decane, relevant to surrogate jet fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each previous method, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal.
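
    The error-propagation step at the heart of DRGEP can be illustrated with a small graph search: a species' overall importance to a target is the maximum, over all paths, of the product of direct-interaction coefficients along the path. The sketch below (the graph representation and coefficient values are illustrative assumptions, not the authors' implementation) uses a Dijkstra-style max-product search:

```python
import heapq

def drgep_importance(direct, target):
    """Overall importance coefficients of all species relative to `target`.

    direct[a][b] : direct-interaction coefficient (0..1) of species b on a.
    Importance of s = max over paths target -> s of the product of the
    coefficients along the path (DRGEP-style error propagation).
    """
    best = {target: 1.0}
    heap = [(-1.0, target)]
    while heap:
        neg_value, node = heapq.heappop(heap)
        value = -neg_value
        if value < best.get(node, 0.0):
            continue                      # stale heap entry
        for nbr, coeff in direct.get(node, {}).items():
            cand = value * coeff
            if cand > best.get(nbr, 0.0):
                best[nbr] = cand
                heapq.heappush(heap, (-cand, nbr))
    return best   # species below a user-chosen threshold become removal candidates
```

    Because the coefficients are at most 1, importance decays along each path, which is what lets DRGEP prune species that only interact with the target through long, weak chains before the more expensive sensitivity analysis runs.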

  19. An improvement of convergence of a dispersion-relation preserving method for the classical Boussinesq equation

    NASA Astrophysics Data System (ADS)

    Jang, T. S.

    2018-03-01

    A dispersion-relation preserving (DRP) method, a semi-analytic iterative procedure, was proposed by Jang (2017) for integrating the classical Boussinesq equation. It has been shown to be a powerful numerical procedure for simulating a nonlinear dispersive wave system because it preserves the dispersion relation; however, it has limitations, e.g., a restriction on nonlinear wave amplitude and a small region of convergence (ROC). To remedy these flaws, a new DRP method aimed at improved convergence performance is proposed in this paper. The improved method is proved to be convergent and dispersion-relation preserving for small waves, and unique existence of the solutions is also proved. In addition, a numerical experiment confirms that the method is well suited to observing nonlinear wave phenomena such as moving solitary waves and their binary collisions at different wave amplitudes. In particular, it presents a ROC much wider than that of the previous method of Jang (2017), and it permits the numerical simulation of high (large-amplitude) nonlinear dispersive waves. In fact, it is demonstrated to simulate a large-amplitude solitary wave and the collision of two solitary waves with large amplitudes, which the previous method failed to simulate. Conclusively, better convergence results are achieved compared to Jang (2017), representing a major improvement in practice over the previous method.

  20. Impacts of Outer Continental Shelf (OCS) development on recreation and tourism. Volume 3. Detailed methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The final report for the project is presented in five volumes. This volume, Detailed Methodology Review, presents a discussion of the methods considered and used to estimate the impacts of Outer Continental Shelf (OCS) oil and gas development on coastal recreation in California. The purpose is to provide the Minerals Management Service with data and methods to improve their ability to analyze the socio-economic impacts of OCS development. Chapter II provides a review of previous attempts to evaluate the effects of OCS development and of oil spills on coastal recreation. The review also discusses the strengths and weaknesses of different approaches and presents the rationale for the methodology selection made. Chapter III presents a detailed discussion of the methods actually used in the study. The volume contains the bibliography for the entire study.

  1. Learning Motion Features for Example-Based Finger Motion Estimation for Virtual Characters

    NASA Astrophysics Data System (ADS)

    Mousas, Christos; Anagnostopoulos, Christos-Nikolaos

    2017-09-01

    This paper presents a methodology for estimating the motion of a character's fingers based on the use of motion features provided by a virtual character's hand. In the presented methodology, firstly, the motion data is segmented into discrete phases. Then, a number of motion features are computed for each motion segment of a character's hand. The motion features are pre-processed using restricted Boltzmann machines, and by using the different variations of semantically similar finger gestures in a support vector machine learning mechanism, the optimal weights for each feature assigned to a metric are computed. The advantages of the presented methodology in comparison to previous solutions are the following: First, we automate the computation of optimal weights that are assigned to each motion feature counted in our metric. Second, the presented methodology achieves an increase (about 17%) in correctly estimated finger gestures in comparison to a previous method.

  2. Learning process mapping heuristics under stochastic sampling overheads

    NASA Technical Reports Server (NTRS)

    Ieumwananonthachai, Arthur; Wah, Benjamin W.

    1991-01-01

    A statistical method was developed previously for improving process mapping heuristics. The method systematically explores the space of possible heuristics under a specified time constraint. Its goal is to find the best possible heuristics while trading off the solution quality of the process mapping heuristics against their execution time. The statistical selection method is extended to take into consideration the variations in the amount of time used to evaluate heuristics on a problem instance. The improvement in performance obtained under this more realistic assumption is presented, along with some methods that alleviate the additional complexity.

  3. Multishaker modal testing

    NASA Technical Reports Server (NTRS)

    Craig, R. R., Jr.

    1985-01-01

    A component mode synthesis method for damped structures was developed and modal test methods were explored which could be employed to determine the relevant parameters required by the component mode synthesis method. Research was conducted on the following topics: (1) Development of a generalized time-domain component mode synthesis technique for damped systems; (2) Development of a frequency-domain component mode synthesis method for damped systems; and (3) Development of a system identification algorithm applicable to general damped systems. Abstracts are presented of the major publications which have been previously issued on these topics.

  4. The method of lines in three dimensional fracture mechanics

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, J.; Berke, L.

    1980-01-01

    A review of recent developments in the calculation of design parameters for fracture mechanics by the method of lines (MOL) is presented. Three dimensional elastic and elasto-plastic formulations are examined and results from previous and current research activities are reported. The application of MOL to the appropriate partial differential equations of equilibrium leads to coupled sets of simultaneous ordinary differential equations. Solutions of these equations are obtained by the Peano-Baker and by the recurrance relations methods. The advantages and limitations of both solution methods from the computational standpoint are summarized.

  5. Application of probabilistic analysis/design methods in space programs - The approaches, the status, and the needs

    NASA Technical Reports Server (NTRS)

    Ryan, Robert S.; Townsend, John S.

    1993-01-01

    The prospective improvement of probabilistic methods for space program analysis/design entails the further development of theories, codes, and tools which match specific areas of application, the drawing of lessons from previous uses of probability and statistics data bases, the enlargement of data bases (especially in the field of structural failures), and the education of engineers and managers on the advantages of these methods. An evaluation is presently made of the current limitations of probabilistic engineering methods. Recommendations are made for specific applications.

  6. Time delayed Ensemble Nudging Method

    NASA Astrophysics Data System (ADS)

    An, Zhe; Abarbanel, Henry

    Optimal nudging methods based on time-delayed embedding theory have shown potential for analysis and data assimilation in the previous literature. To extend their application and promote practical implementation, a new nudging assimilation method based on the time-delayed embedding space is presented, and its connection with other standard assimilation methods is studied. Results show that incorporating information from the time series of data can reduce the number of observations needed to preserve the quality of numerical prediction, making it a potential alternative in the field of data assimilation for large geophysical models.

  7. Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Herrick, Gregory P.; Chen, Jen-Ping

    2012-01-01

    This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.

  8. Effective classroom teaching methods: a critical incident technique from millennial nursing students' perspective.

    PubMed

    Robb, Meigan

    2014-01-11

    Engaging nursing students in the classroom environment positively influences their ability to learn and apply course content to clinical practice. Students are motivated to engage in learning if their learning preferences are being met. The methods nurse educators have used with previous students in the classroom may not address the educational needs of Millennials. This manuscript presents the findings of a pilot study that used the Critical Incident Technique. The purpose of this study was to gain insight into the teaching methods that help the Millennial generation of nursing students feel engaged in the learning process. Students' perceptions of effective instructional approaches are presented in three themes. Implications for nurse educators are discussed.

  9. Direct 2-D reconstructions of conductivity and permittivity from EIT data on a human chest.

    PubMed

    Herrera, Claudia N L; Vallejo, Miguel F M; Mueller, Jennifer L; Lima, Raul G

    2015-01-01

    A novel direct D-bar reconstruction algorithm is presented for reconstructing a complex conductivity distribution from 2-D EIT data. The method is applied to simulated data and archival human chest data. Permittivity reconstructions with the aforementioned method and conductivity reconstructions with the previously existing nonlinear D-bar method for real-valued conductivities depicting ventilation and perfusion in the human chest are presented. This constitutes the first fully nonlinear D-bar reconstructions of human chest data and the first D-bar permittivity reconstructions of experimental data. The results of the human chest data reconstructions are compared on a circular domain versus a chest-shaped domain.

  10. Comment on "Optical-fiber-based Mueller optical coherence tomography".

    PubMed

    Park, B Hyle; Pierce, Mark C; de Boer, Johannes F

    2004-12-15

    We comment on the recent Letter by Jiao et al. [Opt. Lett. 28, 1206 (2003)] in which a polarization-sensitive optical coherence tomography system was presented. Interrogating a sample with two orthogonal incident polarization states cannot always recover birefringence correctly. A previously presented fiber-based polarization-sensitive system was inaccurately characterized, and its method of eliminating the polarization distortion caused by single-mode optical fiber was presented earlier by Saxer et al. [Opt. Lett. 25, 1355 (2000)].

  11. CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.

    PubMed

    Saegusa, Jun

    2008-01-01

    The representative point method for the efficiency calibration of volume samples has been previously proposed. For smoothly implementing the method, a calculation code named CREPT-MCNP has been developed. The code estimates the position of a representative point which is intrinsic to each shape of volume sample. The self-absorption correction factors are also given to make correction on the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.

  12. Application of 3-signal coherence to core noise transmission

    NASA Technical Reports Server (NTRS)

    Krejsa, E. A.

    1983-01-01

    A method for determining transfer functions across turbofan engine components and from the engine to the far-field is developed. The method is based on the three-signal coherence technique used previously to obtain far-field core noise levels. This method eliminates the bias error in transfer function measurements due to contamination of measured pressures by nonpropagating pressure fluctuations. Measured transfer functions from the engine to the far-field, across the tailpipe, and across the turbine are presented for three turbofan engines.
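
The cross-spectral identity behind the three-signal technique can be sketched numerically. This is an illustrative example on synthetic data (not engine measurements): three sensors observe a common tone plus mutually independent noise, and the common-signal power spectrum is recovered as G12·G13/G23, which is what removes the bias from uncorrelated contamination.

```python
import numpy as np

# Three-signal sketch (assumed form): x1, x2, x3 share a common signal s and
# have mutually independent noise, so the averaged cross-spectra satisfy
# G12*G13/G23 ≈ Gss, the power spectrum of the common signal alone.

rng = np.random.default_rng(0)
nseg, nfft = 400, 256
f0 = 20  # FFT bin index of the common tone

acc = {"12": 0, "13": 0, "23": 0}
t = np.arange(nfft)
for _ in range(nseg):
    phase = rng.uniform(0, 2 * np.pi)
    s = np.cos(2 * np.pi * f0 * t / nfft + phase)   # common signal
    x1 = s + rng.normal(0, 1.0, nfft)
    x2 = s + rng.normal(0, 1.0, nfft)
    x3 = s + rng.normal(0, 1.0, nfft)
    X1, X2, X3 = np.fft.rfft(x1), np.fft.rfft(x2), np.fft.rfft(x3)
    acc["12"] += X1 * np.conj(X2)
    acc["13"] += X1 * np.conj(X3)
    acc["23"] += X2 * np.conj(X3)

# Average-based estimate of the common-signal power spectrum.
Gss = np.abs(acc["12"] * acc["13"] / acc["23"]) / nseg

# Compare against the true (noise-free) tone power at bin f0.
true_power = np.abs(np.fft.rfft(np.cos(2 * np.pi * f0 * t / nfft))) ** 2
print(Gss[f0], true_power[f0])
```

Away from the tone the ratio is dominated by noise, so in practice only coherent bands are interpreted; the estimate at the tone bin converges as the number of averaged segments grows.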

  13. Thermodynamic integration from classical to quantum mechanics.

    PubMed

    Habershon, Scott; Manolopoulos, David E

    2011-12-14

    We present a new method for calculating quantum mechanical corrections to classical free energies, based on thermodynamic integration from classical to quantum mechanics. In contrast to previous methods, our method is numerically stable even in the presence of strong quantum delocalization. We first illustrate the method and its relationship to a well-established method with an analysis of a one-dimensional harmonic oscillator. We then show that our method can be used to calculate the quantum mechanical contributions to the free energies of ice and water for a flexible water model, a problem for which the established method is unstable.

  14. Motion compensated shape error concealment.

    PubMed

    Schuster, Guido M; Katsaggelos, Aggelos K

    2006-02-01

    The introduction of Video Objects (VOs) is one of the innovations of MPEG-4. The alpha-plane of a VO defines its shape at a given instance in time and hence determines the boundary of its texture. In packet-based networks, shape, motion, and texture are subject to loss. While there has been considerable attention paid to the concealment of texture and motion errors, little has been done in the field of shape error concealment. In this paper we propose a post-processing shape error concealment technique that uses the motion compensated boundary information of the previously received alpha-plane. The proposed approach is based on matching received boundary segments in the current frame to the boundary in the previous frame. This matching is achieved by finding a maximally smooth motion vector field. After the current boundary segments are matched to the previous boundary, the missing boundary pieces are reconstructed by motion compensation. Experimental results demonstrating the performance of the proposed motion compensated shape error concealment method, and comparing it with the previously proposed weighted side matching method are presented.

  15. Smoothing of climate time series revisited

    NASA Astrophysics Data System (ADS)

    Mann, Michael E.

    2008-08-01

    We present an easily implemented method for smoothing climate time series, generalizing upon an approach previously described by Mann (2004). The method adaptively weights the three lowest order time series boundary constraints to optimize the fit with the raw time series. We apply the method to the instrumental global mean temperature series from 1850 to 2007 and to various surrogate global mean temperature series from 1850 to 2100 derived from the CMIP3 multimodel intercomparison project. These applications demonstrate that the adaptive method systematically outperforms certain widely used default smoothing methods, and is more likely to yield accurate assessments of long-term warming trends.
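
The adaptive boundary-constraint idea can be sketched as follows. This is a simplification, not Mann's code: a plain moving average stands in for the Butterworth lowpass filter Mann used, and the synthetic "temperature" series is hypothetical. The series is padded under each of the three lowest-order boundary constraints, smoothed, and the constraint minimizing the misfit to the raw series is kept.

```python
import numpy as np

# Three boundary constraints for smoothing near the series endpoints:
#   "norm"      (minimum norm): pad with the long-term mean
#   "slope"     (minimum slope): reflect the series across the boundary
#   "roughness" (minimum roughness): reflect AND invert about the endpoint,
#               which lets a trend continue through the boundary
def smooth(x, w, constraint):
    pad = w
    if constraint == "norm":
        left = np.full(pad, x.mean()); right = np.full(pad, x.mean())
    elif constraint == "slope":
        left = x[pad:0:-1]; right = x[-2:-pad - 2:-1]
    else:
        left = 2 * x[0] - x[pad:0:-1]; right = 2 * x[-1] - x[-2:-pad - 2:-1]
    xp = np.concatenate([left, x, right])
    kernel = np.ones(2 * w + 1) / (2 * w + 1)      # moving-average stand-in
    return np.convolve(xp, kernel, mode="same")[pad:pad + len(x)]

rng = np.random.default_rng(1)
t = np.arange(150)
raw = 0.03 * t + rng.normal(0, 0.3, t.size)        # warming trend + noise

fits = {c: smooth(raw, 10, c) for c in ("norm", "slope", "roughness")}
best = min(fits, key=lambda c: np.mean((fits[c] - raw) ** 2))
print(best)
```

For a trending series like this one, padding with the mean ("norm") biases the smooth heavily at the boundaries, so one of the other two constraints is selected.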

  16. Adult Learning Principles and Presentation Pearls

    PubMed Central

    Palis, Ana G.; Quiros, Peter A.

    2014-01-01

    Although lectures are one of the most common methods of knowledge transfer in medicine, their effectiveness has been questioned. Passive formats, lack of relevance and disconnection from the student's needs are some of the arguments supporting this apparent lack of efficacy. However, many authors have suggested that applying adult learning principles (i.e., relevance, congruence with the student's needs, interactivity, connection to the student's previous knowledge and experience) to this method increases learning from lectures and their effectiveness. This paper presents recommendations for applying adult learning principles during the planning, creation and development of lectures to make them more effective. PMID:24791101

  17. Stress analysis of ribbon parachutes

    NASA Technical Reports Server (NTRS)

    Reynolds, D. T.; Mullins, W. M.

    1975-01-01

    An analytical method has been developed for determining the internal load distribution for ribbon parachutes subjected to known riser and aerodynamic forces. Finite elements with non-linear elastic properties represent the parachute structure. This method is an extension of the analysis previously developed by the authors and implemented in the digital computer program CANO. The present analysis accounts for the effect of vertical ribbons in the solution for canopy shape and stress distribution. Parametric results are presented which relate the canopy stress distribution to such factors as vertical ribbon strength, number of gores, and gore shape in a ribbon parachute.

  18. Determination of stresses in gas-turbine disks subjected to plastic flow and creep

    NASA Technical Reports Server (NTRS)

    Millenson, M B; Manson, S S

    1948-01-01

    A finite-difference method previously presented for computing elastic stresses in rotating disks is extended to include the computation of the disk stresses when plastic flow and creep are considered. A finite-difference method is employed to eliminate numerical integration and to permit nontechnical personnel to make the calculations with a minimum of engineering supervision. Illustrative examples are included to facilitate explanation of the procedure by carrying out the computations on a typical gas-turbine disk through a complete running cycle. The results of the numerical examples presented indicate that plastic flow markedly alters the elastic-stress distribution.

  19. Higher Order Corrections in the CoLoRFulNNLO Framework

    NASA Astrophysics Data System (ADS)

    Somogyi, G.; Kardos, A.; Szőr, Z.; Trócsányi, Z.

    We discuss the CoLoRFulNNLO method for computing higher order radiative corrections to jet cross sections in perturbative QCD. We apply our method to the calculation of event shapes and jet rates in three-jet production in electron-positron annihilation. We validate our code by comparing our predictions to previous results in the literature and present the jet cone energy fraction distribution at NNLO accuracy. We also present preliminary NNLO results for the three-jet rate using the Durham jet clustering algorithm matched to resummed predictions at NLL accuracy, and a comparison to LEP data.

  20. Diagnostics for insufficiencies of posterior calculations in Bayesian signal inference.

    PubMed

    Dorn, Sebastian; Oppermann, Niels; Ensslin, Torsten A

    2013-11-01

    We present an error-diagnostic validation method for posterior distributions in Bayesian signal inference, an advancement of a previous work. It transfers deviations from the correct posterior into characteristic deviations from a uniform distribution of a quantity constructed for this purpose. We show that this method is able to reveal and discriminate several kinds of numerical and approximation errors, as well as their impact on the posterior distribution. For this we present four typical analytical examples of posteriors with incorrect variance, skewness, position of the maximum, or normalization. We show further how this test can be applied to multidimensional signals.
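
The core of such a diagnostic can be sketched with the probability integral transform. This is an illustrative toy in the spirit of the abstract, not the authors' implementation: if samples really come from the stated posterior, their CDF values are uniform on [0, 1], and the distance of their empirical distribution from uniformity exposes an incorrect posterior (here, one with a shifted maximum).

```python
import math
import random

# If s ~ P(s|d) and F is the CDF of the assumed posterior, then u = F(s)
# is uniform on [0, 1] exactly when the assumed posterior is correct.

def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def ks_uniform(u):
    """Kolmogorov-Smirnov distance between the empirical CDF of u and uniform."""
    u = sorted(u)
    n = len(u)
    return max(max(abs((i + 1) / n - v), abs(v - i / n))
               for i, v in enumerate(u))

random.seed(0)
n = 2000
samples = [random.gauss(0.0, 1.0) for _ in range(n)]   # "true" posterior draws

# Correct assumed posterior: u-values are uniform, KS distance is small.
d_good = ks_uniform([normal_cdf(s, 0.0, 1.0) for s in samples])
# Incorrect assumed posterior (maximum shifted): KS distance is large.
d_bad = ks_uniform([normal_cdf(s, 0.5, 1.0) for s in samples])
print(d_good, d_bad)
```

Incorrect variance, skewness, or normalization each leave their own characteristic imprint on the distribution of u, which is what lets the test discriminate between error types.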

  1. A Step-by-Step Picture of Pulsed (Time-Domain) NMR.

    ERIC Educational Resources Information Center

    Schwartz, Leslie J.

    1988-01-01

    Discusses a method for teaching pulsed (time-domain) NMR principles in as simple and pictorial a manner as possible. Uses xyz-coordinate figures and presents theoretical explanations using a Fourier-transformation spectrum. Assumes students have no previous knowledge of quantum mechanics. Usable for undergraduates. (MVL)

  2. Maximal use of kinematic information for the extraction of the mass of the top quark in single-lepton tt¯ events at DØ

    NASA Astrophysics Data System (ADS)

    Estrada Vigil, Juan Cruz

    The mass of the top (t) quark has been measured in the lepton+jets channel of tt¯ final states studied by the DØ and CDF experiments at Fermilab using data from Run I of the Tevatron pp¯ collider. The result published by DØ is 173.3 +/- 5.6(stat) +/- 5.5(syst) GeV. We present a different method to perform this measurement using the existing data. The new technique uses all available kinematic information in an event, and provides a significantly smaller statistical uncertainty than achieved in previous analyses. The preliminary results presented in this thesis indicate a statistical uncertainty for the extracted mass of the top quark of 3.5 GeV, which represents a significant improvement over the previous value of 5.6 GeV. The method of analysis is very general, and may be particularly useful in situations where there is a small signal and a large background.

  3. Inferring HIV Escape Rates from Multi-Locus Genotype Data

    DOE PAGES

    Kessinger, Taylor A.; Perelson, Alan S.; Neher, Richard A.

    2013-09-03

    Cytotoxic T-lymphocytes (CTLs) recognize viral protein fragments displayed by major histocompatibility complex molecules on the surface of virally infected cells and generate an anti-viral response that can kill the infected cells. Virus variants whose protein fragments are not efficiently presented on infected cells or whose fragments are presented but not recognized by CTLs therefore have a competitive advantage and spread rapidly through the population. We present a method that allows a more robust estimation of these escape rates from serially sampled sequence data. The proposed method accounts for competition between multiple escapes by explicitly modeling the accumulation of escape mutations and the stochastic effects of rare multiple mutants. Applying our method to serially sampled HIV sequence data, we estimate rates of HIV escape that are substantially larger than those previously reported. The method can be extended to complex escapes that require compensatory mutations. We expect our method to be applicable in other contexts such as cancer evolution where time series data is also available.
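
The single-escape version of this estimation problem can be sketched in a few lines. This is a toy, not the authors' model (which handles multiple interfering escapes and stochastic effects): a lone escape mutant with rate advantage eps grows logistically in frequency, and eps is recovered from serially sampled frequencies by least squares. All numbers are hypothetical.

```python
import math

# Logistic growth of an escape mutant's frequency under selection eps:
#   f(t) = f0*exp(eps*t) / (1 + f0*(exp(eps*t) - 1))
def freq(t, eps, f0):
    e = math.exp(eps * t)
    return f0 * e / (1 + f0 * (e - 1))

eps_true, f0 = 0.4, 0.01          # hypothetical escape rate and initial frequency
times = [0, 5, 10, 15, 20, 25]    # hypothetical sampling times
data = [freq(t, eps_true, f0) for t in times]

# Grid search for the escape rate minimizing the sum of squared errors.
best_eps = min((e / 1000 for e in range(1, 1001)),
               key=lambda cand: sum((freq(t, cand, f0) - d) ** 2
                                    for t, d in zip(times, data)))
print(best_eps)  # → 0.4
```

With real sequence data the frequencies are noisy counts, and competition between escapes makes a naive single-locus fit underestimate the rates, which is the bias the paper's method corrects.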

  4. Two dimensional wavefront retrieval using lateral shearing interferometry

    NASA Astrophysics Data System (ADS)

    Mancilla-Escobar, B.; Malacara-Hernández, Z.; Malacara-Hernández, D.

    2018-06-01

    A new zonal two-dimensional method for wavefront retrieval from a surface under test using lateral shearing interferometry is presented. A modified Saunders method and phase shifting techniques are combined to generate a method for wavefront reconstruction. The result is a wavefront with an error below 0.7 λ and without any global high frequency filtering. A zonal analysis over square cells along the surfaces is made, obtaining a polynomial expression for the wavefront deformations over each cell. The main advantage of this method over previously published methods is that a global filtering of high spatial frequencies is not present. Thus, a global smoothing of the wavefront deformations is avoided, allowing the detection of deformations with relatively small extensions, that is, with high spatial frequencies. Additionally, local curvature and low order aberration coefficients are obtained in each cell.

  5. An exact noniterative linear method for locating sources based on measuring receiver arrival times.

    PubMed

    Militello, C; Buenafuente, S R

    2007-06-01

    In this paper an exact, linear solution to the source localization problem based on the time of arrival at the receivers is presented. The method is unique in that the source's position can be obtained by solving a system of linear equations, three for a plane and four for a volume. This simplification means adding an additional receiver to the minimum mathematically required (3+1 in two dimensions and 4+1 in three dimensions). The equations are easily worked out for any receiver configuration and their geometrical interpretation is straightforward. Unlike other methods, the system of reference used to describe the receivers' positions is completely arbitrary. The relationship between this method and previously published ones is discussed, showing how the present, more general, method overcomes nonlinearity and unknown dependency issues.
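
The linearization trick described in the abstract can be sketched concretely. This is an assumed form, not the authors' derivation: each receiver i gives ||s − p_i||² = c²(t_i − t0)² with unknown source position s and emission time t0, and differencing the i-th equation against the first cancels the quadratic terms ||s||² and c²t0², leaving equations linear in (x, y, t0). Three differences (four receivers, i.e., 3+1) determine a planar source exactly. The geometry and values below are hypothetical.

```python
import numpy as np

c = 343.0                                   # propagation speed (e.g., sound), m/s
source = np.array([3.0, -2.0])              # unknown, to be recovered
t0 = 0.05                                   # unknown emission time
receivers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0], [7.0, 9.0]])
toa = t0 + np.linalg.norm(receivers - source, axis=1) / c   # measured arrivals

# Difference each receiver's equation against receiver 1:
#   -2(p_i - p_1)·s + 2c²(t_i - t_1) t0 = c²(t_i² - t_1²) - (|p_i|² - |p_1|²)
p1, t1 = receivers[0], toa[0]
A, b = [], []
for p, t in zip(receivers[1:], toa[1:]):
    A.append([-2 * (p[0] - p1[0]), -2 * (p[1] - p1[1]), 2 * c**2 * (t - t1)])
    b.append(c**2 * (t**2 - t1**2) - (p @ p - p1 @ p1))
x, y, t0_est = np.linalg.solve(np.array(A), np.array(b))
print(x, y, t0_est)   # recovers the source position and emission time
```

Because the system is linear, no iteration or initial guess is needed, and the receiver coordinates can be expressed in any convenient frame, consistent with the abstract's claims.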

  6. Calculation of photoionization cross section near auto-ionizing lines and magnesium photoionization cross section near threshold

    NASA Technical Reports Server (NTRS)

    Moore, E. N.; Altick, P. L.

    1972-01-01

    The research performed is briefly reviewed. A simple method was developed for the calculation of continuum states of atoms when autoionization is present. The method was employed to give the first theoretical cross section for beryllium and magnesium; the results indicate that the values used previously at threshold were sometimes seriously in error. These threshold values have potential applications in astrophysical abundance estimates.

  7. Do Students Who Get Low Grades Only in Research Methods Need the Same Help as Students Who Get Low Grades in All Topics in Psychology?

    ERIC Educational Resources Information Center

    Barry, John A.

    2012-01-01

    Some psychology students achieve high grades in all classes except for research methods (RM). Previous research has usually treated low levels of achievement in RM as a unitary phenomenon, without reference to the grades the student is achieving in other subjects. The present internet survey explored preferences for learning RM in 140 psychology…

  8. Solid state synthesis of poly(dichlorophosphazene)

    DOEpatents

    Allen, Christopher W.; Hneihen, Azzam S.; Peterson, Eric S.

    2001-01-01

    A method for making poly(dichlorophosphazene) using solid state reactants is disclosed and described. The present invention improves upon previous methods by removing the need for chlorinated hydrocarbon solvents, eliminating complicated equipment and simplifying the overall process by providing a "single pot" two step reaction sequence. This may be accomplished by the condensation reaction of raw materials in the melt phase of the reactants and in the absence of an environmentally damaging solvent.

  9. [A brief history of resuscitation - the influence of previous experience on modern techniques and methods].

    PubMed

    Kucmin, Tomasz; Płowaś-Goral, Małgorzata; Nogalski, Adam

    2015-02-01

    Cardiopulmonary resuscitation (CPR) is a relatively novel branch of medical science; however, the first descriptions of mouth-to-mouth ventilation are to be found in the Bible, and the literature is full of descriptions of different resuscitation methods - from flagellation and ventilation with bellows, through hanging victims upside down and compressing the chest to stimulate ventilation, to rectal fumigation with tobacco smoke. The modern history of CPR starts with Kouwenhoven et al., who in 1960 published a paper on heart massage through chest compressions. Shortly after that, in 1961, Peter Safar presented a paradigm promoting opening the airway, performing rescue breaths and chest compressions. The first CPR guidelines were published in 1966. Since that time the guidelines have been modified and improved numerous times by the two leading world expert organizations, the ERC (European Resuscitation Council) and the AHA (American Heart Association), and published in a new version every 5 years; the 2010 guidelines are currently in force. In this paper the authors attempt to present the history of the development of resuscitation techniques and methods, and to assess the influence of previous lifesaving methods on today's technologies, equipment and guidelines, which help save those whose lives are in danger due to sudden cardiac arrest.

  10. Wave optics theory and 3-D deconvolution for the light field microscope

    PubMed Central

    Broxton, Michael; Grosenick, Logan; Yang, Samuel; Cohen, Noy; Andalman, Aaron; Deisseroth, Karl; Levoy, Marc

    2013-01-01

    Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens. It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method. PMID:24150383
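
The iterative deconvolution step can be illustrated with a 1-D Richardson-Lucy sketch. This is only an illustration of the general idea: the paper's solver is a 3-D, wave-optics-based variant run on a GPU, whereas here a known 1-D PSF blurs two point sources and the multiplicative update x ← x · Kᵀ(y / Kx) iteratively sharpens the estimate. The PSF and object are hypothetical.

```python
import numpy as np

def convolve(x, psf):
    return np.convolve(x, psf, mode="same")

psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])      # symmetric blur kernel
obj = np.zeros(64)
obj[20], obj[40] = 1.0, 0.5                        # two point sources
blurred = convolve(obj, psf)                       # simulated measurement

# Richardson-Lucy iterations from a flat, positive initial guess.
est = np.full_like(blurred, blurred.mean())
for _ in range(200):
    ratio = blurred / np.maximum(convolve(est, psf), 1e-12)
    est *= convolve(ratio, psf[::-1])              # psf is symmetric here

print(np.abs(est - obj).sum(), np.abs(blurred - obj).sum())
```

In the light field setting the forward model K encodes the wave-optics PSF at each depth, and the dense spatio-angular sampling away from the native object plane is what makes the inverse problem well enough posed to recover high-frequency detail.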

  11. Determination of free sulphydryl groups in wheat gluten under the influence of different time and temperature of incubation: method validation.

    PubMed

    Rakita, Slađana; Pojić, Milica; Tomić, Jelena; Torbica, Aleksandra

    2014-05-01

    The aim of the present study was to determine the characteristics of an analytical method for the determination of free sulphydryl (SH) groups of wheat gluten, performed with prior gluten incubation for variable times (45, 90 and 135 min) at variable temperatures (30 and 37°C), in order to determine its fitness for purpose. It was observed that increases in temperature and gluten incubation time increased the amount of free SH groups, with more dynamic changes at 37°C. The method characteristics identified as relevant were linearity, limit of detection, limit of quantification, precision (repeatability and reproducibility) and measurement uncertainty, which were checked within the validation protocol, while method performance was monitored by X- and R-control charts. The identified method characteristics demonstrated acceptable fitness for purpose when the assay included prior gluten incubation at 30°C. Although the method repeatability at 37°C was acceptable, the corresponding reproducibility did not meet the performance criterion based on the HORRAT value (HORRAT < 2).
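
The HORRAT criterion used as the reproducibility benchmark above is easy to compute. The numbers below are hypothetical, not the study's data: the Horwitz function predicts an expected reproducibility RSD from the concentration expressed as a mass fraction, and HORRAT is the ratio of the observed RSD to that prediction, with HORRAT ≤ 2 as the usual acceptance limit.

```python
# Horwitz predicted reproducibility RSD (in %) as a function of the
# analyte concentration C expressed as a mass fraction:
#   PRSD_R = 2 * C^(-0.1505)
# HORRAT = observed RSD_R / PRSD_R; values <= 2 are conventionally acceptable.

def horrat(observed_rsd_percent, mass_fraction):
    predicted_rsd = 2 * mass_fraction ** -0.1505
    return observed_rsd_percent / predicted_rsd

# Hypothetical example: analyte at 0.1% mass fraction (C = 1e-3) with an
# observed between-laboratory RSD of 12%.
h = horrat(12.0, 1e-3)
print(round(h, 2), "acceptable" if h <= 2 else "fails criterion")
```

A method whose observed RSD at the same concentration were 5% would pass the criterion, since the predicted Horwitz RSD at C = 1e-3 is about 5.7%.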

  12. 3D motion and strain estimation of the heart: initial clinical findings

    NASA Astrophysics Data System (ADS)

    Barbosa, Daniel; Hristova, Krassimira; Loeckx, Dirk; Rademakers, Frank; Claus, Piet; D'hooge, Jan

    2010-03-01

    The quantitative assessment of regional myocardial function remains an important goal in clinical cardiology. As such, tissue Doppler imaging and speckle tracking based methods have been introduced to estimate local myocardial strain. Recently, volumetric ultrasound has become more readily available, therefore allowing the 3D estimation of motion and myocardial deformation. Our lab has previously presented a method based on spatio-temporal elastic registration of ultrasound volumes to estimate myocardial motion and deformation in 3D, overcoming the spatial limitations of the existing methods. This method was optimized on simulated data sets in previous work and is currently being tested in a clinical setting. In this manuscript, 10 healthy volunteers, 10 patients with myocardial infarction and 10 patients with arterial hypertension were included. The cardiac strain values extracted with the proposed method were compared with those estimated with 1D tissue Doppler imaging and 2D speckle tracking in all patient groups. Although the absolute values of the 3D strain components assessed by this new methodology were not identical to those of the reference methods, the relationship between the different patient groups was similar.

  13. Statistical Algorithms Accounting for Background Density in the Detection of UXO Target Areas at DoD Munitions Sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzke, Brett D.; Wilson, John E.; Hathaway, J.

    2008-02-12

    Statistically defensible methods are presented for developing geophysical detector sampling plans and analyzing data for munitions response sites where unexploded ordnance (UXO) may exist. Detection methods for distinguishing areas of elevated anomaly density from background density are shown. Additionally, methods are described which aid in the choice of transect pattern and spacing to assure, with a specified degree of confidence, that a target area (TA) of specific size, shape, and anomaly density will be identified using the detection methods. Methods for evaluating the sensitivity of designs to variation in certain parameters are also discussed. The methods presented have been incorporated into the Visual Sample Plan (VSP) software (free at http://dqo.pnl.gov/vsp) and demonstrated at multiple sites in the United States. Application examples from actual transect designs and surveys from the previous two years are demonstrated.
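
    The transect-spacing logic above rests on a simple geometric fact: for parallel transects and a randomly placed target, the chance that at least one transect crosses the target is governed by the ratio of target width to transect spacing. A simplified sketch of that geometry only (VSP's actual designs additionally account for anomaly density, detector footprint, and detection probability):

```python
def traversal_probability(target_width, transect_spacing):
    """Probability that parallel transects at a given spacing cross a
    target of given perpendicular width, for a uniformly random target
    position. Simplified geometry; not VSP's full design calculation."""
    if transect_spacing <= 0:
        raise ValueError("spacing must be positive")
    return min(1.0, target_width / transect_spacing)
```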

  14. Blurred image recognition by legendre moment invariants

    PubMed Central

    Zhang, Hui; Shu, Huazhong; Han, Guo-Niu; Coatrieux, Gouenou; Luo, Limin; Coatrieux, Jean-Louis

    2010-01-01

    Processing blurred images is a key problem in many image applications. Existing methods to obtain blur invariants which are invariant with respect to centrally symmetric blur are based on geometric moments or complex moments. In this paper, we propose a new method to construct a set of blur invariants using the orthogonal Legendre moments. Some important properties of Legendre moments for the blurred image are presented and proved. The performance of the proposed descriptors is evaluated with various point-spread functions and different image noises. The comparison of the present approach with previous methods in terms of pattern recognition accuracy is also provided. The experimental results show that the proposed descriptors are more robust to noise and have better discriminative power than the methods based on geometric or complex moments. PMID:19933003
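
    The building blocks of the proposed invariants are ordinary orthogonal Legendre moments; a sketch of computing L_{mn} on a midpoint grid over [-1, 1] x [-1, 1] (the blur-invariant combinations constructed in the paper are not reproduced here):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_moment(img, m, n):
    """Orthogonal Legendre moment L_{mn} of a grayscale image whose
    support is mapped to [-1, 1] x [-1, 1], with midpoint sampling."""
    H, W = img.shape
    x = (np.arange(W) + 0.5) * 2.0 / W - 1.0
    y = (np.arange(H) + 0.5) * 2.0 / H - 1.0
    Pm = legendre.legval(x, [0] * m + [1])   # P_m evaluated on x
    Pn = legendre.legval(y, [0] * n + [1])   # P_n evaluated on y
    norm = (2 * m + 1) * (2 * n + 1) / 4.0
    dx, dy = 2.0 / W, 2.0 / H
    return norm * (Pn[:, None] * Pm[None, :] * img).sum() * dx * dy
```

    For a constant unit image, L_00 is 1 and all odd-order moments vanish by symmetry, which is a quick sanity check of the normalisation.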

  15. Convergence of methods for coupling of microscopic and mesoscopic reaction-diffusion simulations

    NASA Astrophysics Data System (ADS)

    Flegg, Mark B.; Hellander, Stefan; Erban, Radek

    2015-05-01

    In this paper, three multiscale methods for coupling of mesoscopic (compartment-based) and microscopic (molecular-based) stochastic reaction-diffusion simulations are investigated. Two of the three methods that will be discussed in detail have been previously reported in the literature: the two-regime method (TRM) and the compartment-placement method (CPM). The third method, introduced and analysed in this paper, is called the ghost cell method (GCM), since it works by constructing a "ghost cell" in which molecules can disappear and jump into the compartment-based simulation. A comparison of the sources of error is presented. The convergence properties of this error are studied as the time step Δt (for updating the molecular-based part of the model) approaches zero. It is found that the error behaviour depends on another fundamental computational parameter h, the compartment size in the mesoscopic part of the model. Two important limiting cases, which appear in applications, are considered: (i) Δt → 0 with h fixed; (ii) Δt → 0 and h → 0 such that √(Δt)/h is fixed. The error of the previously developed approaches (the TRM and CPM) converges to zero only in limiting case (ii), but not in case (i). It is shown that the error of the GCM converges in limiting case (i). Thus the GCM is superior to previous coupling techniques if the mesoscopic description is much coarser than the microscopic part of the model.

  16. Ideal-Magnetohydrodynamic-Stable Tilting in Field-Reversed Configurations

    NASA Astrophysics Data System (ADS)

    Kanno, Ryutaro; Ishida, Akio; Steinhauer, Loren

    1995-02-01

    The tilting mode in field-reversed configurations (FRC) is examined using ideal-magnetohydrodynamic stability theory. Tilting, a global mode, is the greatest threat for disruption of FRC confinement. Previous studies uniformly found tilting to be unstable in ideal theory; the objective here is to ascertain whether stable equilibria were overlooked in past work. Solving the variational problem with the Rayleigh-Ritz technique, tilting-stable equilibria are found for sufficiently hollow current profiles and sufficient racetrackness of the separatrix shape. Because these equilibria were not examined previously, the present conclusion is quite surprising; consequently, checks of the method are offered. Even so, it cannot yet be claimed with complete certainty that stability has been proved: absolute confirmation of ideal-stable tilting awaits the application of more complete methods.

  17. Prediction of binding hot spot residues by using structural and evolutionary parameters.

    PubMed

    Higa, Roberto Hiroshi; Tozzi, Clésio Luis

    2009-07-01

    In this work, we present a method for predicting hot spot residues by using a set of structural and evolutionary parameters. Unlike previous studies, we use a set of parameters which do not depend on the structure of the protein in complex, so that the predictor can also be used when the interface region is unknown. Despite the fact that no information concerning proteins in complex is used for prediction, the application of the method to a compiled dataset described in the literature achieved a performance of 60.4%, as measured by F-Measure, corresponding to a recall of 78.1% and a precision of 49.5%. This result is higher than those reported by previous studies using the same data set.
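
    For reference, the F-Measure quoted above is the harmonic mean of precision and recall; plugging in the reported precision and recall approximately reproduces the reported 60.4%:

```python
def f_measure(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2.0 * precision * recall / (precision + recall)
```

    f_measure(0.495, 0.781) evaluates to about 0.606, matching the abstract's figure up to rounding of the underlying counts.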

  18. Relationship between Defect Size and Fatigue Life Distributions in Al-7 Pct Si-Mg Alloy Castings

    NASA Astrophysics Data System (ADS)

    Tiryakioğlu, Murat

    2009-07-01

    A new method for predicting the variability in fatigue life of castings was developed by combining the size distribution for the fatigue-initiating defects and a fatigue life model based on the Paris-Erdoğan law for crack propagation. Two datasets for the fatigue-initiating defects in Al-7 pct Si-Mg alloy castings, reported previously in the literature, were used to demonstrate that (1) the size of fatigue-initiating defects follow the Gumbel distribution; (2) the crack propagation model developed previously provides respectable fits to experimental data; and (3) the method developed in the present study expresses the variability in both datasets, almost as well as the lognormal distribution and better than the Weibull distribution.
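
    Finding (1) — fatigue-initiating defect sizes following the Gumbel (largest extreme value) distribution — is commonly checked with a probability plot, in which -ln(-ln F) is linear in the defect size. A sketch using median-rank plotting positions (an assumption; the paper's estimation details may differ):

```python
import numpy as np

def gumbel_fit(defect_sizes):
    """Fit Gumbel location mu and scale beta by linearising the CDF:
    F(x) = exp(-exp(-(x - mu)/beta))  =>  -ln(-ln F) = (x - mu)/beta."""
    x = np.sort(np.asarray(defect_sizes, float))
    n = len(x)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median-rank estimate
    y = -np.log(-np.log(F))
    slope, intercept = np.polyfit(x, y, 1)
    beta = 1.0 / slope
    mu = -intercept * beta
    return mu, beta
```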

  19. Classification of Uxo by Principal Dipole Polarizability

    NASA Astrophysics Data System (ADS)

    Kappler, K. N.

    2010-12-01

    Data acquired by multiple-transmitter, multiple-receiver time-domain electromagnetic devices show great potential for determining geometric and compositional information about near-surface conductive targets. Here we present an analysis of data from one such system: the Berkeley Unexploded-ordnance Discriminator (BUD). BUD data are succinctly reduced by processing the multi-static data matrices to obtain magnetic dipole polarizability matrices for each time gate. When viewed over all time gates, the projections of the data onto the principal polar axes yield so-called polarizability curves. These curves are especially well suited to discriminating between subsurface conductivity anomalies corresponding to rotationally symmetric objects and those corresponding to irregularly shaped objects. The curves have previously been employed successfully as library elements in a pattern recognition scheme aimed at discriminating harmless scrap metal from dangerous intact unexploded ordnance. However, previous polarizability-curve matching methods have only been applied at field sites known a priori to be contaminated by a single type of ordnance, and furthermore, the particular ordnance present in the subsurface was known to be large; thus signal amplitude was a key element in the discrimination process. The work presented here applies feature-based pattern classification techniques to BUD field data where more than 20 categories of object are present. Data soundings from a calibration grid at the Yuma, AZ proving ground are used in a cross-validation study to calibrate the pattern recognition method. The resultant method is then applied to a Blind Test Grid. Results indicate that when lone UXO are present and SNR is reasonably high, Polarizability Curve Matching successfully discriminates UXO from scrap metal even when a broad range of objects is present.

  20. Weibull Modulus Estimated by the Non-linear Least Squares Method: A Solution to Deviation Occurring in Traditional Weibull Estimation

    NASA Astrophysics Data System (ADS)

    Li, T.; Griffiths, W. D.; Chen, J.

    2017-11-01

    The Maximum Likelihood method and the Linear Least Squares (LLS) method have been widely used to estimate Weibull parameters for reliability of brittle and metal materials. In the last 30 years, many researchers focused on the bias of Weibull modulus estimation, and some improvements have been achieved, especially in the case of the LLS method. However, there is a shortcoming in these methods for a specific type of data, where the lower tail deviates dramatically from the well-known linear fit in a classic LLS Weibull analysis. This deviation can be commonly found from the measured properties of materials, and previous applications of the LLS method on this kind of dataset present an unreliable linear regression. This deviation was previously thought to be due to physical flaws (i.e., defects) contained in materials. However, this paper demonstrates that this deviation can also be caused by the linear transformation of the Weibull function, occurring in the traditional LLS method. Accordingly, it may not be appropriate to carry out a Weibull analysis according to the linearized Weibull function, and the Non-linear Least Squares method (Non-LS) is instead recommended for the Weibull modulus estimation of casting properties.
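
    The Non-LS idea is to fit the untransformed Weibull CDF directly, avoiding the distortion that the double-log linearisation introduces in the lower tail. A minimal sketch using a grid search as the optimiser (an assumption for illustration; the paper does not prescribe this particular solver):

```python
import numpy as np

def weibull_nls(data, m_grid, eta_grid):
    """Least-squares fit of F(x) = 1 - exp(-(x/eta)**m) to the empirical
    CDF, without the log-log linearisation of the traditional LLS method."""
    x = np.sort(np.asarray(data, float))
    n = len(x)
    F_emp = (np.arange(1, n + 1) - 0.5) / n   # empirical plotting positions
    best = (np.inf, None, None)
    for m in m_grid:
        for eta in eta_grid:
            F = 1.0 - np.exp(-(x / eta) ** m)
            sse = float(((F - F_emp) ** 2).sum())
            if sse < best[0]:
                best = (sse, m, eta)
    return best[1], best[2]
```

    In practice a continuous optimiser would replace the grid, but the objective — squared error against the untransformed CDF — is the point of the method.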

  1. Expediting Combinatorial Data Set Analysis by Combining Human and Algorithmic Analysis.

    PubMed

    Stein, Helge Sören; Jiao, Sally; Ludwig, Alfred

    2017-01-09

    A challenge in combinatorial materials science remains the efficient analysis of X-ray diffraction (XRD) data and its correlation to functional properties. Rapid identification of phase-regions and proper assignment of corresponding crystal structures is necessary to keep pace with the improved methods for synthesizing and characterizing materials libraries. Therefore, a new modular software package called htAx (high-throughput analysis of X-ray and functional properties data) is presented that couples human intelligence tasks used for "ground-truth" phase-region identification with subsequent unbiased verification by an algorithm to efficiently analyze which phases are present in a materials library. Identified phases and phase-regions may then be correlated to functional properties in an expedited manner. To prove the functionality of htAx, two previously published XRD benchmark data sets of the materials systems Al-Cr-Fe-O and Ni-Ti-Cu are analyzed by htAx. The analysis of ∼1000 XRD patterns takes less than 1 day with htAx. The proposed method reliably identifies phase-region boundaries and robustly identifies multiphase structures. The method also addresses the problem of identifying regions with previously unpublished crystal structures using a special daisy ternary plot.

  2. Refinements to the method of epicentral location based on surface waves from ambient seismic noise: introducing Love waves

    USGS Publications Warehouse

    Levshin, Anatoli L.; Barmin, Mikhail P.; Moschetti, Morgan P.; Mendoza, Carlos; Ritzwoller, Michael H.

    2012-01-01

    The purpose of this study is to develop and test a modification to a previous method of regional seismic event location based on Empirical Green’s Functions (EGFs) produced from ambient seismic noise. Elastic EGFs between pairs of seismic stations are determined by cross-correlating long ambient noise time-series recorded at the two stations. The EGFs principally contain Rayleigh- and Love-wave energy on the vertical and transverse components, respectively, and we utilize these signals between about 5 and 12 s period. The previous method, based exclusively on Rayleigh waves, may yield biased epicentral locations for certain event types with hypocentral depths between 2 and 5 km. Here we present theoretical arguments that show how Love waves can be introduced to reduce or potentially eliminate the bias. We also present applications of Rayleigh- and Love-wave EGFs to locate 10 reference events in the western United States. The separate Rayleigh and Love epicentral locations and the joint locations using a combination of the two waves agree to within 1 km distance, on average, but confidence ellipses are smallest when both types of waves are used.
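
    The EGF extraction step described above — cross-correlating long simultaneous noise records from a station pair — can be sketched as follows (pre-processing used in practice, such as spectral whitening and one-bit normalisation, is omitted):

```python
import numpy as np

def noise_crosscorrelation(trace_a, trace_b):
    """Cross-correlate two ambient-noise records; the stack of such
    correlations over long time spans approximates the empirical Green's
    function between the two stations. Returns (lags, correlation)."""
    a = trace_a - np.mean(trace_a)
    b = trace_b - np.mean(trace_b)
    xc = np.correlate(a, b, mode="full")
    lags = np.arange(-len(b) + 1, len(a))
    return lags, xc
```

    The lag of the correlation peak reflects the travel time between the stations, which is what the location method inverts for.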

  3. Torso-Tank Validation of High-Resolution Electrogastrography (EGG): Forward Modelling, Methodology and Results.

    PubMed

    Calder, Stefan; O'Grady, Greg; Cheng, Leo K; Du, Peng

    2018-04-27

    Electrogastrography (EGG) is a non-invasive method for measuring gastric electrical activity. Recent simulation studies have attempted to extend the current clinical utility of the EGG, in particular by providing a theoretical framework for distinguishing specific gastric slow wave dysrhythmias. In this paper we implement an experimental setup called a 'torso-tank' with the aim of expanding and experimentally validating these previous simulations. The torso-tank was developed using an adult male torso phantom with 190 electrodes embedded throughout the torso. The gastric slow waves were reproduced using an artificial current source capable of producing 3D electrical fields. Multiple gastric dysrhythmias were reproduced based on high-resolution mapping data from cases of human gastric dysfunction (gastric re-entry, conduction blocks and ectopic pacemakers) in addition to normal test data. Each case was recorded and compared to the previously-presented simulated results. Qualitative and quantitative analyses were performed to define the accuracy, showing [Formula: see text] 1.8% difference, [Formula: see text] 0.99 correlation, and [Formula: see text] 0.04 normalised RMS error between experimental and simulated findings. These results reaffirm the previous findings, and together these methods present a promising morphology-based methodology for advancing the understanding and clinical applications of EGG.

  4. Multiple reaction monitoring assay based on conventional liquid chromatography and electrospray ionization for simultaneous monitoring of multiple cerebrospinal fluid biomarker candidates for Alzheimer's disease

    PubMed Central

    Choi, Yong Seok; Lee, Kelvin H.

    2016-01-01

    Alzheimer's disease (AD) is the most common type of dementia, but early and accurate diagnosis remains challenging. Previously, a panel of cerebrospinal fluid (CSF) biomarker candidates distinguishing AD and non-AD CSF accurately (> 90%) was reported. Furthermore, a multiple reaction monitoring (MRM) assay based on nano liquid chromatography tandem mass spectrometry (nLC-MS/MS) was developed to help validate putative AD CSF biomarker candidates including proteins from the panel. Despite the good performance of the MRM assay, wide acceptance may be challenging because of limited availability of nLC-MS/MS systems in laboratories. Thus, here, a new MRM assay based on conventional LC-MS/MS is presented. This method monitors 16 peptides representing 16 (of 23) biomarker candidates that belonged to the previous AD CSF panel. A 30-times more concentrated sample than the sample used for the previous study was loaded onto a high capacity trap column, and all 16 MRM transitions showed good linearity (average R2 = 0.966), intra-day reproducibility (average coefficient of variance (CV) = 4.78%), and inter-day reproducibility (average CV = 9.85%). The present method has several advantages such as a shorter analysis time, no possibility of target variability, and no need for an internal standard. PMID:26404792
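
    The linearity and reproducibility figures quoted above (R2 and CV) are simple summary statistics; a sketch of how each MRM transition would be summarised:

```python
import numpy as np

def calibration_r2(conc, response):
    """R^2 of a straight-line calibration fit (MRM transition linearity)."""
    conc = np.asarray(conc, float)
    response = np.asarray(response, float)
    slope, intercept = np.polyfit(conc, response, 1)
    resid = response - (slope * conc + intercept)
    ss_tot = ((response - response.mean()) ** 2).sum()
    return 1.0 - (resid ** 2).sum() / ss_tot

def cv_percent(replicates):
    """Coefficient of variance (%) for intra-/inter-day reproducibility."""
    r = np.asarray(replicates, float)
    return 100.0 * r.std(ddof=1) / r.mean()
```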

  5. Comparing electrochemical performance of transition metal silicate cathodes and chevrel phase Mo6S8 in the analogous rechargeable Mg-ion battery system

    NASA Astrophysics Data System (ADS)

    Chen, Xinzhi; Bleken, Francesca L.; Løvvik, Ole Martin; Vullum-Bruer, Fride

    2016-07-01

    Polyanion based silicate materials, MgMSiO4 (M = Fe, Mn, Co), previously reported to be promising cathode materials for Mg-ion batteries, have been re-examined. Both sol-gel and molten salt methods are employed to synthesize MgMSiO4 composites. Mo6S8 is synthesized by a molten salt method combined with Cu leaching and investigated in the equivalent electrochemical system as a benchmark. Electrochemical measurements for Mo6S8 performed using the 2nd generation electrolyte show results similar to those reported in the literature. The electrochemical performance of the silicate materials, on the other hand, does not show the promising results previously reported. A thorough study of these published results is presented here and compared to the current experimental data on the same material system. It appears that there are certain inconsistencies in the published results which cannot be explained. To further corroborate the present experimental results, atomic-scale calculations from first principles are performed, demonstrating that diffusion barriers for Mg diffusion in MgMSiO4 are very high. In conclusion, MgMSiO4 (M = Fe, Mn, Co) olivine materials do not seem to be as good candidates for cathode materials in Mg-ion batteries as previously reported.

  6. Multiple reaction monitoring assay based on conventional liquid chromatography and electrospray ionization for simultaneous monitoring of multiple cerebrospinal fluid biomarker candidates for Alzheimer's disease.

    PubMed

    Choi, Yong Seok; Lee, Kelvin H

    2016-03-01

    Alzheimer's disease (AD) is the most common type of dementia, but early and accurate diagnosis remains challenging. Previously, a panel of cerebrospinal fluid (CSF) biomarker candidates distinguishing AD and non-AD CSF accurately (>90 %) was reported. Furthermore, a multiple reaction monitoring (MRM) assay based on nano liquid chromatography tandem mass spectrometry (nLC-MS/MS) was developed to help validate putative AD CSF biomarker candidates including proteins from the panel. Despite the good performance of the MRM assay, wide acceptance may be challenging because of limited availability of nLC-MS/MS systems in laboratories. Thus, here, a new MRM assay based on conventional LC-MS/MS is presented. This method monitors 16 peptides representing 16 (of 23) biomarker candidates that belonged to the previous AD CSF panel. A 30-times more concentrated sample than the sample used for the previous study was loaded onto a high capacity trap column, and all 16 MRM transitions showed good linearity (average R(2) = 0.966), intra-day reproducibility (average coefficient of variance (CV) = 4.78 %), and inter-day reproducibility (average CV = 9.85 %). The present method has several advantages such as a shorter analysis time, no possibility of target variability, and no need for an internal standard.

  7. Gay-Lussac Experiment

    ERIC Educational Resources Information Center

    Ladino, L. A.; Rondón, S. H.

    2015-01-01

    In this paper, we present a low-cost method to study Gay-Lussac's law. We use a heating wire wrapped around a test tube to heat the air inside, and a solid-state pressure sensor, which requires prior calibration, to measure the pressure in the test tube.
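
    The law being demonstrated relates pressure and absolute temperature at constant volume; calibrated sensor readings would be checked against:

```python
def gay_lussac_pressure(p1, t1_kelvin, t2_kelvin):
    """Gay-Lussac's law at constant volume: P1/T1 = P2/T2,
    with temperatures in kelvin."""
    return p1 * t2_kelvin / t1_kelvin
```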

  8. Auditory Scene Analysis: An Attention Perspective

    ERIC Educational Resources Information Center

    Sussman, Elyse S.

    2017-01-01

    Purpose: This review article provides a new perspective on the role of attention in auditory scene analysis. Method: A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal…

  9. The Socio-Technical Design of a Library and Information Science Collaboratory

    ERIC Educational Resources Information Center

    Lassi, Monica; Sonnenwald, Diane H.

    2013-01-01

    Introduction: We present a prototype collaboratory, a socio-technical platform to support sharing research data collection instruments in library and information science. No previous collaboratory has attempted to facilitate sharing digital research data collection instruments among library and information science researchers. Method: We have…

  10. The application of moment methods to the analysis of fluid electrical conductivity logs in boreholes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loew, S.; Tsang, Chin-Fu; Hale, F.V.

    1990-08-01

    This report is one of a series documenting the results of the Nagra-DOE Cooperative (NDC-I) research program in which the cooperating scientists explore the geological, geophysical, hydrological, geochemical, and structural effects anticipated from the use of a rock mass as a geologic repository for nuclear waste. Previous reports have presented a procedure for analyzing a time sequence of wellbore electric conductivity logs in order to obtain outflow parameters of fractures intercepted by the borehole, and a code, called BORE, used to simulate borehole fluid conductivity profiles given these parameters. The present report describes three new direct (not iterative) methods for analyzing a short time series of electric conductivity logs based on moment quantities of the individual outflow peaks and applies them to synthetic as well as to field data. The methods show promising results and are discussed in terms of their respective advantages and limitations. In particular, it is shown that one of these methods, the so-called "Partial Moment Method", is capable of reproducing packer test results from field experiments in the Leuggern deep well within a factor of three, which is below the range of what is recognized as the precision of packer tests themselves. Furthermore, the new method is much quicker than the previously used iterative fitting procedure and is even capable of handling transient fracture outflow conditions. 20 refs., 11 figs., 10 tabs.
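
    The moment quantities underlying such direct methods are spatial moments of individual conductivity peaks; a minimal sketch of the idea only (the report's Partial Moment Method involves further steps):

```python
import numpy as np

def peak_moments(z, c):
    """Zeroth moment (peak area, proportional to solute mass and hence to
    fracture outflow) and normalised first moment (peak centre, whose
    drift between logs tracks borehole flow) of one conductivity peak
    sampled at uniformly spaced depths z."""
    dz = z[1] - z[0]
    m0 = c.sum() * dz
    m1 = (z * c).sum() * dz / m0
    return m0, m1
```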

  11. Detection of no-model input-output pairs in closed-loop systems.

    PubMed

    Potts, Alain Segundo; Alvarado, Christiam Segundo Morales; Garcia, Claudio

    2017-11-01

    The detection of no-model input-output (IO) pairs is important because it can speed up the multivariable system identification process, since all pairs with null transfer functions are discarded beforehand, and it can also improve the quality of the identified model, thus improving the performance of model-based controllers. The available literature focuses only on the open-loop case, since in open loop there is no controller forcing the main diagonal of the transfer matrix to one and all other terms to zero. In this paper, a modification of a previous method for detecting no-model IO pairs in open-loop systems is presented, adapted to perform this duty in closed-loop systems. Tests are performed using both the traditional methods and the proposed one to show its effectiveness. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  12. Effective Diagnosis of Alzheimer's Disease by Means of Association Rules

    NASA Astrophysics Data System (ADS)

    Chaves, R.; Ramírez, J.; Górriz, J. M.; López, M.; Salas-Gonzalez, D.; Illán, I.; Segovia, F.; Padilla, P.

    In this paper we present a novel classification method of SPECT images for the early diagnosis of Alzheimer's disease (AD). The proposed method is based on Association Rules (ARs), which aim to discover interesting associations between attributes in the database. The system first uses voxel-as-features (VAF) and Activation Estimation (AE) to find three-dimensional activated brain regions of interest (ROIs) for each patient. These ROIs then serve as inputs for mining ARs between activated blocks for controls, with a specified minimum support and minimum confidence. ARs are mined in supervised mode, using information previously extracted from the most discriminant rules to centre interest on the relevant brain areas, reducing the computational requirements of the system. Finally, classification is performed according to the number of previously mined rules verified by each subject, yielding up to 95.87% classification accuracy and thus outperforming recently developed methods for AD diagnosis.
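
    The minimum support and minimum confidence thresholds mentioned above are the standard AR quantities; a toy sketch over itemset transactions (in the paper's setting the items are activated brain ROIs):

```python
def rule_support_confidence(transactions, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent.
    transactions: list of sets of items."""
    a, c = set(antecedent), set(consequent)
    n = len(transactions)
    n_a = sum(1 for t in transactions if a <= t)
    n_ac = sum(1 for t in transactions if (a | c) <= t)
    support = n_ac / n
    confidence = n_ac / n_a if n_a else 0.0
    return support, confidence
```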

  13. Correlation Filter Learning Toward Peak Strength for Visual Tracking.

    PubMed

    Sui, Yao; Wang, Guanghui; Zhang, Li

    2018-04-01

    This paper presents a novel visual tracking approach to correlation filter learning toward peak strength of correlation response. Previous methods leverage all features of the target and the immediate background to learn a correlation filter. Some features, however, may be distractive to tracking, like those from occlusion and local deformation, resulting in unstable tracking performance. This paper aims at solving this issue and proposes a novel algorithm to learn the correlation filter. The proposed approach, by imposing an elastic net constraint on the filter, can adaptively eliminate those distractive features in the correlation filtering. A new peak strength metric is proposed to measure the discriminative capability of the learned correlation filter. It is demonstrated that the proposed approach effectively strengthens the peak of the correlation response, leading to more discriminative performance than previous methods. Extensive experiments on a challenging visual tracking benchmark demonstrate that the proposed tracker outperforms most state-of-the-art methods.
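
    For context, the baseline correlation filter that such trackers build on can be learned in closed form in the frequency domain (a MOSSE-style ridge-regression sketch; the paper's elastic-net constraint, which removes distractive features, is not implemented here):

```python
import numpy as np

def learn_filter(images, desired_response, lam=1e-2):
    """Closed-form frequency-domain correlation filter:
    conj(H) = sum_i G * conj(F_i) / (sum_i F_i * conj(F_i) + lam)."""
    G = np.fft.fft2(desired_response)
    num, den = 0.0, lam
    for img in images:
        F = np.fft.fft2(img)
        num = num + G * np.conj(F)
        den = den + F * np.conj(F)
    return num / den

def respond(H_conj, image):
    """Correlation response of the filter; its peak locates the target."""
    return np.real(np.fft.ifft2(H_conj * np.fft.fft2(image)))
```

    The desired response is typically a narrow peak centred on the target; the learned filter then reproduces that peak when correlated with new frames.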

  14. An accelerated photo-magnetic imaging reconstruction algorithm based on an analytical forward solution and a fast Jacobian assembly method

    NASA Astrophysics Data System (ADS)

    Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.

    2016-10-01

    We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high resolution optical absorption images from these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. This new algorithm uses analytic methods for the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed first using synthetic data and afterwards using real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold compared to a single iteration of the FEM-based algorithm.

  15. Real-time defect detection on highly reflective curved surfaces

    NASA Astrophysics Data System (ADS)

    Rosati, G.; Boschetti, G.; Biondi, A.; Rossi, A.

    2009-03-01

    This paper presents an automated defect detection system for coated plastic components for the automotive industry. This research activity arose as an evolution of a previous study which employed a non-flat mirror to illuminate and inspect highly reflective curved surfaces. According to this method, the rays emitted from a light source are conveyed onto the surface under investigation by means of a suitably curved mirror. After reflection from the surface, the light rays are collected by a CCD camera, in which coating defects appear as shadows of various shapes and dimensions. In this paper we present an evolution of the above-mentioned method, introducing a simplified mirror set-up in order to reduce the cost and complexity of the defect detection system: a set of plane mirrors is employed instead of the curved one. Moreover, the inspection of parts with multiple bend radii is investigated. A prototype of the machine vision system has been developed in order to test this simplified method. The device is made up of a light projector, a set of plane mirrors for light ray reflection, a conveyor belt for handling components, a CCD camera and a desktop PC which performs image acquisition and processing. As in the previous system, defects are identified as shadows inside a high-brightness image. At the end of the paper, first experimental results are presented.

  16. Rapid Presentation of Emotional Expressions Reveals New Emotional Impairments in Tourette’s Syndrome

    PubMed Central

    Mermillod, Martial; Devaux, Damien; Derost, Philippe; Rieu, Isabelle; Chambres, Patrick; Auxiette, Catherine; Legrand, Guillaume; Galland, Fabienne; Dalens, Hélène; Coulangeon, Louise Marie; Broussolle, Emmanuel; Durif, Franck; Jalenques, Isabelle

    2013-01-01

    Objective: Based on a variety of empirical evidence obtained within the theoretical framework of embodiment theory, we considered it likely that motor disorders in Tourette’s syndrome (TS) would have emotional consequences for TS patients. However, previous research using emotional facial categorization tasks suggests that these consequences are limited to TS patients with obsessive-compulsive behaviors (OCB). Method: These studies used long stimulus presentations which allowed the participants to categorize the different emotional facial expressions (EFEs) on the basis of a perceptual analysis that might potentially hide a lack of emotional feeling for certain emotions. In order to reduce this perceptual bias, we used a rapid visual presentation procedure. Results: Using this new experimental method, we revealed different and surprising impairments on several EFEs in TS patients compared to matched healthy control participants. Moreover, a spatial frequency analysis of the visual signal processed by the patients suggests that these impairments may be located at a cortical level. Conclusion: The current study indicates that the rapid visual presentation paradigm makes it possible to identify various potential emotional disorders that were not revealed by the standard visual presentation procedures previously reported in the literature. Moreover, the spatial frequency analysis performed in our study suggests that emotional deficit in TS might lie at the level of temporal cortical areas dedicated to the processing of HSF visual information. PMID:23630481

  17. Audio Watermark Embedding Technique Applying Auditory Stream Segregation: "G-encoder Mark" Able to Be Extracted by Mobile Phone

    NASA Astrophysics Data System (ADS)

    Modegi, Toshio

    We are developing audio watermarking techniques that enable embedded data to be extracted by mobile phones. To do so, data must be embedded in frequency ranges where the human auditory response is prominent, so embedding tends to introduce audible noise. We previously proposed exploiting a two-channel stereo playback feature, in which the noise generated by the data-embedded left-channel signal is cancelled by the right-channel signal. However, that proposal has the practical problem of restricting where the extracting terminal can be located. In this paper, we propose synthesizing the noise-cancelling right-channel signal into the left-channel signal itself, suppressing the noise completely by inducing an auditory stream segregation phenomenon in listeners. This new proposal makes a separate noise-cancelling right channel unnecessary and supports monaural playback. Moreover, we propose a wide-band embedding method that induces dual auditory stream segregation phenomena, enabling data embedding over the whole public telephone frequency range and stable extraction with 3G mobile phones. With these proposals, extraction precision becomes higher than with the previously proposed method, while the quality degradation of the embedded signal becomes smaller. We present an outline of our newly proposed method and experimental results compared with those of the previously proposed method.

  18. Determination of Carbonyl Functional Groups in Bio-oils by Potentiometric Titration: The Faix Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Black, Stuart; Ferrell, Jack R.

    We know that carbonyl compounds, present in bio-oils, are responsible for bio-oil property changes upon storage and during upgrading. Specifically, carbonyls cause an increase in viscosity (often referred to as 'aging') during storage of bio-oils. As such, carbonyl content has previously been used as a method of tracking bio-oil aging and condensation reactions with less variability than viscosity measurements. In addition, carbonyls are also responsible for coke formation in bio-oil upgrading processes. Given the importance of carbonyls in bio-oils, accurate analytical methods for their quantification are very important for the bio-oil community. Potentiometric titration methods based on carbonyl oximation have long been used for the determination of carbonyl content in pyrolysis bio-oils. Here, we present a modification of the traditional carbonyl oximation procedures that results in less reaction time, smaller sample size, higher precision, and more accurate carbonyl determinations. And while traditional carbonyl oximation methods occur at room temperature, the Faix method presented here occurs at an elevated temperature of 80 degrees C.

  19. Determination of Carbonyl Functional Groups in Bio-oils by Potentiometric Titration: The Faix Method

    DOE PAGES

    Black, Stuart; Ferrell, Jack R.

    2017-02-07

    We know that carbonyl compounds, present in bio-oils, are responsible for bio-oil property changes upon storage and during upgrading. Specifically, carbonyls cause an increase in viscosity (often referred to as 'aging') during storage of bio-oils. As such, carbonyl content has previously been used as a method of tracking bio-oil aging and condensation reactions with less variability than viscosity measurements. In addition, carbonyls are also responsible for coke formation in bio-oil upgrading processes. Given the importance of carbonyls in bio-oils, accurate analytical methods for their quantification are very important for the bio-oil community. Potentiometric titration methods based on carbonyl oximation have long been used for the determination of carbonyl content in pyrolysis bio-oils. Here, we present a modification of the traditional carbonyl oximation procedures that results in less reaction time, smaller sample size, higher precision, and more accurate carbonyl determinations. And while traditional carbonyl oximation methods occur at room temperature, the Faix method presented here occurs at an elevated temperature of 80 degrees C.

  20. Diagnostic accuracy of different caries risk assessment methods. A systematic review.

    PubMed

    Senneby, Anna; Mejàre, Ingegerd; Sahlin, Nils-Eric; Svensäter, Gunnel; Rohlin, Madeleine

    2015-12-01

    To evaluate the accuracy of different methods used to identify individuals with increased risk of developing dental coronal caries. Studies on the following methods were included: previous caries experience, tests using microbiota, buffering capacity, salivary flow rate, oral hygiene, dietary habits and sociodemographic variables. QUADAS-2 was used to assess risk of bias. Sensitivity, specificity, predictive values, and likelihood ratios (LR) were calculated. Quality of evidence based on ≥3 studies of a method was rated according to GRADE. PubMed, Cochrane Library, Web of Science and reference lists of included publications were searched up to January 2015. From 5776 identified articles, 18 were included. Assessment of study quality identified methodological limitations concerning study design, test technology and reporting. No study presented low risk of bias in all domains. Three or more studies were found only for previous caries experience and salivary mutans streptococci, and quality of evidence for these methods was low. Evidence regarding other methods was lacking. For previous caries experience, sensitivity ranged between 0.21 and 0.94 and specificity between 0.20 and 1.00. Tests using salivary mutans streptococci resulted in low sensitivity and high specificity. For children with primary teeth at baseline, pooled LR for a positive test was 3 for previous caries experience and 4 for salivary mutans streptococci, given a threshold of ≥10^5 CFU/ml. Evidence on the validity of the analysed methods used for caries risk assessment is limited. As methodological quality was low, there is a need to improve study design. Low validity of the analysed methods may lead to patients with increased risk not being identified, whereas some are falsely identified as being at risk. As caries risk assessment guides individualized decisions on interventions and intervals for patient recall, improved performance based on best evidence is greatly needed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Multiresolution Distance Volumes for Progressive Surface Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laney, D E; Bertram, M; Duchaineau, M A

    2002-04-18

    We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.
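    Among the improvements listed is an O(n) distance transform. A minimal one-dimensional sketch of such a linear-time transform (two scans over the data; the paper's version operates on signed-distance volumes) might look like:

    ```python
    def distance_transform_1d(occupied):
        """Two-pass O(n) distance transform on a 1-D binary array:
        distance from each cell to the nearest occupied cell.
        Illustrative only; volume versions run such scans per axis."""
        n = len(occupied)
        INF = n + 1
        d = [0 if occupied[i] else INF for i in range(n)]
        for i in range(1, n):             # forward scan
            d[i] = min(d[i], d[i - 1] + 1)
        for i in range(n - 2, -1, -1):    # backward scan
            d[i] = min(d[i], d[i + 1] + 1)
        return d

    print(distance_transform_1d([0, 1, 0, 0, 1]))  # [1, 0, 1, 1, 0]
    ```

    Each cell is visited a constant number of times, which is what makes the transform linear in the number of cells.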

  2. An improved protocol for the preparation of total genomic DNA from isolates of yeast and mould using Whatman FTA filter papers.

    PubMed

    Borman, Andrew M; Fraser, Mark; Linton, Christopher J; Palmer, Michael D; Johnson, Elizabeth M

    2010-06-01

    Here, we present a significantly improved version of our previously published method for the extraction of fungal genomic DNA from pure cultures using Whatman FTA filter paper matrix technology. This modified protocol is extremely rapid, significantly more cost effective than our original method, and importantly, substantially reduces the problem of potential cross-contamination between sequential filters when employing FTA technology.

  3. Optical Sensors and Methods for Underwater 3D Reconstruction

    PubMed Central

    Massot-Campos, Miquel; Oliver-Codina, Gabriel

    2015-01-01

    This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data have been listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification has been proposed for the existing solutions. New developments, commercial solutions and previous reviews in this topic have also been gathered and considered. PMID:26694389

  4. A new method for qualitative simulation of water resources systems: 2. Applications

    NASA Astrophysics Data System (ADS)

    Antunes, M. P.; Seixas, M. J.; Camara, A. S.; Pinheiro, M.

    1987-11-01

    SLIN (Simulação Linguistica) is a new method for qualitative dynamic simulation. As was presented previously (Camara et al., this issue), SLIN relies upon a categorical representation of variables which are manipulated by logical rules. Two applications to water resources systems are included to illustrate SLIN's potential usefulness: the environmental impact evaluation of a hydropower plant and the assessment of oil dispersion in the sea after a tanker wreck.

  5. Fiber Segment-Based Degradation Methods for a Finite Element-Informed Structural Brain Network

    DTIC Science & Technology

    2013-11-01

    …functional communication between brain regions. This report presents an expansion of our previous methods used to create a finite element–informed…

  6. Local Discontinuous Galerkin (LDG) Method for Advection of Active Compositional Fields with Discontinuous Boundaries: Demonstration and Comparison with Other Methods in the Mantle Convection Code ASPECT

    NASA Astrophysics Data System (ADS)

    He, Y.; Billen, M. I.; Puckett, E. G.

    2015-12-01

    Flow in the Earth's mantle is driven by thermo-chemical convection in which the properties and geochemical signatures of rocks vary depending on their origin and composition. For example, tectonic plates are composed of compositionally-distinct layers of crust, residual lithosphere and fertile mantle, while in the lower-most mantle there are large compositionally distinct "piles" with thinner lenses of different material. Therefore, tracking of active or passive fields with distinct compositional, geochemical or rheologic properties is important for incorporating physical realism into mantle convection simulations, and for investigating the long term mixing properties of the mantle. The difficulty in numerically advecting fields arises because they are non-diffusive and have sharp boundaries, and therefore require different methods than usually used for temperature. Previous methods for tracking fields include the marker-chain, tracer particle, and field-correction (e.g., the Lenardic Filter) methods: each of these has different advantages or disadvantages, trading off computational speed with accuracy in tracking feature boundaries. Here we present a method for modeling active fields in mantle dynamics simulations using a new solver implemented in the deal.II package that underlies the ASPECT software. The new solver for the advection-diffusion equation uses a Local Discontinuous Galerkin (LDG) algorithm, which combines features of both finite element and finite volume methods, and is particularly suitable for problems with a dominant first-order term and discontinuities. Furthermore, we have applied a post-processing technique to ensure that the solution satisfies a global maximum/minimum principle. One potential drawback of the LDG method is that the total number of degrees of freedom is larger than for the standard finite element method.
To demonstrate the capabilities of this new method we present results for two benchmarks used previously: a falling cube with distinct buoyancy and viscosity, and a Rayleigh-Taylor instability of a compositionally buoyant layer. To evaluate the trade-offs in computational speed and solution accuracy we present results for these same benchmarks using the two field tracking methods available in ASPECT: active tracer particles and the entropy viscosity method.
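    The difficulty the abstract describes, advecting a sharp, non-diffusive compositional boundary while keeping the solution within physical bounds, can be illustrated with a far simpler scheme than LDG. The sketch below uses first-order upwind finite volumes (an illustrative stand-in only, not the ASPECT solver): the monotone update keeps the field inside its initial bounds, at the price of smearing the discontinuity.

    ```python
    import numpy as np

    # Advect a discontinuous compositional field with first-order upwind
    # finite volumes (u > 0, inflow value 0 at the left boundary).
    n, c = 100, 0.5                    # cells, CFL number c = u*dt/dx
    field = np.zeros(n)
    field[10:30] = 1.0                 # a compositionally distinct "slab"
    for _ in range(40):
        # new value = (1-c)*f[i] + c*f[i-1]: a convex combination, so the
        # scheme is bound-preserving for 0 <= c <= 1.
        field[1:] -= c * (field[1:] - field[:-1])
    print(field.min() >= 0.0 and field.max() <= 1.0)  # True
    ```

    Higher-order schemes such as LDG sharpen the boundary but need the kind of post-processing limiter mentioned above to restore the global maximum/minimum property.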

  7. A modified conjugate gradient coefficient with inexact line search for unconstrained optimization

    NASA Astrophysics Data System (ADS)

    Aini, Nurul; Rivaie, Mohd; Mamat, Mustafa

    2016-11-01

    The conjugate gradient (CG) method is a line search algorithm best known for its wide application to unconstrained optimization problems. Its low memory requirements and global convergence properties make it one of the most preferred methods in real-life applications such as engineering and business. In this paper, we present a new CG method based on the AMR* and CD methods for solving unconstrained optimization problems. The resulting algorithm is proven to have both the sufficient descent and global convergence properties under inexact line search. Numerical tests are conducted to assess the effectiveness of the new method in comparison to some previous CG methods. The results obtained indicate that our method is indeed superior.
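    A hedged sketch of the class of algorithm described: nonlinear CG with an inexact (Armijo backtracking) line search. The abstract does not give the AMR*/CD coefficient formula, so a Fletcher-Reeves coefficient is used here as a stand-in.

    ```python
    import numpy as np

    def cg_minimize(f, grad, x0, iters=50, tol=1e-8):
        """Nonlinear conjugate gradient with a backtracking (Armijo)
        inexact line search. The Fletcher-Reeves beta below is an
        illustrative substitute for the paper's AMR*/CD coefficient."""
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        d = -g
        for _ in range(iters):
            t = 1.0                        # Armijo backtracking
            while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
                t *= 0.5
            x = x + t * d
            g_new = grad(x)
            if np.linalg.norm(g_new) < tol:
                break
            beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
            d = -g_new + beta * d
            if g_new @ d >= 0:                 # safeguard: restart on ascent
                d = -g_new
            g = g_new
        return x

    # Minimise the convex quadratic f(x, y) = x^2 + 2*y^2 from (3, -2).
    f = lambda x: x[0] ** 2 + 2 * x[1] ** 2
    grad = lambda x: np.array([2 * x[0], 4 * x[1]])
    x_star = cg_minimize(f, grad, [3.0, -2.0])
    print(f(x_star) < 1e-4)  # True: the iterates approach the minimiser (0, 0)
    ```

    The restart safeguard is one simple way to retain a descent direction under an inexact line search; the paper proves such properties directly for its coefficient.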

  8. Salpingoscopy: systematic use in diagnostic laparoscopy.

    PubMed

    Marconi, G; Auge, L; Sojo, E; Young, E; Quintana, R

    1992-04-01

    To evaluate the importance of salpingoscopy together with laparoscopy in the diagnosis of tubal pathology. Salpingoscopy was performed as a complementary method in patients who were subjected to diagnostic laparoscopy. The relationship between the salpingoscopy and (1) the patient's previous history of tubal disease and (2) laparoscopic diagnoses was evaluated. Private patients referred to the Instituto de Fertilidad, Buenos Aires. Forty-two patients undergoing a diagnostic laparoscopy during the evaluation of their fertility or as a follow-up of previous therapy. Salpingoscopy was performed, using a colpomicrohysteroscope. We evaluated alterations in major and minor folds and their vascularization, the presence of microadhesions, and cellular nuclei dyed with methylene blue in the tubal lumen. Fifty percent of the patients who had no previous history of tubal disease presented with endosalpingeal alterations, and in 37% of the normal laparoscopies the salpinx had unilateral or bilateral salpingoscopic abnormalities. Salpingoscopy is a useful method to evaluate oviducts, before assuming their normality, and consideration of these women for assisted reproductive technology.

  9. Quota methods for congressional apportionment are still non-unique

    PubMed Central

    Mayberry, John P.

    1978-01-01

    Balinski and Young described a “quota method” for congressional apportionment and recommended it as “the only method satisfying three essential axioms” [Balinski, M. L. & Young, H. P. (1974) Proc. Natl. Acad. Sci. USA 71, 4602-4606]. This paper points out and repairs a slight defect in one of those axioms, producing a quota method slightly different from that described previously. It also presents an alternative to the “consistency” axiom of the paper and describes the “dual quota” method, uniquely satisfying the alternative axioms (which have exactly as much justification as the originals). PMID:16592547

  10. Finger vein recognition using local line binary pattern.

    PubMed

    Rosdi, Bakhtiar Affendi; Shing, Chai Wuh; Suandi, Shahrel Azmin

    2011-01-01

    In this paper, a personal verification method using finger vein is presented. Finger vein can be considered more secured compared to other hands based biometric traits such as fingerprint and palm print because the features are inside the human body. In the proposed method, a new texture descriptor called local line binary pattern (LLBP) is utilized as feature extraction technique. The neighbourhood shape in LLBP is a straight line, unlike in local binary pattern (LBP) which is a square shape. Experimental results show that the proposed method using LLBP has better performance than the previous methods using LBP and local derivative pattern (LDP).
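    The distinction between LBP's square neighbourhood and LLBP's straight-line neighbourhood can be illustrated with a minimal line-neighbourhood code. The line length, bit ordering and function name below are illustrative assumptions, not the paper's exact descriptor:

    ```python
    import numpy as np

    def line_binary_code(row, center_idx):
        """Binary code from a 1-D line neighbourhood: each neighbour on the
        line is compared against the centre pixel, mirroring the straight-
        line neighbourhood used by LLBP (square 3x3 for classic LBP)."""
        center = row[center_idx]
        bits = [1 if v >= center else 0
                for i, v in enumerate(row) if i != center_idx]
        return int("".join(map(str, bits)), 2)

    # A 7-pixel horizontal line of grey values centred on index 3.
    line = np.array([10, 52, 40, 30, 25, 60, 5])
    print(line_binary_code(line, 3))  # bits 0,1,1,0,1,0 -> 0b011010 = 26
    ```

    In the full descriptor the same comparison runs along horizontal and vertical lines through every pixel, producing a code image whose histogram serves as the feature vector.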

  11. Development of a time-dependent incompressible Navier-Stokes solver based on a fractional-step method

    NASA Technical Reports Server (NTRS)

    Rosenfeld, Moshe

    1990-01-01

    The development, validation and application of a fractional step solution method of the time-dependent incompressible Navier-Stokes equations in generalized coordinate systems are discussed. A solution method that combines a finite-volume discretization with a novel choice of the dependent variables and a fractional step splitting to obtain accurate solutions in arbitrary geometries was previously developed for fixed-grids. In the present research effort, this solution method is extended to include more general situations, including cases with moving grids. The numerical techniques are enhanced to gain efficiency and generality.

  12. LiveDescribe: Can Amateur Describers Create High-Quality Audio Description?

    ERIC Educational Resources Information Center

    Branje, Carmen J.; Fels, Deborah I.

    2012-01-01

    Introduction: The study presented here evaluated the usability of the audio description software LiveDescribe and explored the acceptance rates of audio description created by amateur describers who used LiveDescribe to facilitate the creation of their descriptions. Methods: Twelve amateur describers with little or no previous experience with…

  13. Relationships between Self-Concept and Resilience Profiles in Young People with Disabilities

    ERIC Educational Resources Information Center

    Suriá Martínez, Raquel

    2016-01-01

    Introduction: The present study aims to identify different profiles in self-concept and resilience. In addition, statistically significant differences in self-concept domains among the profiles previously identified are analyzed. Method: The AF5 Self-Concept Questionnaire ("Cuestionario de Autoconcepto AF5") and the Resilience Scale were…

  14. Evaluation of Contextual Variability in Prediction of Reinforcer Effectiveness

    ERIC Educational Resources Information Center

    Pino, Olimpia; Dazzi, Carla

    2005-01-01

    Previous research has shown that stimulus preference assessments based on caregiver-opinion did not coincide with results of a more systematic method of assessing reinforcing value unless stimuli that were assessed to represent preferences were also preferred on paired stimulus presentation format, and that the relative preference based on the…

  15. Laboratory Investigations Of Mechanisms For 1,4-Dioxane Destruction By Ozone In Water (Presentation)

    EPA Science Inventory

    Advances in analytical detection methods have made it possible to quantify 1,4-dioxane contamination in groundwater, even at well-characterized sites where it had not previously been detected. Although 1,4-dioxane is difficult to treat because of its chemical and physical propert...

  16. Emotion Regulation Predicts Attention Bias in Maltreated Children At-Risk for Depression

    ERIC Educational Resources Information Center

    Romens, Sarah E.; Pollak, Seth D.

    2012-01-01

    Background: Child maltreatment is associated with heightened risk for depression; however, not all individuals who experience maltreatment develop depression. Previous research indicates that maltreatment contributes to an attention bias for emotional cues, and that depressed individuals show attention bias for sad cues. Method: The present study…

  17. Improving Children's Working Memory and Classroom Performance

    ERIC Educational Resources Information Center

    St Clair-Thompson, Helen; Stevens, Ruth; Hunt, Alexandra; Bolder, Emma

    2010-01-01

    Previous research has demonstrated close relationships between working memory and children's scholastic attainment. The aim of the present study was to explore a method of improving working memory, using memory strategy training. Two hundred and fifty-four children aged five to eight years were tested on measures of the phonological loop,…

  18. Investigating Storage and Retrieval Processes of Directed Forgetting: A Model-Based Approach

    ERIC Educational Resources Information Center

    Rummel, Jan; Marevic, Ivan; Kuhlmann, Beatrice G.

    2016-01-01

    Intentional forgetting of previously learned information is an adaptive cognitive capability of humans but its cognitive underpinnings are not yet well understood. It has been argued that it strongly depends on the presentation method whether forgetting instructions alter storage or retrieval stages (Basden, Basden, & Gargano, 1993). In…

  19. Sensitivities of Soap Solutions in Leak Detection

    NASA Technical Reports Server (NTRS)

    Stuck, D.; Lam, D. Q.; Daniels, C.

    1985-01-01

    Document describes a method for determining the minimum leak rate to which soap-solution leak detectors are sensitive. Bubbles formed at smaller leak rates than previously assumed. In addition to presenting test results, the document discusses effects of joint-flange configurations, properties of soap solutions, and correlation of test results with earlier data.

  20. The Contribution of Mediator-Based Deficiencies to Age Differences in Associative Learning

    ERIC Educational Resources Information Center

    Dunlosky, John; Hertzog, Christopher; Powell-Moman, Amy

    2005-01-01

    Production, mediational, and utilization deficiencies, which describe how strategy use may contribute to developmental trends in episodic memory, have been intensively investigated. Using a mediator report-and-retrieval method, the authors present evidence concerning the degree to which 2 previously unexplored mediator-based deficits--retrieval…

  1. B and V photometry and analysis of the eclipsing binary RZ CAS

    NASA Astrophysics Data System (ADS)

    Riazi, N.; Bagheri, M. R.; Faghihi, F.

    1994-01-01

    Photoelectric light curves of the eclipsing binary RZ Cas are presented for B and V filters. The light curves are analyzed for light and geometrical elements, starting with a previously suggested preliminary method. The approximate results thus obtained are then optimised through the Wilson-Devinney computer programs.

  2. An Evaluation of a Stimulus Preference Assessment of Auditory Stimuli for Adolescents with Developmental Disabilities

    ERIC Educational Resources Information Center

    Horrocks, Erin; Higbee, Thomas S.

    2008-01-01

    Previous researchers have used stimulus preference assessment (SPA) methods to identify salient reinforcers for individuals with developmental disabilities including tangible, leisure, edible and olfactory stimuli. In the present study, SPA procedures were used to identify potential auditory reinforcers and determine the reinforcement value of…

  3. The Psychophysics of Contingency Assessment

    ERIC Educational Resources Information Center

    Allan, Lorraine G.; Hannah, Samuel D.; Crump, Matthew J. C.; Siegel, Shepard

    2008-01-01

    The authors previously described a procedure that permits rapid, multiple within-participant evaluations of contingency assessment (the "streamed-trial" procedure, M. J. C. Crump, S. D. Hannah, L. G. Allan, & L. K. Hord, 2007). In the present experiments, they used the streamed-trial procedure, combined with the method of constant stimuli and a…

  4. Full velocity difference model for a car-following theory.

    PubMed

    Jiang, R; Wu, Q; Zhu, Z

    2001-07-01

    In this paper, we present a full velocity difference model for car-following theory based on previous models in the literature. To our knowledge, the model is a theoretical improvement over the previous ones because it considers more aspects of the car-following process than the others, a point verified by numerical simulation. We then investigate the properties of the model using both analytic and numerical methods, and find that the model can describe the phase transition of traffic flow and estimate the evolution of traffic congestion.
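    The full velocity difference family of models is commonly written as a = κ[V(Δx) − v] + λΔv, where V is an optimal-velocity function of the headway Δx and Δv is the velocity difference to the leading car. A minimal numerical sketch, with the caveat that the optimal-velocity calibration and parameter values below are illustrative assumptions, not taken from the paper:

    ```python
    import numpy as np

    def fvd_accel(dx, v, dv, kappa=0.41, lam=0.5):
        """Full-velocity-difference-style acceleration:
        a = kappa * (V(dx) - v) + lam * dv.
        The tanh optimal-velocity function and all constants are
        illustrative calibration choices."""
        V = 6.75 + 7.91 * np.tanh(0.13 * (dx - 5.0) - 1.57)
        return kappa * (V - v) + lam * dv

    # A follower 20 m behind a leader, both at 10 m/s: V(20) is slightly
    # below 10 m/s, so the model commands a gentle deceleration.
    a = fvd_accel(dx=20.0, v=10.0, dv=0.0)
    print(round(a, 3))  # small negative acceleration
    ```

    The λΔv term is the "full velocity difference" ingredient: it reacts to the leader's relative speed even when the headway alone would suggest no change.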

  5. Feedback systems for nontraditional medicines: a case for the signal flow diagram.

    PubMed

    Tice, B S

    1998-11-01

    The signal flow diagram is a graphic method for representing the complex data found in the field of biology and hence in medicine. The signal flow diagram is analyzed against a table of data and a flow chart of the same data, and evaluated on the clarity and simplicity with which it imparts this information. The data modeled are from previous clinical studies and from nontraditional medicine in Africa, China, and South America. This report is a development of previous presentations of the signal flow diagram [1-4].

  6. Unimolecular Reaction Pathways of a γ-Ketohydroperoxide from Combined Application of Automated Reaction Discovery Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grambow, Colin A.; Jamal, Adeel; Li, Yi -Pei

    Ketohydroperoxides are important in liquid-phase autoxidation and in gas-phase partial oxidation and pre-ignition chemistry, but because of their low concentration, instability, and various analytical chemistry limitations, it has been challenging to experimentally determine their reactivity, and only a few pathways are known. In the present work, 75 elementary-step unimolecular reactions of the simplest γ-ketohydroperoxide, 3-hydroperoxypropanal, were discovered by a combination of density functional theory with several automated transition-state search algorithms: the Berny algorithm coupled with the freezing string method, single- and double-ended growing string methods, the heuristic KinBot algorithm, and the single-component artificial force induced reaction method (SC-AFIR). The present joint approach significantly outperforms previous manual and automated transition-state searches – 68 of the reactions of γ-ketohydroperoxide discovered here were previously unknown and completely unexpected. All of the methods found the lowest-energy transition state, which corresponds to the first step of the Korcek mechanism, but each algorithm except for SC-AFIR detected several reactions not found by any of the other methods. We show that the low-barrier chemical reactions involve promising new chemistry that may be relevant in atmospheric and combustion systems. Our study highlights the complexity of chemical space exploration and the advantage of combined application of several approaches. Altogether, the present work demonstrates both the power and the weaknesses of existing fully automated approaches for reaction discovery, which suggests possible directions for further method development and assessment in order to enable reliable discovery of all important reactions of any specified reactant(s).

  7. Unimolecular Reaction Pathways of a γ-Ketohydroperoxide from Combined Application of Automated Reaction Discovery Methods

    DOE PAGES

    Grambow, Colin A.; Jamal, Adeel; Li, Yi -Pei; ...

    2017-12-22

    Ketohydroperoxides are important in liquid-phase autoxidation and in gas-phase partial oxidation and pre-ignition chemistry, but because of their low concentration, instability, and various analytical chemistry limitations, it has been challenging to experimentally determine their reactivity, and only a few pathways are known. In the present work, 75 elementary-step unimolecular reactions of the simplest γ-ketohydroperoxide, 3-hydroperoxypropanal, were discovered by a combination of density functional theory with several automated transition-state search algorithms: the Berny algorithm coupled with the freezing string method, single- and double-ended growing string methods, the heuristic KinBot algorithm, and the single-component artificial force induced reaction method (SC-AFIR). The present joint approach significantly outperforms previous manual and automated transition-state searches – 68 of the reactions of γ-ketohydroperoxide discovered here were previously unknown and completely unexpected. All of the methods found the lowest-energy transition state, which corresponds to the first step of the Korcek mechanism, but each algorithm except for SC-AFIR detected several reactions not found by any of the other methods. We show that the low-barrier chemical reactions involve promising new chemistry that may be relevant in atmospheric and combustion systems. Our study highlights the complexity of chemical space exploration and the advantage of combined application of several approaches. Altogether, the present work demonstrates both the power and the weaknesses of existing fully automated approaches for reaction discovery, which suggests possible directions for further method development and assessment in order to enable reliable discovery of all important reactions of any specified reactant(s).

  8. Automatic segmentation of closed-contour features in ophthalmic images using graph theory and dynamic programming.

    PubMed

    Chiu, Stephanie J; Toth, Cynthia A; Bowes Rickman, Catherine; Izatt, Joseph A; Farsiu, Sina

    2012-05-01

    This paper presents a generalized framework for segmenting closed-contour anatomical and pathological features using graph theory and dynamic programming (GTDP). More specifically, the GTDP method previously developed for quantifying retinal and corneal layer thicknesses is extended to segment objects such as cells and cysts. The presented technique relies on a transform that maps closed-contour features in the Cartesian domain into lines in the quasi-polar domain. The features of interest are then segmented as layers via GTDP. Application of this method to segment closed-contour features in several ophthalmic image types is shown. Quantitative validation experiments for retinal pigmented epithelium cell segmentation in confocal fluorescence microscopy images attest to the accuracy of the presented technique.

  9. Automatic segmentation of closed-contour features in ophthalmic images using graph theory and dynamic programming

    PubMed Central

    Chiu, Stephanie J.; Toth, Cynthia A.; Bowes Rickman, Catherine; Izatt, Joseph A.; Farsiu, Sina

    2012-01-01

    This paper presents a generalized framework for segmenting closed-contour anatomical and pathological features using graph theory and dynamic programming (GTDP). More specifically, the GTDP method previously developed for quantifying retinal and corneal layer thicknesses is extended to segment objects such as cells and cysts. The presented technique relies on a transform that maps closed-contour features in the Cartesian domain into lines in the quasi-polar domain. The features of interest are then segmented as layers via GTDP. Application of this method to segment closed-contour features in several ophthalmic image types is shown. Quantitative validation experiments for retinal pigmented epithelium cell segmentation in confocal fluorescence microscopy images attest to the accuracy of the presented technique. PMID:22567602
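    The key idea in the two records above, mapping a closed contour into a domain where it becomes a "layer", can be sketched with a simple polar resampling. Grid sizes and names are illustrative; the paper's quasi-polar transform is more general than plain polar coordinates:

    ```python
    import numpy as np

    def to_quasi_polar(image, center, n_theta=8, n_r=4):
        """Resample an image around `center` onto an (r, theta) grid so a
        closed contour becomes an approximately horizontal layer that
        layer-segmentation machinery can handle (illustrative sketch)."""
        h, w = image.shape
        thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
        rs = np.linspace(0.0, min(h, w) / 2.0 - 1.0, n_r)
        out = np.zeros((n_r, n_theta))
        for i, r in enumerate(rs):
            for j, th in enumerate(thetas):
                y = int(round(center[0] + r * np.sin(th)))
                x = int(round(center[1] + r * np.cos(th)))
                out[i, j] = image[y, x]
        return out

    # A filled disk of radius 4: inner radii sample 1s, outer radii 0s, so
    # the disk boundary becomes a horizontal transition between rows.
    yy, xx = np.mgrid[0:16, 0:16]
    disk = ((yy - 8) ** 2 + (xx - 8) ** 2 <= 16).astype(float)
    qp = to_quasi_polar(disk, center=(8, 8))
    print(qp[1].min() == 1.0 and qp[2].max() == 0.0)  # True
    ```

    Once the contour is a horizontal layer, shortest-path layer segmentation can trace it; mapping the path back gives the closed contour in Cartesian coordinates.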

  10. A new fast and fully automated software based algorithm for extracting respiratory signal from raw PET data and its comparison to other methods.

    PubMed

    Kesner, Adam Leon; Kuntner, Claudia

    2010-10-01

    Respiratory gating in PET is an approach used to minimize the negative effects of respiratory motion on spatial resolution. It is based on an initial determination of a patient's respiratory movements during a scan, typically using hardware based systems. In recent years, several fully automated data-based algorithms have been presented for extracting a respiratory signal directly from PET data, providing a very practical strategy for implementing gating in the clinic. In this work, a new method is presented for extracting a respiratory signal from raw PET sinogram data and compared to previously presented automated techniques. The acquisition of respiratory signal from PET data in the newly proposed method is based on rebinning the sinogram data into smaller data structures and then analyzing the time activity behavior in the elements of these structures. From this analysis, a 1D respiratory trace is produced, analogous to a hardware derived respiratory trace. To assess the accuracy of this fully automated method, respiratory signal was extracted from a collection of 22 clinical FDG-PET scans using this method, and compared to signal derived from several other software based methods as well as a signal derived from a hardware system. The method presented required approximately 9 min of processing time for each 10 min scan (using a single 2.67 GHz processor), which in theory can be accomplished while the scan is being acquired, therefore allowing real-time respiratory signal acquisition. Using the mean correlation between the software based and hardware based respiratory traces, the optimal parameters were determined for the presented algorithm. The mean/median/range of correlations for the set of scans when using the optimal parameters was found to be 0.58/0.68/0.07-0.86. The speed of this method was within the range of real-time while the accuracy surpassed the most accurate of the previously presented algorithms.
PET data inherently contains information about patient motion; information that is not currently being utilized. We have shown that a respiratory signal can be extracted from raw PET data, potentially in real time and in a fully automated manner. This signal correlates well with the hardware-based signal for a large percentage of scans, and avoids the effort and complications associated with hardware. The proposed method to extract a respiratory signal can be implemented on existing scanners and, if properly integrated, can be applied without changes to routine clinical procedures.
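
    The accuracy assessment above hinges on the correlation between software- and hardware-derived respiratory traces. As a minimal stdlib sketch of that comparison metric (the signals below are synthetic and purely illustrative, not the study's data):

```python
import math

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length traces
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic traces at a ~0.25 Hz breathing rate, sampled at 10 Hz; the
# "hardware" trace is slightly phase-shifted relative to the software-
# derived one (illustrative numbers only).
t = [i * 0.1 for i in range(300)]
software = [math.sin(2 * math.pi * 0.25 * ti) for ti in t]
hardware = [math.sin(2 * math.pi * 0.25 * ti + 0.1) for ti in t]
print(round(pearson(software, hardware), 3))
```

    A small phase lag between the two traces only mildly reduces the correlation, which is why this statistic is a reasonable summary of agreement between gating signals.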

  11. Passive wireless strain monitoring of tyres using capacitance and tuning frequency changes

    NASA Astrophysics Data System (ADS)

    Matsuzaki, Ryosuke; Todoroki, Akira

    2005-08-01

    In-service strain monitoring of automobile tyres is quite effective for improving the reliability of tyres and anti-lock braking systems (ABS). Conventional strain gauges have high stiffness and require lead wires, making them cumbersome for tyre strain measurements. In a previous study, the authors proposed a wireless strain monitoring method that adopts the tyre itself as a sensor, together with an oscillating circuit. This method is simple and useful, but it requires a battery to drive the oscillating circuit. In the present study, the previous method for wireless tyre monitoring is improved to produce a passive wireless sensor. A specimen made from a commercially available tyre is connected to a tuning circuit comprising an inductance, with the specimen itself serving as the capacitance. A strain-induced capacitance change of the tyre alters the tuning frequency, and this shift of the tuned radio wave enables wireless measurement of the applied strain of the specimen without any power supply. This passive wireless method is applied to a specimen and the static applied strain is measured. Experiments demonstrate that the method is effective for passive wireless strain monitoring of tyres.
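
    The sensing principle is the standard LC resonance relation f = 1/(2π√(LC)): a strain-induced change in the tyre's capacitance shifts the tuning frequency, which can be read wirelessly. A sketch with illustrative component values (not taken from the paper):

```python
import math

def tuning_frequency(inductance, capacitance):
    """Resonant frequency (Hz) of an LC tuning circuit."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance * capacitance))

# Illustrative values only: a 10 uH inductor with the tyre specimen
# acting as a ~100 pF capacitor whose capacitance rises 1% under strain.
L_h, C0_f = 10e-6, 100e-12
f0 = tuning_frequency(L_h, C0_f)
f1 = tuning_frequency(L_h, 1.01 * C0_f)
# For small capacitance changes, df/f ~ -dC/(2C): a 1% capacitance rise
# lowers the tuning frequency by roughly 0.5%.
print(f0, (f1 - f0) / f0)
```

    The half-ratio between relative frequency shift and relative capacitance change is what makes frequency readout a sensitive, power-free proxy for strain.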

  12. The method of space-time and conservation element and solution element: A new approach for solving the Navier-Stokes and Euler equations

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung

    1995-01-01

    A new numerical framework for solving conservation laws is being developed. This new framework differs substantially in both concept and methodology from the well-established methods, i.e., finite difference, finite volume, finite element, and spectral methods. It is conceptually simple and designed to overcome several key limitations of the above traditional methods. A two-level scheme for solving the convection-diffusion equation is constructed and used to illuminate the major differences between the present method and those previously mentioned. This explicit scheme, referred to as the a-μ scheme, has two independent marching variables.

  13. Reconstructed imaging of acoustic cloak using time-lapse reversal method

    NASA Astrophysics Data System (ADS)

    Zhou, Chen; Cheng, Ying; Xu, Jian-yi; Li, Bo; Liu, Xiao-jun

    2014-08-01

    We proposed and investigated a solution to the inverse acoustic cloak problem, an anti-stealth technology to make cloaks visible, using the time-lapse reversal (TLR) method. The TLR method reconstructs the image of an unknown acoustic cloak by utilizing scattered acoustic waves. Compared to previous anti-stealth methods, the TLR method can determine not only the existence of a cloak but also its exact geometric information like definite shape, size, and position. Here, we present the process for TLR reconstruction based on time reversal invariance. This technology may have potential applications in detecting various types of cloaks with different geometric parameters.

  14. An efficient method for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    An efficient method of computation of the manipulator inertia matrix is presented. Using spatial notations, the method leads to the definition of the composite rigid-body spatial inertia, which is a spatial representation of the notion of augmented body. The previously proposed methods, the physical interpretations leading to their derivation, and their redundancies are analyzed. The proposed method achieves a greater efficiency by eliminating the redundancy in the intrinsic equations as well as by a better choice of coordinate frame for their projection. In this case, removing the redundancy leads to greater efficiency of the computation in both serial and parallel senses.

  15. Granulomatous mastitis: changing clinical and imaging features with image-guided biopsy correlation.

    PubMed

    Handa, Priyanka; Leibman, A Jill; Sun, Derek; Abadi, Maria; Goldberg, Aryeh

    2014-10-01

    To review clinical presentation, revisit patient demographics and imaging findings in granulomatous mastitis and determine the optimal biopsy method for diagnosis. A retrospective study was performed to review the clinical presentation, imaging findings and biopsy methods in patients with granulomatous mastitis. Twenty-seven patients with pathology-proven granulomatous mastitis were included. The average age at presentation was 38.0 years (range, 21-73 years). Seven patients were between 48 and 73 years old. Twenty-four patients presented with symptoms and three patients were asymptomatic. Nineteen patients were imaged with mammography demonstrating mammographically occult lesions as the predominant finding. Twenty-six patients were imaged with ultrasound and the most common finding was a mass lesion. Pathological diagnosis was made by image-guided biopsy in 44 % of patients. The imaging features of granulomatous mastitis on mammography are infrequently described. Our study demonstrates that granulomatous mastitis can occur in postmenopausal or asymptomatic patients, although previously reported exclusively in young women with palpable findings. Presentation on mammography as calcifications requiring mammographically guided vacuum-assisted biopsy has not been previously described. The diagnosis of granulomatous mastitis can easily be made by image-guided biopsy and surgical excision should be reserved for definitive treatment. • Characterizes radiographic appearance of granulomatous mastitis in postmenopausal or asymptomatic patients. • Granulomatous mastitis can present exclusively as calcifications on mammography. • The diagnosis of granulomatous mastitis is made by image-guided biopsy techniques.

  16. A fast numerical solution of scattering by a cylinder: Spectral method for the boundary integral equations

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.

    1994-01-01

    It is known that the exact analytic solutions of wave scattering by a circular cylinder, when they exist, are not in a closed form but in infinite series that converge slowly for high frequency waves. In this paper, we present a fast numerical solution for the scattering problem in which the boundary integral equations, reformulated from the Helmholtz equation, are solved using a Fourier spectral method. It is shown that the special geometry considered here allows the implementation of the spectral method to be simple and very efficient. The present method differs from previous approaches in that the singularities of the integral kernels are removed and dealt with accurately. The proposed method preserves the spectral accuracy and is shown to have an exponential rate of convergence. Aspects of efficient implementation using FFT are discussed. Moreover, the boundary integral equations of combined single and double-layer representation are used in the present paper. This ensures the uniqueness of the numerical solution for the scattering problem at all frequencies. Although a strongly singular kernel is encountered for the Neumann boundary conditions, we show that the hypersingularity can be handled easily in the spectral method. Numerical examples that demonstrate the validity of the method are also presented.

  17. A simple and efficient method for deriving neurospheres from bone marrow stromal cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang Qin; Mu Jun; Li Qi

    2008-08-08

    Bone marrow stromal cells (MSCs) can be differentiated into neuronal and glial-like cell types under appropriate experimental conditions. However, previously reported methods are complicated and involve the use of toxic reagents. Here, we present a simplified and nontoxic method for efficient conversion of rat MSCs into neurospheres that express the neuroectodermal marker nestin. These neurospheres can proliferate and differentiate into neuron, astrocyte, and oligodendrocyte phenotypes. We thus propose that MSCs are an emerging model cell for the treatment of a variety of neurological diseases.

  18. Generally astigmatic Gaussian beam representation and optimization using skew rays

    NASA Astrophysics Data System (ADS)

    Colbourne, Paul D.

    2014-12-01

    Methods are presented of using skew rays to optimize a generally astigmatic optical system to obtain the desired Gaussian beam focus and minimize aberrations, and to calculate the propagating generally astigmatic Gaussian beam parameters at any point. The optimization method requires very little computation beyond that of a conventional ray optimization, and requires no explicit calculation of the properties of the propagating Gaussian beam. Unlike previous methods, the calculation of beam parameters does not require matrix calculations or the introduction of non-physical concepts such as imaginary rays.

  19. The holistic analysis of gamma-ray spectra in instrumental neutron activation analysis

    NASA Astrophysics Data System (ADS)

    Blaauw, Menno

    1994-12-01

    A method for the interpretation of γ-ray spectra as obtained in INAA using linear least squares techniques is described. Results obtained using this technique and the traditional method previously in use at IRI are compared. It is concluded that the method presented performs better with respect to the number of detected elements, the resolution of interferences, and the estimation of the accuracies of the reported element concentrations. It is also concluded that the technique is robust enough to obviate the deconvolution of multiplets.
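
    The "holistic" approach fits the full spectrum as a linear combination of elemental response spectra via linear least squares rather than deconvolving individual peaks. A toy sketch with two synthetic five-channel response shapes (illustrative only), solved through the normal equations:

```python
def lstsq_2(A, b):
    """Solve min ||A x - b|| for a two-column design matrix A
    via the 2x2 normal equations."""
    a11 = sum(r[0] * r[0] for r in A)
    a12 = sum(r[0] * r[1] for r in A)
    a22 = sum(r[1] * r[1] for r in A)
    b1 = sum(r[0] * y for r, y in zip(A, b))
    b2 = sum(r[1] * y for r, y in zip(A, b))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Toy "response spectra" of two elements over 5 channels, and a measured
# spectrum built as 2x the first plus 3x the second (noise-free).
s1 = [1.0, 4.0, 1.0, 0.0, 0.0]
s2 = [0.0, 0.0, 1.0, 5.0, 1.0]
measured = [2 * a + 3 * b for a, b in zip(s1, s2)]
A = list(zip(s1, s2))
print(lstsq_2(A, measured))  # recovers the amplitudes (2.0, 3.0)
```

    Even with overlapping channels (channel 3 here), the simultaneous fit resolves the interference, which is the advantage the abstract claims over peak-by-peak analysis.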

  20. Surface entropy of liquids via a direct Monte Carlo approach - Application to liquid Si

    NASA Technical Reports Server (NTRS)

    Wang, Z. Q.; Stroud, D.

    1990-01-01

    Two methods are presented for a direct Monte Carlo evaluation of the surface entropy S(s) of a liquid interacting by specified, volume-independent potentials. The first method is based on an application of the approach of Ferrenberg and Swendsen (1988, 1989) to Monte Carlo simulations at two different temperatures; it gives much more reliable results for S(s) in liquid Si than previous calculations based on numerical differentiation. The second method expresses the surface entropy directly as a canonical average at fixed temperature.
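
    The Ferrenberg-Swendsen approach referenced above estimates averages at a neighbouring temperature by reweighting samples drawn at the simulated temperature; temperature derivatives of such reweighted averages are one route to entropy differences. A minimal single-histogram reweighting sketch (illustrative, not the authors' implementation):

```python
import math

def reweight_mean(energies, beta, beta_new, observable=lambda e: e):
    """Single-histogram (Ferrenberg-Swendsen) reweighting: estimate the
    canonical average of `observable` at inverse temperature beta_new
    from energy samples generated in a Monte Carlo run at beta."""
    e0 = min(energies)  # shift energies for numerical stability
    w = [math.exp(-(beta_new - beta) * (e - e0)) for e in energies]
    return sum(observable(e) * wi for e, wi in zip(energies, w)) / sum(w)

# Toy samples: moving to a larger beta (lower temperature) down-weights
# high-energy configurations and must lower the estimated mean energy.
samples = [1.0, 2.0, 3.0, 4.0]
print(reweight_mean(samples, 1.0, 1.0), reweight_mean(samples, 1.0, 2.0))
```

    With beta_new equal to beta the estimator reduces to the plain sample mean, which is a convenient sanity check.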

  1. A three-dimensional parabolic equation model of sound propagation using higher-order operator splitting and Padé approximants.

    PubMed

    Lin, Ying-Tsong; Collis, Jon M; Duda, Timothy F

    2012-11-01

    An alternating direction implicit (ADI) three-dimensional fluid parabolic equation solution method with enhanced accuracy is presented. The method uses a square-root Helmholtz operator splitting algorithm that retains cross-multiplied operator terms that have been previously neglected. With these higher-order cross terms, the valid angular range of the parabolic equation solution is improved. The method is tested for accuracy against an image solution in an idealized wedge problem. Computational efficiency improvements resulting from the ADI discretization are also discussed.

  2. Evaluation of hierarchical agglomerative cluster analysis methods for discrimination of primary biological aerosol

    NASA Astrophysics Data System (ADS)

    Crawford, I.; Ruske, S.; Topping, D. O.; Gallagher, M. W.

    2015-07-01

    In this paper we present improved methods for discriminating and quantifying Primary Biological Aerosol Particles (PBAP) by applying hierarchical agglomerative cluster analysis to multi-parameter ultraviolet light-induced fluorescence (UV-LIF) spectrometer data. The methods employed in this study can be applied to data sets in excess of 1×10^6 points on a desktop computer, allowing each fluorescent particle in a dataset to be explicitly clustered. This reduces the potential for misattribution found in the subsampling and comparative attribution methods used in previous approaches, improving our capacity to discriminate and quantify PBAP meta-classes. We evaluate the performance of several hierarchical agglomerative cluster analysis linkages and data normalisation methods using laboratory samples of known particle types and an ambient dataset. Fluorescent and non-fluorescent polystyrene latex spheres were sampled with a Wideband Integrated Bioaerosol Spectrometer (WIBS-4), where the optical size, asymmetry factor and fluorescence measurements were used as inputs to the analysis package. It was found that the Ward linkage with z-score or range normalisation performed best, correctly attributing 98% and 98.1% of the data points, respectively. The best performing methods were applied to the BEACHON-RoMBAS ambient dataset, where it was found that the z-score and range normalisation methods yield similar results, with each method producing clusters representative of fungal spores and bacterial aerosol, consistent with previous results. The z-score result was compared to clusters generated with previous approaches (WIBS AnalysiS Program, WASP), where we observe that the subsampling and comparative attribution method employed by WASP results in the overestimation of the fungal spore concentration by a factor of 1.5 and the underestimation of the bacterial aerosol concentration by a factor of 5. 
We suggest that this is likely due to errors arising from misattribution caused by poor centroid definition, and from failure to assign particles to a cluster as a result of the subsampling and comparative attribution method employed by WASP. The methods used here allow the entire fluorescent population of particles to be analysed, yielding an explicit cluster attribution for each particle, improving cluster centroid definition and our capacity to discriminate and quantify PBAP meta-classes compared to previous approaches.
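
    Since the study found the choice of normalisation (z-score vs. range) decisive for attribution accuracy, here is a small sketch of the two schemes applied to a feature column; the Ward-linkage clustering itself is omitted and the data are illustrative:

```python
from statistics import mean, stdev

def zscore(column):
    """Z-score normalisation: zero mean, unit (sample) standard deviation."""
    m, s = mean(column), stdev(column)
    return [(v - m) / s for v in column]

def range_norm(column):
    """Range (min-max) normalisation onto [0, 1]."""
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) for v in column]

# Illustrative feature column (e.g. optical particle size in um),
# with one outlier to show how the two schemes treat it differently.
sizes = [1.0, 2.0, 3.0, 4.0, 10.0]
print(zscore(sizes))
print(range_norm(sizes))
```

    Both schemes put heterogeneous features (size, asymmetry, fluorescence channels) on comparable scales before distances are computed, which is why the normalisation choice feeds directly into clustering quality.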

  3. Analytical method for establishing indentation rolling resistance

    NASA Astrophysics Data System (ADS)

    Gładysiewicz, Lech; Konieczna, Martyna

    2018-01-01

    Belt conveyors are highly reliable machines able to work in special operating conditions. The harsh environment, long transport distances, and great mass of transported materials cause high energy usage. That is why research in the field of belt conveyor transportation nowadays focuses on reducing power consumption without lowering efficiency. In this paper, previous methods for testing rolling resistance are described, and a new method designed by the authors is presented. The new method of testing rolling resistance is quite simple and inexpensive. Moreover, it allows experimental testing of the impact of different parameters on the value of indentation rolling resistance, such as core design, cover thickness, ambient temperature, idler travel frequency, or load value. Finally, test results on the relationship between rolling resistance and idler travel frequency, and between rolling resistance and idler travel speed, are presented.

  4. A discussion on the origin of quantum probabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holik, Federico; Sáenz, Manuel

    We study the origin of quantum probabilities as arising from non-Boolean propositional-operational structures. We apply the method developed by Cox to non-distributive lattices and develop an alternative formulation of non-Kolmogorovian probability measures for quantum mechanics. By generalizing the method presented in previous works, we outline a general framework for the deduction of probabilities in general propositional structures represented by lattices (including the non-distributive case). -- Highlights: •Several recent works use a derivation similar to that of R.T. Cox to obtain quantum probabilities. •We apply Cox’s method to the lattice of subspaces of the Hilbert space. •We obtain a derivation of quantum probabilities which includes mixed states. •The method presented in this work is susceptible to generalization. •It includes quantum mechanics and classical mechanics as particular cases.

  5. Determination of Carbonyl Groups in Pyrolysis Bio-oils Using Potentiometric Titration: Review and Comparison of Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Black, Stuart; Ferrell, Jack R.

    Carbonyl compounds present in bio-oils are known to be responsible for bio-oil property changes upon storage and during upgrading. As such, carbonyl content has previously been used as a method of tracking bio-oil aging and condensation reactions with less variability than viscosity measurements. Given the importance of carbonyls in bio-oils, accurate analytical methods for their quantification are very important for the bio-oil community. Potentiometric titration methods based on carbonyl oximation have long been used for the determination of carbonyl content in pyrolysis bio-oils. In this study, we present a modification of the traditional carbonyl oximation procedures that results in shorter reaction times, smaller sample sizes, higher precision, and more accurate carbonyl determinations. Some compounds such as carbohydrates are not measured by the traditional method (modified Nicolaides method), resulting in low estimations of the carbonyl content. Furthermore, we have shown that reaction completion for the traditional method can take up to 300 hours. The new method presented here (the modified Faix method) reduces the reaction time to 2 hours, uses triethanolamine (TEA) in the place of pyridine, and requires a smaller sample size for the analysis. Carbonyl contents determined using this new method are consistently higher than when using the traditional titration methods.
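
    Oximation-based titrations of this kind reduce, once endpoint volumes are known, to a simple stoichiometric calculation. A hedged sketch (the formula shape and all numbers are illustrative; the modified Faix procedure defines its own blanks, titrant normality, and stoichiometry):

```python
def carbonyl_content(v_sample_ml, v_blank_ml, titrant_normality, mass_g):
    """Carbonyl content (mmol per g of bio-oil) from a potentiometric
    oximation titration: titrant consumed beyond the blank is taken as
    proportional to the carbonyl groups that reacted with hydroxylamine.
    Illustrative formula only."""
    return (v_sample_ml - v_blank_ml) * titrant_normality / mass_g

# Illustrative endpoint volumes for a 1 g sample with a 0.5 N titrant
print(carbonyl_content(12.5, 2.5, 0.5, 1.0))
```

    Smaller sample sizes raise the relative impact of the blank correction, which is one reason precision of the endpoint determination matters in the modified procedure.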

  6. Determination of Carbonyl Groups in Pyrolysis Bio-oils Using Potentiometric Titration: Review and Comparison of Methods

    DOE PAGES

    Black, Stuart; Ferrell, Jack R.

    2016-01-06

    Carbonyl compounds present in bio-oils are known to be responsible for bio-oil property changes upon storage and during upgrading. As such, carbonyl content has previously been used as a method of tracking bio-oil aging and condensation reactions with less variability than viscosity measurements. Given the importance of carbonyls in bio-oils, accurate analytical methods for their quantification are very important for the bio-oil community. Potentiometric titration methods based on carbonyl oximation have long been used for the determination of carbonyl content in pyrolysis bio-oils. In this study, we present a modification of the traditional carbonyl oximation procedures that results in shorter reaction times, smaller sample sizes, higher precision, and more accurate carbonyl determinations. Some compounds such as carbohydrates are not measured by the traditional method (modified Nicolaides method), resulting in low estimations of the carbonyl content. Furthermore, we have shown that reaction completion for the traditional method can take up to 300 hours. The new method presented here (the modified Faix method) reduces the reaction time to 2 hours, uses triethanolamine (TEA) in the place of pyridine, and requires a smaller sample size for the analysis. Carbonyl contents determined using this new method are consistently higher than when using the traditional titration methods.

  7. Radar Sensing for Intelligent Vehicles in Urban Environments

    PubMed Central

    Reina, Giulio; Johnson, David; Underwood, James

    2015-01-01

    Radar overcomes the shortcomings of laser, stereovision, and sonar because it can operate successfully in dusty, foggy, blizzard-blinding, and poorly lit scenarios. This paper presents a novel method for ground and obstacle segmentation based on radar sensing. The algorithm operates directly in the sensor frame, without the need for a separate synchronised navigation source, calibration parameters describing the location of the radar in the vehicle frame, or the geometric restrictions made in the previous main method in the field. Experimental results are presented in various urban scenarios to validate this approach, showing its potential applicability for advanced driving assistance systems and autonomous vehicle operations. PMID:26102493

  8. Radar Sensing for Intelligent Vehicles in Urban Environments.

    PubMed

    Reina, Giulio; Johnson, David; Underwood, James

    2015-06-19

    Radar overcomes the shortcomings of laser, stereovision, and sonar because it can operate successfully in dusty, foggy, blizzard-blinding, and poorly lit scenarios. This paper presents a novel method for ground and obstacle segmentation based on radar sensing. The algorithm operates directly in the sensor frame, without the need for a separate synchronised navigation source, calibration parameters describing the location of the radar in the vehicle frame, or the geometric restrictions made in the previous main method in the field. Experimental results are presented in various urban scenarios to validate this approach, showing its potential applicability for advanced driving assistance systems and autonomous vehicle operations.

  9. Method for fluorination of actinide fluorides and oxyfluorides using O₂F₂

    DOEpatents

    Eller, P.G.; Malm, J.G.; Penneman, R.A.

    1984-08-01

    The present invention relates generally to methods of fluorination, and more particularly to the use of O₂F₂ for the preparation of actinide hexafluorides, and for the extraction of deposited actinides and the fluorides and oxyfluorides thereof from reaction vessels. The experiments set forth hereinabove demonstrate that the use of O₂F₂ at or below room temperature will be highly beneficial for the preparation of pure actinide hexafluorides from their respective tetrafluorides, without traces of HF being present as occurs with other fluorinating agents, and for the decontamination of equipment previously exposed to actinides, e.g., walls, feed lines, etc.

  10. An automated water iodinating subsystem for manned space flight

    NASA Technical Reports Server (NTRS)

    Houck, O. K.; Wynveen, R. A.

    1974-01-01

    Controlling microbial growth by injecting iodine (I₂) into water supplies is a widely accepted technique, but it requires a specialized injection method for space flight. An electrochemical I₂ injection method and I₂ level monitor are discussed in this paper, which also describes iodination practices previously used in the manned space program and the major biocidal characteristics of I₂. The development and design of the injector and monitor are described, and results of subsequent experiments are presented. Also presented are the expected vehicle penalties for utilizing the I₂ injector in certain space missions, especially the Space Shuttle, and possible injector failure modes and their criticality.

  11. The costs of nurse turnover: part 1: an economic perspective.

    PubMed

    Jones, Cheryl Bland

    2004-12-01

    Nurse turnover is costly for healthcare organizations. Administrators and nurse executives need a reliable estimate of nurse turnover costs and the origins of those costs if they are to develop effective measures of reducing nurse turnover and its costs. However, determining how to best capture and quantify nurse turnover costs can be challenging. Part 1 of this series conceptualizes nurse turnover via human capital theory and presents an update of a previously developed method for determining the costs of nurse turnover, the Nursing Turnover Cost Calculation Method. Part 2 (January 2005) presents a recent application of the methodology in an acute care hospital.
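
    Turnover-costing methods of this kind ultimately sum per-nurse cost components across categories. A schematic sketch (the category names and dollar figures below are illustrative placeholders, not the published Nursing Turnover Cost Calculation Method itself):

```python
def total_turnover_cost(components):
    """Per-nurse turnover cost as the sum of its cost components."""
    return sum(components.values())

# Illustrative cost categories and figures only
components = {
    "advertising_and_recruitment": 2000.0,
    "vacancy_coverage": 10000.0,
    "hiring": 1500.0,
    "orientation_and_training": 15000.0,
    "decreased_new_hire_productivity": 12000.0,
    "termination_processing": 500.0,
}
print(total_turnover_cost(components))  # total cost per departing nurse
```

    Keeping the components itemized, rather than reporting only the total, is what lets administrators see which cost origins are most worth targeting.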

  12. A flammability study of thin plastic film materials

    NASA Technical Reports Server (NTRS)

    Skinner, S. Ballou

    1990-01-01

    The Materials Science Laboratory at the Kennedy Space Center presently conducts flammability tests on thin plastic film materials by using a small needle rake method. Flammability data from twenty-two thin plastic film materials were obtained and cross-checked by using three different testing methods: (1) the presently used small needle rake; (2) the newly developed large needle rake; and (3) the previously used frame. In order to better discern the melting-burning phenomenon of thin plastic film material, five additional specific experiments were performed. These experiments determined the following: (1) the heat sink effect of each testing method; (2) the effect of the burn angle on the burn length or melting/shrinkage length; (3) the temperature profile above the ignition source; (4) the melting point and the fire point of each material; and (5) the melting/burning profile of each material via infrared (IR) imaging. The results of these experiments are presented.

  13. Hydrodynamics of strongly coupled non-conformal fluids from gauge/gravity duality

    NASA Astrophysics Data System (ADS)

    Springer, Todd

    2009-08-01

    The subject of relativistic hydrodynamics is explored using the tools of gauge/gravity duality. A brief literature review of AdS/CFT and gauge/gravity duality is presented first. This is followed by a pedagogical introduction to the use of these methods in determining hydrodynamic dispersion relations ω(q) of perturbations in a strongly coupled fluid. Shear and sound mode perturbations are examined in a special class of gravity duals: those where the matter supporting the metric is scalar in nature. Analytical solutions (to order q^4 and q^3, respectively) for the shear and sound mode dispersion relations are presented for a subset of these backgrounds. The work presented here is based on previous publications by the same author, though some previously unpublished results are also included. In particular, the subleading term in the shear mode dispersion relation is analyzed using the AdS/CFT correspondence without any reference to the black hole membrane paradigm.

  14. Effect of surface tension on the behavior of adhesive contact based on Lennard-Jones potential law

    NASA Astrophysics Data System (ADS)

    Zhu, Xinyao; Xu, Wei

    2018-02-01

    The present study explores the effect of surface tension on adhesive contact behavior where the adhesion is interpreted by long-range intermolecular forces. The adhesive contact is analyzed using the equivalent system of a rigid sphere and an elastic half space covered by a membrane with surface tension. The long-range intermolecular forces are modeled with the Lennard‒Jones (L‒J) potential law. The current adhesive contact issue can be represented by a nonlinear integral equation, which can be solved by Newton‒Raphson method. In contrast to previous studies which consider intermolecular forces as short-range, the present study reveals more details of the features of adhesive contact with surface tension, in terms of jump instabilities, pull-off forces, pressure distribution within the contact area, etc. The transition of the pull-off force is not only consistent with previous studies, but also presents some new interesting characteristics in the current situation.
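
    As a tiny illustration of the solution machinery, not the paper's coupled integral equation: a 3-9 traction law of Lennard-Jones type vanishes at the equilibrium gap, and a Newton-Raphson iteration locates that gap (all constants illustrative):

```python
def traction(h, w=1.0, eps=1.0):
    # 3-9 surface traction law of Lennard-Jones type; vanishes at h = eps
    return (8.0 * w / (3.0 * eps)) * ((eps / h) ** 3 - (eps / h) ** 9)

def dtraction(h, w=1.0, eps=1.0):
    # Analytic derivative of the traction law with respect to the gap h
    return (8.0 * w / (3.0 * eps)) * (-3.0 * eps**3 / h**4 + 9.0 * eps**9 / h**10)

def newton_raphson(f, df, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# The equilibrium gap solves (eps/h)**3 == (eps/h)**9, i.e. h == eps.
# The starting guess must avoid the traction maximum near h ~ 3**(1/6),
# where the derivative vanishes and the Newton step blows up.
print(newton_raphson(traction, dtraction, 1.1))
```

    The same sensitivity to the starting point, magnified across a full discretized contact problem, is part of why jump instabilities appear in long-range adhesive contact.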

  15. Initial Single Event Effects Testing of the Xilinx Virtex-4 Field Programmable Gate Array

    NASA Technical Reports Server (NTRS)

    Allen, Gregory R.; Swift, Gary M.; Carmichael, C.; Tseng, C.

    2007-01-01

    We present initial results for the thin epitaxial Xilinx Virtex-4 Field Programmable Gate Array (FPGA), and compare to previous results obtained for the Virtex-II and Virtex-II Pro. The data presented was acquired through a consortium-based effort with the common goal of providing the space community with data and mitigation methods for the use of Xilinx FPGAs in space.

  16. The aggregated unfitted finite element method for elliptic problems

    NASA Astrophysics Data System (ADS)

    Badia, Santiago; Verdugo, Francesc; Martín, Alberto F.

    2018-07-01

    Unfitted finite element techniques are valuable tools in different applications where the generation of body-fitted meshes is difficult. However, these techniques are prone to severe ill-conditioning problems that obstruct the efficient use of iterative Krylov methods and, in consequence, hinder the practical usage of unfitted methods for realistic large-scale applications. In this work, we present a technique that addresses such conditioning problems by constructing enhanced finite element spaces based on a cell aggregation technique. The presented method, called the aggregated unfitted finite element method, is easy to implement, and can be used, in contrast to previous works, in Galerkin approximations of coercive problems with conforming Lagrangian finite element spaces. The mathematical analysis of the new method states that the condition number of the resulting linear system matrix scales as in standard finite elements for body-fitted meshes, without being affected by small cut cells, and that the method leads to the optimal finite element convergence order. These theoretical results are confirmed with 2D and 3D numerical experiments.

  17. Bayesian Approach to Spectral Function Reconstruction for Euclidean Quantum Field Theories

    NASA Astrophysics Data System (ADS)

    Burnier, Yannis; Rothkopf, Alexander

    2013-11-01

    We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression, which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements in the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T = 2.33 T_C.

  18. Bayesian approach to spectral function reconstruction for Euclidean quantum field theories.

    PubMed

    Burnier, Yannis; Rothkopf, Alexander

    2013-11-01

    We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression, which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements in the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T = 2.33 T_C.

  19. Scaling of Two-Phase Flows to Partial-Earth Gravity

    NASA Technical Reports Server (NTRS)

    Hurlbert, Kathryn M.; Witte, Larry C.

    2003-01-01

    A report presents a method of scaling, to partial-Earth gravity, of parameters that describe pressure drops and other characteristics of two-phase (liquid/vapor) flows. The development of the method was prompted by the need for a means of designing two-phase flow systems to operate on the Moon and on Mars, using fluid-property and flow data from terrestrial two-phase-flow experiments, thus eliminating the need for partial-gravity testing. The report presents an explicit procedure for designing an Earth-based test bed that can provide hydrodynamic similarity with two-phase fluids flowing in partial-gravity systems. The procedure does not require prior knowledge of the flow regime (i.e., the spatial orientation of the phases). The method also provides for determination of pressure drops in two-phase partial-gravity flows by use of a generalization of the classical Moody chart (previously applicable to single-phase flow only). The report presents experimental data from Mars- and Moon-activity experiments that appear to demonstrate the validity of this method.

  20. The variational method in quantum mechanics: an elementary introduction

    NASA Astrophysics Data System (ADS)

    Borghi, Riccardo

    2018-05-01

    Variational methods in quantum mechanics are customarily presented as invaluable techniques to find approximate estimates of ground state energies. In the present paper a short catalogue of different celebrated potential distributions (both 1D and 3D), for which an exact and complete (energy and wavefunction) ground state determination can be achieved in an elementary way, is illustrated. No previous knowledge of calculus of variations is required. Rather, in all presented cases the exact energy functional minimization is achieved by using only a couple of simple mathematical tricks: ‘completion of square’ and integration by parts. This makes our approach particularly suitable for undergraduates. Moreover, the key role played by particle localization is emphasized through the entire analysis. This gentle introduction to the variational method could also be potentially attractive for more expert students as a possible elementary route toward a rather advanced topic on quantum mechanics: the factorization method. Such an unexpected connection is outlined in the final part of the paper.

  1. Unwed fathers' ability to pay child support: new estimates accounting for multiple-partner fertility.

    PubMed

    Sinkewicz, Marilyn; Garfinkel, Irwin

    2009-05-01

    We present new estimates of unwed fathers' ability to pay child support. Prior research relied on surveys that drastically undercounted nonresident unwed fathers and provided no link to their children who lived in separate households. To overcome these limitations, previous research assumed assortative mating and that each mother partnered with one father who was actually eligible to pay support and had no other child support obligations. Because the Fragile Families and Child Wellbeing Study contains data on couples, multiple-partner fertility, and a rich array of other previously unmeasured characteristics of fathers, it is uniquely suited to address the limitations of previous research. We also use an improved method of dealing with missing data. Our findings suggest that previous research overestimated the aggregate ability of unwed nonresident fathers to pay child support by 33% to 60%.

  2. Automated Discovery of Elementary Chemical Reaction Steps Using Freezing String and Berny Optimization Methods.

    PubMed

    Suleimanov, Yury V; Green, William H

    2015-09-08

    We present a simple protocol which allows fully automated discovery of elementary chemical reaction steps using double- and single-ended transition-state optimization algorithms in cooperation--the freezing string and Berny optimization methods, respectively. To demonstrate the utility of the proposed approach, the reactivity of several single-molecule systems of importance in combustion and atmospheric chemistry is investigated. The proposed algorithm allowed us to detect, without any human intervention, not only "known" reaction pathways, manually identified in previous studies, but also new, previously "unknown" reaction pathways which involve significant atom rearrangements. We believe that applying such a systematic approach to elementary reaction path finding will greatly accelerate the discovery of new chemistry and will lead to more accurate computer simulations of various chemical processes.

  3. Prediction of binding hot spot residues by using structural and evolutionary parameters

    PubMed Central

    2009-01-01

    In this work, we present a method for predicting hot spot residues by using a set of structural and evolutionary parameters. Unlike previous studies, we use a set of parameters which do not depend on the structure of the protein in complex, so that the predictor can also be used when the interface region is unknown. Despite the fact that no information concerning proteins in complex is used for prediction, the application of the method to a compiled dataset described in the literature achieved a performance of 60.4%, as measured by F-Measure, corresponding to a recall of 78.1% and a precision of 49.5%. This result is higher than those reported by previous studies using the same data set. PMID:21637529
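    The reported 60.4% F-measure follows, up to rounding of the inputs, from the stated precision and recall via the harmonic mean; a minimal sketch verifying the arithmetic:

```python
def f_measure(precision: float, recall: float) -> float:
    """F-measure (F1 score): harmonic mean of precision and recall."""
    return 2.0 * precision * recall / (precision + recall)

# Values reported in the abstract: precision 49.5%, recall 78.1%.
f1 = f_measure(0.495, 0.781)
print(round(100 * f1, 1))  # close to the reported 60.4% (the inputs are themselves rounded)
```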

  4. Emergency diagnosis of cancer and previous general practice consultations: insights from linked patient survey data

    PubMed Central

    Abel, Gary A; Mendonca, Silvia C; McPhail, Sean; Zhou, Yin; Elliss-Brookes, Lucy; Lyratzopoulos, Georgios

    2017-01-01

    Background Emergency diagnosis of cancer is common and aetiologically complex. The proportion of emergency presenters who have consulted previously with relevant symptoms is uncertain. Aim To examine how many patients with cancer, who were diagnosed as emergencies, have had previous primary care consultations with relevant symptoms; and among those, to examine how many had multiple consultations. Design and setting Secondary analysis of patient survey data from the 2010 English Cancer Patient Experience Survey (CPES), previously linked to population-based data on diagnostic route. Method For emergency presenters with 18 different cancers, associations were examined for two outcomes (prior GP consultation status; and ‘three or more consultations’ among prior consultees) using logistic regression. Results Among 4647 emergency presenters, 1349 (29%) reported no prior consultations, being more common in males (32% versus 25% in females, P<0.001), older (44% in ≥85 versus 30% in 65–74-year-olds, P<0.001), and the most deprived (35% versus 25% least deprived, P = 0.001) patients; and highest/lowest for patients with brain cancer (46%) and mesothelioma (13%), respectively (P<0.001 for overall variation by cancer site). Among 3298 emergency presenters with prior consultations, 1356 (41%) had three or more consultations, which were more likely in females (P<0.001), younger (P<0.001), and non-white patients (P = 0.017) and those with multiple myeloma, and least likely for patients with leukaemia (P<0.001). Conclusion Contrary to suggestions that emergency presentations represent missed diagnoses, about one-third of emergency presenters (particularly those in older and more deprived groups) have no prior GP consultations. Furthermore, only about one-third report multiple (three or more) consultations, which are more likely in ‘harder-to-suspect’ groups. PMID:28438775

  5. Differences between Presentation Methods in Working Memory Procedures: A Matter of Working Memory Consolidation

    PubMed Central

    Ricker, Timothy J.; Cowan, Nelson

    2014-01-01

    Understanding forgetting from working memory, the memory used in ongoing cognitive processing, is critical to understanding human cognition. In the last decade a number of conflicting findings have been reported regarding the role of time in forgetting from working memory. This has led to a debate concerning whether longer retention intervals necessarily result in more forgetting. An obstacle to directly comparing conflicting reports is a divergence in methodology across studies. Studies which find no forgetting as a function of retention-interval duration tend to use sequential presentation of memory items, while studies which find forgetting as a function of retention-interval duration tend to use simultaneous presentation of memory items. Here, we manipulate the duration of retention and the presentation method of memory items, presenting items either sequentially or simultaneously. We find that these differing presentation methods can lead to different rates of forgetting because they tend to differ in the time available for consolidation into working memory. The experiments detailed here show that equating the time available for working memory consolidation equates the rates of forgetting across presentation methods. We discuss the meaning of this finding in the interpretation of previous forgetting studies and in the construction of working memory models. PMID:24059859

  6. Simultaneous and rapid determination of multiple component concentrations in a Kraft liquor process stream

    DOEpatents

    Li, Jian [Marietta, GA; Chai, Xin Sheng [Atlanta, GA; Zhu, Junyoung [Marietta, GA

    2008-06-24

    The present invention is a rapid method of determining the concentration of the major components in a chemical stream. The present invention is also a simple, low-cost device for determining the in-situ concentration of the major components in a chemical stream. In particular, the present invention provides a useful method for simultaneously determining the concentrations of sodium hydroxide, sodium sulfide and sodium carbonate in aqueous kraft pulping liquors through use of an attenuated total reflectance (ATR) tunnel flow cell or optical probe capable of producing an ultraviolet absorbance spectrum over a wavelength range of 190 to 300 nm. In addition, the present invention eliminates the need for manual sampling and dilution previously required to generate analyzable samples. The inventive method can be used in kraft pulping operations to control white liquor causticizing efficiency, sulfate reduction efficiency in green liquor, oxidation efficiency for oxidized white liquor, and the active and effective alkali charge to kraft pulping operations.
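    Simultaneous multicomponent determination of this kind rests on the additivity of absorbance (Beer-Lambert law): at each wavelength the measured absorbance is a linear combination of the component concentrations, so with at least as many wavelengths as components the concentrations follow from a linear solve. A toy two-component sketch; all absorptivities and concentrations below are invented for illustration and are not taken from the patent:

```python
def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - a12 * b2) / det, (a11 * b2 - b1 * a21) / det

# Hypothetical effective absorptivities (path length folded in) of two species
# at two wavelengths within the 190-300 nm window -- purely illustrative numbers.
e = {("NaOH", 200): 0.80, ("NaOH", 250): 0.10,
     ("Na2S", 200): 0.30, ("Na2S", 250): 0.90}
c_true = {"NaOH": 2.0, "Na2S": 0.5}  # invented concentrations

# Beer-Lambert additivity: total absorbance = sum of component contributions.
A200 = e[("NaOH", 200)] * c_true["NaOH"] + e[("Na2S", 200)] * c_true["Na2S"]
A250 = e[("NaOH", 250)] * c_true["NaOH"] + e[("Na2S", 250)] * c_true["Na2S"]

# Invert the 2x2 absorptivity matrix to recover the concentrations.
c_naoh, c_na2s = solve_2x2(e[("NaOH", 200)], e[("Na2S", 200)],
                           e[("NaOH", 250)], e[("Na2S", 250)], A200, A250)
print(c_naoh, c_na2s)  # recovers approximately 2.0 and 0.5
```

    In practice a full spectrum gives many more wavelengths than components, and a least-squares solve replaces the square system.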

  7. Self-calibrating models for dynamic monitoring and diagnosis

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin

    1994-01-01

    The present goal in qualitative reasoning is to develop methods for automatically building qualitative and semiquantitative models of dynamic systems and to use them for monitoring and fault diagnosis. The qualitative approach to modeling provides a guarantee of coverage while our semiquantitative methods support convergence toward a numerical model as observations are accumulated. We have developed and applied methods for automatic creation of qualitative models, developed two methods for obtaining tractable results on problems that were previously intractable for qualitative simulation, and developed more powerful methods for learning semiquantitative models from observations and deriving semiquantitative predictions from them. With these advances, qualitative reasoning comes significantly closer to realizing its aims as a practical engineering method.

  8. Method: automatic segmentation of mitochondria utilizing patch classification, contour pair classification, and automatically seeded level sets

    PubMed Central

    2012-01-01

    Background While progress has been made to develop automatic segmentation techniques for mitochondria, there remains a need for more accurate and robust techniques to delineate mitochondria in serial blockface scanning electron microscopic data. Previously developed texture based methods are limited for solving this problem because texture alone is often not sufficient to identify mitochondria. This paper presents a new three-step method, the Cytoseg process, for automated segmentation of mitochondria contained in 3D electron microscopic volumes generated through serial block face scanning electron microscopic imaging. The method consists of three steps. The first is a random forest patch classification step operating directly on 2D image patches. The second step consists of contour-pair classification. At the final step, we introduce a method to automatically seed a level set operation with output from previous steps. Results We report accuracy of the Cytoseg process on three types of tissue and compare it to a previous method based on Radon-Like Features. At step 1, we show that the patch classifier identifies mitochondria texture but creates many false positive pixels. At step 2, our contour processing step produces contours and then filters them with a second classification step, helping to improve overall accuracy. We show that our final level set operation, which is automatically seeded with output from previous steps, helps to smooth the results. Overall, our results show that use of contour pair classification and level set operations improve segmentation accuracy beyond patch classification alone. We show that the Cytoseg process performs well compared to another modern technique based on Radon-Like Features. Conclusions We demonstrated that texture based methods for mitochondria segmentation can be enhanced with multiple steps that form an image processing pipeline. 
While we used a random-forest based patch classifier to recognize texture, it would be possible to replace this with other texture identifiers, and we plan to explore this in future work. PMID:22321695

  9. Sea level rise from the Greenland Ice Sheet during the Eemian interglacial: Review of previous work with focus on the surface mass balance

    NASA Astrophysics Data System (ADS)

    Plach, Andreas; Hestnes Nisancioglu, Kerim

    2016-04-01

    The contribution of the Greenland Ice Sheet (GIS) to global sea level rise during the Eemian interglacial (about 125,000 years ago) has been the focus of many studies. A main reason for the interest in this period is the considerably warmer climate during the Eemian, which is often seen as an analogue for possible future climate conditions. Simulated sea level rise during the Eemian can therefore be used to better understand possible future sea level rise. The most recent assessment report of the Intergovernmental Panel on Climate Change (IPCC AR5) gives an overview of several studies and discusses the possible implications for future sea level rise. The report also reveals large differences between these studies in terms of simulated GIS extent and corresponding sea level rise. The present study gives a more exhaustive review of previous work discussing sea level rise from the GIS during the Eemian interglacial. The smallest extents of the GIS simulated by various authors are shown and summarized, with a focus on the methods used to calculate the surface mass balance. A hypothesis of the present work is that the varying results of the previous studies can largely be explained by the different methods used to calculate the surface mass balance. In addition, as a first step for future work, the surface mass balance of the GIS for a proxy-data-derived forcing ("index method") and a direct forcing with a General Circulation Model (GCM) are shown and discussed.

  10. Acute fatal hemorrhage from previously undiagnosed cerebral arteriovenous malformations in children: a single-center experience.

    PubMed

    Riordan, Coleman P; Orbach, Darren B; Smith, Edward R; Scott, R Michael

    2018-06-01

    OBJECTIVE The most significant adverse outcome of intracranial hemorrhage from an arteriovenous malformation (AVM) is death. This study reviews a single-center experience with pediatric AVMs to quantify the incidence and characterize clinical and radiographic factors associated with sudden death from the hemorrhage of previously undiagnosed AVMs in children. METHODS A single-center database review of the period from 2006 to 2017 identified all patients with a first-time intracranial hemorrhage from a previously undiagnosed AVM. Clinical and radiographic data were collected and compared between patients who survived to hospital discharge and those who died at presentation. RESULTS A total of 57 patients (average age 10.8 years, range 0.1-19 years) presented with first-time intracranial hemorrhage from a previously undiagnosed AVM during the study period. Of this group, 7/57 (12%) patients (average age 11.5 years, range 6-16 years) suffered hemorrhages that led directly to their deaths. Compared to the cohort of patients who survived their hemorrhage, patients who died were 4 times more likely to have an AVM in the posterior fossa. No clear pattern of antecedent triggering activity (sports, trauma, etc.) was identified, and 3/7 (43%) experienced cardiac arrest in the prehospital setting. Surviving patients were ultimately treated with resection of the AVM in 42/50 (84%) of cases. CONCLUSIONS Children who present with hemorrhage from a previously undiagnosed intracranial AVM had a 12% chance of sudden death in our single-institution series of pediatric cerebrovascular cases. Clinical triggers of hemorrhage are unpredictable, but subsequent radiographic evidence of a posterior fossa AVM was present in 57% of fatal cases, and all fatal cases were in locations with high risk of potential herniation. These data support a proactive, aggressive approach toward definitive treatment of AVMs in children.

  11. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems

    PubMed Central

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-01-01

    Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. 
This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289

  12. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems.

    PubMed

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-11-02

    We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems.
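    The scatter-search idea behind this metaheuristic (diversification, a small reference set, pairwise combination, local improvement) can be sketched on a toy one-parameter model-calibration problem. This is an illustration of the general methodology, not the authors' implementation; the decay model and all numbers are invented:

```python
import math
import random

def model(k, t):
    """Toy one-parameter dynamic model: x(t) = exp(-k * t)."""
    return math.exp(-k * t)

def cost(params, data):
    """Sum of squared residuals between model prediction and observations."""
    k = params[0]
    return sum((model(k, t) - y) ** 2 for t, y in data)

def scatter_search(data, bounds=(0.0, 5.0), ref_size=5, iters=15, seed=0):
    """Bare-bones scatter search: diversify, keep a small reference set,
    recombine pairs, and locally refine each child."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Diversification: a scattered set of initial candidate solutions.
    pool = [[rng.uniform(lo, hi)] for _ in range(20)]
    refset = sorted(pool, key=lambda p: cost(p, data))[:ref_size]
    for _ in range(iters):
        # Combination: linear combinations of reference-set pairs.
        children = []
        for i in range(len(refset)):
            for j in range(i + 1, len(refset)):
                lam = rng.uniform(-0.5, 1.5)
                k = refset[i][0] + lam * (refset[j][0] - refset[i][0])
                children.append([min(hi, max(lo, k))])
        # Improvement: simple pattern search with shrinking step size.
        for ch in children:
            step = 0.1
            while step > 1e-4:
                moved = True
                while moved:
                    moved = False
                    for cand in (ch[0] - step, ch[0] + step):
                        if lo <= cand <= hi and cost([cand], data) < cost(ch, data):
                            ch[0] = cand
                            moved = True
                step *= 0.5
        # Reference-set update: retain the best solutions found so far.
        refset = sorted(refset + children, key=lambda p: cost(p, data))[:ref_size]
    return refset[0]

# Synthetic noiseless "observations" generated with k = 1.3 (invented demo data).
data = [(i / 10.0, model(1.3, i / 10.0)) for i in range(30)]
best = scatter_search(data)
print(best)  # converges close to [1.3]
```

    Real applications replace the pattern search with a stronger local solver and work in many dimensions, but the four-phase structure is the same.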

  13. Layout optimization using the homogenization method

    NASA Technical Reports Server (NTRS)

    Suzuki, Katsuyuki; Kikuchi, Noboru

    1993-01-01

    A generalized layout problem involving sizing, shape, and topology optimization is solved by using the homogenization method for three-dimensional linearly elastic shell structures in order to seek a possibility of establishment of an integrated design system of automotive car bodies, as an extension of the previous work by Bendsoe and Kikuchi. A formulation of a three-dimensional homogenized shell, a solution algorithm, and several examples of computing the optimum layout are presented in this first part of the two articles.

  14. High-power baseline and motoring test results for the GPU-3 Stirling engine

    NASA Technical Reports Server (NTRS)

    Thieme, L. G.

    1981-01-01

    Test results are given for the full power range of the engine with both helium and hydrogen working fluids. Comparisons are made to previous testing using an alternator and resistance load bank to absorb the engine output. Indicated power results are presented as determined by several methods. Motoring tests were run to aid in determining engine mechanical losses. Comparisons are made between the results of motoring and energy-balance methods for finding mechanical losses.

  15. Quantum Gibbs ensemble Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fantoni, Riccardo, E-mail: rfantoni@ts.infn.it; Moroni, Saverio, E-mail: moroni@democritos.it

    We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.
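    For context, the classical Gibbs ensemble method referenced here equilibrates two simulation boxes by volume and particle exchange. The textbook acceptance probability for transferring a particle from box 1 (N₁ particles, volume V₁) to box 2 is stated below as background from the classical literature, not from this record:

```latex
P_{\mathrm{acc}}(1 \to 2) = \min\!\left[\, 1,\;
  \frac{N_1\, V_2}{(N_2 + 1)\, V_1}\; e^{-\beta\,\Delta U} \right],
```

    where ΔU is the total change in potential energy of the two boxes; the quantum analogue described in this record replaces the Boltzmann weights with path-integral ones.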

  16. Mathematical programming for the efficient allocation of health care resources.

    PubMed

    Stinnett, A A; Paltiel, A D

    1996-10-01

    Previous discussions of methods for the efficient allocation of health care resources subject to a budget constraint have relied on unnecessarily restrictive assumptions. This paper makes use of established optimization techniques to demonstrate that a general mathematical programming framework can accommodate much more complex information regarding returns to scale, partial and complete indivisibility and program interdependence. Methods are also presented for incorporating ethical constraints into the resource allocation process, including explicit identification of the cost of equity.
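    Under complete indivisibility, the simplest version of the budget-constrained allocation problem discussed here is a 0/1 knapsack: choose programs to maximize aggregate benefit subject to the budget. A minimal dynamic-programming sketch; the program names, costs, and benefits are invented for illustration:

```python
def allocate(programs, budget):
    """0/1 knapsack dynamic program: choose indivisible programs maximizing
    total benefit within the budget. `programs` is a list of
    (name, integer cost, benefit). Returns (best_benefit, chosen_names)."""
    best = {0: (0.0, [])}  # spent -> (total benefit, chosen program names)
    for name, cost, benefit in programs:
        # Snapshot the current states so each program is used at most once.
        for spent, (b, chosen) in list(best.items()):
            s2 = spent + cost
            if s2 <= budget and (s2 not in best or best[s2][0] < b + benefit):
                best[s2] = (b + benefit, chosen + [name])
    return max(best.values(), key=lambda v: v[0])

# Invented illustrative data: (program, cost in $M, benefit in QALYs).
programs = [("screening", 4, 30.0), ("vaccination", 3, 26.0),
            ("outreach", 2, 12.0), ("new-drug", 5, 33.0)]
print(allocate(programs, budget=9))  # → (68.0, ['screening', 'vaccination', 'outreach'])
```

    Returns to scale, partial divisibility, and interdependence constraints, as discussed in the paper, require a richer mathematical program, but the budget constraint enters the same way.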

  17. Nonlinear Analysis of Bonded Composite Tubular Lap Joints

    NASA Technical Reports Server (NTRS)

    Oterkus, E.; Madenci, E.; Smeltzer, S. S., III; Ambur, D. R.

    2005-01-01

    The present study describes a semi-analytical solution method for predicting the geometrically nonlinear response of a bonded composite tubular single-lap joint subjected to general loading conditions. The transverse shear and normal stresses in the adhesive as well as membrane stress resultants and bending moments in the adherends are determined using this method. The method utilizes the principle of virtual work in conjunction with nonlinear thin-shell theory to model the adherends and a cylindrical shear lag model to represent the kinematics of the thin adhesive layer between the adherends. The kinematic boundary conditions are imposed by employing the Lagrange multiplier method. In the solution procedure, the displacement components for the tubular joint are approximated in terms of non-periodic and periodic B-Spline functions in the longitudinal and circumferential directions, respectively. The approach presented herein represents a rapid-solution alternative to the finite element method. The solution method was validated by comparison against a previously considered tubular single-lap joint. The steep variation of both peeling and shearing stresses near the adhesive edges was successfully captured. The applicability of the present method was also demonstrated by considering tubular bonded lap-joints subjected to pure bending and torsion.

  18. Particle analysis using laser ablation mass spectroscopy

    DOEpatents

    Parker, Eric P.; Rosenthal, Stephen E.; Trahan, Michael W.; Wagner, John S.

    2003-09-09

    The present invention provides a method of quickly identifying bioaerosols by class, even if the subject bioaerosol has not been previously encountered. The method begins by collecting laser ablation mass spectra from known particles. The spectra are correlated with the known particles, including the species of particle and the classification (e.g., bacteria). The spectra can then be used to train a neural network, for example using genetic algorithm-based training, to recognize each spectrum and to recognize characteristics of the classifications. The spectra can also be used in a multivariate patch algorithm. Laser ablation mass spectra from unknown particles can be presented as inputs to the trained neural net for identification as to classification. The description below first describes suitable intelligent algorithms and multivariate patch algorithms, then presents an example of the present invention including results.

  19. Exploring student learning profiles in algebra-based studio physics: A person-centered approach

    NASA Astrophysics Data System (ADS)

    Pond, Jarrad W. T.; Chini, Jacquelyn J.

    2017-06-01

    In this study, we explore the strategic self-regulatory and motivational characteristics of students in studio-mode physics courses at three universities with varying student populations and varying levels of success in their studio-mode courses. We survey students using questions compiled from several existing questionnaires designed to measure students' study strategies, attitudes toward and motivations for learning physics, organization of scientific knowledge, experiences outside the classroom, and demographics. Using a person-centered approach, we utilize cluster analysis methods to group students into learning profiles based on their individual responses to better understand the strategies and motives of algebra-based studio physics students. Previous studies have identified five distinct learning profiles across several student populations using similar methods. We present results from first-semester and second-semester studio-mode introductory physics courses across three universities. We identify these five distinct learning profiles found in previous studies to be present within our population of introductory physics students. In addition, we investigate interactions between these learning profiles and student demographics. We find significant interactions between a student's learning profile and their experience with high school physics, major, gender, grade expectation, and institution. Ultimately, we aim to use this method of analysis to take the characteristics of students into account in the investigation of successful strategies for using studio methods of physics instruction within and across institutions.
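    The person-centered clustering step can be illustrated with plain k-means, a common choice for such profile analyses, though this record does not specify the exact clustering algorithm used. The two-dimensional "survey scores" below are invented for the demonstration:

```python
def kmeans(points, k, iters=20):
    """Plain k-means (Lloyd's algorithm): alternate assignment and centroid
    update. Deterministic initialization from the first k points."""
    centroids = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, p in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, centroids[c])))
        # Update step: each centroid becomes the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return assign, centroids

# Invented survey-score vectors (e.g. strategy score, motivation score).
students = [(0.9, 0.8), (1.0, 0.9), (0.8, 1.0),   # high-scoring profile
            (0.1, 0.2), (0.2, 0.1), (0.0, 0.15)]  # low-scoring profile
labels, centers = kmeans(students, k=2)
print(labels)  # separates the first three students from the last three
```

    Profile studies like this one typically cluster on many questionnaire dimensions and choose k (here, five profiles) by comparing solutions, rather than fixing k = 2.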

  20. An Auto-Calibrating Knee Flexion-Extension Axis Estimator Using Principal Component Analysis with Inertial Sensors.

    PubMed

    McGrath, Timothy; Fineman, Richard; Stirling, Leia

    2018-06-08

    Inertial measurement units (IMUs) have been demonstrated to reliably measure human joint angles, an essential quantity in the study of biomechanics. However, most previous literature proposed IMU-based joint angle measurement systems that required manual alignment or prescribed calibration motions. This paper presents a simple, physically-intuitive method for IMU-based measurement of the knee flexion/extension angle in gait without requiring alignment or discrete calibration, based on computationally-efficient and easy-to-implement Principal Component Analysis (PCA). The method is compared against an optical motion capture knee flexion/extension angle modeled through OpenSim. The method is evaluated using both measured and simulated IMU data in an observational study (n = 15) with an absolute root-mean-square error (RMSE) of 9.24° and a zero-mean RMSE of 3.49°. Variation in error across subjects was found, made apparent by the larger subject population than previous literature considers. Finally, the paper presents an explanatory model of RMSE based on IMU mounting location. The observational data suggest that the RMSE of the method is a function of thigh IMU perturbation and axis estimation quality. However, the effect size of these parameters is small in comparison to potential gains from improved IMU orientation estimates. Results also highlight the need to set relevant datums from which to interpret joint angles for both truth references and estimated data.
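    The core idea, that the dominant direction of variation in gyroscope data during gait approximates the flexion/extension axis, can be sketched with a plain PCA via power iteration on synthetic data. This illustrates the principle only, not the authors' pipeline; the signal amplitudes are invented:

```python
import math

def principal_axis(samples, iters=100):
    """First principal component of 3-axis samples: power iteration on the
    mean-removed 3x3 covariance matrix."""
    n = len(samples)
    mean = [sum(s[i] for s in samples) / n for i in range(3)]
    x = [[s[i] - mean[i] for i in range(3)] for s in samples]
    # 3x3 sample covariance matrix.
    cov = [[sum(r[i] * r[j] for r in x) / n for j in range(3)] for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        # Repeated multiplication converges to the dominant eigenvector.
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v

# Synthetic gyroscope samples: rotation dominated by the z axis (standing in
# for the flexion/extension axis), plus small invented off-axis motion.
samples = [(0.1 * math.sin(3 * t), 0.05 * math.cos(2 * t), 2.0 * math.sin(t))
           for t in (0.05 * i for i in range(200))]
axis = principal_axis(samples)
print(axis)  # unit vector dominated by its z component
```

    The knee angle then follows from integrating the angular-velocity component projected onto this axis, with the sign ambiguity of the eigenvector resolved separately.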

  1. Calculation of heat transfer on shuttle type configurations including the effects of variable entropy at boundary layer edge

    NASA Technical Reports Server (NTRS)

    Dejarnette, F. R.

    1972-01-01

    A relatively simple method is presented for including the effect of variable entropy at the boundary-layer edge in a heat transfer method developed previously. For each inviscid surface streamline an approximate shock-wave shape is calculated using a modified form of Maslen's method for inviscid axisymmetric flows. The entropy for the streamline at the edge of the boundary layer is determined by equating the mass flux through the shock wave to that inside the boundary layer. Approximations used in this technique allow the heating rates along each inviscid surface streamline to be calculated independently of the other streamlines. The shock standoff distances computed by the present method are found to compare well with those computed by Maslen's asymmetric method. Heating rates are presented for blunted circular and elliptical cones and a typical space shuttle orbiter at angle of attack. Variable-entropy effects are found to increase heating rates downstream of the nose significantly above those computed using normal-shock entropy, and turbulent heating rates increase more than laminar rates. Effects of Reynolds number and angle of attack are also shown.
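    The mass-flux matching that fixes the boundary-layer-edge entropy can be written schematically for the axisymmetric case (our paraphrase with assumed notation, not the report's equations): the freestream mass flow crossing the shock inside radius r_s is equated to the mass flow carried by the boundary layer at body radius r_b,

```latex
\rho_\infty u_\infty \,\pi r_s^2 \;=\; 2\pi r_b \int_0^{\delta} \rho u \,\mathrm{d}y ,
```

    and the edge entropy is then taken as the post-shock entropy of the streamline that crossed the shock at r_s.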

  2. Identifying and Tracking Solar Photospheric Bright Points Based on Three-dimensional Segmentation Technology

    NASA Astrophysics Data System (ADS)

    Xiong, J. P.; Zhang, A. L.; Ji, K. F.; Feng, S.; Deng, H.; Yang, Y. F.

    2016-01-01

    Photospheric bright points (PBPs) are tiny, short-lived phenomena that can be seen within dark inter-granular lanes. In this paper, we develop a new method to identify and track PBPs in a three-dimensional data cube. Unlike previous approaches such as Detection-Before-Tracking, this method is based on Tracking-While-Detection. Using this method, the whole lifetime of a PBP can be accurately measured even when, because of occasionally weak intensity, the PBP would be separated into several objects by the Laplacian and morphological dilation (LMD) method. Applying the method to G-band PBPs observed by Hinode/SOT (Solar Optical Telescope) for more than two hours, we find that isolated PBPs have an average lifetime of 3 minutes, with the longest lasting up to 27 minutes, values greater than those detected by the previous LMD method. Furthermore, we find that the mean intensity of PBPs is 1.02 times the mean photospheric intensity, which is less than the value detected by the LMD method, and that the intensity of a PBP oscillates with a period of 2-3 minutes over its whole lifetime.
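    The advantage of segmenting the full three-dimensional data cube over frame-by-frame detection can be sketched as follows; the toy cube, thresholds, and use of `scipy.ndimage.label` are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
from scipy import ndimage

# Toy (time, y, x) data cube: one bright point whose intensity dips below
# a per-frame detection threshold mid-life.
cube = np.zeros((8, 16, 16))
cube[0:3, 5, 5] = 1.0      # frames 0-2: bright
cube[3, 5, 5] = 0.3        # frame 3: weak, but not gone
cube[4:7, 5, 5] = 1.0      # frames 4-6: bright again

# Detection-Before-Tracking analogue: thresholding each frame at 0.5
# splits the track into two shorter apparent lifetimes.
profile = (cube > 0.5)[:, 5, 5].astype(int)
runs = int(np.sum(np.diff(np.concatenate(([0], profile))) == 1))

# Tracking-While-Detection analogue: one 3-D segmentation of the whole
# cube; temporal connectivity bridges the weak frame, so the full
# lifetime is recovered as a single labeled object.
labels, n = ndimage.label(cube > 0.2)
frames = np.nonzero(labels == 1)[0]
lifetime_3d = int(frames.max() - frames.min() + 1)
```

    Here the per-frame detector reports two short-lived objects, while the 3-D segmentation reports one object spanning all seven bright frames.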

  3. Optics-Only Calibration of a Neural-Net Based Optical NDE Method for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2004-01-01

    A calibration process is presented that uses optical measurements alone to calibrate a neural-net based NDE method. The method itself detects small changes in the vibration mode shapes of structures. The optics-only calibration process confirms previous work showing that the sensitivity to vibration-amplitude changes can be as small as 10 nanometers. A more practical value in an NDE service laboratory is shown to be 50 nanometers. Both model-generated and experimental calibrations are demonstrated using two implementations of the calibration technique. The implementations are based on previously published demonstrations of the NDE method and on an alternative calibration procedure that depends on comparing neural-net and point-sensor measurements. The optics-only calibration method, unlike the alternative method, does not require modifications of the structure being tested or the creation of calibration objects. The calibration process can be used to test improvements in the NDE process and to develop a vibration-mode independence of damage-detection sensitivity. The calibration effort was intended to support NASA's objective to promote safety in the operation of ground test facilities and in aviation generally, by allowing the detection of the gradual onset of structural changes and damage.

  4. Advances in dual-tone development for pitch frequency doubling

    NASA Astrophysics Data System (ADS)

    Fonseca, Carlos; Somervell, Mark; Scheer, Steven; Kuwahara, Yuhei; Nafus, Kathleen; Gronheid, Roel; Tarutani, Shinji; Enomoto, Yuuichiro

    2010-04-01

    Dual-tone development (DTD) has previously been proposed as a potential cost-effective double patterning technique1. DTD was reported as early as the late 1990s2. The basic principle of dual-tone imaging involves processing exposed resist latent images in both positive-tone (aqueous base) and negative-tone (organic solvent) developers. Conceptually, DTD has attractive cost benefits since it enables pitch doubling without the need for multiple etch steps of patterned resist layers. While the DTD concept is simple to understand, many challenges must be overcome and understood in order to make it a manufacturing solution. Previous work by the authors demonstrated the feasibility of DTD imaging for 50nm half-pitch features at 0.80NA (k1 = 0.21) and discussed the challenges of printing sub-40nm half-pitch features with DTD. While previous experimental results suggested that clever processing on the wafer track can be used to enable DTD beyond 50nm half-pitch, they also suggested that identifying suitable resist materials or chemistries is essential for achieving successful imaging results with novel resist processing methods on the wafer track. In this work, we present recent advances in the search for resist materials that work in conjunction with novel resist processing methods on the wafer track to enable DTD. Recent experimental results with new resist chemistries, specifically designed for DTD, are presented. We also present simulation studies that help identify resist properties that could enable DTD imaging, ultimately leading to viable DTD resist materials.

  5. Analytical solution for vacuum preloading considering the nonlinear distribution of horizontal permeability within the smear zone.

    PubMed

    Peng, Jie; He, Xiang; Ye, Hanming

    2015-01-01

    Vacuum preloading is an effective method that is widely used in ground treatment. In consolidation analysis, the soil around a prefabricated vertical drain (PVD) is traditionally divided into a smear zone and an undisturbed zone, both with constant permeability. In reality, the permeability of the soil changes continuously within the smear zone. In this study, the horizontal permeability coefficient of soil within the smear zone is described by an exponential function of radial distance. A solution for vacuum preloading consolidation that considers the nonlinear distribution of horizontal permeability within the smear zone is presented and compared with previous analytical results as well as a numerical solution. The results show that the presented solution correlates well with the numerical solution and is more precise than the previous analytical solution.
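    A minimal numeric sketch of an exponential smear-zone permeability profile; the functional form, radii, and disturbance ratio below are assumed for illustration and may differ from the paper's formulation:

```python
import numpy as np

# Hypothetical exponential smear-zone permeability profile:
#   k_h(r) = k_u * exp(alpha * (r - r_s)),  r_w <= r <= r_s,
# continuous with the undisturbed value k_u at the smear boundary r_s and
# reduced at the drain face r_w (all parameter values are assumptions).
k_u = 1e-9                         # undisturbed permeability (m/s), assumed
r_w, r_s = 0.03, 0.15              # drain and smear-zone radii (m), assumed
alpha = np.log(4.0) / (r_s - r_w)  # chosen so k_u / k_h(r_w) = 4

def k_h(r):
    return k_u * np.exp(alpha * (r - r_s))

ratio_at_drain = k_u / k_h(r_w)    # disturbance ratio at the drain face
```

    Continuity at the smear boundary is what distinguishes this profile from the traditional two-zone model with a permeability jump.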

  6. Analytical solution for vacuum preloading considering the nonlinear distribution of horizontal permeability within the smear zone

    PubMed Central

    Peng, Jie; He, Xiang; Ye, Hanming

    2015-01-01

    Vacuum preloading is an effective method that is widely used in ground treatment. In consolidation analysis, the soil around a prefabricated vertical drain (PVD) is traditionally divided into a smear zone and an undisturbed zone, both with constant permeability. In reality, the permeability of the soil changes continuously within the smear zone. In this study, the horizontal permeability coefficient of soil within the smear zone is described by an exponential function of radial distance. A solution for vacuum preloading consolidation that considers the nonlinear distribution of horizontal permeability within the smear zone is presented and compared with previous analytical results as well as a numerical solution. The results show that the presented solution correlates well with the numerical solution and is more precise than the previous analytical solution. PMID:26447973

  7. Amount of Postcue Encoding Predicts Amount of Directed Forgetting

    ERIC Educational Resources Information Center

    Pastotter, Bernhard; Bauml, Karl-Heinz

    2010-01-01

    In list-method directed forgetting, participants are cued to intentionally forget a previously studied list (List 1) before encoding a subsequently presented list (List 2). Compared with remember-cued participants, forget-cued participants typically show impaired recall of List 1 and improved recall of List 2, referred to as List 1 forgetting and…

  8. Control of Cost in Prospective Memory: Evidence for Spontaneous Retrieval Processes

    ERIC Educational Resources Information Center

    Scullin, Michael K.; McDaniel, Mark A.; Einstein, Gilles O.

    2010-01-01

    To examine the processes that support prospective remembering, previous research has often examined whether the presence of a prospective memory task slows overall responding on an ongoing task. Although slowed task performance suggests that monitoring is present, this method does not clearly establish whether monitoring is functionally related to…

  9. An Evaluation of Student Team Teaching in Sophomore Physics Classes. Final Report.

    ERIC Educational Resources Information Center

    Thrasher, Paul H.

    In the present document the effectiveness of a student team teaching technique is evaluated in comparison with the lecture method. The team teaching technique, previously used for upper division and graduate physics courses, was, for this study, used in a sophomore physics, electricity and magnetism course for engineers, mathematicians, chemists,…

  10. Meeting the Needs of the 21st Century Student

    ERIC Educational Resources Information Center

    Niles, Phyllis

    2011-01-01

    This paper examines the learning needs of millennial students, a generation different from any previous one; accordingly, librarians should adjust their teaching methods to accommodate those needs. Should we, as librarians, consider changing our reference services--the way we present instruction and the materials that we order for the library?…

  11. The Two-Semester Thesis Model: Emphasizing Research in Undergraduate Technical Communication Curricula

    ERIC Educational Resources Information Center

    Ford, Julie Dyke; Bracken, Jennifer L.; Wilson, Gregory D.

    2009-01-01

    This article addresses previous arguments that call for increased emphasis on research in technical communication programs. Focusing on the value of scholarly-based research at the undergraduate level, we present New Mexico Tech's thesis model as an example of helping students develop familiarity with research skills and methods. This two-semester…

  12. Functional Dysphonia during Mental Imagery: Testing the Trait Theory of Voice Disorders

    ERIC Educational Resources Information Center

    van Mersbergen, Miriam; Patrick, Christopher; Glaze, Leslie

    2008-01-01

    Purpose: Previous research has proposed that persons with functional dysphonia (FD) present with temperamental traits that predispose them to their voice disorder. We investigated this theory in a controlled experiment and compared them with social anxiety (SA) and healthy control (HC) groups. Method: Twelve participants with FD, 19 participants…

  13. Children with ADHD and Depression: A Multisource, Multimethod Assessment of Clinical, Social, and Academic Functioning

    ERIC Educational Resources Information Center

    Blackman, Gabrielle L.; Ostrander, Rick; Herman, Keith C.

    2005-01-01

    Although ADHD and depression are common comorbidities in youth, few studies have examined this particular clinical presentation. To address method bias limitations of previous research, this study uses multiple informants to compare the academic, social, and clinical functioning of children with ADHD, children with ADHD and depression, and…

  14. Haemophilus haemolyticus Isolates Causing Clinical Disease

    PubMed Central

    Wang, Xin; Briere, Elizabeth C.; Katz, Lee S.; Cohn, Amanda C.; Clark, Thomas A.; Messonnier, Nancy E.; Mayer, Leonard W.

    2012-01-01

    We report seven cases of Haemophilus haemolyticus invasive disease detected in the United States, which were previously misidentified as nontypeable Haemophilus influenzae. All cases had different symptoms and presentations. Our study suggests that a testing scheme that includes reliable PCR assays and standard microbiological methods should be used in order to improve H. haemolyticus identification. PMID:22573587

  15. Haemophilus haemolyticus isolates causing clinical disease.

    PubMed

    Anderson, Raydel; Wang, Xin; Briere, Elizabeth C; Katz, Lee S; Cohn, Amanda C; Clark, Thomas A; Messonnier, Nancy E; Mayer, Leonard W

    2012-07-01

    We report seven cases of Haemophilus haemolyticus invasive disease detected in the United States, which were previously misidentified as nontypeable Haemophilus influenzae. All cases had different symptoms and presentations. Our study suggests that a testing scheme that includes reliable PCR assays and standard microbiological methods should be used in order to improve H. haemolyticus identification.

  16. Deal or No Deal? Evaluating Big Deals and Their Journals

    ERIC Educational Resources Information Center

    Blecic, Deborah D.; Wiberley, Stephen E., Jr.; Fiscella, Joan B.; Bahnmaier-Blaszczak, Sara; Lowery, Rebecca

    2013-01-01

    This paper presents methods to develop metrics that compare Big Deal journal packages and the journals within those packages. Deal-level metrics guide selection of a Big Deal for termination. Journal-level metrics guide selection of individual subscriptions from journals previously provided by a terminated deal. The paper argues that, while the…

  17. Controlled Multivariate Evaluation of Open Education: Application of a Critical Model.

    ERIC Educational Resources Information Center

    Sewell, Alan F.; And Others

    This paper continues previous reports of a controlled multivariate evaluation of a junior high school open-education program. A new method of estimating program objectives and implementation is presented, together with the nature and degree of obtained student outcomes. Open-program students were found to approve more highly of their learning…

  18. The Fun Culture in Seniors' Online Communities

    ERIC Educational Resources Information Center

    Nimrod, Galit

    2011-01-01

    Purpose of the study: Previous research found that "fun on line" is the most dominant content in seniors' online communities. The present study aimed to further explore the "fun culture" in these communities and to discover its unique qualities. Design and Methods: The study applied an online ethnography (netnography) approach, utilizing a full…

  19. Modelling the normal bouncing dynamics of spheres in a viscous fluid

    NASA Astrophysics Data System (ADS)

    Izard, Edouard; Lacaze, Laurent; Bonometti, Thomas

    2017-06-01

    Bouncing motions of spheres in a viscous fluid are numerically investigated by an immersed boundary method, which resolves the fluid flow around the solids, combined with a discrete element method for particle motion and contact resolution. Two well-known configurations of bouncing are considered: the normal bouncing of a sphere on a wall in a viscous fluid and normal particle-particle bouncing in a fluid. Previous experiments have shown the effective restitution coefficient to be a function of a single parameter, namely the Stokes number, which compares the inertia of the solid particle with the fluid viscous dissipation. The present simulations show good agreement with experimental observations over the whole range of investigated parameters. However, a new definition of the coefficient of restitution presented here shows a dependence not only on the Stokes number, as in previous works, but also on the fluid-to-particle density ratio. This allows the viscous, inertial, and dry regimes to be identified, as found in the experiments on immersed granular avalanches of Courrech du Pont et al., Phys. Rev. Lett. 90, 044301 (2003), e.g. in a multi-particle configuration.
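    The Stokes-number dependence can be illustrated with a widely used empirical fit for the effective restitution coefficient; the fit, the critical Stokes number St_c ~ 10, and the material values below are assumptions for illustration and do not reproduce the paper's new density-ratio-dependent definition:

```python
# Empirical fit (a sketch, not this paper's definition):
#   e_eff / e_dry = max(0, 1 - St_c / St),
# where St = rho_p * d * U / (9 * mu) compares particle inertia with
# viscous dissipation, and no rebound occurs below St_c.

def stokes_number(rho_p, d, U, mu):
    # rho_p: particle density (kg/m^3), d: diameter (m),
    # U: impact velocity (m/s), mu: dynamic viscosity (Pa s)
    return rho_p * d * U / (9.0 * mu)

def effective_restitution(St, e_dry=0.97, St_c=10.0):
    return max(0.0, e_dry * (1.0 - St_c / St)) if St > 0 else 0.0

# A 3 mm steel bead hitting a wall at 0.5 m/s in a light oil (assumed values):
St = stokes_number(rho_p=7800.0, d=3e-3, U=0.5, mu=0.01)
e = effective_restitution(St)
```

    Below the critical Stokes number the sphere does not rebound at all, which is the viscous regime; at large St the dry value is approached.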

  20. Accelerating Smith-Waterman Algorithm for Biological Database Search on CUDA-Compatible GPUs

    NASA Astrophysics Data System (ADS)

    Munekawa, Yuma; Ino, Fumihiko; Hagihara, Kenichi

    This paper presents a fast method capable of accelerating the Smith-Waterman algorithm for biological database search on a cluster of graphics processing units (GPUs). Our method is implemented using the compute unified device architecture (CUDA), which is available on NVIDIA GPUs. Compared with previous methods, our method makes four major contributions. (1) It efficiently uses on-chip shared memory to reduce the amount of data transferred between off-chip video memory and processing elements in the GPU. (2) It also reduces the number of data fetches by applying a data reuse technique to query and database sequences. (3) A pipelined method is implemented to overlap GPU execution with database access. (4) Finally, a master/worker paradigm is employed to accelerate hundreds of database searches on a cluster system. In experiments, the peak performance on a GeForce GTX 280 card reaches 8.32 giga cell updates per second (GCUPS). We also find that our method reduces the number of data fetches to 1/140, achieving approximately three times higher performance than a previous CUDA-based method. Our 32-node cluster version is approximately 28 times faster than a single-GPU version. Furthermore, the effective performance reaches 75.6 giga instructions per second (GIPS) using 32 GeForce 8800 GTX cards.
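    For reference, the dynamic-programming recurrence that such GPU kernels accelerate can be sketched in plain Python; the scoring parameters here are arbitrary, and none of the paper's memory or pipelining optimizations are reproduced:

```python
import numpy as np

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local-alignment score of sequences a and b."""
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(0,                     # local alignment floor
                          H[i - 1, j - 1] + s,   # match/mismatch
                          H[i - 1, j] + gap,     # gap in b
                          H[i, j - 1] + gap)     # gap in a
    return int(H.max())

score = smith_waterman("ACACACTA", "AGCACACA")
```

    Each cell of H depends only on its three neighbors, which is what makes the anti-diagonal parallelization on GPUs possible.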

  1. Detection of alpha-fetoprotein in magnetic immunoassay of thin channels using biofunctional nanoparticles

    NASA Astrophysics Data System (ADS)

    Tsai, H. Y.; Gao, B. Z.; Yang, S. F.; Li, C. S.; Fuh, C. Bor

    2014-01-01

    This paper presents the use of fluorescent biofunctional nanoparticles (10-30 nm) to detect alpha-fetoprotein (AFP) in a thin-channel magnetic immunoassay. We used an AFP model biomarker and s-shaped deposition zones to test the proposed detection method. The results show that detection using fluorescent biofunctional nanoparticles has a higher throughput than the functional microparticles used in previous experiments on affinity reactions. The proposed method takes about 3 min (versus 150 min for the previous method) to detect 100 samples. It is useful for screening biomarkers in clinical applications and can reduce the run time for sandwich immunoassays to less than 20 min. The detection limit (0.06 pg/ml) and linear range (0.068 pg/ml-0.68 ng/ml) of AFP using fluorescent biofunctional nanoparticles are the same, within experimental error, as those using functional microparticles. This detection limit is substantially lower, and the linear range considerably wider, than those of enzyme-linked immunosorbent assay (ELISA) and other sandwich immunoassay methods. The differences between this method and an ELISA in AFP measurements of serum samples were less than 12%. The proposed method provides simple, fast, and sensitive detection with a high throughput for biomarkers.

  2. Comparative Analysis of a Principal Component Analysis-Based and an Artificial Neural Network-Based Method for Baseline Removal.

    PubMed

    Carvajal, Roberto C; Arias, Luis E; Garces, Hugo O; Sbarbaro, Daniel G

    2016-04-01

    This work presents a non-parametric method based on principal component analysis (PCA) and a parametric one based on artificial neural networks (ANN) to remove continuous baseline features from spectra. The non-parametric method estimates the baseline from a set of sampled basis vectors obtained by applying PCA to a previously composed continuous-spectra learning matrix. The parametric method, by contrast, uses an ANN to filter out the baseline; previous studies have demonstrated that this is one of the most effective approaches to baseline removal. The evaluation of both methods was carried out using a synthetic database designed for benchmarking baseline removal algorithms, containing 100 synthetic composed spectra at different signal-to-baseline ratios (SBR), signal-to-noise ratios (SNR), and baseline slopes. To demonstrate the utility of the proposed methods and to compare them in a real application, a spectral data set measured from a flame radiation process was also used. Several performance metrics, such as the correlation coefficient, chi-square value, and goodness-of-fit coefficient, were calculated to quantify and compare both algorithms. Results demonstrate that the PCA-based method outperforms the ANN-based one in both performance and simplicity. © The Author(s) 2016.
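    The non-parametric idea can be sketched as follows; the learning matrix of random quadratics, the number of retained components, and the synthetic spectrum are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 400)

# Learning matrix of smooth synthetic baselines (a stand-in for the
# paper's continuous-spectra learning matrix): random quadratics.
L = np.stack([np.polyval(rng.normal(size=3), x) for _ in range(200)])

# PCA via SVD of the centered learning matrix -> sampled basis vectors.
mean = L.mean(axis=0)
_, _, vt = np.linalg.svd(L - mean, full_matrices=False)
basis = vt[:3]

# Observed spectrum: narrow peaks riding on an unseen smooth baseline.
baseline = 0.8 + 0.5 * x - 0.6 * x**2
peaks = (np.exp(-0.5 * ((x - 0.3) / 0.005) ** 2)
         + np.exp(-0.5 * ((x - 0.7) / 0.005) ** 2))
spectrum = baseline + peaks

# Least-squares fit of the baseline basis to the spectrum, then subtract.
A = np.vstack([basis, np.ones_like(x)]).T
coef, *_ = np.linalg.lstsq(A, spectrum - mean, rcond=None)
est_baseline = mean + A @ coef
corrected = spectrum - est_baseline
```

    Because the basis spans only smooth shapes seen in the learning matrix, the narrow peaks survive the subtraction largely intact.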

  3. Detecting the borders between coding and non-coding DNA regions in prokaryotes based on recursive segmentation and nucleotide doublets statistics

    PubMed Central

    2012-01-01

    Background: Detecting the borders between coding and non-coding regions is an essential step in genome annotation, and information entropy measures are useful for describing the signals in genome sequences. However, the accuracy of previous border-finding methods based on entropy segmentation still needs to be improved. Methods: In this study, we first applied a new recursive entropic segmentation method to DNA sequences to obtain preliminary significant cuts. A 22-symbol alphabet is used to capture the differential composition of nucleotide doublets and stop-codon patterns along three phases in both DNA strands. This process requires no prior training datasets. Results: Compared with previous segmentation methods, experimental results on three bacterial genomes, Rickettsia prowazekii, Borrelia burgdorferi, and E. coli, show that our approach improves the accuracy of finding the borders between coding and non-coding regions in DNA sequences. Conclusions: This paper presents a new segmentation method for prokaryotes based on Jensen-Rényi divergence with a 22-symbol alphabet. For three bacterial genomes, compared with the A12_JR method, our method raises the accuracy of finding the borders between protein-coding and non-coding regions in DNA sequences. PMID:23282225
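    The entropy-gain criterion behind this kind of segmentation can be sketched with a two-symbol toy sequence; this sketch uses the Jensen-Shannon divergence (the Rényi generalization and the 22-symbol alphabet of the paper are omitted for brevity):

```python
import math

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c) if n else 0.0

def jensen_shannon(seq, i, alphabet):
    """Entropy gain from cutting seq into seq[:i] and seq[i:]."""
    h = lambda s: entropy([s.count(a) for a in alphabet])
    n = len(seq)
    return h(seq) - (i / n) * h(seq[:i]) - ((n - i) / n) * h(seq[i:])

# Toy sequence whose symbol composition changes near position 60; the cut
# that maximizes the divergence is the candidate border.
seq = "A" * 55 + "ABABA" + "B" * 60
best = max(range(1, len(seq)), key=lambda i: jensen_shannon(seq, i, "AB"))
```

    Recursing on each half until no cut is statistically significant yields the preliminary segmentation described above.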

  4. Cheating prevention in visual cryptography.

    PubMed

    Hu, Chih-Ming; Tzeng, Wen-Guey

    2007-01-01

    Visual cryptography (VC) is a method of encrypting a secret image into shares such that stacking a sufficient number of shares reveals the secret image. Shares are usually presented in transparencies. Each participant holds a transparency. Most of the previous research work on VC focuses on improving two parameters: pixel expansion and contrast. In this paper, we studied the cheating problem in VC and extended VC. We considered the attacks of malicious adversaries who may deviate from the scheme in any way. We presented three cheating methods and applied them on attacking existent VC or extended VC schemes. We improved one cheat-preventing scheme. We proposed a generic method that converts a VCS to another VCS that has the property of cheating prevention. The overhead of the conversion is near optimal in both contrast degression and pixel expansion.
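    The basic share-stacking principle can be sketched with a minimal (2, 2) scheme in the style of Naor and Shamir; the patterns and the pixel expansion of 2 are illustrative, not the specific schemes analyzed in the paper:

```python
import random

# (2, 2) visual cryptography sketch: each secret pixel becomes a
# 2-subpixel pattern in each share; stacking (bitwise OR) makes black
# pixels fully black and white pixels half black. Pixel expansion is 2.
PATTERNS = [(0, 1), (1, 0)]   # the two complementary subpixel pairs

def make_shares(secret_bits, rng=random.Random(0)):
    share1, share2 = [], []
    for bit in secret_bits:
        p = rng.choice(PATTERNS)
        share1.append(p)
        # white pixel (0): identical patterns -> stack is half black
        # black pixel (1): complementary patterns -> stack is all black
        share2.append(p if bit == 0 else tuple(1 - s for s in p))
    return share1, share2

def stack(s1, s2):
    return [tuple(a | b for a, b in zip(p1, p2)) for p1, p2 in zip(s1, s2)]

secret = [0, 1, 1, 0, 1]
s1, s2 = make_shares(secret)
recovered = [1 if p == (1, 1) else 0 for p in stack(s1, s2)]
```

    Each share on its own always shows exactly one black subpixel per pixel, so it leaks nothing about the secret; cheating attacks of the kind studied in the paper exploit how participants construct and verify such patterns.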

  5. Transonic CFD applications at Boeing

    NASA Technical Reports Server (NTRS)

    Tinoco, E. N.

    1989-01-01

    The use of computational methods for three dimensional transonic flow design and analysis at the Boeing Company is presented. A range of computational tools consisting of production tools for every day use by project engineers, expert user tools for special applications by computational researchers, and an emerging tool which may see considerable use in the near future are described. These methods include full potential and Euler solvers, some coupled to three dimensional boundary layer analysis methods, for transonic flow analysis about nacelle, wing-body, wing-body-strut-nacelle, and complete aircraft configurations. As the examples presented show, such a toolbox of codes is necessary for the variety of applications typical of an industrial environment. Such a toolbox of codes makes possible aerodynamic advances not previously achievable in a timely manner, if at all.

  6. Study of flow over object problems by a nodal discontinuous Galerkin-lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Wu, Jie; Shen, Meng; Liu, Chen

    2018-04-01

    Flow-over-object problems are studied by a nodal discontinuous Galerkin-lattice Boltzmann method (NDG-LBM) in this work. Different from the standard lattice Boltzmann method, the current method applies the nodal discontinuous Galerkin method to the streaming process in the LBM to solve the resulting pure convection equation, in which the spatial discretization is performed on unstructured grids and a low-storage explicit Runge-Kutta scheme is used for time marching. The present method thereby overcomes the standard LBM's dependence on uniform meshes. Moreover, the collision process in the LBM is completed using the multiple-relaxation-time scheme. After validating the NDG-LBM on the lid-driven cavity flow, simulations of flow over a fixed circular cylinder, a stationary airfoil, and rotating-stationary cylinders are performed. Good agreement of the present results with previous results is achieved, which indicates that the current NDG-LBM is accurate and effective for flow-over-object problems.

  7. New method for solving inductive electric fields in the non-uniformly conducting ionosphere

    NASA Astrophysics Data System (ADS)

    Vanhamäki, H.; Amm, O.; Viljanen, A.

    2006-10-01

    We present a new calculation method for solving inductive electric fields in the ionosphere. The time series of the potential part of the ionospheric electric field, together with the Hall and Pedersen conductances serves as the input to this method. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time-domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called the Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfvén wave reflection from a uniformly conducting ionosphere.
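    The curl-free/divergence-free split that the CECS basis provides can be illustrated with a standard FFT-based Helmholtz projection on a periodic grid; this is not CECS itself (which is non-local and needs no periodicity), just a sketch of the decomposition the method relies on:

```python
import numpy as np

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

# Synthetic 2-D field: a curl-free (potential) part plus a
# divergence-free (rotational) part, both known in closed form.
pot_x, pot_y = np.cos(X) * np.sin(Y), np.sin(X) * np.cos(Y)  # grad(sinX sinY)
rot_x, rot_y = -np.cos(X + Y), np.cos(X + Y)                 # divergence-free
Ex, Ey = pot_x + rot_x, pot_y + rot_y

# Spectral projection onto the longitudinal (curl-free) subspace.
kx = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])
KX, KY = np.meshgrid(kx, kx, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                       # the mean mode carries no divergence
Fx, Fy = np.fft.fft2(Ex), np.fft.fft2(Ey)
div_hat = KX * Fx + KY * Fy
cf_x = np.real(np.fft.ifft2(KX * div_hat / K2))
cf_y = np.real(np.fft.ifft2(KY * div_hat / K2))
df_x, df_y = Ex - cf_x, Ey - cf_y    # rotational remainder
```

    With the potential part in hand, solving for the induced rotational electric field is reduced to linear algebra on the basis coefficients, which is the convenience the CECS representation offers.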

  8. Revisiting the Estimation of Dinosaur Growth Rates

    PubMed Central

    Myhrvold, Nathan P.

    2013-01-01

    Previous growth-rate studies covering 14 dinosaur taxa, as represented by 31 data sets, are critically examined and reanalyzed by using improved statistical techniques. The examination reveals that some previously reported results cannot be replicated by using the methods originally reported; results from new methods are in many cases different, in both the quantitative rates and the qualitative nature of the growth, from results in the prior literature. Asymptotic growth curves, which have been hypothesized to be ubiquitous, are shown to provide best fits for only four of the 14 taxa. Possible reasons for non-asymptotic growth patterns are discussed; they include systematic errors in the age-estimation process and, more likely, a bias toward younger ages among the specimens analyzed. Analysis of the data sets finds that only three taxa include specimens that could be considered skeletally mature (i.e., having attained 90% of maximum body size predicted by asymptotic curve fits), and eleven taxa are quite immature, with the largest specimen having attained less than 62% of predicted asymptotic size. The three taxa that include skeletally mature specimens are included in the four taxa that are best fit by asymptotic curves. The totality of results presented here suggests that previous estimates of both maximum dinosaur growth rates and maximum dinosaur sizes have little statistical support. Suggestions for future research are presented. PMID:24358133
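    A minimal sketch of fitting an asymptotic (logistic) growth curve and applying the 90%-of-asymptote maturity criterion; the model form, synthetic data, and parameter values are illustrative assumptions, not the paper's reanalysis:

```python
import numpy as np
from scipy.optimize import curve_fit

# Logistic (asymptotic) growth model of the kind fitted in dinosaur
# growth-rate studies.
def logistic(age, m_max, k, t0):
    return m_max / (1.0 + np.exp(-k * (age - t0)))

rng = np.random.default_rng(2)
age = np.linspace(2.0, 30.0, 25)               # specimen ages (yr), assumed
mass = logistic(age, 1500.0, 0.35, 14.0) * (1 + 0.05 * rng.standard_normal(age.size))

popt, _ = curve_fit(logistic, age, mass, p0=[1000.0, 0.2, 10.0])
m_max_hat = popt[0]

# Maturity criterion: largest specimen has reached at least 90% of the
# asymptotic mass predicted by the fit.
mature = bool(mass.max() >= 0.9 * m_max_hat)
```

    When most specimens lie on the rising limb of the curve, the fitted asymptote m_max is poorly constrained, which is the statistical weakness the paper identifies in earlier size and growth-rate estimates.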

  9. Tail Biting Trellis Representation of Codes: Decoding and Construction

    NASA Technical Reports Server (NTRS)

    Shao, Rose Y.; Lin, Shu; Fossorier, Marc

    1999-01-01

    This paper presents two new iterative algorithms for decoding linear codes based on their tail-biting trellises; one is unidirectional and the other bidirectional. Both algorithms are computationally efficient and achieve virtually optimum error performance with a small number of decoding iterations, outperforming all previous suboptimal decoding algorithms. The bidirectional algorithm also reduces decoding delay. Also presented in the paper is a method for constructing tail-biting trellises for linear block codes.

  10. Detection of muramic acid in a carbohydrate fraction of human spleen.

    PubMed Central

    Hoijer, M A; Melief, M J; van Helden-Meeuwsen, C G; Eulderink, F; Hazenberg, M P

    1995-01-01

    In previous studies, we showed that peptidoglycan polysaccharides from anaerobic bacteria normally present in the human gut induced severe chronic joint inflammation in rats. Our hypothesis is that peptidoglycan from the gut flora is involved in the perpetuation of idiopathic inflammation. However, in the literature, the presence of peptidoglycan or subunits such as muramyl peptides in blood or tissues is still a matter of debate. We were able to stain red pulp macrophages in all six available human spleens by immunohistochemical techniques using a monoclonal antibody against gut flora-derived antigens. Therefore, these human spleens were extracted, and after removal of most of the protein, the carbohydrate fraction was investigated for the presence of muramic acid, an amino sugar characteristic of peptidoglycan. Using three different methods for the detection of muramic acid, we found per spleen a mean of 3.3 mumol of muramic acid with high-pressure liquid chromatography, 1.9 mumol with a colorimetric method for the detection of lactate, and 0.8 mumol with an enzymatic method for the detection of D-lactate (D-lactate is a specific group of the muramic acid molecule). It is concluded that peptidoglycan is present in human spleen not as the small muramyl peptides previously searched for by other investigators but as larger macromolecules, probably stored in spleen macrophages. PMID:7729869

  11. Error analysis in inverse scatterometry. I. Modeling.

    PubMed

    Al-Assaad, Rayan M; Byrne, Dale M

    2007-02-01

    Scatterometry is an optical technique that has been studied and tested in recent years in semiconductor fabrication metrology for critical dimensions. Previous work presented an iterative linearized method to retrieve surface-relief profile parameters from reflectance measurements upon diffraction. With the iterative linear solution model in this work, rigorous models are developed to represent the random and deterministic or offset errors in scatterometric measurements. The propagation of different types of error from the measurement data to the profile parameter estimates is then presented. The improvement in solution accuracies is then demonstrated with theoretical and experimental data by adjusting for the offset errors. In a companion paper (in process) an improved optimization method is presented to account for unknown offset errors in the measurements based on the offset error model.

  12. Determination of the spin and recovery characteristics of a typical low-wing general aviation design

    NASA Technical Reports Server (NTRS)

    Tischler, M. B.; Barlow, J. B.

    1980-01-01

    The equilibrium spin technique implemented in a graphical form for obtaining spin and recovery characteristics from rotary balance data is outlined. Results of its application to recent rotary balance tests of the NASA Low-Wing General Aviation Aircraft are discussed. The present results, which are an extension of previously published findings, indicate the ability of the equilibrium method to accurately evaluate spin modes and recovery control effectiveness. A comparison of the calculated results with available spin tunnel and full scale findings is presented. The technique is suitable for preliminary design applications as determined from the available results and data base requirements. A full discussion of implementation considerations and a summary of the results obtained from this method to date are presented.

  13. A review of in situ propellant production techniques for solar system exploration

    NASA Technical Reports Server (NTRS)

    Hoffman, S. J.

    1983-01-01

    Representative studies in the area of extraterrestrial chemical production as it applies to solar system exploration are presented, together with a description of the In Situ Propellant Production (ISPP) system. Various propellant combinations and direct applications, along with the previously mentioned benefits and liens, are discussed. A series of mission scenarios, studied in the greatest detail, is presented, with a general description of the method(s) of analysis used to study each mission. Each section closes with an assessment of the performance advantage, if any, that ISPP can provide. A final section briefly summarizes those missions which, as a result of the studies completed thus far, should see a sizable benefit from the use of ISPP.

  14. Comparison of Four Methods for Teaching Phases of the Moon

    NASA Astrophysics Data System (ADS)

    Upton, Brianna; Cid, Ximena; Lopez, Ramon

    2008-03-01

    Previous studies have shown that many students have misconceptions about basic concepts in astronomy. As a consequence, various interactive engagement methods have been developed for introductory astronomy. We will present the results of a study that compares four different teaching methods for the subject of the phases of the Moon, which is well known to produce student difficulties. We compare a fairly traditional didactic approach, the use of manipulatives (moonballs) in lecture, the University of Arizona Lecture Tutorials, and an interactive computer program used in a didactic fashion. We use pre- and post-testing with the Lunar Phase Concept Inventory to determine the relative effectiveness of these methods.

  15. Ink Wash Painting Style Rendering With Physically-based Ink Dispersion Model

    NASA Astrophysics Data System (ADS)

    Wang, Yifan; Li, Weiran; Zhu, Qing

    2018-04-01

    This paper presents a real-time rendering method based on the GPU programmable pipeline for rendering 3D scenes in ink wash painting style. The method is divided into three main parts: first, the ink properties of the 3D model are rendered by calculating its vertex curvature; then, the ink properties are cached to a paper structure, and an ink dispersion model defined with reference to the theory of porous media simulates the dispersion of the ink; finally, the ink properties are converted to pixel color information and rendered to the screen. This method achieves better visual quality than previous methods.

  16. Real-Time Frequency Response Estimation Using Joined-Wing SensorCraft Aeroelastic Wind-Tunnel Data

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A; Heeg, Jennifer; Morelli, Eugene A

    2012-01-01

    A new method is presented for estimating frequency responses and their uncertainties from wind-tunnel data in real time. The method uses orthogonal phase-optimized multisine excitation inputs and a recursive Fourier transform with a least-squares estimator. The method was first demonstrated with an F-16 nonlinear flight simulation, and results showed that accurate short period frequency responses were obtained within 10 seconds. The method was then applied to wind-tunnel data from a previous aeroelastic test of the Joined-Wing SensorCraft. Frequency responses describing bending strains from simultaneous control surface excitations were estimated in a time-efficient manner.
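The core estimation step can be sketched numerically: excite a system at a set of distinct harmonic lines, then take the ratio of output to input Fourier coefficients at those lines. This is a minimal sketch assuming a simple discrete first-order test system and randomly chosen phases; the paper uses phase-optimized multisines and a recursive, real-time formulation.

```python
import numpy as np

# Hedged sketch (illustrative system, random phases): estimate a frequency
# response at multisine excitation lines from the ratio of DFT coefficients.
fs, T = 100.0, 10.0                      # sample rate [Hz], record length [s]
N = int(fs * T)
t = np.arange(2 * N) / fs                # two records: settle, then measure
harmonics = np.array([2, 3, 5, 7, 11])   # excited multiples of 1/T
freqs = harmonics / T

rng = np.random.default_rng(0)
phases = rng.uniform(0, 2 * np.pi, harmonics.size)
u = np.sin(2 * np.pi * freqs[:, None] * t + phases[:, None]).sum(axis=0)

# discrete first-order test system: y[k] = a*y[k-1] + (1-a)*u[k]
a = np.exp(-1.0 / (fs * 0.5))            # pole giving a 0.5 s time constant
y = np.empty_like(u)
y[0] = (1 - a) * u[0]
for k in range(1, u.size):
    y[k] = a * y[k - 1] + (1 - a) * u[k]

# analyze the second record only, after the transient has died out
U = np.fft.rfft(u[N:])
Y = np.fft.rfft(y[N:])
H_est = Y[harmonics] / U[harmonics]      # rfft bin m <-> frequency m/T

theta = 2 * np.pi * freqs / fs           # radians per sample
H_true = (1 - a) / (1 - a * np.exp(-1j * theta))
assert np.max(np.abs(H_est - H_true)) < 1e-6
```

Because the input is periodic with period T and the analysis window covers exactly one period, the DFT ratio at the excited bins recovers the system's frequency response without leakage.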

  17. Finger Vein Recognition Using Local Line Binary Pattern

    PubMed Central

    Rosdi, Bakhtiar Affendi; Shing, Chai Wuh; Suandi, Shahrel Azmin

    2011-01-01

    In this paper, a personal verification method using the finger vein is presented. Finger veins can be considered more secure than other hand-based biometric traits such as fingerprints and palm prints because the features are inside the human body. In the proposed method, a new texture descriptor called the local line binary pattern (LLBP) is utilized as the feature extraction technique. The neighbourhood shape in LLBP is a straight line, unlike in the local binary pattern (LBP), where it is a square. Experimental results show that the proposed method using LLBP has better performance than previous methods using LBP and the local derivative pattern (LDP). PMID:22247670
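The line-shaped neighbourhood idea can be sketched as follows. This is a simplified horizontal-only variant for illustration; the published LLBP combines horizontal and vertical components with specific bit weights.

```python
import numpy as np

def llbp_horizontal(img, length=7):
    """Hedged sketch of a horizontal local line binary pattern: each
    neighbour on a straight horizontal line through the centre pixel
    contributes one bit (1 if neighbour >= centre)."""
    r = length // 2
    h, w = img.shape
    codes = np.zeros((h, w - 2 * r), dtype=np.uint32)
    centre = img[:, r:w - r].astype(np.int32)
    bit = 0
    for dx in range(-r, r + 1):
        if dx == 0:
            continue                      # the centre pixel carries no bit
        neighbour = img[:, r + dx:w - r + dx].astype(np.int32)
        codes |= (neighbour >= centre).astype(np.uint32) << bit
        bit += 1
    return codes

img = np.array([[10, 20, 30, 40, 30, 20, 10],
                [ 5,  5,  5,  5,  5,  5,  5]], dtype=np.uint8)
# row 0: every neighbour is below the centre (code 0);
# row 1: flat, so all six bits are set (code 63)
print(llbp_horizontal(img, length=7))
```

Contrast this with LBP, where the six comparisons would instead run around a square ring centred on the pixel.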

  18. Hybrid finite element and Brownian dynamics method for charged particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huber, Gary A., E-mail: ghuber@ucsd.edu; Miao, Yinglong; Zhou, Shenggao

    2016-04-28

    Diffusion is often the rate-determining step in many biological processes. Currently, the two main computational methods for studying diffusion are stochastic methods, such as Brownian dynamics, and continuum methods, such as the finite element method. A previous study introduced a new hybrid diffusion method that couples the strengths of each of these two methods, but was limited by the lack of interactions among the particles; the force on each particle had to be from an external field. This study further develops the method to allow charged particles. The method is derived for a general multidimensional system and is presented using a basic test case for a one-dimensional linear system with one charged species and a radially symmetric system with three charged species.

  19. Systematic approach to cutoff frequency selection in continuous-wave electron paramagnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Hirata, Hiroshi; Itoh, Toshiharu; Hosokawa, Kouichi; Deng, Yuanmu; Susaki, Hitoshi

    2005-08-01

    This article describes a systematic method for determining the cutoff frequency of the low-pass window function that is used for deconvolution in two-dimensional continuous-wave electron paramagnetic resonance (EPR) imaging. An evaluation function for the criterion used to select the cutoff frequency is proposed, and is the product of the effective width of the point spread function for a localized point signal and the noise amplitude of a resultant EPR image. The present method was applied to EPR imaging for a phantom, and the result of cutoff frequency selection was compared with that based on a previously reported method for the same projection data set. The evaluation function has a global minimum point that gives the appropriate cutoff frequency. Images with reasonably good resolution and noise suppression can be obtained from projections with an automatically selected cutoff frequency based on the present method.

  20. Control theory based airfoil design for potential flow and a finite volume discretization

    NASA Technical Reports Server (NTRS)

    Reuther, J.; Jameson, A.

    1994-01-01

    This paper describes the implementation of optimization techniques based on control theory for airfoil design. In previous studies it was shown that control theory could be used to devise an effective optimization procedure for two-dimensional profiles in which the shape is determined by a conformal transformation from a unit circle, and the control is the mapping function. The goal of our present work is to develop a method which does not depend on conformal mapping, so that it can be extended to treat three-dimensional problems. Therefore, we have developed a method which can address arbitrary geometric shapes through the use of a finite volume method to discretize the potential flow equation. Here the control law serves to provide computationally inexpensive gradient information to a standard numerical optimization method. Results are presented, where both target speed distributions and minimum drag are used as objective functions.

  1. LC-UV assay method and UPLC/Q-TOF-MS characterisation of saponins from Ilex paraguariensis A. St. Hil. (mate) unripe fruits.

    PubMed

    Peixoto, Maria Paula Garofo; Kaiser, Samuel; Verza, Simone Gasparin; de Resende, Pedro Ernesto; Treter, Janine; Pavei, Cabral; Borré, Gustavo Luís; Ortega, George González

    2012-01-01

    Ilex paraguariensis A. St. Hil. (mate) is known in several South American countries for the use of its leaves in stimulant herbal beverages. High saponin contents have been reported in mate leaves and in unripe fruits, which possess a dissimilar composition. Two previously reported LC-UV methods for the assay of mate saponins focused on mate leaves and on quantification of the less polar saponin fraction in mate fruits. The objective was to develop and validate an LC-UV method to assay the total saponin content of unripe mate fruits and to characterise the chemical structures of the triterpenic saponins by UPLC/Q-TOF-MS. From unripe mate fruits, a crude ethanolic extract (EX40) was prepared and the mate saponin fraction (MSF) was purified by solid-phase extraction. The LC-UV method was validated using ilexoside II as an external standard. The UPLC/Q-TOF-MS conditions were adapted from the LC-UV method to obtain the fragmentation patterns of the main saponins present in unripe fruits. Both the LC-UV and UPLC/Q-TOF-MS methods indicate a wide range of Ilex saponin polarities. The ilexoside II and total saponin contents of EX40 were 8.20% (w/w) and 47.60% (w/w), respectively. The total saponin content of unripe fruits was 7.28% (w/w). The saponins in MSF characterised by UPLC/Q-TOF-MS derive mainly from ursolic/oleanolic, acetyl ursolic or pomolic acid. The validated LC-UV method was shown to be linear, precise and accurate, to cover several saponins previously isolated from Ilex species, and could be applied to the quality control of unripe fruit saponins. Copyright © 2011 John Wiley & Sons, Ltd.

  2. LAPAROSCOPY AFTER PREVIOUS LAPAROTOMY

    PubMed Central

    Godinjak, Zulfo; Idrizbegović, Edin; Begić, Kerim

    2006-01-01

    Following abdominal surgery, extensive adhesions often occur and can cause difficulties during laparoscopic operations. However, previous laparotomy is not considered a contraindication for laparoscopy. The aim of this study is to show that insertion of a Veress needle in the umbilical region is a safe method for creating a pneumoperitoneum for laparoscopic operations after previous laparotomy. In the last three years, we have performed 144 laparoscopic operations in patients who had previously undergone one or two laparotomies. Pathology of the digestive system or genital organs, Cesarean section, and abdominal war injuries were the most common reasons for the previous laparotomy. During those operations, and during entry into the abdominal cavity, we experienced no complications, while in 7 patients we converted to laparotomy following diagnostic laparoscopy. In all patients, the Veress needle and trocar were inserted in the umbilical region, that is, a closed laparoscopy technique. Adhesions in the umbilical region were not found in any patient, and no abdominal organs were injured. PMID:17177649

  3. Computation of Kinetics for the Hydrogen/Oxygen System Using the Thermodynamic Method

    NASA Technical Reports Server (NTRS)

    Marek, C. John

    1996-01-01

    A new method for predicting chemical rate constants using thermodynamics has been applied to the hydrogen/oxygen system. This method is based on using the gradient of the Gibbs free energy and a single proportionality constant D to determine the kinetic rate constants. Using this method, the rate constants for any gas phase reaction can be computed from thermodynamic properties. A modified reaction set for the H/O system is determined. All of the third-body efficiencies M are taken to be unity. Good agreement was obtained between the thermodynamic method and the experimental shock tube data. In addition, the hydrogen bromide experimental data presented in previous work are recomputed with M's of unity.

  4. Unwed Fathers’ Ability to Pay Child Support: New Estimates Accounting for Multiple-Partner Fertility

    PubMed Central

    SINKEWICZ, MARILYN; GARFINKEL, IRWIN

    2009-01-01

    We present new estimates of unwed fathers’ ability to pay child support. Prior research relied on surveys that drastically undercounted nonresident unwed fathers and provided no link to their children who lived in separate households. To overcome these limitations, previous research assumed assortative mating and that each mother partnered with one father who was actually eligible to pay support and had no other child support obligations. Because the Fragile Families and Child Wellbeing Study contains data on couples, multiple-partner fertility, and a rich array of other previously unmeasured characteristics of fathers, it is uniquely suited to address the limitations of previous research. We also use an improved method of dealing with missing data. Our findings suggest that previous research overestimated the aggregate ability of unwed nonresident fathers to pay child support by 33% to 60%. PMID:21305392

  5. Performance of Renormalization Group Algebraic Turbulence Model on Boundary Layer Transition Simulation

    NASA Technical Reports Server (NTRS)

    Ahn, Kyung H.

    1994-01-01

    The RNG-based algebraic turbulence model, with a new method of solving the cubic equation and applying new length scales, is introduced. An analysis is made of the RNG length scale which was previously reported and the resulting eddy viscosity is compared with those from other algebraic turbulence models. Subsequently, a new length scale is introduced which actually uses the two previous RNG length scales in a systematic way to improve the model performance. The performance of the present RNG model is demonstrated by simulating the boundary layer flow over a flat plate and the flow over an airfoil.

  6. Development of a model for predicting NASA/MSFC program success

    NASA Technical Reports Server (NTRS)

    Riggs, Jeffrey; Miller, Tracy; Finley, Rosemary

    1990-01-01

    Research conducted during the execution of a previous contract (NAS8-36955/0039) firmly established the feasibility of developing a tool to aid decision makers in predicting the potential success of proposed projects. The final report from that investigation contains an outline of the method to be applied in developing this Project Success Predictor Model. As a follow-on to the previous study, this report describes in detail the development of this model and includes full explanation of the data-gathering techniques used to poll expert opinion. The report includes the presentation of the model code itself.

  7. A class of least-squares filtering and identification algorithms with systolic array architectures

    NASA Technical Reports Server (NTRS)

    Kalson, Seth Z.; Yao, Kung

    1991-01-01

    A unified approach is presented for deriving a large class of new and previously known time- and order-recursive least-squares algorithms with systolic array architectures, suitable for high-throughput-rate and VLSI implementations of space-time filtering and system identification problems. The geometrical derivation given is unique in that no assumption is made concerning the rank of the sample data correlation matrix. This method utilizes and extends the concept of oblique projections, as used previously in the derivations of the least-squares lattice algorithms. Exponentially weighted least-squares criteria are considered for both sliding and growing memory.

  8. Strain induced on (TMTSF)2ReO4 microwires deposited on a silicon substrate

    NASA Astrophysics Data System (ADS)

    Colin, C. V.; Joo, N.; Pasquier, C. R.

    2009-12-01

    We present the successful recrystallization of Bechgaard salts with the microwire shape using the drop casting method. The samples are deposited on a substrate with previously prepared patterns made by optical lithography. The physical properties of the microwires are shown. The excellent transport properties show that this technique provides a new method for the tuning of the physical properties of molecular conductors and the first step toward applications. The pressure effects of the substrate on the conduction are discussed.

  9. The application of the least squares finite element method to Abel's integral equation. [with application to glow discharge problem

    NASA Technical Reports Server (NTRS)

    Balasubramanian, R.; Norrie, D. H.; De Vries, G.

    1979-01-01

    Abel's integral equation is the governing equation for certain problems in physics and engineering, such as radiation from distributed sources. The finite element method for the solution of this non-linear equation is presented for problems with cylindrical symmetry, and the extension to more general integral equations is indicated. The technique was applied to an axisymmetric glow discharge problem, and the results show excellent agreement with previously obtained solutions.

  10. Improvements to sample processing and measurement to enable more widespread environmental application of tritium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moran, James; Alexander, Thomas; Aalseth, Craig

    2017-08-01

    Previous measurements have demonstrated the wealth of information that tritium (T) can provide on environmentally relevant processes. We present modifications to sample preparation approaches that enable T measurement by proportional counting on small sample sizes equivalent to 120 mg of water and demonstrate the accuracy of these methods on a suite of standardized water samples. This enhanced method should provide the analytical flexibility needed to address persistent knowledge gaps in our understanding of T behavior in the environment.

  11. Computation of transonic viscous-inviscid interacting flow

    NASA Technical Reports Server (NTRS)

    Whitfield, D. L.; Thomas, J. L.; Jameson, A.; Schmidt, W.

    1983-01-01

    Transonic viscous-inviscid interaction is considered using the Euler and inverse compressible turbulent boundary-layer equations. Certain improvements in the inverse boundary-layer method are mentioned, along with experiences in using various Runge-Kutta schemes to solve the Euler equations. Numerical conditions imposed on the Euler equations at a surface for viscous-inviscid interaction using the method of equivalent sources are developed, and numerical solutions are presented and compared with experimental data to illustrate essential points. Previously announced in STAR N83-17829

  12. Multiplicity-dependent and nonbinomial efficiency corrections for particle number cumulants

    NASA Astrophysics Data System (ADS)

    Bzdak, Adam; Holzmann, Romain; Koch, Volker

    2016-12-01

    In this article we extend previous work on efficiency corrections for cumulant measurements [Bzdak and Koch, Phys. Rev. C 86, 044904 (2012), 10.1103/PhysRevC.86.044904; Phys. Rev. C 91, 027901 (2015), 10.1103/PhysRevC.91.027901]. We discuss the limitations of the methods presented in these papers; specifically, we consider multiplicity-dependent efficiencies as well as nonbinomial efficiency distributions, and we discuss the simplest and most straightforward methods to implement these corrections.
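The baseline case that these corrections generalize, a constant binomial detection efficiency, can be sketched numerically: measured factorial moments scale by powers of the efficiency, so dividing by the appropriate power recovers the true moments (and hence the cumulants). A hedged check with synthetic Poisson multiplicities:

```python
import numpy as np

# Hedged sketch of the multiplicity-independent binomial baseline:
# <n(n-1)...(n-k+1)> = eps^k * <N(N-1)...(N-k+1)>, so true factorial
# moments are recovered by dividing measured ones by eps^k.
rng = np.random.default_rng(0)
eps, lam, M = 0.6, 8.0, 200_000
N = rng.poisson(lam, M)                 # true multiplicities
n = rng.binomial(N, eps)                # detected multiplicities

def fact_moment(x, k):
    m = np.ones_like(x, dtype=float)
    for j in range(k):
        m *= x - j
    return m.mean()

for k in (1, 2, 3):
    corrected = fact_moment(n, k) / eps**k
    true = fact_moment(N, k)
    assert abs(corrected - true) / true < 0.05   # statistical agreement
```

The paper's subject is precisely the situations where this simple division fails: when the efficiency depends on multiplicity or the response is not binomial.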

  13. New conditions for obtaining the exact solutions of the general Riccati equation.

    PubMed

    Bougoffa, Lazhar

    2014-01-01

    We propose a direct method for solving the general Riccati equation y' = f(x) + g(x)y + h(x)y^2. We first reduce it to an equivalent equation, and then formulate relations between the coefficient functions f(x), g(x), and h(x) that yield an equivalent separable equation, from which the original equation can be solved in closed form. Several examples are presented to demonstrate the efficiency of this method.
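Any closed-form solution obtained this way can be checked against direct numerical integration of the Riccati equation. A minimal sketch (not the paper's reduction) using the already-separable case f = 1, g = 0, h = 1, whose solution through the origin is tan x:

```python
import numpy as np

def riccati_rk4(f, g, h, y0, x0, x1, n=1000):
    """Integrate y' = f(x) + g(x)*y + h(x)*y**2 with classical RK4."""
    def F(x, y):
        return f(x) + g(x) * y + h(x) * y * y
    x, y = x0, y0
    dx = (x1 - x0) / n
    for _ in range(n):
        k1 = F(x, y)
        k2 = F(x + dx / 2, y + dx * k1 / 2)
        k3 = F(x + dx / 2, y + dx * k2 / 2)
        k4 = F(x + dx, y + dx * k3)
        y += dx * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += dx
    return y

# y' = 1 + y^2 with y(0) = 0 has the closed-form solution y = tan(x)
y_num = riccati_rk4(lambda x: 1.0, lambda x: 0.0, lambda x: 1.0,
                    0.0, 0.0, 1.0)
assert abs(y_num - np.tan(1.0)) < 1e-8
```

For the paper's nontrivial coefficient relations, the same integrator provides an independent verification of each derived closed-form solution.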

  14. Transport Test Problems for Hybrid Methods Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaver, Mark W.; Miller, Erin A.; Wittman, Richard S.

    2011-12-28

    This report presents 9 test problems to guide testing and development of hybrid calculations for the ADVANTG code at ORNL. These test cases can be used for comparing different types of radiation transport calculations, as well as for guiding the development of variance reduction methods. Cases are drawn primarily from existing or previous calculations with a preference for cases which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22.

  15. Multiplicity-dependent and nonbinomial efficiency corrections for particle number cumulants

    DOE PAGES

    Bzdak, Adam; Holzmann, Romain; Koch, Volker

    2016-12-19

    Here, we extend previous work on efficiency corrections for cumulant measurements [Bzdak and Koch, Phys. Rev. C 86, 044904 (2012), 10.1103/PhysRevC.86.044904; Phys. Rev. C 91, 027901 (2015), 10.1103/PhysRevC.91.027901]. We then discuss the limitations of the methods presented in these papers; specifically, we consider multiplicity-dependent efficiencies as well as nonbinomial efficiency distributions, and we discuss the simplest and most straightforward methods to implement these corrections.

  16. Methods for estimating peak-flow frequencies at ungaged sites in Montana based on data through water year 2011: Chapter F in Montana StreamStats

    USGS Publications Warehouse

    Sando, Roy; Sando, Steven K.; McCarthy, Peter M.; Dutton, DeAnn M.

    2016-04-05

    The U.S. Geological Survey (USGS), in cooperation with the Montana Department of Natural Resources and Conservation, completed a study to update methods for estimating peak-flow frequencies at ungaged sites in Montana based on peak-flow data at streamflow-gaging stations through water year 2011. The methods allow estimation of peak-flow frequencies (that is, peak-flow magnitudes, in cubic feet per second, associated with annual exceedance probabilities of 66.7, 50, 42.9, 20, 10, 4, 2, 1, 0.5, and 0.2 percent) at ungaged sites. The annual exceedance probabilities correspond to 1.5-, 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively. Regional regression analysis is a primary focus of Chapter F of this Scientific Investigations Report, and regression equations for estimating peak-flow frequencies at ungaged sites in eight hydrologic regions in Montana are presented. The regression equations are based on analysis of peak-flow frequencies and basin characteristics at 537 streamflow-gaging stations in or near Montana and were developed using generalized least squares regression or weighted least squares regression. All of the data used in calculating basin characteristics that were included as explanatory variables in the regression equations were developed for and are available through the USGS StreamStats application (http://water.usgs.gov/osw/streamstats/) for Montana. StreamStats is a Web-based geographic information system application that was created by the USGS to provide users with access to an assortment of analytical tools that are useful for water-resource planning and management. The primary purpose of the Montana StreamStats application is to provide estimates of basin characteristics and streamflow characteristics for user-selected ungaged sites on Montana streams. 
The regional regression equations presented in this report chapter can be conveniently solved using the Montana StreamStats application. Selected results from this study were compared with results of previous studies. For most hydrologic regions, the regression equations reported for this study had lower mean standard errors of prediction (in percent) than the previously reported regression equations for Montana. The equations presented for this study are considered to be an improvement on the previously reported equations primarily because this study (1) included 13 more years of peak-flow data; (2) included 35 more streamflow-gaging stations than previous studies; (3) used a detailed geographic information system (GIS)-based definition of the regulation status of streamflow-gaging stations, which allowed better determination of the unregulated peak-flow records that are appropriate for use in the regional regression analysis; (4) included advancements in GIS and remote-sensing technologies, which allowed more convenient calculation of basin characteristics and investigation of many more candidate basin characteristics; and (5) included advancements in computational and analytical methods, which allowed more thorough and consistent data analysis. This report chapter also presents other methods for estimating peak-flow frequencies at ungaged sites. Two methods for estimating peak-flow frequencies at ungaged sites located on the same streams as streamflow-gaging stations are described. Additionally, envelope curves relating maximum recorded annual peak flows to contributing drainage area for each of the eight hydrologic regions in Montana are presented and compared to a national envelope curve. In addition to providing general information on characteristics of large peak flows, the regional envelope curves can be used to assess the reasonableness of peak-flow frequency estimates determined using the regression equations.
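The simpler of the two regression frameworks named in the abstract, weighted least squares, can be sketched with synthetic data (the values and variable names below are illustrative, not Montana peak-flow data):

```python
import numpy as np

# Hedged sketch of weighted least squares: regress log peak flow on log
# drainage area, weighting each station by its inverse error variance.
rng = np.random.default_rng(3)
n = 50
log_area = rng.uniform(0, 3, n)                   # log10 drainage area
sigma = rng.uniform(0.05, 0.3, n)                 # per-station uncertainty
beta_true = np.array([1.2, 0.8])                  # intercept, slope
log_q = beta_true[0] + beta_true[1] * log_area + sigma * rng.standard_normal(n)

X = np.column_stack([np.ones(n), log_area])
w = 1.0 / sigma**2                                # weights = inverse variance
# normal equations: (X^T W X) beta = X^T W y
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * log_q))
assert np.max(np.abs(beta - beta_true)) < 0.2
```

Generalized least squares extends this by replacing the diagonal weight matrix with a full covariance matrix that also accounts for cross-correlation between stations.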

  17. A novel hybrid scattering order-dependent variance reduction method for Monte Carlo simulations of radiative transfer in cloudy atmosphere

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo

    2017-03-01

    We present a novel hybrid scattering order-dependent variance reduction method to accelerate the convergence rate in both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase function. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in scattering-order-dependent integral equation, but also generalizes the variance reduction formalism in a wide range of simulation scenarios. In previous studies, variance reduction is achieved either by using the scattering phase function forward truncation technique or the target directional importance sampling technique. Our method combines both of them. A novel feature of our method is that all the tuning parameters used for phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by the scattering order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm by remodeling integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy order by order.
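The payoff of importance sampling for highly forward-peaked phase functions can be illustrated on a toy integral: estimating the mean scattering cosine of a Henyey-Greenstein phase function, whose exact value is the asymmetry parameter g. This is a hedged sketch, not the MSCART implementation.

```python
import numpy as np

# Hedged toy comparison: uniform sampling with weights versus direct
# (importance) sampling of a Henyey-Greenstein phase function.
rng = np.random.default_rng(4)
g, M = 0.95, 100_000

def hg_pdf(mu):
    return (1 - g**2) / (2 * (1 + g**2 - 2 * g * mu) ** 1.5)

# (a) uniform sampling of mu on [-1, 1] with weights p(mu)/q(mu), q = 1/2
mu_u = rng.uniform(-1, 1, M)
samples_uniform = 2 * hg_pdf(mu_u) * mu_u
est_uniform = samples_uniform.mean()

# (b) direct sampling via the standard HG inverse CDF
u = rng.uniform(0, 1, M)
mu_i = (1 + g**2 - ((1 - g**2) / (1 - g + 2 * g * u)) ** 2) / (2 * g)
est_direct = mu_i.mean()

assert abs(est_direct - g) < 0.005            # accurate with modest M
assert np.var(mu_i) < np.var(samples_uniform)  # much lower variance
```

Uniform sampling wastes almost all samples away from the forward peak and carries enormous weights near it; sampling the peak directly removes that variance, which is the intuition behind the truncation and importance-sampling techniques the paper combines.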

  18. Aerospace Applications of Optimization under Uncertainty

    NASA Technical Reports Server (NTRS)

    Padula, Sharon; Gumbert, Clyde; Li, Wu

    2003-01-01

    The Multidisciplinary Optimization (MDO) Branch at NASA Langley Research Center develops new methods and investigates opportunities for applying optimization to aerospace vehicle design. This paper describes MDO Branch experiences with three applications of optimization under uncertainty: (1) improved impact dynamics for airframes, (2) transonic airfoil optimization for low drag, and (3) coupled aerodynamic/structures optimization of a 3-D wing. For each case, a brief overview of the problem and references to previous publications are provided. The three cases are aerospace examples of the challenges and opportunities presented by optimization under uncertainty. The present paper will illustrate a variety of needs for this technology, summarize promising methods, and uncover fruitful areas for new research.

  19. Aerospace Applications of Optimization under Uncertainty

    NASA Technical Reports Server (NTRS)

    Padula, Sharon; Gumbert, Clyde; Li, Wu

    2006-01-01

    The Multidisciplinary Optimization (MDO) Branch at NASA Langley Research Center develops new methods and investigates opportunities for applying optimization to aerospace vehicle design. This paper describes MDO Branch experiences with three applications of optimization under uncertainty: (1) improved impact dynamics for airframes, (2) transonic airfoil optimization for low drag, and (3) coupled aerodynamic/structures optimization of a 3-D wing. For each case, a brief overview of the problem and references to previous publications are provided. The three cases are aerospace examples of the challenges and opportunities presented by optimization under uncertainty. The present paper will illustrate a variety of needs for this technology, summarize promising methods, and uncover fruitful areas for new research.

  20. Tail shortening by discrete hydrodynamics

    NASA Astrophysics Data System (ADS)

    Kiefer, J.; Visscher, P. B.

    1982-02-01

    A discrete formulation of hydrodynamics was recently introduced, whose most important feature is that it is exactly renormalizable. Previous numerical work has found that it provides a more efficient and rapidly convergent method for calculating transport coefficients than the usual Green-Kubo method. The latter's convergence difficulties are due to the well-known "long-time tail" of the time correlation function which must be integrated over time. The purpose of the present paper is to present additional evidence that these difficulties are really absent in the discrete equation of motion approach. The "memory" terms in the equation of motion are calculated accurately, and shown to decay much more rapidly with time than the equilibrium time correlations do.
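The Green-Kubo route referred to above can be sketched with a toy dynamics: an AR(1) (discrete Ornstein-Uhlenbeck) velocity, for which the autocorrelation decays exponentially and the transport integral is known in closed form. The paper's point is that real hydrodynamic correlations decay much more slowly (the long-time tail), which is exactly what makes this integral converge poorly in practice.

```python
import numpy as np

# Hedged sketch: transport coefficient as the time integral of a velocity
# autocorrelation function, for a toy AR(1) velocity with a known answer.
rng = np.random.default_rng(2)
a, s, nsteps = 0.95, 0.1, 1_000_000
v = np.empty(nsteps)
v[0] = 0.0
xi = rng.standard_normal(nsteps)
for k in range(1, nsteps):
    v[k] = a * v[k - 1] + s * xi[k]

var = s**2 / (1 - a**2)                   # stationary variance of v
D_exact = var * (0.5 + a / (1 - a))       # trapezoidal sum of var * a^k

L = 200                                   # lags; correlation time ~ 20 steps
acf = np.array([np.mean(v[: nsteps - k] * v[k:]) for k in range(L)])
D_est = acf[0] / 2 + acf[1:].sum()        # trapezoid rule, unit time step
assert abs(D_est - D_exact) / D_exact < 0.15
```

With an exponentially decaying correlation the lag sum converges after a few correlation times; a 1/t-type tail would force ever longer runs, which is the difficulty the discrete equation-of-motion approach is argued to avoid.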

  1. Influence of air-jet vortex generator diameter on separation region

    NASA Astrophysics Data System (ADS)

    Szwaba, Ryszard

    2013-08-01

    Control of shock wave and boundary layer interaction continues to attract a lot of attention. In recent decades several methods of interaction control have been investigated. The research has mostly concerned solid (vane type) vortex generators and transpiration methods of suction and blowing. This investigation concerns interaction control using air-jets to generate streamwise vortices. The effectiveness of air-jet vortex generators in controlling separation was proved in previous research. The present paper focuses on the influence of the vortex generator diameter on the separation region. It presents the results of experimental investigations and provides new guidelines for the design of air-jet vortex generators to obtain more effective separation control.

  2. Non-recursive augmented Lagrangian algorithms for the forward and inverse dynamics of constrained flexible multibodies

    NASA Technical Reports Server (NTRS)

    Bayo, Eduardo; Ledesma, Ragnar

    1993-01-01

    A technique is presented for solving the inverse dynamics of flexible planar multibody systems. This technique yields the non-causal joint efforts (inverse dynamics) as well as the internal states (inverse kinematics) that produce a prescribed nominal trajectory of the end effector. A non-recursive global Lagrangian approach is used in formulating the equations of motion as well as in solving the inverse dynamics equations. Contrary to the recursive method previously presented, the proposed method solves the inverse problem in a systematic and direct manner for both open-chain and closed-chain configurations. Numerical simulation shows that the proposed procedure provides excellent tracking of the desired end effector trajectory.

  3. Empirical Observations on the Sensitivity of Hot Cathode Ionization Type Vacuum Gages

    NASA Technical Reports Server (NTRS)

    Summers, R. L.

    1969-01-01

    A study of empirical methods of predicting the relative sensitivities of hot cathode ionization gages is presented. Using previously published gage sensitivities, several rules for predicting relative sensitivity are tested. The relative sensitivity to different gases is shown to be invariant with gage type, in the linear range of gage operation. The total ionization cross section, molecular and molar polarizability, and refractive index are demonstrated to be useful parameters for predicting relative gage sensitivity. Using data from the literature, the probable error of predictions of relative gage sensitivity based on these molecular properties is found to be about 10 percent. A comprehensive table of predicted relative sensitivities, based on empirical methods, is presented.

  4. Numerical techniques in radiative heat transfer for general, scattering, plane-parallel media

    NASA Technical Reports Server (NTRS)

    Sharma, A.; Cogley, A. C.

    1982-01-01

The study of radiative heat transfer with scattering usually leads to the solution of singular Fredholm integral equations. This paper presents an accurate and efficient numerical method to solve certain integral equations that govern radiative equilibrium problems in plane-parallel geometry for both grey and nongrey, anisotropically scattering media. In particular, the nongrey problem is represented by a spectral integral of a system of nonlinear integral equations in space, which has not been solved previously. The numerical technique is constructed to handle this unique nongrey governing equation as well as the difficulties caused by singular kernels. Example problems are solved, and the method's accuracy and computational speed are analyzed.
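For a smooth (non-singular) kernel, the standard numerical starting point is quadrature discretization of a Fredholm equation of the second kind. The generic Nyström scheme below does not include the paper's special treatment of singular kernels; the kernel and data are a made-up test case whose exact solution is u(x) = x:

```python
import numpy as np

def nystrom_solve(f, K, a, b, lam, n=200):
    """Nystrom (quadrature) solution of a Fredholm equation of the second
    kind, u(x) = f(x) + lam * int_a^b K(x, t) u(t) dt, using the
    trapezoidal rule. Suitable for smooth kernels only."""
    x = np.linspace(a, b, n)
    wts = np.full(n, (b - a) / (n - 1))
    wts[0] *= 0.5
    wts[-1] *= 0.5
    # Linear system (I - lam * K * w) u = f at the quadrature nodes.
    A = np.eye(n) - lam * K(x[:, None], x[None, :]) * wts
    return x, np.linalg.solve(A, f(x))

# Test case: K(x, t) = x*t on [0, 1]; choosing f(x) = x*(1 - lam/3)
# makes u(x) = x the exact solution, since int_0^1 t*t dt = 1/3.
lam = 0.5
x, u = nystrom_solve(lambda s: s * (1 - lam / 3), lambda s, t: s * t,
                     0.0, 1.0, lam)
```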

  5. Star Identification Without Attitude Knowledge: Testing with X-Ray Timing Experiment Data

    NASA Technical Reports Server (NTRS)

    Ketchum, Eleanor

    1997-01-01

As the budget for the scientific exploration of space shrinks, the need for more autonomous spacecraft increases. For a spacecraft with a star tracker, the ability to determine attitude autonomously from a lost-in-space state requires the capability to identify the stars in the field of view of the tracker. Although there have been efforts to produce autonomous star trackers which perform this function internally, many programs cannot afford these sensors. The author previously presented a method for identifying stars without a priori attitude knowledge, specifically targeted at onboard computers because it minimizes the necessary computer storage. The method has previously been tested with simulated data. This paper provides results of star identification without a priori attitude knowledge using flight data from two 8 by 8 degree charge coupled device star trackers onboard the X-Ray Timing Experiment.

  6. The impact of initialization procedures on unsupervised unmixing of hyperspectral imagery using the constrained positive matrix factorization

    NASA Astrophysics Data System (ADS)

    Masalmah, Yahya M.; Vélez-Reyes, Miguel

    2007-04-01

The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF, based on the Gauss-Seidel and penalty approaches to solving optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing of HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme: a good initialization scheme can improve convergence speed and can determine whether a global minimum is found and whether spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest-norm pixels, and standard endmember selection routines are studied and compared using simulated and real data.
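The sensitivity to initialization can be illustrated with a generic (unconstrained) positive matrix factorization via Lee-Seung multiplicative updates. This is not the Gauss-Seidel or penalty cPMF algorithm of the paper, and the data matrix below is random rather than hyperspectral; it only shows the mechanics of comparing a random start against a data-seeded start:

```python
import numpy as np

def nmf(V, W0, H0, iters=200):
    """Lee-Seung multiplicative updates for V ~ W H with W, H >= 0.
    Generic positive matrix factorization, not the constrained cPMF."""
    W, H = W0.copy(), H0.copy()
    eps = 1e-12
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H, np.linalg.norm(V - W @ H)

rng = np.random.default_rng(0)
V = rng.random((50, 40))   # hypothetical pixels-by-bands data matrix

# Random initialization of both factors.
W0, H0 = rng.random((50, 3)), rng.random((3, 40))
_, _, err_random = nmf(V, W0, H0)

# Seeding H with the largest-norm rows of V itself, a crude analogue of
# longest-norm-pixel endmember initialization.
idx = np.argsort(np.linalg.norm(V, axis=1))[-3:]
_, _, err_seeded = nmf(V, rng.random((50, 3)), V[idx])
```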

  7. The Complete, Temperature Resolved Spectrum of Methyl Cyanide Between 200 and 277 GHZ

    NASA Astrophysics Data System (ADS)

    McMillan, James P.; Neese, Christopher F.; De Lucia, Frank C.

    2016-06-01

We have studied methyl cyanide, one of the so-called 'astronomical weeds', in the 200-277 GHz band. We have experimentally gathered a set of intensity-calibrated, complete, and temperature-resolved spectra across the temperature range of 231-351 K. Using our previously reported method of analysis, the point-by-point method, we are able to generate the complete spectrum at astronomically significant temperatures. Lines of nontrivial intensity that were not previously included in the available astrophysical catalogs have been found. Lower-state energies and line strengths have been found for a number of lines that are not currently present in the catalogs. The extent to which this may be useful in making assignments will be discussed. J. McMillan, S. Fortman, C. Neese, F. DeLucia, ApJ. 795, 56 (2014)

  8. Dimethyl Ether Between 214.6 and 265.3 Ghz: the Complete, Temperature Resolved Spectrum

    NASA Astrophysics Data System (ADS)

    McMillan, James P.; Neese, Christopher F.; De Lucia, Frank C.

    2017-06-01

We have studied dimethyl ether, one of the so-called 'astronomical weeds', in the 214.6-265.3 GHz band. We have experimentally gathered a set of intensity-calibrated, complete, and temperature-resolved spectra across the temperature range of 238-391 K. Using our previously reported method of analysis, the point-by-point method, we are able to generate the complete spectrum at astronomically significant temperatures. Many lines of nontrivial intensity that were not previously included in the available astrophysical catalogs have been found. Lower-state energies and line strengths have been found for a number of lines that are not currently present in the catalogs. The extent to which this may be useful in making assignments will be discussed. J. McMillan, S. Fortman, C. Neese, F. DeLucia, ApJ. 795, 56 (2014)

  9. Redefining thermal regimes to design reserves for coral reefs in the face of climate change.

    PubMed

    Chollett, Iliana; Enríquez, Susana; Mumby, Peter J

    2014-01-01

Reef managers cannot fight global warming through mitigation at the local scale, but they can use information on thermal patterns to plan reserve networks that maximize the probability of persistence of their reef system. Here we assess previous methods for the design of reserves for climate change and present a new approach to prioritizing areas for conservation that leverages the most desirable properties of previous approaches. The new method moves the science of reserve design for climate change a step forward by: (1) recognizing the role of seasonal acclimation in increasing the limits of environmental tolerance of corals and ameliorating the bleaching response; (2) using the best proxy for acclimatization currently available; (3) including information from several bleaching events, whose frequency is likely to increase in the future; and (4) assessing relevant variability at country scales, where most management plans are carried out. We demonstrate the method in Honduras, where a reassessment of the marine spatial plan is in progress.

  10. Redefining Thermal Regimes to Design Reserves for Coral Reefs in the Face of Climate Change

    PubMed Central

    Chollett, Iliana; Enríquez, Susana; Mumby, Peter J.

    2014-01-01

Reef managers cannot fight global warming through mitigation at the local scale, but they can use information on thermal patterns to plan reserve networks that maximize the probability of persistence of their reef system. Here we assess previous methods for the design of reserves for climate change and present a new approach to prioritizing areas for conservation that leverages the most desirable properties of previous approaches. The new method moves the science of reserve design for climate change a step forward by: (1) recognizing the role of seasonal acclimation in increasing the limits of environmental tolerance of corals and ameliorating the bleaching response; (2) using the best proxy for acclimatization currently available; (3) including information from several bleaching events, whose frequency is likely to increase in the future; and (4) assessing relevant variability at country scales, where most management plans are carried out. We demonstrate the method in Honduras, where a reassessment of the marine spatial plan is in progress. PMID:25333380

  11. Independent and combined analyses of sequences from all three genomic compartments converge on the root of flowering plant phylogeny

    PubMed Central

    Barkman, Todd J.; Chenery, Gordon; McNeal, Joel R.; Lyons-Weiler, James; Ellisens, Wayne J.; Moore, Gerry; Wolfe, Andrea D.; dePamphilis, Claude W.

    2000-01-01

    Plant phylogenetic estimates are most likely to be reliable when congruent evidence is obtained independently from the mitochondrial, plastid, and nuclear genomes with all methods of analysis. Here, results are presented from separate and combined genomic analyses of new and previously published data, including six and nine genes (8,911 bp and 12,010 bp, respectively) for different subsets of taxa that suggest Amborella + Nymphaeales (water lilies) are the first-branching angiosperm lineage. Before and after tree-independent noise reduction, most individual genomic compartments and methods of analysis estimated the Amborella + Nymphaeales basal topology with high support. Previous phylogenetic estimates placing Amborella alone as the first extant angiosperm branch may have been misled because of a series of specific problems with paralogy, suboptimal outgroups, long-branch taxa, and method dependence. Ancestral character state reconstructions differ between the two topologies and affect inferences about the features of early angiosperms. PMID:11069280

  12. A test method for determining adhesion forces and Hamaker constants of cementitious materials using atomic force microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lomboy, Gilson; Sundararajan, Sriram, E-mail: srirams@iastate.edu; Wang Kejin

    2011-11-15

A method for determining the Hamaker constant of cementitious materials is presented. The method involves sample preparation, measurement of the adhesion force between the tested material and a silicon nitride probe using atomic force microscopy in dry air and in water, and calculation of the Hamaker constant using appropriate contact mechanics models. The work of adhesion and Hamaker constant were computed from the pull-off forces using the Johnson-Kendall-Roberts and Derjaguin-Muller-Toporov models. Reference materials with known Hamaker constants (mica, silica, calcite) and commercially available cementitious materials (Portland cement (PC), ground granulated blast furnace slag (GGBFS)) were studied. The Hamaker constants obtained for the reference materials are consistent with those published by previous researchers. The results indicate that PC has a higher Hamaker constant than GGBFS. The Hamaker constant of PC in water is close to the value previously predicted for C₃S, which is attributed to the short hydration time (≤45 min) used in this study.
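The contact-mechanics step can be sketched as follows. Under the JKR model the pull-off force is F = (3/2)πRW, under the DMT model F = 2πRW, and the Hamaker constant follows from A = 12πz₀²W with z₀ ≈ 0.165 nm the conventional cut-off separation. The pull-off force and tip radius below are hypothetical numbers, not measurements from the study:

```python
import math

def work_of_adhesion(pull_off_force, tip_radius, model="JKR"):
    """Work of adhesion W from an AFM pull-off force.

    JKR: F = (3/2) * pi * R * W  ->  W = 2F / (3*pi*R)
    DMT: F = 2 * pi * R * W      ->  W = F / (2*pi*R)
    """
    if model == "JKR":
        return 2.0 * pull_off_force / (3.0 * math.pi * tip_radius)
    if model == "DMT":
        return pull_off_force / (2.0 * math.pi * tip_radius)
    raise ValueError("model must be 'JKR' or 'DMT'")

def hamaker_constant(W, z0=0.165e-9):
    """Hamaker constant via A = 12 * pi * z0**2 * W, with z0 the
    conventional interatomic cut-off separation (~0.165 nm)."""
    return 12.0 * math.pi * z0**2 * W

# Hypothetical measurement: 25 nN pull-off force, 40 nm tip radius.
F, R = 25e-9, 40e-9
W_jkr = work_of_adhesion(F, R, "JKR")   # J/m^2
A_jkr = hamaker_constant(W_jkr)          # J
```

JKR applies to compliant, strongly adhering contacts and DMT to stiff, weakly adhering ones, which is why both bounds are typically reported.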

  13. An improved, robust, axial line singularity method for bodies of revolution

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.

    1989-01-01

The failures encountered in attempts to increase the range of applicability of the axial line singularity method for representing incompressible, inviscid flow about an inclined, slender body of revolution are noted to be common to all efforts to solve Fredholm equations of the first kind. It is shown that a previously developed smoothing technique yields a robust method for numerical solution of the governing equations; this technique is easily retrofitted to existing codes and allows the number of singularities to be increased until the most accurate line singularity solution is obtained.

  14. Extracting land use information from the earth resources technology satellite data by conventional interpretation methods

    NASA Technical Reports Server (NTRS)

    Vegas, P. L.

    1974-01-01

    A procedure for obtaining land use data from satellite imagery by the use of conventional interpretation methods is presented. The satellite is described briefly, and the advantages of various scales and multispectral scanner bands are discussed. Methods for obtaining satellite imagery and the sources of this imagery are given. Equipment used in the study is described, and samples of land use maps derived from satellite imagery are included together with the land use classification system used. Accuracy percentages are cited and are compared to those of a previous experiment using small scale aerial photography.

  15. Simple method for assembly of CRISPR synergistic activation mediator gRNA expression array.

    PubMed

    Vad-Nielsen, Johan; Nielsen, Anders Lade; Luo, Yonglun

    2018-05-20

When studying complex interconnected regulatory networks, effective methods for simultaneously manipulating the expression of multiple genes are paramount. Previously, we developed a simple method for generation of an all-in-one CRISPR gRNA expression array. We here present a Golden Gate Assembly-based system for constructing synergistic activation mediator (SAM)-compatible CRISPR/dCas9 gRNA expression arrays for the simultaneous activation of multiple genes. Using this system, we demonstrated the simultaneous activation of the transcription factors TWIST, SNAIL, SLUG, and ZEB1 in a human breast cancer cell line. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. A vortex-lattice method for the mean camber shapes of trimmed noncoplanar planforms with minimum vortex drag

    NASA Technical Reports Server (NTRS)

    Lamar, J. E.

    1976-01-01

A new subsonic method has been developed by which the mean camber surface can be determined for trimmed noncoplanar planforms with minimum vortex drag. This method uses a vortex lattice and overcomes previous difficulties with chord-loading specification. A Trefftz-plane analysis is used to determine the optimum span loading for minimum drag; the mean camber surface of the wing that provides the required loading is then solved for. Sensitivity studies, comparisons with other theories, and applications to configurations including a tandem wing and a wing-winglet combination have been made and are presented.

  17. Generalized method calculating the effective diffusion coefficient in periodic channels.

    PubMed

    Kalinay, Pavol

    2015-01-07

    The method calculating the effective diffusion coefficient in an arbitrary periodic two-dimensional channel, presented in our previous paper [P. Kalinay, J. Chem. Phys. 141, 144101 (2014)], is generalized to 3D channels of cylindrical symmetry, as well as to 2D or 3D channels with particles driven by a constant longitudinal external driving force. The next possible extensions are also indicated. The former calculation was based on calculus in the complex plane, suitable for the stationary diffusion in 2D domains. The method is reformulated here using standard tools of functional analysis, enabling the generalization.
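For orientation, the zeroth-order estimate that such higher-order methods refine is the Fick-Jacobs / Lifson-Jackson formula D_eff = D₀ / (⟨w⟩⟨1/w⟩) for a periodic channel of half-width w(x), with the averages taken over one period. The cosine profile below is a hypothetical example; this baseline is not Kalinay's mapping method itself:

```python
import numpy as np

def d_eff_fick_jacobs(width, D0=1.0):
    """Zeroth-order Fick-Jacobs / Lifson-Jackson estimate of the effective
    diffusion coefficient in a periodic 2D channel of half-width w(x):
    D_eff = D0 / (<w> * <1/w>), averaged over one period."""
    return D0 / (np.mean(width) * np.mean(1.0 / width))

# Hypothetical periodic channel profile w(x) = 1 + 0.5*cos(x).
x = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
w = 1.0 + 0.5 * np.cos(x)
De = d_eff_fick_jacobs(w)   # analytically D0 * sqrt(1 - 0.5**2) ~ 0.866
```

Any corrugation makes ⟨w⟩⟨1/w⟩ > 1, so D_eff is always reduced below the free diffusion coefficient D₀.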

  18. Ultrasonic isolation of the outer membrane of Escherichia coli with autodisplayed Z-domains.

    PubMed

    Bong, Ji-Hong; Yoo, Gu; Park, Min; Kang, Min-Jung; Jose, Joachim; Pyun, Jae-Chul

    2014-11-01

    The outer membrane of Escherichia coli was previously isolated as a liposome-like outer membrane particle using an enzymatic treatment for lysozymes; for immunoassays, the particles were subsequently layered on solid supports via hydrophobic interactions. This work presents an enzyme-free isolation method for the E. coli outer membrane with autodisplayed Z-domains using ultrasonication. First, the properties of the outer membrane particle, such as the particle size, zeta potential, and total protein, were compared with the properties of particles obtained using the previous preparation methods. Compared with the conventional isolation method using an enzyme treatment, the ultrasonic method exhibited a higher efficiency at isolating the outer membrane and less contamination by cytosolic proteins. The isolated outer membrane particles were layered on a gold surface, and the roughness and thickness of the layered outer membrane layers were subsequently analyzed using AFM analysis. Finally, the antibody-binding activity of two outer membrane layers with autodisplayed Z-domains created from particles that were isolated using the enzymatic and ultrasonic isolation methods was measured using fluorescein-labeled antibody as a model analyte, and the activity of the outer membrane layer that was isolated from the ultrasonic method was estimated to be more than 20% higher than that from the conventional enzymatic method. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. A finite-volume Eulerian-Lagrangian Localized Adjoint Method for solution of the advection-dispersion equation

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1993-01-01

    A new mass-conservative method for solution of the one-dimensional advection-dispersion equation is derived and discussed. Test results demonstrate that the finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) outperforms standard finite-difference methods, in terms of accuracy and efficiency, for solute transport problems that are dominated by advection. For dispersion-dominated problems, the performance of the method is similar to that of standard methods. Like previous ELLAM formulations, FVELLAM systematically conserves mass globally with all types of boundary conditions. FVELLAM differs from other ELLAM approaches in that integrated finite differences, instead of finite elements, are used to approximate the governing equation. This approach, in conjunction with a forward tracking scheme, greatly facilitates mass conservation. The mass storage integral is numerically evaluated at the current time level, and quadrature points are then tracked forward in time to the next level. Forward tracking permits straightforward treatment of inflow boundaries, thus avoiding the inherent problem in backtracking, as used by most characteristic methods, of characteristic lines intersecting inflow boundaries. FVELLAM extends previous ELLAM results by obtaining mass conservation locally on Lagrangian space-time elements. Details of the integration, tracking, and boundary algorithms are presented. Test results are given for problems in Cartesian and radial coordinates.
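For contrast, the standard explicit finite-difference baseline that ELLAM-type schemes are compared against can be sketched in a few lines: upwind advection plus central-difference dispersion on a periodic grid. This is the conventional scheme, not FVELLAM, and the grid and coefficients are arbitrary illustrative choices:

```python
import numpy as np

def advect_disperse(c0, v, D, dx, dt, steps):
    """Explicit upwind advection + central-difference dispersion for
    dc/dt + v dc/dx = D d2c/dx2 on a periodic grid. The standard
    finite-difference baseline, not FVELLAM."""
    assert v * dt / dx <= 1.0 and 2.0 * D * dt / dx**2 <= 1.0, "unstable step"
    c = c0.astype(float).copy()
    for _ in range(steps):
        adv = -v * (c - np.roll(c, 1)) / dx                    # upwind, v > 0
        dif = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
        c += dt * (adv + dif)
    return c

# Illustrative run: a Gaussian pulse advected from x = 0.3 to x = 0.5.
x = np.linspace(0.0, 1.0, 100, endpoint=False)
c0 = np.exp(-200.0 * (x - 0.3)**2)
dx = x[1] - x[0]
c = advect_disperse(c0, v=1.0, D=1e-4, dx=dx, dt=0.004, steps=50)
```

With periodic boundaries this scheme conserves total mass exactly, which mirrors the global conservation property the abstract emphasizes; FVELLAM additionally conserves mass locally and remains accurate at much larger time steps in advection-dominated runs.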

  20. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor

    PubMed Central

    Nguyen, Dat Tien; Baek, Na Rae; Pham, Tuyen Danh; Park, Kang Ryoung

    2018-01-01

    Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies. PMID:29695113

  1. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor.

    PubMed

    Nguyen, Dat Tien; Baek, Na Rae; Pham, Tuyen Danh; Park, Kang Ryoung

    2018-04-24

    Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies.
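The final classification stage, feature vectors from the localized iris region separated into live versus attack classes, can be sketched with synthetic features. The random Gaussian "embeddings" below are stand-ins for the CNN and handcrafted descriptors, and the nearest-centroid rule after PCA is a simplified stand-in for the SVM used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical precomputed feature vectors of the localized iris region;
# label 1 = live, 0 = presentation attack.
X = np.vstack([rng.normal(0.5, 1.0, (100, 32)),     # live samples
               rng.normal(-0.5, 1.0, (100, 32))])   # attack samples
y = np.array([1] * 100 + [0] * 100)

# PCA via SVD for dimensionality reduction (keep 4 components).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:4].T

# Nearest-centroid classifier: a simplified stand-in for the SVM stage.
c_live = Z[y == 1].mean(axis=0)
c_attack = Z[y == 0].mean(axis=0)
pred = (np.linalg.norm(Z - c_live, axis=1)
        < np.linalg.norm(Z - c_attack, axis=1)).astype(int)
accuracy = (pred == y).mean()
```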

  2. Adaptive enhanced sampling by force-biasing using neural networks

    NASA Astrophysics Data System (ADS)

    Guo, Ashley Z.; Sevgen, Emre; Sidky, Hythem; Whitmer, Jonathan K.; Hubbell, Jeffrey A.; de Pablo, Juan J.

    2018-04-01

    A machine learning assisted method is presented for molecular simulation of systems with rugged free energy landscapes. The method is general and can be combined with other advanced sampling techniques. In the particular implementation proposed here, it is illustrated in the context of an adaptive biasing force approach where, rather than relying on discrete force estimates, one can resort to a self-regularizing artificial neural network to generate continuous, estimated generalized forces. By doing so, the proposed approach addresses several shortcomings common to adaptive biasing force and other algorithms. Specifically, the neural network enables (1) smooth estimates of generalized forces in sparsely sampled regions, (2) force estimates in previously unexplored regions, and (3) continuous force estimates with which to bias the simulation, as opposed to biases generated at specific points of a discrete grid. The usefulness of the method is illustrated with three different examples, chosen to highlight the wide range of applicability of the underlying concepts. In all three cases, the new method is found to enhance considerably the underlying traditional adaptive biasing force approach. The method is also found to provide improvements over previous implementations of neural network assisted algorithms.
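The core idea, replacing noisy discrete force estimates with a smooth network fit that can also be queried between and beyond the sampled points, can be sketched with a minimal one-hidden-layer network trained by plain gradient descent. The collective variable, force data, and network size are hypothetical, and this omits the self-regularization of the paper's method:

```python
import numpy as np

rng = np.random.default_rng(1)
# Noisy mean-force estimates on a sparse grid of a 1D collective variable
# (hypothetical data: true mean force -dU/dx = -2x for U = x^2).
x = np.linspace(-2.0, 2.0, 25)[:, None]
f = -2.0 * x + 0.3 * rng.normal(size=x.shape)

# One-hidden-layer tanh network fitted by full-batch gradient descent:
# a minimal stand-in for the self-regularizing network of the paper.
W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(x @ W1 + b1)             # hidden activations
    pred = h @ W2 + b2                   # continuous force estimate
    err = pred - f
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)     # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - f) ** 2))
# Smooth estimate at a point not on the training grid:
f_between = float(np.tanh(np.array([[1.5]]) @ W1 + b1) @ W2 + b2)
```

The fitted network gives a continuous bias everywhere, which is exactly the property the abstract contrasts with biases defined only at the points of a discrete grid.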

  3. Passive wireless strain monitoring of tire using capacitance change

    NASA Astrophysics Data System (ADS)

    Matsuzaki, Ryosuke; Todoroki, Akira

    2004-07-01

In-service strain monitoring of automobile tires is quite effective for improving the reliability of tires and the Anti-lock Braking System (ABS). Since conventional strain gages have high stiffness and require lead wires, they are cumbersome for strain measurement of tires. In a previous study, the authors proposed a wireless strain monitoring method that adopts the tire itself as a sensor, with an oscillating circuit. This method is simple and useful, but it requires a battery to activate the oscillating circuit. In the present study, the previous method for wireless tire monitoring is improved to produce a passive wireless sensor. A specimen made from a commercially available tire is connected to a tuning circuit comprising an inductance and a capacitance as a condenser. The capacitance change of the tire changes the tuning frequency, and this shift of the tuned radio wave enables the applied strain of the specimen to be measured wirelessly, without any power supply from outside. This new passive wireless method is applied to a specimen, and the static applied strain is measured. As a result, the method is experimentally shown to be effective for passive wireless strain monitoring of tires.
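The sensing principle, strain changes the tire's capacitance and hence the tuned frequency f = 1/(2π√(LC)), can be sketched numerically. The inductance and capacitance values below are hypothetical, not those of the paper's specimen:

```python
import math

def resonant_frequency(L, C):
    """Tuning frequency of an LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Hypothetical circuit: 10 uH inductor, 100 pF tire capacitance.
L, C0 = 10e-6, 100e-12
f0 = resonant_frequency(L, C0)            # unstrained tuning frequency
f1 = resonant_frequency(L, C0 * 1.01)     # 1% capacitance rise under strain

# For small changes, df/f ~ -dC/(2C): a 1% capacitance rise lowers
# the tuned frequency by about 0.5%, which the reader detects remotely.
rel_shift = (f1 - f0) / f0
```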

  4. English semantic word-pair norms and a searchable Web portal for experimental stimulus creation.

    PubMed

    Buchanan, Erin M; Holmes, Jessica L; Teasley, Marilee L; Hutchison, Keith A

    2013-09-01

As researchers explore the complexity of memory and language hierarchies, the need to expand normed stimulus databases is growing. Therefore, we present 1,808 words, paired with their features and concept-concept information, that were collected using previously established norming methods (McRae, Cree, Seidenberg, & McNorgan Behavior Research Methods 37:547-559, 2005). This database supplements existing stimuli and complements the Semantic Priming Project (Hutchison, Balota, Cortese, Neely, Niemeyer, Bengson, & Cohen-Shikora 2010). The data set includes many types of words (including nouns, verbs, adjectives, etc.), expanding on previous collections of nouns and verbs (Vinson & Vigliocco Journal of Neurolinguistics 15:317-351, 2008). We describe the relation between our norms and other semantic norms, and give a short review of word-pair norms. The stimuli are provided in conjunction with a searchable Web portal that allows researchers to create a set of experimental stimuli without prior programming knowledge. When researchers use this new database in tandem with previous norming efforts, precise stimulus sets can be created for future research endeavors.

  5. Health research participants' preferences for receiving research results.

    PubMed

    Long, Christopher R; Stewart, M Kathryn; Cunningham, Thomas V; Warmack, T Scott; McElfish, Pearl A

    2016-12-01

Participants in health research studies typically express interest in receiving the results from the studies in which they participate. However, participants' preferences and experiences related to receiving the results are not well understood. In general, the existing studies have had relatively small sample sizes and typically address specific and often sensitive issues within targeted populations. This study used an online survey to explore attitudes and experiences of registrants in ResearchMatch, a large database of past, present, and potential health research participants. Survey respondents provided information related to whether or not they received research results from studies in which they participated, the methods used to communicate the results, their satisfaction with the results, and when and how they would like to receive research results from future studies. In all, 70,699 ResearchMatch registrants were notified of the study's topic. Of the 5207 registrants who requested full information about the study, 3381 respondents completed the survey. Approximately 33% of respondents with previous health research participation reported receiving the results. Approximately half of respondents with previous research participation reported no opportunity to request the results. However, almost all respondents said researchers should always or sometimes offer the results to participants. Respondents expressed particular interest in the results related to their (or a loved one's) health, as well as information about studies' purposes and any medical advances based on the results. In general, respondents' most preferred dissemination methods for the results were email and website postings. The least desirable dissemination methods for the results included Twitter, conference calls, and text messages. Across all the results, we compare the responses of respondents with and without previous research participation experience and those who have worked in research organizations versus those who have not. Compared to respondents who have previous participation experience, a greater proportion of respondents with no participation experience indicated that the results should always be shared with participants. Likewise, respondents with no participation experience placed higher importance on the receipt of each type of results information included in the survey. We present findings from a survey assessing attitudes and experiences of a broad sample of respondents that addresses gaps in knowledge related to participants' preferences for receiving the results. The study's findings highlight the potential for inconsistency between respondents' expressed preferences to receive specific types of results via specific methods and researchers' unwillingness or inability to provide them. We present specific recommendations to shift the approach of new studies to investigate participants' preferences for receiving research results. © The Author(s) 2016.

  6. Deterministic photon bias in speckle imaging

    NASA Technical Reports Server (NTRS)

    Beletic, James W.

    1989-01-01

A method for determining photon bias terms in speckle imaging is presented, and photon bias is shown to be a deterministic quantity that can be calculated without the use of the expectation operator. The quantities obtained are found to be identical to previous results. The present results extend photon bias calculations to the important case of the bispectrum where photon events are assigned different weights, in which regime the bias is a frequency-dependent complex quantity that must be calculated for each frame.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonior, Jason D; Hu, Zhen; Guo, Terry N.

    This letter presents an experimental demonstration of software-defined-radio-based wireless tomography using computer-hosted radio devices called Universal Software Radio Peripheral (USRP). This experimental brief follows our vision and previous theoretical study of wireless tomography that combines wireless communication and RF tomography to provide a novel approach to remote sensing. Automatic data acquisition is performed inside an RF anechoic chamber. Semidefinite relaxation is used for phase retrieval, and the Born iterative method is utilized for imaging the target. Experimental results are presented, validating our vision of wireless tomography.

  8. Hereditary hemorrhagic telangiectasia patient presenting with brain abscess due to silent pulmonary arteriovenous malformation.

    PubMed

    Themistocleous, Marios; Giakoumettis, Dimitrios; Mitsios, Andreas; Anagnostopoulos, Christos; Kalyvas, Aristoteles; Koutsarnakis, Christos

    2016-01-01

Hereditary hemorrhagic telangiectasia (HHT) is a rare autosomal dominant inherited disease that is usually complicated by visceral vascular malformations. Patients harboring such malformations are at increased risk of brain abscess formation, which despite advances in diagnostic and surgical methods remains a life-threatening medical emergency with high mortality and morbidity rates. In the present report we describe a case of cerebral abscess due to a silent pulmonary arteriovenous malformation (AVM) in a young patient with previously undiagnosed HHT.

  9. Complementary construction of ideal nonimaging concentrators and its applications.

    PubMed

    Gordon, J M

    1996-10-01

A construction principle for ideal nonimaging concentrators based on the complementary edge rays outside the nominal field of view is presented, with illustrations for the trumpet, compound parabolic concentrator, and compound hyperbolic concentrator. A simple string construction for the trumpet concentrator is shown to follow from this observation; the trumpet had been the one ideal concentrator for which no string-construction method had previously been noted. An application of these observations to solar concentrator design when nonisothermal receivers are advantageous is also presented.

  10. Multi-level trellis coded modulation and multi-stage decoding

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu

    1990-01-01

    Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.

  11. Is the Recall of Verbal-Spatial Information from Working Memory Affected by Symptoms of ADHD?

    ERIC Educational Resources Information Center

    Caterino, Linda C.; Verdi, Michael P.

    2012-01-01

    Objective: The Kulhavy model for text learning using organized spatial displays proposes that learning will be increased when participants view visual images prior to related text. In contrast to previous studies, this study also included students who exhibited symptoms of ADHD. Method: Participants were presented with either a map-text or…

  12. (Re)presenting Equestrian "His"tories--Storytelling as a Method of Inquiry

    ERIC Educational Resources Information Center

    Linghede, Eva; Larsson, Håkan; Redelius, Karin

    2016-01-01

    Responding to calls about the need to "give voice" to groups previously marginalized in research and to challenge meta-narratives about men in sports, this paper explores the use of a narrative approach to illuminate men's experiences--and the doing of gender--within equestrian sports, a sport dominated by women in Sweden. Adopting the…

  13. The Magneto-Kinematic Effect for the Case of Rectilinear Motion

    ERIC Educational Resources Information Center

    Taylor, Stephen; Leus, Vladimir

    2012-01-01

    The magneto-kinematic effect has been previously observed for the case of a rotating permanent magnet. Using this effect, this paper presents a novel method for calculation of the induced electromotive force (EMF) in a conductor for the case of rectilinear motion of a 25.4 mm diameter permanently magnetized sphere (magnetic dipole) past the…

  14. Ground-state energy of HeH+

    NASA Astrophysics Data System (ADS)

    Zhou, Bing-Lu; Zhu, Jiong-Ming; Yan, Zong-Chao

    2006-06-01

    The nonrelativistic ground-state energy of 4HeH+ is calculated using a variational method in Hylleraas coordinates. Convergence to a few parts in 10^10 is achieved, which improves the best previous result of Pavanello [J. Chem. Phys. 123, 104306 (2005)]. Expectation values of the interparticle distances are evaluated. Similar results for 3HeH+ are also presented.

  15. Factors Influencing the Effectiveness of Note Taking on Computer-Based Graphic Organizers

    ERIC Educational Resources Information Center

    Crooks, Steven M.; White, David R.; Barnard, Lucy

    2007-01-01

    Previous research on graphic organizer (GO) note taking has shown that this method is most effective when the GO is presented to the student partially complete with provided notes. This study extended prior research by investigating the effects of provided note type (summary vs. verbatim) and GO bite size (large vs. small) on the transfer…

  16. The Use of Molecular Modeling as "Pseudoexperimental" Data for Teaching VSEPR as a Hands-On General Chemistry Activity

    ERIC Educational Resources Information Center

    Martin, Christopher B.; Vandehoef, Crissie; Cook, Allison

    2015-01-01

    A hands-on activity appropriate for first-semester general chemistry students is presented that combines traditional VSEPR methods of predicting molecular geometries with introductory use of molecular modeling. Students analyze a series of previously calculated output files consisting of several molecules each in various geometries. Each structure…

  17. The Jigsaw Technique and Self-Efficacy of Vocational Training Students: A Practice Report

    ERIC Educational Resources Information Center

    Darnon, Celine; Buchs, Celine; Desbar, Delphine

    2012-01-01

    Can teenagers' self-efficacy be improved in a short time? Previous research has shown the positive effect of cooperative learning methods, including "jigsaw classrooms" (Aronson and Patnoe, 1997), on various outcomes (e.g., the liking of school, self-esteem, and reduction of prejudices). The present practice report investigated the effects of…

  18. Long-Term Experiences in Cash and Counseling for Young Adults with Intellectual Disabilities: Familial Programme Representative Descriptions

    ERIC Educational Resources Information Center

    Harry, Melissa L.; MacDonald, Lynn; McLuckie, Althea; Battista, Christina; Mahoney, Ellen K.; Mahoney, Kevin J.

    2017-01-01

    Background: Our aim was to explore previously unknown long-term outcomes of self-directed personal care services for young adults with intellectual disabilities and limitations in activities of daily living. Materials and Methods: The present authors utilized participatory action research and qualitative content analysis in interviewing 11 unpaid…

  19. Compact Method for Modeling and Simulation of Memristor Devices

    DTIC Science & Technology

    2011-08-01

    …single-valued equations. Subject terms: memristor, neuromorphic, cognitive, computing, memory, emerging technology, computational intelligence. …resistance state depends on its previous state and present electrical biasing conditions, and when combined with transistors in a hybrid chip …computers, reconfigurable electronics and neuromorphic computing [3,4]. According to Chua [4], the memristor behaves like a linear resistor with…
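The snippet's central claim, that a memristor's resistance depends on its previous state and present bias, can be illustrated with the textbook HP linear dopant-drift model. This is a hedged sketch for orientation only, not the compact model the report develops; all parameter values are illustrative:

```python
# Linear dopant-drift memristor model (HP-style): the memristance is a
# mix of Ron and Roff weighted by the doped-region width w, and w
# integrates the current that has previously flowed through the device.
def simulate(voltage, dt, D=10e-9, mu=1e-14, Ron=100.0, Roff=16e3, w0=0.5):
    """Integrate the model for a list of drive voltages; returns currents."""
    w = w0 * D                                 # state: doped-region width (m)
    currents = []
    for v in voltage:
        M = Ron * w / D + Roff * (1 - w / D)   # instantaneous memristance
        i = v / M
        w += mu * Ron * i / D * dt             # linear dopant drift
        w = min(max(w, 0.0), D)                # hard state limits
        currents.append(i)
    return currents

# A constant positive bias drives dopants so the memristance falls and
# the current grows over time, i.e. the device "remembers" its history.
out = simulate([1.0] * 200, dt=1e-3)
```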

  20. Conflicting Pathways to Participation in the FL Classroom: L2 Speech Production vs. L2 Thought Processes

    ERIC Educational Resources Information Center

    Bernales, Carolina

    2016-01-01

    Previous research on foreign language classroom participation has shown that oral production has a privileged status compared to less salient forms of participation, such as mental involvement and engagement in class activities. This mixed-methods study presents an alternative look at classroom participation by investigating the relationship…

  1. Childhood Sexual Abuse, Attachment, and Trauma Symptoms in College Females: The Moderating Role of Attachment

    ERIC Educational Resources Information Center

    Aspelmeier, Jeffery E.; Elliott, Ann N.; Smith, Christopher H.

    2007-01-01

    Objective: The present study tests a model linking attachment, childhood sexual abuse (CSA), and adult psychological functioning. It expands on previous work by assessing the degree to which attachment security moderates the relationship between a history of child sexual abuse and trauma-related symptoms in college females. Method: Self-reports of…

  2. Genetic engineering of syringyl-enriched lignin in plants

    DOEpatents

    Chiang, Vincent Lee; Li, Laigeng

    2004-11-02

    The present invention relates to a novel DNA sequence, which encodes a previously unidentified lignin biosynthetic pathway enzyme, sinapyl alcohol dehydrogenase (SAD), that regulates the biosynthesis of syringyl lignin in plants. Also provided are methods for incorporating this novel SAD gene sequence, or substantially similar sequences, into a plant genome for genetic engineering of syringyl-enriched lignin in plants.

  3. Methodological Considerations in On-Line Contingent Research and Implications for Learning. Technical Report.

    ERIC Educational Resources Information Center

    Whittington, Marna C.

    Methods for the implementation of on-line contingent research are described in this study. In a contingent experimentation procedure, the content of successive experimental trials is a function of a subject's responses to a previous trial or trials (in contrast to traditional experimentation in which the subject is presented a previously…

  4. Using Evaluation To Build Organizational Performance and Learning Capability: A Strategy and a Method.

    ERIC Educational Resources Information Center

    Brinkerhoff, Robert O.; Dressler, Dennis

    2002-01-01

    Discusses the causes of variability of training impact and problems with previous models for evaluation of training. Presents the Success Case Evaluation approach as a way to measure the impact of training and build learning capability to increase the business value of training by focusing on a small number of trainees. (Author/LRW)

  5. Validation of a Cognitive Diagnostic Model across Multiple Forms of a Reading Comprehension Assessment

    ERIC Educational Resources Information Center

    Clark, Amy K.

    2013-01-01

    The present study sought to fit a cognitive diagnostic model (CDM) across multiple forms of a passage-based reading comprehension assessment using the attribute hierarchy method. Previous research on CDMs for reading comprehension assessments served as a basis for the attributes in the hierarchy. The two attribute hierarchies were fit to data from…

  6. The Effect of Tomatis Therapy on Children with Autism: Eleven Case Studies

    ERIC Educational Resources Information Center

    Gerritsen, Jan

    2010-01-01

    This article presents a reanalysis of a previously reported study on the impact of the Tomatis Method of auditory stimulation on subjects with autism. When analyzed as individual case studies, the data showed that six of the 11 subjects with autism demonstrated significant improvement from 90 hours of Tomatis Therapy. Five subjects did not benefit…

  7. An improved cellular automaton method to model multispecies biofilms.

    PubMed

    Tang, Youneng; Valocchi, Albert J

    2013-10-01

    Biomass-spreading rules used in previous cellular automaton methods to simulate multispecies biofilms introduced extensive mixing between different biomass species or resulted in spatially discontinuous biomass concentration and distribution; this caused results based on the cellular automaton methods to deviate from experimental results and from those of the more computationally intensive continuous method. To overcome these problems, we propose new biomass-spreading rules in this work: excess biomass spreads by pushing a line of grid cells that are on the shortest path from the source grid cell to the destination grid cell, and the fractions of different biomass species in the grid cells on the path change due to the spreading. To evaluate the new rules, three two-dimensional simulation examples are used to compare the biomass distribution computed using the continuous method and three cellular automaton methods, one based on the new rules and the other two based on rules presented in two previous studies. The relationship between the biomass species is syntrophic in one example and competitive in the other two. Simulation results generated using the cellular automaton method based on the new rules agree much better with the continuous method than do results using the other two cellular automaton methods. The new biomass-spreading rules are no more complex to implement than the existing rules. Copyright © 2013 Elsevier Ltd. All rights reserved.
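The spreading rule described in the abstract can be caricatured in a few lines. This is a hedged sketch of the general idea only (uniform grid, BFS for the shortest path, overflow handed cell-to-cell along that line), not the authors' implementation; the capacity value is arbitrary:

```python
from collections import deque

CAP = 1.0  # illustrative biomass capacity of one grid cell

def shortest_path_to_space(grid, src):
    """BFS from the overfull source cell to the nearest cell with spare
    capacity; returns the list of cells on that shortest path."""
    rows, cols = len(grid), len(grid[0])
    prev = {src: None}
    queue = deque([src])
    while queue:
        cell = queue.popleft()
        if cell != src and grid[cell[0]][cell[1]] < CAP:
            path = []
            while cell is not None:     # walk the predecessor chain back
                path.append(cell)
                cell = prev[cell]
            return path[::-1]           # source ... destination
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and nxt not in prev:
                prev[nxt] = cell
                queue.append(nxt)
    return [src]                        # no free cell anywhere

def spread_excess(grid, src):
    """Push excess biomass along the line of cells on the shortest path,
    each cell passing its overflow to the next one."""
    path = shortest_path_to_space(grid, src)
    for a, b in zip(path, path[1:]):
        excess = grid[a[0]][a[1]] - CAP
        if excess > 0:
            grid[a[0]][a[1]] -= excess
            grid[b[0]][b[1]] += excess
```

In the paper the pushed cells also carry species fractions; here a single scalar per cell stands in for the biomass, which is enough to show that total biomass is conserved while the overflow migrates to the nearest free cell.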

  8. Qualitative research and the profound grasp of the obvious.

    PubMed Central

    Hurley, R E

    1999-01-01

    OBJECTIVE: To discuss the value of promoting coexistent and complementary relationships between qualitative and quantitative research methods as illustrated by presentations made by four respected health services researchers who described their experiences in multi-method projects. DATA SOURCES: Presentations and publications related to the four research projects, which described key substantive and methodological areas that had been addressed with qualitative techniques. PRINCIPAL FINDINGS: Sponsor interest in timely, insightful, and reality-anchored evidence has provided a strong base of support for the incorporation of qualitative methods into major contemporary policy research studies. In addition, many issues may be suitable for study only with qualitative methods because of their complexity, their emergent nature, or because of the need to revisit and reexamine previously untested assumptions. CONCLUSION: Experiences from the four projects, as well as from other recent health services studies with major qualitative components, support the assertion that the interests of sponsors in the policy realm and pressure from them suppress some of the traditional tensions and antagonisms between qualitative and quantitative methods. PMID:10591276

  9. A study of methods to estimate debris flow velocity

    USGS Publications Warehouse

    Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.

    2008-01-01

    Debris flow velocities are commonly back-calculated from superelevation events, which require subjective estimates of the radii of curvature of bends in the debris flow channel, or predicted using flow equations that require the selection of appropriate rheological models and material property inputs. This research investigated difficulties associated with the use of these conventional velocity estimation methods. Radii of curvature estimates were found to vary with the extent of the channel investigated and with the scale of the media used, and back-calculated velocities varied among different investigated locations along a channel. Distinct populations of Bingham properties were found to exist between those measured by laboratory tests and those back-calculated from field data; thus, laboratory-obtained values would not be representative of field-scale debris flow behavior. To avoid these difficulties with conventional methods, a new preliminary velocity estimation method is presented that statistically relates flow velocity to the channel slope and the flow depth. This method presents ranges of reasonable velocity predictions based on 30 previously measured velocities. © 2008 Springer-Verlag.
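For orientation, the superelevation back-calculation the abstract refers to is conventionally done with the forced-vortex equation. A hedged sketch of that standard textbook relation (the channel numbers below are made up, not values from the study):

```python
import math

def velocity_from_superelevation(radius_m, superelev_m, width_m, g=9.81):
    """Forced-vortex back-calculation: dh = v**2 * b / (g * Rc),
    so v = sqrt(g * Rc * dh / b)."""
    return math.sqrt(g * radius_m * superelev_m / width_m)

# Hypothetical bend: 30 m radius of curvature and 1.2 m of
# superelevation across an 8 m wide channel.
v = velocity_from_superelevation(30.0, 1.2, 8.0)
```

The study's point is that Rc (radius_m) is the subjective input: different analysts tracing the same bend at different scales get different radii, and hence different velocities.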

  10. An IMU-to-Body Alignment Method Applied to Human Gait Analysis.

    PubMed

    Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo

    2016-12-10

    This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.

  11. Integral method for transient He II heat transfer in a semi-infinite domain

    NASA Astrophysics Data System (ADS)

    Baudouy, B.

    2002-05-01

    Integral methods are suited to solving a non-linear system of differential equations where the non-linearity can be found either in the differential equations or in the boundary conditions. Though they are approximate methods, they have proven to give simple solutions with acceptable accuracy for transient heat transfer in He II. Taking into account the temperature dependence of the thermal properties, direct solutions are found without the need to adjust a parameter. We have previously presented a solution for the clamped-heat-flux problem, and in the present study the method is extended to accommodate the clamped-temperature problem. In the case of constant thermal properties, this method yields results that are within a few percent of the exact solution for the heat flux at the axis origin. We applied this solution to analyze recovery from burnout and find agreement within 10% at low heat flux, whereas at high heat flux the model deviates from the experimental data, suggesting the need for a more refined thermal model.
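For background, transient heat transport in turbulent He II is usually written in the Gorter-Mellink form below. This is a standard relation from the He II literature, shown for orientation only; the paper's own integral model may differ in detail:

```latex
% Gorter--Mellink conduction law (m \approx 3) and 1-D energy balance:
\[
  q = \left( -\frac{1}{f(T)} \, \frac{\partial T}{\partial x} \right)^{1/m},
  \qquad
  \rho\, C_p \, \frac{\partial T}{\partial t} = -\,\frac{\partial q}{\partial x},
\]
% where q is the heat flux and f(T) is the temperature-dependent
% Gorter--Mellink function carrying the non-linear thermal properties.
```

The strong non-linearity (heat flux proportional to the cube root of the temperature gradient, with temperature-dependent f(T)) is what makes integral methods attractive here.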

  12. Generic Safety Requirements for Developing Safe Insulin Pump Software

    PubMed Central

    Zhang, Yi; Jetley, Raoul; Jones, Paul L; Ray, Arnab

    2011-01-01

    Background: The authors previously introduced a highly abstract generic insulin infusion pump (GIIP) model that identified common features and hazards shared by most insulin pumps on the market. The aim of this article is to extend our previous work on the GIIP model by articulating safety requirements that address the identified GIIP hazards. These safety requirements can be validated by manufacturers, and may ultimately serve as a safety reference for insulin pump software. Together, these two publications can serve as a basis for discussing insulin pump safety in the diabetes community. Methods: In our previous work, we established a generic insulin pump architecture that abstracts functions common to many insulin pumps currently on the market and near-future pump designs. We then carried out a preliminary hazard analysis based on this architecture that included consultations with many domain experts. Further consultation with domain experts resulted in the safety requirements used in the modeling work presented in this article. Results: Generic safety requirements for the GIIP model are presented, as appropriate, in parameterized format to accommodate clinical practices or specific insulin pump criteria important to safe device performance. Conclusions: We believe that there is considerable value in having the diabetes, academic, and manufacturing communities consider and discuss these generic safety requirements. We hope that the communities will extend and revise them, make them more representative and comprehensive, experiment with them, and use them as a means for assessing the safety of insulin pump software designs. One potential use of these requirements is to integrate them into model-based engineering (MBE) software development methods.
We believe, based on our experiences, that implementing safety requirements using MBE methods holds promise in reducing design/implementation flaws in insulin pump development and evolutionary processes, therefore improving overall safety of insulin pump software. PMID:22226258

  13. Transport Coefficients for the NASA Lewis Chemical Equilibrium Program

    NASA Technical Reports Server (NTRS)

    Svehla, Roger A.

    1995-01-01

    The new transport property data that will be used in the NASA Lewis Research Center's Chemical Equilibrium and Applications Program (CEA) are presented. This report complements a previous publication that documented the thermodynamic and transport property data then in use. Sources of the data and a brief description of the method by which the data were obtained are given. Coefficients to calculate the viscosity, thermal conductivity, and binary interactions are given for either one or, usually, two temperature intervals, typically 300 to 1000 K and 1000 to 5000 K. The form of the transport equation is the same as used previously. The number of species was reduced from the previous database: many species for which the data were estimated were eliminated. Some ion-neutral interactions were added.
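Transport fits of this kind are evaluated as ln(property) = A ln T + B/T + C/T^2 + D over each temperature interval. A hedged sketch of evaluating such a piecewise fit; the coefficient values below are placeholders for illustration, not entries from the CEA database:

```python
import math

def transport_fit(T, A, B, C, D):
    """Evaluate ln(prop) = A*ln(T) + B/T + C/T**2 + D and exponentiate."""
    return math.exp(A * math.log(T) + B / T + C / T**2 + D)

def piecewise(T, intervals):
    """intervals: list of ((Tlo, Thi), (A, B, C, D)) coefficient sets,
    one per fitted temperature interval."""
    for (lo, hi), coeffs in intervals:
        if lo <= T <= hi:
            return transport_fit(T, *coeffs)
    raise ValueError("temperature outside fitted range")

# Placeholder coefficient sets for the two usual intervals.
fits = [((300.0, 1000.0), (0.62, -45.0, 3000.0, 1.2)),
        ((1000.0, 5000.0), (0.66, -100.0, 20000.0, 0.9))]
```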

  14. Advances in shock timing experiments on the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Robey, H. F.; Celliers, P. M.; Moody, J. D.; Sater, J.; Parham, T.; Kozioziemski, B.; Dylla-Spears, R.; Ross, J. S.; LePape, S.; Ralph, J. E.; Hohenberger, M.; Dewald, E. L.; Berzak Hopkins, L.; Kroll, J. J.; Yoxall, B. E.; Hamza, A. V.; Boehly, T. R.; Nikroo, A.; Landen, O. L.; Edwards, M. J.

    2016-03-01

    Recent advances in shock timing experiments and analysis techniques now enable shock measurements to be performed in cryogenic deuterium-tritium (DT) ice layered capsule implosions on the National Ignition Facility (NIF). Previous measurements of shock timing in inertial confinement fusion (ICF) implosions were performed in surrogate targets, where the solid DT ice shell and central DT gas were replaced with a continuous liquid deuterium (D2) fill. These previous experiments pose two surrogacy issues: a material surrogacy due to the difference in species (D2 vs. DT) and densities of the materials used, and a geometric surrogacy due to the presence of an additional interface (ice/gas) that was absent in the liquid-filled targets. This report presents experimental data and a new analysis method for validating the assumptions underlying this surrogate technique.

  15. Can PC-9 Zhong chong replace K-1 Yong quan for the acupunctural resuscitation of a bilateral double-amputee? Stating the “random criterion problem” in its statistical analysis

    PubMed Central

    Inchauspe, Adrián Angel

    2016-01-01

    AIM: To present an inclusion criterion for patients who have suffered bilateral amputation in order to be treated with the supplementary resuscitation treatment which is hereby proposed by the author. METHODS: This work is based on a Retrospective Cohort model so that a certainly lethal risk to the control group is avoided. RESULTS: This paper presents a hypothesis on acupunctural PC-9 Zhong chong point, further supported by previous statistical work recorded for the K-1 Yong quan resuscitation point. CONCLUSION: Thanks to the application of the resuscitation maneuver herein proposed on the previously mentioned point, patients with bilateral amputation would have another alternative treatment available in case basic and advanced CPR should fail. PMID:27152257

  16. Generalized trajectory surface hopping method based on the Zhu-Nakamura theory

    NASA Astrophysics Data System (ADS)

    Oloyede, Ponmile; Mil'nikov, Gennady; Nakamura, Hiroki

    2006-04-01

    We present a generalized formulation of the trajectory surface hopping method applicable to a general multidimensional system. The method is based on the Zhu-Nakamura theory of a nonadiabatic transition and therefore includes the treatment of classically forbidden hops. The method uses a generalized recipe for the conservation of angular momentum after forbidden hops and an approximation for determining a nonadiabatic transition direction which is crucial when the coupling vector is unavailable. This method also eliminates the need for a rigorous location of the seam surface, thereby ensuring its applicability to a wide class of chemical systems. In a test calculation, we implement the method for the DH2+ system, and it shows a remarkable agreement with the previous results of C. Zhu, H. Kamisaka, and H. Nakamura, [J. Chem. Phys. 116, 3234 (2002)]. We then apply it to a diatomic-in-molecule model system with a conical intersection, and the results compare well with exact quantum calculations. The successful application to the conical intersection system confirms the possibility of directly extending the present method to an arbitrary potential of general topology.

  17. PrePhyloPro: phylogenetic profile-based prediction of whole proteome linkages

    PubMed Central

    Niu, Yulong; Liu, Chengcheng; Moghimyfiroozabad, Shayan; Yang, Yi

    2017-01-01

    Direct and indirect functional links between proteins as well as their interactions as part of larger protein complexes or common signaling pathways may be predicted by analyzing the correlation of their evolutionary patterns. Based on phylogenetic profiling, here we present a highly scalable and time-efficient computational framework for predicting linkages within the whole human proteome. We have validated this method through analysis of 3,697 human pathways and molecular complexes and a comparison of our results with the prediction outcomes of previously published co-occurrency model-based and normalization methods. Here we also introduce PrePhyloPro, a web-based software that uses our method for accurately predicting proteome-wide linkages. We present data on interactions of human mitochondrial proteins, verifying the performance of this software. PrePhyloPro is freely available at http://prephylopro.org/phyloprofile/. PMID:28875072
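The core of phylogenetic profiling, as the abstract describes it, is that proteins whose presence/absence patterns across genomes co-vary are predicted to be functionally linked. A hedged toy sketch of that idea using Pearson correlation of binary profiles (not the PrePhyloPro scoring scheme itself, which the paper compares against co-occurrency and normalization methods):

```python
import math

def pearson(p, q):
    """Pearson correlation of two equal-length 0/1 phylogenetic profiles."""
    n = len(p)
    mp, mq = sum(p) / n, sum(q) / n
    cov = sum((a - mp) * (b - mq) for a, b in zip(p, q))
    sp = math.sqrt(sum((a - mp) ** 2 for a in p))
    sq = math.sqrt(sum((b - mq) ** 2 for b in q))
    return cov / (sp * sq) if sp and sq else 0.0

# Toy profiles over eight genomes (1 = homolog present in that genome).
profA = [1, 1, 0, 0, 1, 1, 0, 0]
profB = [1, 1, 0, 0, 1, 1, 0, 0]   # identical pattern: predicted linkage
profC = [0, 0, 1, 1, 0, 0, 1, 1]   # complementary pattern: anti-correlated
```

Scaling this pairwise scoring to the whole human proteome is precisely where the paper's "highly scalable and time-efficient" framework comes in.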

  18. Virtual fringe projection system with nonparallel illumination based on iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Duo; Wang, Zhangying; Gao, Nan; Zhang, Zonghua; Jiang, Xiangqian

    2017-06-01

    Fringe projection profilometry has been widely applied in many fields. To set up an ideal measuring system, a virtual fringe projection technique has been studied to assist in the design of hardware configurations. However, existing virtual fringe projection systems use parallel illumination and have a fixed optical framework. This paper presents a virtual fringe projection system with nonparallel illumination. Using an iterative method to calculate intersection points between rays and reference planes or object surfaces, the proposed system can simulate projected fringe patterns and captured images. A new explicit calibration method has been presented to validate the precision of the system. Simulated results indicate that the proposed iterative method outperforms previous systems. Our virtual system can be applied to error analysis, algorithm optimization, and help operators to find ideal system parameter settings for actual measurements.
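With nonparallel (point-source) illumination there is no closed-form ray/surface hit in general, which is why the abstract resorts to iteration. A hedged illustration of one such iterative scheme (our own fixed-point sketch, not the paper's algorithm): start from the intersection with the z = 0 reference plane, then repeatedly re-aim at the surface height found there.

```python
def intersect(origin, direction, h, tol=1e-9, max_iter=100):
    """Iteratively intersect the ray origin + t*direction with the
    height-field surface z = h(x, y)."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    t = -oz / dz                       # first guess: hit the z = 0 plane
    for _ in range(max_iter):
        x, y = ox + t * dx, oy + t * dy
        t_new = (h(x, y) - oz) / dz    # re-aim at the surface height there
        if abs(t_new - t) < tol:
            break
        t = t_new
    return (ox + t * dx, oy + t * dy, oz + t * dz)

# Gentle paraboloid surface, diverging ray from a projector 10 units up.
p = intersect(origin=(0.0, 0.0, 10.0), direction=(0.3, 0.1, -1.0),
              h=lambda x, y: 0.05 * (x * x + y * y))
```

For gently sloped surfaces the map is a contraction and converges in a handful of iterations; steep surfaces would need damping or a Newton step.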

  19. Image Reconstruction for a Partially Collimated Whole Body PET Scanner

    PubMed Central

    Alessio, Adam M.; Schmitz, Ruth E.; MacDonald, Lawrence R.; Wollenweber, Scott D.; Stearns, Charles W.; Ross, Steven G.; Ganin, Alex; Lewellen, Thomas K.; Kinahan, Paul E.

    2008-01-01

    Partially collimated PET systems have less collimation than conventional 2-D systems and have been shown to offer count rate improvements over 2-D and 3-D systems. Despite this potential, previous efforts have not established image-based improvements with partial collimation and have not customized the reconstruction method for partially collimated data. This work presents an image reconstruction method tailored for partially collimated data. Simulated and measured sensitivity patterns are presented and provide a basis for modification of a fully 3-D reconstruction technique. The proposed method uses a measured normalization correction term to account for the unique sensitivity to true events. This work also proposes a modified scatter correction based on simulated data. Measured image quality data supports the use of the normalization correction term for true events, and suggests that the modified scatter correction is unnecessary. PMID:19096731

  1. Evaluation of hierarchical agglomerative cluster analysis methods for discrimination of primary biological aerosol

    NASA Astrophysics Data System (ADS)

    Crawford, I.; Ruske, S.; Topping, D. O.; Gallagher, M. W.

    2015-11-01

    In this paper we present improved methods for discriminating and quantifying primary biological aerosol particles (PBAPs) by applying hierarchical agglomerative cluster analysis to multi-parameter ultraviolet-light-induced fluorescence (UV-LIF) spectrometer data. The methods employed in this study can be applied to data sets in excess of 1 × 10^6 points on a desktop computer, allowing each fluorescent particle in a data set to be explicitly clustered. This reduces the potential for misattribution found in the subsampling and comparative attribution methods used in previous approaches, improving our capacity to discriminate and quantify PBAP meta-classes. We evaluate the performance of several hierarchical agglomerative cluster analysis linkages and data normalisation methods using laboratory samples of known particle types and an ambient data set. Fluorescent and non-fluorescent polystyrene latex spheres were sampled with a Wideband Integrated Bioaerosol Spectrometer (WIBS-4), where the optical size, asymmetry factor and fluorescence measurements were used as inputs to the analysis package. It was found that the Ward linkage with z-score or range normalisation performed best, correctly attributing 98 and 98.1 % of the data points, respectively. The best-performing methods were applied to the BEACHON-RoMBAS (Bio-hydro-atmosphere interactions of Energy, Aerosols, Carbon, H2O, Organics and Nitrogen-Rocky Mountain Biogenic Aerosol Study) ambient data set, where it was found that the z-score and range normalisation methods yield similar results, with each method producing clusters representative of fungal spores and bacterial aerosol, consistent with previous results.
The z-score result was compared to clusters generated with previous approaches (WIBS AnalysiS Program, WASP), where we observe that the subsampling and comparative attribution method employed by WASP results in the overestimation of the fungal spore concentration by a factor of 1.5 and the underestimation of the bacterial aerosol concentration by a factor of 5. We suggest that this is likely due to errors arising from misattribution caused by poor centroid definition and from failure to assign particles to a cluster, both consequences of the subsampling and comparative attribution method employed by WASP. The methods used here allow the entire fluorescent population of particles to be analysed, yielding an explicit cluster attribution for each particle and improving cluster centroid definition and our capacity to discriminate and quantify PBAP meta-classes compared to previous approaches.
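The pipeline the abstract describes, normalise each measurement channel, then agglomeratively merge the closest clusters, can be sketched compactly. This dependency-free illustration uses single linkage for brevity, whereas the study found the Ward linkage performed best; the toy data are ours:

```python
import math
import statistics

def zscore(points):
    """Normalise each dimension (measurement channel) to zero mean and
    unit standard deviation, as in the study's z-score preprocessing."""
    dims = list(zip(*points))
    mus = [statistics.mean(d) for d in dims]
    sds = [statistics.pstdev(d) for d in dims]
    return [tuple((x - m) / s for x, m, s in zip(p, mus, sds))
            for p in points]

def cluster(points, k):
    """Agglomerative clustering down to k clusters (single linkage)."""
    clusters = [[p] for p in points]

    def dist(a, b):  # single linkage: closest pair of members
        return min(math.dist(p, q) for p in a for q in b)

    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)   # merge the closest pair
    return clusters
```

The O(n^3) merge loop here is purely didactic; handling 10^6 particles, as the paper does, requires the optimised linkage algorithms it references.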

  2. Vibrations of cantilevered circular cylindrical shells: Shallow versus deep shell theory

    NASA Technical Reports Server (NTRS)

    Lee, J. K.; Leissa, A. W.; Wang, A. J.

    1983-01-01

    Free vibrations of cantilevered circular cylindrical shells having rectangular planforms are studied in this paper by means of the Ritz method. The deep shell theory of Novozhilov and Goldenveizer is used and compared with the usual shallow shell theory for a wide range of shell parameters. A thorough convergence study is presented along with comparisons to previously published finite element solutions and experimental results. Accurately computed frequency parameters and mode shapes for various shell configurations are presented. The present paper appears to be the first comprehensive study presenting rigorous comparisons between the two shell theories in dealing with free vibrations of cantilevered cylindrical shells.

  3. Subliminal unconscious conflict alpha power inhibits supraliminal conscious symptom experience.

    PubMed

    Shevrin, Howard; Snodgrass, Michael; Brakel, Linda A W; Kushwaha, Ramesh; Kalaida, Natalia L; Bazan, Ariane

    2013-01-01

    Our approach is based on a tri-partite method of integrating psychodynamic hypotheses, cognitive subliminal processes, and psychophysiological alpha power measures. We present ten social phobic subjects with three individually selected groups of words representing unconscious conflict, conscious symptom experience, and Osgood Semantic negative valence words used as a control word group. The unconscious conflict and conscious symptom words, presented subliminally and supraliminally, act as primes preceding the conscious symptom and control words presented as supraliminal targets. With alpha power as a marker of inhibitory brain activity, we show that unconscious conflict primes, only when presented subliminally, have a unique inhibitory effect on conscious symptom targets. This effect is absent when the unconscious conflict primes are presented supraliminally, or when the target is the control words. Unconscious conflict prime effects were found to correlate with a measure of repressiveness in a similar previous study (Shevrin et al., 1992, 1996). Conscious symptom primes have no inhibitory effect when presented subliminally. Inhibitory effects with conscious symptom primes are present, but only when the primes are supraliminal, and they did not correlate with repressiveness in a previous study (Shevrin et al., 1992, 1996). We conclude that while the inhibition following supraliminal conscious symptom primes is due to conscious threat bias, the inhibition following subliminal unconscious conflict primes provides a neurological blueprint for dynamic repression: it is only activated subliminally by an individual's unconscious conflict and has an inhibitory effect specific only to the conscious symptom. These novel findings constitute neuroscientific evidence for the psychoanalytic concepts of unconscious conflict and repression, while extending neuroscience theory and methods into the realm of personal, psychological meaning.

  4. Subliminal unconscious conflict alpha power inhibits supraliminal conscious symptom experience

    PubMed Central

    Shevrin, Howard; Snodgrass, Michael; Brakel, Linda A. W.; Kushwaha, Ramesh; Kalaida, Natalia L.; Bazan, Ariane

    2013-01-01

    Our approach is based on a tri-partite method of integrating psychodynamic hypotheses, cognitive subliminal processes, and psychophysiological alpha power measures. We present ten social phobic subjects with three individually selected groups of words representing unconscious conflict, conscious symptom experience, and Osgood Semantic negative valence words used as a control word group. The unconscious conflict and conscious symptom words, presented subliminally and supraliminally, act as primes preceding the conscious symptom and control words presented as supraliminal targets. With alpha power as a marker of inhibitory brain activity, we show that unconscious conflict primes, only when presented subliminally, have a unique inhibitory effect on conscious symptom targets. This effect is absent when the unconscious conflict primes are presented supraliminally, or when the target is the control words. Unconscious conflict prime effects were found to correlate with a measure of repressiveness in a similar previous study (Shevrin et al., 1992, 1996). Conscious symptom primes have no inhibitory effect when presented subliminally. Inhibitory effects with conscious symptom primes are present, but only when the primes are supraliminal, and they did not correlate with repressiveness in a previous study (Shevrin et al., 1992, 1996). We conclude that while the inhibition following supraliminal conscious symptom primes is due to conscious threat bias, the inhibition following subliminal unconscious conflict primes provides a neurological blueprint for dynamic repression: it is only activated subliminally by an individual's unconscious conflict and has an inhibitory effect specific only to the conscious symptom. These novel findings constitute neuroscientific evidence for the psychoanalytic concepts of unconscious conflict and repression, while extending neuroscience theory and methods into the realm of personal, psychological meaning. PMID:24046743

  5. Brachytherapy of prostate cancer after colectomy for colorectal cancer: pilot experience.

    PubMed

    Koutrouvelis, Panos G; Theodorescu, Dan; Katz, Stuart; Lailas, Niko; Hendricks, Fred

    2005-01-01

    We present a method of brachytherapy for prostate cancer using a 3-dimensional stereotactic system and computerized tomography guidance in patients without a rectum due to previous treatment for colorectal cancer. From June 1994 to November 2003 a cohort of 800 patients were treated with brachytherapy for prostate cancer. Four patients had previously been treated for colorectal cancer with 4,500 cGy external beam radiation therapy, abdominoperineal resection and chemotherapy, while 1 underwent abdominoperineal resection alone for ulcerative colitis. Because of previous radiation therapy, these patients were not candidates for salvage external beam radiation therapy or radical prostatectomy and they had no rectum for transrectal ultrasound guided transperineal brachytherapy or cryotherapy. A previously described, 3-dimensional stereotactic system was used for brachytherapy in these patients. The prescribed radiation dose was 120 to 144 Gy with iodine seeds in rapid strand format. Patient followup included clinical examination and serum prostate specific antigen measurement. Average followup was 18.6 months. Four patients had excellent biochemical control, while 1 had biochemical failure. Patients did not experience any gastrointestinal morbidity. One patient had a stricture of the distal ureter, requiring a stent. Three-dimensional computerized tomography guided brachytherapy for prostate cancer in patients with a history of colorectal cancer who have no rectum is a feasible method of treatment.

  6. Computer method for design of acoustic liners for turbofan engines

    NASA Technical Reports Server (NTRS)

    Minner, G. L.; Rice, E. J.

    1976-01-01

    A design package is presented for the specification of acoustic liners for turbofans. An estimate of the noise generation was made based on modifications of existing noise correlations, for which the inputs are basic fan aerodynamic design variables. The method does not predict multiple pure tones. A target attenuation spectrum was calculated which was the difference between the estimated generation spectrum and a flat annoyance-weighted goal attenuated spectrum. The target spectrum was combined with a knowledge of acoustic liner performance as a function of the liner design variables to specify the acoustic design. The liner design method at present is limited to annular duct configurations. The detailed structure of the liner was specified by combining the required impedance (which is a result of the previous step) with a mathematical model relating impedance to the detailed structure. The design procedure was developed for a liner constructed of perforated sheet placed over honeycomb backing cavities. A sample calculation was carried through in order to demonstrate the design procedure, and experimental results presented show good agreement with the calculated results of the method.
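The target-spectrum step in this record (target attenuation equals the estimated generation spectrum minus the goal spectrum) can be sketched numerically. This is an illustrative sketch, not the report's code; the frequency bands and levels below are hypothetical.

```python
import numpy as np

# Hypothetical 1/3-octave band sound pressure levels (dB) for the estimated
# fan noise generation spectrum and a flat annoyance-weighted goal spectrum.
bands_hz = np.array([500, 1000, 2000, 4000, 8000])
generation_db = np.array([92.0, 97.0, 101.0, 99.0, 94.0])
goal_db = np.full_like(generation_db, 88.0)   # flat goal level

# Target attenuation spectrum: how much the liner must remove in each band,
# clipped at zero where the source already meets the goal.
target_attenuation_db = np.clip(generation_db - goal_db, 0.0, None)
```

The liner design variables (and ultimately the perforate-over-honeycomb structure) would then be chosen so that the predicted liner attenuation meets or exceeds this target in each band.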

  7. A generalised significance test for individual communities in networks.

    PubMed

    Kojaku, Sadamori; Masuda, Naoki

    2018-05-09

Many empirical networks have community structure, in which nodes are densely interconnected within each community (i.e., a group of nodes) and sparsely across different communities. Like other local and meso-scale structures of networks, communities are generally heterogeneous in various aspects, such as size, density of edges, connectivity to other communities, and significance. In the present study, we propose a method to statistically test the significance of individual communities in a given network. Compared to previous methods, the present algorithm is unique in that it accepts different community-detection algorithms and the corresponding quality function for single communities. The present method requires that the quality of each community can be quantified and that community detection is performed as optimisation of such a quality function summed over the communities. Various community-detection algorithms, including modularity maximisation and graph partitioning, meet this criterion. Our method estimates a distribution of the quality function for randomised networks to calculate the likelihood of each community in the given network. We illustrate our algorithm on synthetic and empirical networks.
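The general idea of testing a single community against randomised networks can be illustrated with a minimal Monte Carlo sketch. This is not the authors' algorithm: it uses the raw internal edge count as the quality function and density-matched Erdős-Rényi graphs as the null model, both simplifying assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def internal_edges(adj, community):
    """Quality of a single community: number of edges inside the node set."""
    sub = adj[np.ix_(community, community)]
    return int(sub.sum() // 2)

def community_p_value(adj, community, n_rand=500):
    """Monte Carlo p-value: fraction of density-matched random graphs whose
    quality for the same node set reaches the observed quality."""
    n = adj.shape[0]
    p_edge = adj.sum() / (n * (n - 1))      # empirical edge density
    observed = internal_edges(adj, community)
    hits = 0
    for _ in range(n_rand):
        upper = np.triu(rng.random((n, n)) < p_edge, 1)
        rand_adj = (upper | upper.T).astype(int)
        if internal_edges(rand_adj, community) >= observed:
            hits += 1
    return (hits + 1) / (n_rand + 1)

# Toy network: a 6-node clique planted in a 20-node sparse background.
n = 20
background = np.triu(np.random.default_rng(1).random((n, n)) < 0.1, 1)
adj = (background | background.T).astype(int)
clique = list(range(6))
adj[np.ix_(clique, clique)] = 1 - np.eye(6, dtype=int)
```

A planted clique yields a small p-value, whereas an arbitrary node set drawn from the sparse background typically does not; the paper's contribution is making this scheme work with general quality functions and community-detection algorithms.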

  8. Staining Methods for Normal and Regenerative Myelin in the Nervous System.

    PubMed

    Carriel, Víctor; Campos, Antonio; Alaminos, Miguel; Raimondo, Stefania; Geuna, Stefano

    2017-01-01

Histochemical techniques enable the specific identification of myelin by light microscopy. Here we describe three histochemical methods for the staining of myelin suitable for formalin-fixed and paraffin-embedded materials. The first method is the conventional luxol fast blue (LFB) method, which stains myelin in blue and Nissl bodies and mast cells in purple. The second is an LFB-based method called MCOLL, which specifically stains myelin as well as collagen fibers and cells, giving an integrated overview of the histology and myelin content of the tissue. Finally, we describe the osmium tetroxide method, which consists of osmicating previously fixed tissues. Osmication is performed prior to embedding the tissues in paraffin, giving a permanent positive reaction for myelin as well as other lipids present in the tissue.

  9. A New Method for Analyzing Near-Field Faraday Probe Data in Hall Thrusters

    NASA Technical Reports Server (NTRS)

    Huang, Wensheng; Shastry, Rohit; Herman, Daniel A.; Soulas, George C.; Kamhawi, Hani

    2013-01-01

This paper presents a new method for analyzing near-field Faraday probe data obtained from Hall thrusters. Traditional methods spawned from far-field Faraday probe analysis rely on assumptions that are not applicable to near-field Faraday probe data. In particular, arbitrary choices for the point of origin and limits of integration have made interpretation of the results difficult. The new method, called iterative pathfinding, uses the evolution of the near-field plume with distance to provide feedback for determining the location of the point of origin. Although still susceptible to the choice of integration limits, this method presents a systematic approach to determining the origin point for calculating the divergence angle. The iterative pathfinding method is applied to near-field Faraday probe data taken in a previous study from the NASA-300M and NASA-457Mv2 Hall thrusters. Since these two thrusters use centrally mounted cathodes, the current density associated with the cathode plume is removed before applying iterative pathfinding. A procedure is presented for removing the cathode plume. The results of the analysis are compared to far-field probe analysis results. This paper ends with checks on the validity of the new method and discussions on the implications of the results.

  10. A New Method for Analyzing Near-Field Faraday Probe Data in Hall Thrusters

    NASA Technical Reports Server (NTRS)

    Huang, Wensheng; Shastry, Rohit; Herman, Daniel A.; Soulas, George C.; Kamhawi, Hani

    2013-01-01

    This paper presents a new method for analyzing near-field Faraday probe data obtained from Hall thrusters. Traditional methods spawned from far-field Faraday probe analysis rely on assumptions that are not applicable to near-field Faraday probe data. In particular, arbitrary choices for the point of origin and limits of integration have made interpretation of the results difficult. The new method, called iterative pathfinding, uses the evolution of the near-field plume with distance to provide feedback for determining the location of the point of origin. Although still susceptible to the choice of integration limits, this method presents a systematic approach to determining the origin point for calculating the divergence angle. The iterative pathfinding method is applied to near-field Faraday probe data taken in a previous study from the NASA-300M and NASA-457Mv2 Hall thrusters. Since these two thrusters use centrally mounted cathodes, the current density associated with the cathode plume is removed before applying iterative pathfinding. A procedure is presented for removing the cathode plume. The results of the analysis are compared to far-field probe analysis results. This paper ends with checks on the validity of the new method and discussions on the implications of the results.
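The final step of such an analysis, computing a charge-flux-weighted divergence half-angle from a radial current-density profile and a chosen point of origin, can be sketched as below. The profile, distances, and origin are hypothetical; the record's iterative pathfinding method is precisely a systematic way to pick that origin, which this sketch simply takes as given.

```python
import numpy as np

# Hypothetical radial current-density profile j(r) at one axial station.
r = np.linspace(0.0, 0.10, 101)            # radial position from thrust axis (m)
j = np.exp(-((r - 0.04) / 0.015) ** 2)     # annular beam profile (arbitrary units)
z = 0.05                                   # axial distance from assumed origin (m)

# Each annulus carries current proportional to j * 2*pi*r, leaving the origin
# at an angle atan(r / z) from the thrust axis; weight cos(theta) accordingly.
dI = j * 2.0 * np.pi * r
theta = np.arctan2(r, z)
cos_delta = np.sum(dI * np.cos(theta)) / np.sum(dI)
divergence_deg = np.degrees(np.arccos(cos_delta))
```

Moving the assumed origin changes theta for every annulus at once, which is why the arbitrary origin choices of earlier methods made results hard to interpret.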

  11. Topology Optimization using the Level Set and eXtended Finite Element Methods: Theory and Applications

    NASA Astrophysics Data System (ADS)

    Villanueva Perez, Carlos Hernan

Computational design optimization provides designers with automated techniques to develop novel and non-intuitive optimal designs. Topology optimization is a design optimization technique that allows for the evolution of a broad variety of geometries in the optimization process. Traditional density-based topology optimization methods often lack a sufficient resolution of the geometry and physical response, which prevents direct use of the optimized design in manufacturing and the accurate modeling of the physical response of boundary conditions. The goal of this thesis is to introduce a unified topology optimization framework that uses the Level Set Method (LSM) to describe the design geometry and the eXtended Finite Element Method (XFEM) to solve the governing equations and measure the performance of the design. The methodology is presented as an alternative to density-based optimization approaches, and is able to accommodate a broad range of engineering design problems. The framework presents state-of-the-art methods for immersed boundary techniques to stabilize the systems of equations and enforce the boundary conditions, and is studied with applications in 2D and 3D linear elastic structures, incompressible flow, and energy and species transport problems to test the robustness and the characteristics of the method. The framework is compared against density-based topology optimization approaches with regard to convergence, performance, and the capability to manufacture the designs. Furthermore, the ability to control the shape of the design to operate within manufacturing constraints is developed and studied. The analysis capability of the framework is validated quantitatively through comparison against previous benchmark studies, and qualitatively through its application to topology optimization problems. The design optimization problems converge to intuitive designs that agree well with the results of previous 2D and density-based studies.

  12. Power Series Approximation for the Correlation Kernel Leading to Kohn-Sham Methods Combining Accuracy, Computational Efficiency, and General Applicability

    NASA Astrophysics Data System (ADS)

    Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas

    2016-09-01

    A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.

  13. Numerical Evaluation of an Ejector-Enhanced Resonant Pulse Combustor with a Poppet Inlet Valve and a Converging Exhaust Nozzle

    NASA Technical Reports Server (NTRS)

    Yungster, Shaye; Paxson, Daniel E.; Perkins, Hugh D.

    2016-01-01

A computational investigation of a pressure-gain combustor system for gas turbine applications is presented. The system consists of a valved pulse combustor and an ejector, housed within a shroud. The study focuses on two enhancements to previous models, related to the valve and ejector components. First, a new poppet inlet valve system is investigated, replacing the previously used reed valve configuration. Second, a new computational approach to approximating the effects of choked turbine inlet guide vanes present immediately downstream of the Ejector-Enhanced Resonant Pulse Combustor (EERPC) is investigated. Instead of specifying a back pressure at the EERPC exit boundary (as was done in previous studies), the new model adds a converging-diverging (CD) nozzle at the exit of the EERPC. The throat area of the CD nozzle can be adjusted to obtain the desired back pressure level and total mass flow rate. The results presented indicate that the new poppet valve configuration performs nearly as well as the original reed valve system, and that the addition of the CD nozzle is an effective method to approximate the exit boundary effects of a turbine present downstream of the EERPC. Furthermore, it is shown that the more acoustically reflective boundary imposed by a nozzle as compared to a constant pressure surface does not significantly affect operation or performance.

  14. Heuristics for connectivity-based brain parcellation of SMA/pre-SMA through force-directed graph layout.

    PubMed

    Crippa, Alessandro; Cerliani, Leonardo; Nanetti, Luca; Roerdink, Jos B T M

    2011-02-01

    We propose the use of force-directed graph layout as an explorative tool for connectivity-based brain parcellation studies. The method can be used as a heuristic to find the number of clusters intrinsically present in the data (if any) and to investigate their organisation. It provides an intuitive representation of the structure of the data and facilitates interactive exploration of properties of single seed voxels as well as relations among (groups of) voxels. We validate the method on synthetic data sets and we investigate the changes in connectivity in the supplementary motor cortex, a brain region whose parcellation has been previously investigated via connectivity studies. This region is supposed to present two easily distinguishable connectivity patterns, putatively denoted by SMA (supplementary motor area) and pre-SMA. Our method provides insights with respect to the connectivity patterns of the premotor cortex. These present a substantial variation among subjects, and their subdivision into two well-separated clusters is not always straightforward. Copyright © 2010 Elsevier Inc. All rights reserved.
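A minimal version of a force-directed layout for connectivity data can be sketched as follows. This is an illustrative spring-embedder, not the authors' implementation; the similarity matrix, force laws, and constants are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(42)

def force_layout(sim, n_iter=300, step=0.05, repulsion=0.1):
    """Minimal 2-D force-directed embedding: similar items attract in
    proportion to their similarity, and all pairs weakly repel."""
    n = sim.shape[0]
    pos = rng.standard_normal((n, 2))
    for _ in range(n_iter):
        diff = pos[:, None, :] - pos[None, :, :]           # pairwise displacements
        dist = np.linalg.norm(diff, axis=-1) + 1e-9
        attract = -sim[..., None] * diff                   # spring pull on similar pairs
        repel = repulsion * diff / (dist[..., None] ** 2)  # inverse-distance push
        pos += step * (attract + repel).sum(axis=1)
    return pos

# Toy connectivity: two groups of 10 seed voxels with high within-group and
# low between-group similarity (values are hypothetical).
sim = np.full((20, 20), 0.05)
sim[:10, :10] = 0.9
sim[10:, 10:] = 0.9
np.fill_diagonal(sim, 0.0)
pos = force_layout(sim)
```

With clearly clustered similarities the two groups settle into two visibly separated clumps; with the more ambiguous inter-subject connectivity described in the record, the layout instead makes the absence of a clean split visible, which is the heuristic value of the method.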

  15. Numerical investigation of velocity slip and temperature jump effects on unsteady flow over a stretching permeable surface

    NASA Astrophysics Data System (ADS)

    Hosseini, E.; Loghmani, G. B.; Heydari, M.; Rashidi, M. M.

    2017-02-01

In this paper, the boundary layer flow and heat transfer of unsteady flow over a porous accelerating stretching surface in the presence of the velocity slip and temperature jump effects are investigated numerically. A new effective collocation method based on rational Bernstein functions is applied to solve the governing system of nonlinear ordinary differential equations. This method solves the problem on the semi-infinite domain without truncating or transforming it to a finite domain. In addition, the presented method reduces the solution of the problem to the solution of a system of algebraic equations. Graphical and tabular results are presented to investigate the influence of the unsteadiness parameter A, Prandtl number Pr, suction parameter fw, velocity slip parameter γ, and thermal slip parameter φ on the velocity and temperature profiles of the fluid. The numerical experiments are reported to show the accuracy and efficiency of the novel proposed computational procedure. Comparisons of present results are made with those obtained by previous works and show excellent agreement.

  16. Self-assembled monolayer and method of making

    DOEpatents

    Fryxell, Glen E [Kennewick, WA; Zemanian, Thomas S [Richland, WA; Liu, Jun [West Richland, WA; Shin, Yongsoon [Richland, WA

    2003-03-11

    According to the present invention, the previously known functional material having a self-assembled monolayer on a substrate has a plurality of assembly molecules each with an assembly atom with a plurality of bonding sites (four sites when silicon is the assembly molecule) wherein a bonding fraction (or fraction) of fully bonded assembly atoms (the plurality of bonding sites bonded to an oxygen atom) has a maximum when made by liquid solution deposition, for example a maximum of 40% when silicon is the assembly molecule, and maximum surface density of assembly molecules was 5 silanes per square nanometer. Note that bonding fraction and surface population are independent parameters. The method of the present invention is an improvement to the known method for making a siloxane layer on a substrate, wherein instead of a liquid phase solution chemistry, the improvement is a supercritical phase chemistry. The present invention has the advantages of greater fraction of oxygen bonds, greater surface density of assembly molecules and reduced time for reaction of about 5 minutes to about 24 hours.

  17. Self-assembled monolayer and method of making

    DOEpatents

    Fryxell, Glen E.; Zemanian, Thomas S.; Liu, Jun; Shin, Yongsoon

    2004-05-11

    According to the present invention, the previously known functional material having a self-assembled monolayer on a substrate has a plurality of assembly molecules each with an assembly atom with a plurality of bonding sites (four sites when silicon is the assembly molecule) wherein a bonding fraction (or fraction) of fully bonded assembly atoms (the plurality of bonding sites bonded to an oxygen atom) has a maximum when made by liquid solution deposition, for example a maximum of 40% when silicon is the assembly molecule, and maximum surface density of assembly molecules was 5 silanes per square nanometer. Note that bonding fraction and surface population are independent parameters. The method of the present invention is an improvement to the known method for making a siloxane layer on a substrate, wherein instead of a liquid phase solution chemistry, the improvement is a supercritical phase chemistry. The present invention has the advantages of greater fraction of oxygen bonds, greater surface density of assembly molecules and reduced time for reaction of about 5 minutes to about 24 hours.

  18. Self-Assembled Monolayer And Method Of Making

    DOEpatents

    Fryxell, Glen E.; Zemanian, Thomas S.; Liu, Jun; Shin, Yongsoon

    2004-06-22

    According to the present invention, the previously known functional material having a self-assembled monolayer on a substrate has a plurality of assembly molecules each with an assembly atom with a plurality of bonding sites (four sites when silicon is the assembly molecule) wherein a bonding fraction (or fraction) of fully bonded assembly atoms (the plurality of bonding sites bonded to an oxygen atom) has a maximum when made by liquid solution deposition, for example a maximum of 40% when silicon is the assembly molecule, and maximum surface density of assembly molecules was 5 silanes per square nanometer. Note that bonding fraction and surface population are independent parameters. The method of the present invention is an improvement to the known method for making a siloxane layer on a substrate, wherein instead of a liquid phase solution chemistry, the improvement is a supercritical phase chemistry. The present invention has the advantages of greater fraction of oxygen bonds, greater surface density of assembly molecules and reduced time for reaction of about 5 minutes to about 24 hours.

  19. Self-Assembled Monolayer And Method Of Making

    DOEpatents

    Fryxell, Glen E.; Zemanian, Thomas S.; Liu, Jun; Shin, Yongsoon

    2005-01-25

    According to the present invention, the previously known functional material having a self-assembled monolayer on a substrate has a plurality of assembly molecules each with an assembly atom with a plurality of bonding sites (four sites when silicon is the assembly molecule) wherein a bonding fraction (or fraction) of fully bonded assembly atoms (the plurality of bonding sites bonded to an oxygen atom) has a maximum when made by liquid solution deposition, for example a maximum of 40% when silicon is the assembly molecule, and maximum surface density of assembly molecules was 5 silanes per square nanometer. Note that bonding fraction and surface population are independent parameters. The method of the present invention is an improvement to the known method for making a siloxane layer on a substrate, wherein instead of a liquid phase solution chemistry, the improvement is a supercritical phase chemistry. The present invention has the advantages of greater fraction of oxygen bonds, greater surface density of assembly molecules and reduced time for reaction of about 5 minutes to about 24 hours.

  20. The Modulus of Rupture from a Mathematical Point of View

    NASA Astrophysics Data System (ADS)

    Quintela, P.; Sánchez, M. T.

    2007-04-01

The goal of this work is to present a complete mathematical study of three-point bending experiments and the modulus of rupture of brittle materials. We will present the mathematical model associated with three-point bending experiments and we will use the asymptotic expansion method to obtain a new formula to calculate the modulus of rupture. We will compare the modulus of rupture of porcelain obtained with this new formula with that obtained by using the classic theoretical formula. Finally, we will also present one- and three-dimensional numerical simulations to compute the modulus of rupture.
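For context, the classic theoretical formula the record refers to is, for a beam of rectangular cross-section in three-point bending (with F the failure load, L the support span, b the beam width, and d the beam depth):

```latex
% Classic modulus of rupture (flexural strength) in three-point bending:
\sigma_{\mathrm{MOR}} = \frac{3 F L}{2 b d^{2}}
```

The asymptotic-expansion formula proposed in the record is a correction to this expression; its exact form is not reproduced here.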

  1. Online Bagging and Boosting

    NASA Technical Reports Server (NTRS)

Oza, Nikunj C.

    2005-01-01

    Bagging and boosting are two of the most well-known ensemble learning methods due to their theoretical performance guarantees and strong experimental results. However, these algorithms have been used mainly in batch mode, i.e., they require the entire training set to be available at once and, in some cases, require random access to the data. In this paper, we present online versions of bagging and boosting that require only one pass through the training data. We build on previously presented work by presenting some theoretical results. We also compare the online and batch algorithms experimentally in terms of accuracy and running time.
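The core trick of online bagging in this line of work is to replace batch sampling-with-replacement by showing each arriving example to each base model k ~ Poisson(1) times. A minimal sketch, with a toy perceptron standing in for an arbitrary online base learner (the data, model count, and learner choice are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

class OnlinePerceptron:
    """Tiny online base learner (a stand-in for any incremental model)."""
    def __init__(self, dim):
        self.w = np.zeros(dim)
    def update(self, x, y):                  # y in {-1, +1}
        if y * (self.w @ x) <= 0:
            self.w += y * x
    def predict(self, x):
        return 1 if self.w @ x > 0 else -1

class OnlineBagging:
    """Each arriving example is shown to each base model k ~ Poisson(1)
    times, mimicking with-replacement resampling in batch bagging."""
    def __init__(self, dim, n_models=15):
        self.models = [OnlinePerceptron(dim) for _ in range(n_models)]
    def update(self, x, y):
        for m in self.models:
            for _ in range(rng.poisson(1.0)):
                m.update(x, y)
    def predict(self, x):
        votes = sum(m.predict(x) for m in self.models)
        return 1 if votes > 0 else -1

# Linearly separable toy stream, processed in a single pass.
X = rng.standard_normal((400, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
bag = OnlineBagging(dim=2)
for xi, yi in zip(X, y):
    bag.update(xi, yi)
accuracy = np.mean([bag.predict(xi) == yi for xi, yi in zip(X, y)])
```

Poisson(1) arises because in batch bagging with n examples, the number of times a given example appears in a bootstrap sample is Binomial(n, 1/n), which tends to Poisson(1) as n grows.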

  2. Correlation of Cooling Data from an Air-Cooled Cylinder and Several Multicylinder Engines

    NASA Technical Reports Server (NTRS)

Pinkel, Benjamin; Ellerbrock, Herman H., Jr.

    1940-01-01

The theory of engine-cylinder cooling developed in a previous report was further substantiated by data obtained on a cylinder from a Wright R-1820-G engine. Equations are presented for the average head and barrel temperatures of this cylinder as functions of the engine and the cooling conditions. These equations are utilized to calculate the variation in cylinder temperature with altitude for level flight and climb. A method is presented for correlating average head and barrel temperatures and temperatures at individual points on the head and the barrel obtained on the test stand and in flight. The method is applied to the correlation and the comparison of data obtained on a number of service engines. Data are presented showing the variation of cylinder temperature with time when the power and the cooling pressure drop are suddenly changed.

  3. Presentation accuracy of the web revisited: animation methods in the HTML5 era.

    PubMed

    Garaizar, Pablo; Vadillo, Miguel A; López-de-Ipiña, Diego

    2014-01-01

Using the Web to run behavioural and social experiments quickly and efficiently has become increasingly popular in recent years, but there is some controversy about the suitability of using the Web for these objectives. Several studies have analysed the accuracy and precision of different web technologies in order to determine their limitations. This paper updates the extant evidence about presentation accuracy and precision of the Web and extends the study of the accuracy and precision in the presentation of multimedia stimuli to HTML5-based solutions, which were previously untested. The accuracy and precision in the presentation of visual content in classic web technologies are acceptable for use in online experiments, although some results suggest that these technologies should be used with caution in certain circumstances. Declarative animations based on CSS are the best alternative when animation intervals are above 50 milliseconds. The performance of procedural web technologies based on the HTML5 standard is similar to that of previous web technologies. These technologies are being progressively adopted by the scientific community and have promising futures, which makes their use advisable over more obsolete technologies.

  4. Practical security and privacy attacks against biometric hashing using sparse recovery

    NASA Astrophysics Data System (ADS)

    Topcu, Berkay; Karabat, Cagatay; Azadmanesh, Matin; Erdogan, Hakan

    2016-12-01

Biometric hashing is a cancelable biometric verification method that has received research interest recently. This method can be considered as a two-factor authentication method which combines a personal password (or secret key) with a biometric to obtain a secure binary template which is used for authentication. We present novel practical security and privacy attacks against biometric hashing when the attacker is assumed to know the user's password in order to quantify the additional protection due to biometrics when the password is compromised. We present four methods that can reconstruct a biometric feature and/or the image from a hash and one method that can find the closest biometric data (i.e., face image) from a database. Two of the reconstruction methods are based on 1-bit compressed sensing signal reconstruction for which the data acquisition scenario is very similar to biometric hashing. Previous literature introduced simple attack methods, but we show that compressed sensing recovery techniques enable substantially stronger attacks. In addition, we present privacy attacks which reconstruct a biometric image which resembles the original image. We quantify the performance of the attacks using detection error tradeoff curves and equal error rates under advanced attack scenarios. We show that conventional biometric hashing methods suffer from high security and privacy leaks under practical attacks, and we believe more advanced hash generation methods are necessary to avoid these attacks.

  5. A high-order multi-zone cut-stencil method for numerical simulations of high-speed flows over complex geometries

    NASA Astrophysics Data System (ADS)

    Greene, Patrick T.; Eldredge, Jeff D.; Zhong, Xiaolin; Kim, John

    2016-07-01

In this paper, we present a method for performing uniformly high-order direct numerical simulations of high-speed flows over arbitrary geometries. The method was developed with the goal of simulating and studying the effects of complex isolated roughness elements on the stability of hypersonic boundary layers. The simulations are carried out on Cartesian grids with the geometries imposed by a third-order cut-stencil method. A fifth-order hybrid weighted essentially non-oscillatory scheme was implemented to capture any steep gradients in the flow created by the geometries, and a third-order Runge-Kutta method is used for time advancement. A multi-zone refinement method was also utilized to provide extra resolution at locations with expected complex physics. The combination results in a globally fourth-order scheme in space and third order in time. Results confirming the method's high order of convergence are shown. Two-dimensional and three-dimensional test cases are presented and show good agreement with previous results. A simulation of Mach 3 flow over the logo of the Ubuntu Linux distribution is shown to demonstrate the method's capabilities for handling complex geometries. Results for Mach 6 wall-bounded flow over a three-dimensional cylindrical roughness element are also presented. The results demonstrate that the method is a promising tool for the study of hypersonic roughness-induced transition.

  6. Comparison of Five System Identification Algorithms for Rotorcraft Higher Harmonic Control

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    1998-01-01

This report presents an analysis and performance comparison of five system identification algorithms. The methods are presented in the context of identifying a frequency-domain transfer matrix for the higher harmonic control (HHC) of helicopter vibration. The five system identification algorithms include three previously proposed methods: (1) the weighted-least-squares-error approach (in moving-block format), (2) the Kalman filter method, and (3) the least-mean-squares (LMS) filter method. In addition, there are two new ones: (4) a generalized Kalman filter method and (5) a generalized LMS filter method. The generalized Kalman filter method and the generalized LMS filter method were derived as extensions of the classic methods to permit identification using more than one measurement per identification cycle. Simulation results are presented for conditions ranging from the ideal case of a stationary transfer matrix and no measurement noise to the more complex cases involving both measurement noise and transfer-matrix variation. Both open-loop identification and closed-loop identification were simulated. Closed-loop identification was more challenging than open-loop identification because of the decreasing signal-to-noise ratio as the vibration was reduced. The closed-loop simulation considered both local-model identification, with measured vibration feedback, and global-model identification, with feedback of the identified uncontrolled vibration. The algorithms were evaluated in terms of their accuracy, stability, convergence properties, computation speeds, and relative ease of implementation.
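The LMS filter method listed in this record can be illustrated on the transfer-matrix identification problem in a few lines. This sketch identifies a hypothetical 2x2 transfer matrix from noisy open-loop measurements; the true matrix, adaptation gain, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# True (unknown) 2x2 transfer matrix relating control inputs to vibration.
T_true = np.array([[0.8, -0.3],
                   [0.2,  0.5]])

# LMS identification: after each measurement cycle, nudge the estimate in
# the direction that reduces that cycle's prediction error.
T_hat = np.zeros((2, 2))
mu = 0.05                                           # adaptation gain
for _ in range(2000):
    u = rng.standard_normal(2)                      # probe control input
    z = T_true @ u + 0.01 * rng.standard_normal(2)  # noisy measured response
    e = z - T_hat @ u                               # prediction error
    T_hat += mu * np.outer(e, u)                    # LMS update

error = np.abs(T_hat - T_true).max()
```

Broadly speaking, the Kalman filter variants replace the fixed adaptation gain mu with a gain computed from an evolving error-covariance estimate, which is what makes them better suited to tracking a time-varying transfer matrix.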

  7. Estimation of Time-Varying Pilot Model Parameters

    NASA Technical Reports Server (NTRS)

    Zaal, Peter M. T.; Sweet, Barbara T.

    2011-01-01

Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior, wavelets were used to estimate the time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for the quantification of different aspects of human time-varying manual control. This paper presents two methods for the estimation of time-varying pilot model parameters: a two-step method using wavelets and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.
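The windowed estimation idea can be sketched in a toy setting: for Gaussian noise, maximum likelihood over a sliding window reduces to a windowed least-squares fit. The single time-varying gain below stands in for the pilot model parameters, and all signals are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

N = 1000
t = np.arange(N)
K_true = 1.0 + 0.5 * (t > 500)       # gain steps halfway through the run
u = rng.standard_normal(N)           # excitation signal
y = K_true * u                       # noise-free measurements

window = 100
K_hat = np.empty(N - window)
for i in range(N - window):
    uw, yw = u[i:i + window], y[i:i + window]
    # least-squares gain over the window (the ML estimate under Gaussian noise)
    K_hat[i] = uw @ yw / (uw @ uw)
```

The estimate lags the true step by up to one window length, which illustrates the trade-off the paper notes: windowed ML is robust but cannot detect fast parameter changes.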

  8. Taming Many-Parameter BSM Models with Bayesian Neural Networks

    NASA Astrophysics Data System (ADS)

    Kuchera, M. P.; Karbo, A.; Prosper, H. B.; Sanchez, A.; Taylor, J. Z.

    2017-09-01

    The search for physics Beyond the Standard Model (BSM) is a major focus of large-scale high energy physics experiments. One method is to look for specific deviations from the Standard Model that are predicted by BSM models. In cases where the model has a large number of free parameters, standard search methods become intractable due to computation time. This talk presents results using Bayesian Neural Networks, a supervised machine learning method, to enable the study of higher-dimensional models. The popular phenomenological Minimal Supersymmetric Standard Model was studied as an example of the feasibility and usefulness of this method. Graphics Processing Units (GPUs) are used to expedite the calculations. Cross-section predictions for 13 TeV proton collisions will be presented. My participation in the Conference Experience for Undergraduates (CEU) in 2004-2006 exposed me to the national and global significance of cutting-edge research. At the 2005 CEU, I presented work from the previous summer's SULI internship at Lawrence Berkeley Laboratory, where I learned to program while working on the Majorana Project. That work inspired me to follow a similar research path, which led me to my current work on computational methods applied to BSM physics.

  9. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population

    PubMed Central

    CARVALHO, Suzana Papile Maciel; BRITO, Liz Magalhães; de PAIVA, Luiz Airton Saavedra; BICUDO, Lucilene Arilho Ribeiro; CROSATO, Edgard Michel; de OLIVEIRA, Rogério Nogueira

    2013-01-01

Validation studies of physical anthropology methods in the different population groups are extremely important, especially in cases in which population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. Objective: This study aimed to estimate the gender of skeletons by application of the method of Oliveira, et al. (1995), previously used in a population sample from Northeast Brazil. Material and Methods: The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. Results: The results demonstrated that the application of the method of Oliveira, et al. (1995) in this population achieved very different outcomes between genders, with 100% accuracy for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of the measurement data for the population analyzed allowed accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. Conclusion: It was concluded that methods involving physical anthropology present a high rate of accuracy for human identification, easy application, low cost and simplicity; however, the methodologies must be validated for different populations due to differences in ethnic patterns, which are directly related to phenotypic aspects. In this specific case, the method of Oliveira, et al. (1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South), previous methodological adjustment is recommended, as demonstrated in this study. PMID:24037076

  10. Mapping loci influencing blood pressure in the Framingham pedigrees using model-free LOD score analysis of a quantitative trait.

    PubMed

    Knight, Jo; North, Bernard V; Sham, Pak C; Curtis, David

    2003-12-31

    This paper presents a method of performing model-free LOD-score based linkage analysis on quantitative traits. It is implemented in the QMFLINK program. The method is used to perform a genome screen on the Framingham Heart Study data. A number of markers that show some support for linkage in our study coincide substantially with those implicated in other linkage studies of hypertension. Although the new method needs further testing on additional real and simulated data sets we can already say that it is straightforward to apply and may offer a useful complementary approach to previously available methods for the linkage analysis of quantitative traits.

  11. Mapping loci influencing blood pressure in the Framingham pedigrees using model-free LOD score analysis of a quantitative trait

    PubMed Central

    Knight, Jo; North, Bernard V; Sham, Pak C; Curtis, David

    2003-01-01

    This paper presents a method of performing model-free LOD-score based linkage analysis on quantitative traits. It is implemented in the QMFLINK program. The method is used to perform a genome screen on the Framingham Heart Study data. A number of markers that show some support for linkage in our study coincide substantially with those implicated in other linkage studies of hypertension. Although the new method needs further testing on additional real and simulated data sets we can already say that it is straightforward to apply and may offer a useful complementary approach to previously available methods for the linkage analysis of quantitative traits. PMID:14975142
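For context on the LOD statistic used in the two records above, a classic two-point LOD score (not the model-free quantitative-trait statistic implemented in QMFLINK) compares the likelihood at an estimated recombination fraction against the null of free recombination, theta = 0.5; the counts below are invented:

```python
import math

def lod_score(recombinants, total, theta):
    """Two-point LOD score for an observed recombinant count,
    tested against the null of free recombination (theta = 0.5)."""
    non = total - recombinants
    logl = recombinants * math.log10(theta) + non * math.log10(1 - theta)
    logl0 = total * math.log10(0.5)
    return logl - logl0

# The maximum-likelihood theta is the observed recombination fraction
rec, n = 10, 100
theta_hat = rec / n
score = lod_score(rec, n, theta_hat)   # roughly 16: strong support for linkage
```

By convention a LOD above 3 is taken as evidence of linkage; model-free quantitative-trait methods replace the simple binomial likelihood with one built from trait values.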

  12. Computation of Pressurized Gas Bearings Using CE/SE Method

    NASA Technical Reports Server (NTRS)

    Cioc, Sorin; Dimofte, Florin; Keith, Theo G., Jr.; Fleming, David P.

    2003-01-01

The space-time conservation element and solution element (CE/SE) method is extended to compute compressible viscous flows in pressurized thin fluid films. This numerical scheme has previously been used successfully to solve a wide variety of compressible flow problems, including flows with large and small discontinuities. In this paper, the method is applied to calculate the pressure distribution in a hybrid gas journal bearing. The formulation of the problem is presented, including the modeling of the feeding system. The numerical results obtained are compared with experimental data. Good agreement between the computed results and the test data was obtained, thus validating the CE/SE method for such problems.

  13. The magnetofection method: using magnetic force to enhance gene delivery.

    PubMed

    Plank, Christian; Schillinger, Ulrike; Scherer, Franz; Bergemann, Christian; Rémy, Jean-Serge; Krötz, Florian; Anton, Martina; Lausier, Jim; Rosenecker, Joseph

    2003-05-01

    In order to enhance and target gene delivery we have previously established a novel method, termed magnetofection, which uses magnetic force acting on gene vectors that are associated with magnetic particles. Here we review the benefits, the mechanism and the potential of the method with regard to overcoming physical limitations to gene delivery. Magnetic particle chemistry and physics are discussed, followed by a detailed presentation of vector formulation and optimization work. While magnetofection does not necessarily improve the overall performance of any given standard gene transfer method in vitro, its major potential lies in the extraordinarily rapid and efficient transfection at low vector doses and the possibility of remotely controlled vector targeting in vivo.

  14. Fast focus estimation using frequency analysis in digital holography.

    PubMed

    Oh, Seungtaik; Hwang, Chi-Young; Jeong, Il Kwon; Lee, Sung-Keun; Park, Jae-Hyeung

    2014-11-17

A novel fast frequency-based method to estimate the focus distance of a digital hologram for a single object is proposed. The focus distance is computed by analyzing the distribution of intersections of smoothed rays. The smoothed rays are determined by the directions of energy flow, which are computed from the local spatial frequency spectrum based on the windowed Fourier transform. Our method therefore uses only the intrinsic frequency information of the optical field on the hologram and does not require sequential numerical reconstructions or the focus detection techniques of conventional photography, both of which are essential parts of previous methods. To show the effectiveness of our method, numerical results and analysis are presented as well.

  15. A novel genome signature based on inter-nucleotide distances profiles for visualization of metagenomic data

    NASA Astrophysics Data System (ADS)

    Xie, Xian-Hua; Yu, Zu-Guo; Ma, Yuan-Lin; Han, Guo-Sheng; Anh, Vo

    2017-09-01

There has been a growing interest in the visualization of metagenomic data. The present study focuses on the visualization of metagenomic data using inter-nucleotide distance profiles. We first convert the fragment sequences into inter-nucleotide distance profiles. Then we analyze these profiles by principal component analysis. Finally, the principal components are used to obtain a 2-D scatter plot of the fragments according to their source species. We name our method the inter-nucleotide distance profiles (INP) method. Our method is evaluated on three benchmark data sets used in previously published papers. Our results demonstrate that the INP method is an effective and efficient alternative for the visualization of metagenomic data.
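The pipeline can be sketched crudely as follows. The profile below is a made-up gap histogram per base (the authors' exact profile definition may differ), and the sequences are toy stand-ins for metagenomic fragments:

```python
import numpy as np

def inp_profile(seq, maxd=8):
    """Toy inter-nucleotide distance profile: for each base, the
    normalized histogram of gaps between consecutive occurrences."""
    feats = []
    for base in "ACGT":
        pos = [i for i, c in enumerate(seq) if c == base]
        gaps = np.diff(pos)
        hist = np.zeros(maxd)
        for g in gaps:
            hist[min(g, maxd) - 1] += 1       # clip long gaps into last bin
        feats.append(hist / max(len(gaps), 1))
    return np.concatenate(feats)

seqs = ["ACGTACGTACGTACGT", "AAAACCCCGGGGTTTT", "ACGGACGGACGGACGG"]
X = np.array([inp_profile(s) for s in seqs])

# PCA via SVD of the centered profile matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
coords = Xc @ Vt[:2].T        # 2-D coordinates for the scatter plot
```

Each fragment becomes one point in `coords`; plotting the points colored by source species gives the visualization the paper evaluates.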

  16. Gaze Estimation Method Using Analysis of Electrooculogram Signals and Kinect Sensor

    PubMed Central

    Tanno, Koichi

    2017-01-01

    A gaze estimation system is one of the communication methods for severely disabled people who cannot perform gestures and speech. We previously developed an eye tracking method using a compact and light electrooculogram (EOG) signal, but its accuracy is not very high. In the present study, we conducted experiments to investigate the EOG component strongly correlated with the change of eye movements. The experiments in this study are of two types: experiments to see objects only by eye movements and experiments to see objects by face and eye movements. The experimental results show the possibility of an eye tracking method using EOG signals and a Kinect sensor. PMID:28912800

  17. Signal Analysis Algorithms for Optimized Fitting of Nonresonant Laser Induced Thermal Acoustics Damped Sinusoids

    NASA Technical Reports Server (NTRS)

    Balla, R. Jeffrey; Miller, Corey A.

    2008-01-01

This study seeks a numerical algorithm which optimizes frequency precision for the damped sinusoids generated by the nonresonant LITA technique. It compares computed frequencies, frequency errors, and fit errors obtained using five primary signal analysis methods. Using variations on different algorithms within each primary method, results from 73 fits are presented. Best results are obtained using an autoregressive method. Compared to previous results using Prony's method, single-shot waveform frequencies are reduced by approx. 0.4% and frequency errors are reduced by a factor of approx. 20 at 303 K to approx. 0.1%. We explore the advantages of high waveform sample rates and the potential for measurements in low-density gases.
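The autoregressive idea can be illustrated with linear prediction: a noise-free damped cosine exactly satisfies a second-order recursion, and the characteristic roots of that recursion encode the frequency and decay rate. The signal parameters below are invented and unrelated to the LITA data:

```python
import numpy as np

fs = 10000.0                        # sample rate, Hz (hypothetical)
f_true, tau = 440.0, 0.01           # hypothetical frequency and decay time
n = np.arange(500)
x = np.exp(-n / (tau * fs)) * np.cos(2 * np.pi * f_true * n / fs)

# Fit an AR(2) linear-prediction model: x[k] = a1*x[k-1] + a2*x[k-2]
A = np.column_stack([x[1:-1], x[:-2]])
a1, a2 = np.linalg.lstsq(A, x[2:], rcond=None)[0]

# Roots of z^2 - a1*z - a2 are r*exp(+/- i*2*pi*f/fs)
roots = np.roots([1.0, -a1, -a2])
f_est = abs(np.angle(roots[0])) * fs / (2 * np.pi)
```

Because the model is exact for a clean damped sinusoid, the recovered frequency matches `f_true` to numerical precision; with noise, the variance of the estimate is what distinguishes the methods the study compares.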

  18. Kinematic precision of gear trains

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.

    1983-01-01

    Kinematic precision is affected by errors which are the result of either intentional adjustments or accidental defects in manufacturing and assembly of gear trains. A method for the determination of kinematic precision of gear trains is described. The method is based on the exact kinematic relations for the contact point motions of the gear tooth surfaces under the influence of errors. An approximate method is also explained. Example applications of the general approximate methods are demonstrated for gear trains consisting of involute (spur and helical) gears, circular arc (Wildhaber-Novikov) gears, and spiral bevel gears. Gear noise measurements from a helicopter transmission are presented and discussed with relation to the kinematic precision theory. Previously announced in STAR as N82-32733

  19. A comparison of numerical methods for the prediction of two-dimensional heat transfer in an electrothermal deicer pad. M.S. Thesis. Final Contractor Report

    NASA Technical Reports Server (NTRS)

    Wright, William B.

    1988-01-01

    Transient, numerical simulations of the deicing of composite aircraft components by electrothermal heating have been performed in a 2-D rectangular geometry. Seven numerical schemes and four solution methods were used to find the most efficient numerical procedure for this problem. The phase change in the ice was simulated using the Enthalpy method along with the Method for Assumed States. Numerical solutions illustrating deicer performance for various conditions are presented. Comparisons are made with previous numerical models and with experimental data. The simulation can also be used to solve a variety of other heat conduction problems involving composite bodies.
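The Enthalpy method tracks volumetric enthalpy instead of temperature, so the ice/water phase change needs no explicit front tracking: temperature is simply held at the melt point while latent heat is absorbed. A minimal 1-D explicit sketch (material values are textbook ice properties; the geometry and heating are invented, not the deicer configuration):

```python
import numpy as np

# 1-D explicit enthalpy-method sketch for melting ice (illustrative setup)
nx, dx, dt = 50, 1e-3, 0.01                  # cells, spacing (m), step (s)
rho, cp, k, L = 917.0, 2100.0, 2.2, 334e3    # density, heat cap., conductivity, latent heat
Tm = 0.0                                     # melt temperature, deg C

def temperature(H):
    """Recover temperature from volumetric enthalpy; T stays at Tm
    while latent heat is being absorbed (mushy zone)."""
    T = np.where(H < 0, H / (rho * cp), Tm)                          # solid
    return np.where(H > rho * L, (H - rho * L) / (rho * cp), T)      # liquid

H = np.full(nx, -rho * cp * 10.0)            # start as ice at -10 C
for _ in range(2000):
    T = temperature(H)
    T[0] = 20.0                              # heated boundary (Dirichlet)
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    H[1:-1] += dt * k * lap[1:-1]            # conduction drives enthalpy
```

The explicit step is stable here since `dt*k/(rho*cp*dx**2)` is well below 0.5; the implicit schemes the thesis compares relax that restriction.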

  20. Detecting Corresponding Vertex Pairs between Planar Tessellation Datasets with Agglomerative Hierarchical Cell-Set Matching.

    PubMed

    Huh, Yong; Yu, Kiyun; Park, Woojin

    2016-01-01

This paper proposes a method to detect corresponding vertex pairs between planar tessellation datasets. Applying agglomerative hierarchical co-clustering, the method finds geometrically corresponding cell-set pairs, from which corresponding vertex pairs are detected. Then, the map transformation is performed with the vertex pairs. Since these pairs are detected independently for each corresponding cell-set pair, the method presents improved matching performance regardless of locally uneven positional discrepancies between datasets. The proposed method was applied to complicated synthetic cell datasets assumed as a cadastral map and a topographical map, and showed an improved result with an F-measure of 0.84, compared to a previous matching method with an F-measure of 0.48.
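The F-measure quoted above is the harmonic mean of precision and recall over the detected pairs; a minimal computation with invented counts:

```python
def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall for matched-pair evaluation."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# hypothetical counts: 42 correct pairs, 8 spurious, 8 missed
score = f_measure(tp=42, fp=8, fn=8)   # precision = recall = 0.84
```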

  1. N-of-1-pathways MixEnrich: advancing precision medicine via single-subject analysis in discovering dynamic changes of transcriptomes.

    PubMed

    Li, Qike; Schissler, A Grant; Gardeux, Vincent; Achour, Ikbel; Kenost, Colleen; Berghout, Joanne; Li, Haiquan; Zhang, Hao Helen; Lussier, Yves A

    2017-05-24

Transcriptome analytic tools are commonly used across patient cohorts to develop drugs and predict clinical outcomes. However, as precision medicine pursues more accurate and individualized treatment decisions, these methods are not designed to address single-patient transcriptome analyses. We previously developed and validated the N-of-1-pathways framework using two methods, Wilcoxon and Mahalanobis Distance (MD), for personal transcriptome analysis derived from a pair of samples of a single patient. Although both methods uncover concordantly dysregulated pathways, they are not designed to detect dysregulated pathways with up- and down-regulated genes (bidirectional dysregulation) that are ubiquitous in biological systems. We developed N-of-1-pathways MixEnrich, a mixture model followed by a gene set enrichment test, to uncover bidirectional and concordantly dysregulated pathways one patient at a time. We assess its accuracy in a comprehensive simulation study and in an RNA-Seq data analysis of head and neck squamous cell carcinomas (HNSCCs). In the presence of bidirectionally dysregulated genes in the pathway or of high background noise, MixEnrich substantially outperforms previous single-subject transcriptome analysis methods, both in the simulation study and the HNSCC data analysis (ROC curves; higher true positive rates; lower false positive rates). Bidirectional and concordant dysregulated pathways uncovered by MixEnrich in each patient largely overlapped with the quasi-gold standard compared to other single-subject and cohort-based transcriptome analyses. The greater performance of MixEnrich presents an advantage over previous methods in meeting the promise of providing accurate personal transcriptome analysis to support precision medicine at point of care.

  2. Event Networks and the Identification of Crime Pattern Motifs

    PubMed Central

    2015-01-01

    In this paper we demonstrate the use of network analysis to characterise patterns of clustering in spatio-temporal events. Such clustering is of both theoretical and practical importance in the study of crime, and forms the basis for a number of preventative strategies. However, existing analytical methods show only that clustering is present in data, while offering little insight into the nature of the patterns present. Here, we show how the classification of pairs of events as close in space and time can be used to define a network, thereby generalising previous approaches. The application of graph-theoretic techniques to these networks can then offer significantly deeper insight into the structure of the data than previously possible. In particular, we focus on the identification of network motifs, which have clear interpretation in terms of spatio-temporal behaviour. Statistical analysis is complicated by the nature of the underlying data, and we provide a method by which appropriate randomised graphs can be generated. Two datasets are used as case studies: maritime piracy at the global scale, and residential burglary in an urban area. In both cases, the same significant 3-vertex motif is found; this result suggests that incidents tend to occur not just in pairs, but in fact in larger groups within a restricted spatio-temporal domain. In the 4-vertex case, different motifs are found to be significant in each case, suggesting that this technique is capable of discriminating between clustering patterns at a finer granularity than previously possible. PMID:26605544

  3. [Alkaline phosphatase activity in blood group B or O secretors is fluctuated by the dinner intake of previous night].

    PubMed

    Matsushita, Makoto; Harajiri, Sanae; Tabata, Shiori; Yukimasa, Nobuyasu; Muramoto, Yoshimi; Komoda, Tsugikazu

    2013-04-01

We previously reported that two intestinal alkaline phosphatase (IAP) isoforms, high molecular mass IAP (HIAP) and normal molecular mass IAP (NIAP), appear in healthy serum with our Triton-PAGE method for determination of ALP isozymes. In addition, HIAP is chiefly present in blood group B or O secretors, and a large amount of NIAP is secreted into the circulation after a high-fat meal in blood group B or O secretors. In the present paper, we investigated the relationship between alkaline phosphatase (ALP) activity in the early morning, with the patient in a fasted state, and the dinner intake of the previous night. Two types of dinner were prepared: a low-fat meal (520 kcal) and a high-fat meal (1,040 kcal). Subjects ate the two types of dinner on different days. The mean ALP activities at 14 h after high-fat meal ingestion in blood group B or O secretors (n=14), by the JSCC and IFCC methods, were 8.8% and 5.2% higher, respectively, than those at 14 h after low-fat meal ingestion. The increases in ALP activity after the high-fat meal relative to the low-fat meal were nearly identical to the increases in NIAP activity. These results suggest that a high-fat meal is more likely to affect early-morning fasted ALP activity in blood group B or O secretors.

  4. Exploiting Task Constraints for Self-Calibrated Brain-Machine Interface Control Using Error-Related Potentials

    PubMed Central

    Iturrate, Iñaki; Grizou, Jonathan; Omedes, Jason; Oudeyer, Pierre-Yves; Lopes, Manuel; Montesano, Luis

    2015-01-01

    This paper presents a new approach for self-calibration BCI for reaching tasks using error-related potentials. The proposed method exploits task constraints to simultaneously calibrate the decoder and control the device, by using a robust likelihood function and an ad-hoc planner to cope with the large uncertainty resulting from the unknown task and decoder. The method has been evaluated in closed-loop online experiments with 8 users using a previously proposed BCI protocol for reaching tasks over a grid. The results show that it is possible to have a usable BCI control from the beginning of the experiment without any prior calibration. Furthermore, comparisons with simulations and previous results obtained using standard calibration hint that both the quality of recorded signals and the performance of the system were comparable to those obtained with a standard calibration approach. PMID:26131890

  5. Improved determination of particulate absorption from combined filter pad and PSICAM measurements.

    PubMed

    Lefering, Ina; Röttgers, Rüdiger; Weeks, Rebecca; Connor, Derek; Utschig, Christian; Heymann, Kerstin; McKee, David

    2016-10-31

    Filter pad light absorption measurements are subject to two major sources of experimental uncertainty: the so-called pathlength amplification factor, β, and scattering offsets, o, for which previous null-correction approaches are limited by recent observations of non-zero absorption in the near infrared (NIR). A new filter pad absorption correction method is presented here which uses linear regression against point-source integrating cavity absorption meter (PSICAM) absorption data to simultaneously resolve both β and the scattering offset. The PSICAM has previously been shown to provide accurate absorption data, even in highly scattering waters. Comparisons of PSICAM and filter pad particulate absorption data reveal linear relationships that vary on a sample by sample basis. This regression approach provides significantly improved agreement with PSICAM data (3.2% RMS%E) than previously published filter pad absorption corrections. Results show that direct transmittance (T-method) filter pad absorption measurements perform effectively at the same level as more complex geometrical configurations based on integrating cavity measurements (IS-method and QFT-ICAM) because the linear regression correction compensates for the sensitivity to scattering errors in the T-method. This approach produces accurate filter pad particulate absorption data for wavelengths in the blue/UV and in the NIR where sensitivity issues with PSICAM measurements limit performance. The combination of the filter pad absorption and PSICAM is therefore recommended for generating full spectral, best quality particulate absorption data as it enables correction of multiple errors sources across both measurements.
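The regression correction can be sketched with synthetic spectra: if filter-pad values differ from the true absorption by a multiplicative amplification factor and an additive scattering offset, a straight-line fit against PSICAM values recovers both at once. All numbers below are invented, and the simple linear model is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical "true" particulate absorption (PSICAM) and a filter-pad
# measurement with a pathlength amplification factor and scattering offset
a_psicam = np.linspace(0.02, 0.5, 40)            # 1/m, illustrative values
beta_true, offset_true = 2.0, 0.05
a_filter = beta_true * a_psicam + offset_true + 0.001 * rng.standard_normal(40)

# Linear regression of filter-pad against PSICAM values simultaneously
# yields the amplification factor (slope) and the offset (intercept)
beta_hat, offset_hat = np.polyfit(a_psicam, a_filter, 1)

# Corrected filter-pad absorption
a_corrected = (a_filter - offset_hat) / beta_hat
```

Once `beta_hat` and `offset_hat` are fixed from the overlapping spectral range, the same correction can be applied in the blue/UV and NIR, where PSICAM sensitivity is limited.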

  6. Pretreatment method for immunoassay of polychlorinated biphenyls in transformer oil using multilayer capillary column and microfluidic liquid-liquid partitioning.

    PubMed

    Aota, Arata; Date, Yasumoto; Terakado, Shingo; Ohmura, Naoya

    2013-01-01

Polychlorinated biphenyls (PCBs) are persistent organic pollutants that are present in the insulating oil inside a large number of transformers. To aid in eliminating PCB-contaminated transformers, PCBs in oil need to be measured using a rapid and cost-effective analytical method. We previously reported a pretreatment method for the immunoassay of PCBs in oil using a large-scale multilayer column and a microchip with multiple microrecesses, which permitted concentrated solvent extraction. In this paper, we report on a more rapid and facile pretreatment method, without an evaporation process, achieved by improving the column and the microchip. In the miniaturized column, the decomposition and separation of oil were completed in 2 min. PCBs can be eluted from the capillary column at concentrations seven times higher than those from the previous column. The total volume of the microrecesses was increased by improving the microrecess structure, enabling extraction of four times the amount of PCBs achieved with the previous system. By interfacing the capillary column with the improved microchip, PCBs in the eluate from the column were extracted into dimethyl sulfoxide in the microrecesses with high enrichment and without the need for evaporation. Pretreatment was completed within 20 min. The pretreated oil was analyzed using a flow-based kinetic exclusion immunoassay. The limit of detection of PCBs in oil was 0.15 mg kg(-1), which satisfies the criterion set in Japan of 0.5 mg kg(-1).

  7. Simulating the universe(s) III: observables for the full bubble collision spacetime

    NASA Astrophysics Data System (ADS)

    Johnson, Matthew C.; Wainwright, Carroll L.; Aguirre, Anthony; Peiris, Hiranya V.

    2016-07-01

    This is the third paper in a series establishing a quantitative relation between inflationary scalar field potential landscapes and the relic perturbations left by the collision between bubbles produced during eternal inflation. We introduce a new method for computing cosmological observables from numerical relativity simulations of bubble collisions in one space and one time dimension. This method tiles comoving hypersurfaces with locally-perturbed Friedmann-Robertson-Walker coordinate patches. The method extends previous work, which was limited to the spacetime region just inside the future light cone of the collision, and allows us to explore the full bubble-collision spacetime. We validate our new methods against previous work, and present a full set of predictions for the comoving curvature perturbation and local negative spatial curvature produced by identical and non-identical bubble collisions, in single scalar field models of eternal inflation. In both collision types, there is a non-zero contribution to the spatial curvature and cosmic microwave background quadrupole. Some collisions between non-identical bubbles excite wall modes, giving extra structure to the predicted temperature anisotropies. We comment on the implications of our results for future observational searches. For non-identical bubble collisions, we also find that the surfaces of constant field can readjust in the presence of a collision to produce spatially infinite sections that become nearly homogeneous deep into the region affected by the collision. Contrary to previous assumptions, this is true even in the bubble into which the domain wall is accelerating.

  8. Torsional anharmonicity in the conformational thermodynamics of flexible molecules

    NASA Astrophysics Data System (ADS)

    Miller, Thomas F., III; Clary, David C.

    We present an algorithm for calculating the conformational thermodynamics of large, flexible molecules that combines ab initio electronic structure theory calculations with a torsional path integral Monte Carlo (TPIMC) simulation. The new algorithm overcomes the previous limitations of the TPIMC method by including the thermodynamic contributions of non-torsional vibrational modes and by affordably incorporating the ab initio calculation of conformer electronic energies, and it improves the conventional ab initio treatment of conformational thermodynamics by accounting for the anharmonicity of the torsional modes. Using previously published ab initio results and new TPIMC calculations, we apply the algorithm to the conformers of the adrenaline molecule.

  9. Vibrational excitation and vibrationally resolved electronic excitation cross sections of positron-H2 scattering

    NASA Astrophysics Data System (ADS)

    Zammit, Mark; Fursa, Dmitry; Savage, Jeremy; Bray, Igor

    2016-09-01

    Vibrational excitation and vibrationally resolved electronic excitation cross sections of positron-H2 scattering have been calculated using the single-centre molecular convergent close-coupling (CCC) method. The adiabatic-nuclei approximation was utilized to model the above scattering processes and obtain the vibrationally resolved positron-H2 scattering length. As previously demonstrated, the CCC results are converged and accurately account for virtual and physical positronium formation by coupling basis functions with large orbital angular momentum. Here vibrationally resolved integrated and differential cross sections are presented over a wide energy range and compared with previous calculations and available experiments. Los Alamos National Laboratory and Curtin University.

  10. Dynamic thermal expansivity of liquids near the glass transition.

    PubMed

    Niss, Kristine; Gundermann, Ditte; Christensen, Tage; Dyre, Jeppe C

    2012-04-01

    Based on previous works on polymers by Bauer et al. [Phys. Rev. E 61, 1755 (2000)], this paper describes a capacitative method for measuring the dynamical expansion coefficient of a viscous liquid. Data are presented for the glass-forming liquid tetramethyl tetraphenyl trisiloxane (DC704) in the ultraviscous regime. Compared to the method of Bauer et al., the dynamical range has been extended by making time-domain experiments and by making very small and fast temperature steps. The modeling of the experiment presented in this paper includes the situation in which the capacitor is not full because the liquid contracts when cooling from room temperature down to around the glass-transition temperature, which is relevant when measuring on a molecular liquid rather than a polymer.

  11. A simple method to derive bounds on the size and to train multilayer neural networks

    NASA Technical Reports Server (NTRS)

    Sartori, Michael A.; Antsaklis, Panos J.

    1991-01-01

A new derivation is presented for the bounds on the size of a multilayer neural network required to exactly implement an arbitrary training set; namely, the training set can be implemented with zero error with two layers and with the number of hidden-layer neurons n1 satisfying n1 ≥ p - 1, where p is the number of training patterns. The derivation does not require the separation of the input space by particular hyperplanes, as in previous derivations. The weights for the hidden layer can be chosen almost arbitrarily, and the weights for the output layer can be found by solving n1 + 1 linear equations. The method presented exactly solves (M), the multilayer neural network training problem, for any arbitrary training set.
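The construction can be sketched numerically: choose the hidden weights at random, then solve a linear system for the output weights. The activation function, dimensions, and random data below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical training set: p patterns with d inputs, scalar targets
p, d = 20, 5
X = rng.standard_normal((p, d))
y = rng.standard_normal(p)

n1 = p - 1                                  # hidden-layer size from the bound
W = 0.5 * rng.standard_normal((d, n1))      # hidden weights chosen (almost) arbitrarily
b = rng.standard_normal(n1)
H = np.tanh(X @ W + b)                      # hidden-layer outputs, p x n1

# Output weights from n1 + 1 linear equations: [H, 1] v = y
A = np.column_stack([H, np.ones(p)])
v, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ v                               # reproduces the targets exactly
```

Because `A` is generically full rank for random hidden weights, the linear solve reproduces the training targets to numerical precision, matching the zero-error claim.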

  12. Effect of Cues to Increase Sound Pressure Level on Respiratory Kinematic Patterns during Connected Speech

    ERIC Educational Resources Information Center

    Huber, Jessica E.

    2007-01-01

    Purpose: This study examined the response of the respiratory system to 3 cues used to elicit increased vocal loudness to determine whether the effects of cueing, shown previously in sentence tasks, were present in connected speech tasks and to describe differences among tasks. Method: Fifteen young men and 15 young women produced a 2-paragraph…

  13. Changes in Acoustic Characteristics of the Voice across the Life Span: Measures from Individuals 4-93 Years of Age

    ERIC Educational Resources Information Center

    Stathopoulos, Elaine T.; Huber, Jessica E.; Sussman, Joan E.

    2011-01-01

    Purpose: The purpose of the present investigation was to examine acoustic voice changes across the life span. Previous voice production investigations used small numbers of participants, had limited age ranges, and produced contradictory results. Method: Voice recordings were made from 192 male and female participants 4-93 years of age. Acoustic…

  14. Creating and Sustaining Secondary Schools' Success: Sandfields, Cwmtawe, and the Neath-Port Talbot Local Authority's High Reliability Schools Reform

    ERIC Educational Resources Information Center

    Stringfield, Sam; Reynolds, David; Schaffer, Eugene

    2016-01-01

    This chapter presents data from a 15-year, mixed-methods school improvement effort. The High Reliability Schools (HRS) reform made use of previous research on school effects and on High Reliability Organizations (HROs). HROs are organizations in various parts of our cultures that are required to operate successfully "the first time, every…

  15. Spinning of Fibers from Aqueous Solutions

    DTIC Science & Technology

    2003-08-01

    recombinant silk product BioSteel. Publications, patents and presentations: 1. Arcidiacono, S., et al., Purification and characterization of recombinant... ABSTRACT: Previous funding supporting this research focused primarily on development of the aqueous-based method for processing silk into spin solutions. Much... of this effort consisted of production of recombinant silk protein in bacterial and yeast expression systems. In spite of the small quantities

  16. Integrated Photonics Research Topical Meeting (1993)

    DTIC Science & Technology

    1994-06-01

    DMD Time Domain Methods... IME Photonic Circuits and Lightwave Reception... index change near the band edge using a small interference-ellipsometry bridge and presented several results of refractive index change Δn [5]. In... interference-ellipsometry bridge at the photon energies near Eg, especially E > Eg, and compared to previous theories. [1] Manning, R. Olshansky, and C. B. Su

  17. Science Education as Public and Social Wealth: The Notion of Citizenship from a European Perspective

    ERIC Educational Resources Information Center

    Siatras, Anastasios; Koumaras, Panagiotis

    2013-01-01

    In this paper, (a) we present a framework for developing science content (i.e., science concepts, scientific methods, scientific mindset, and problem-solving strategies for socio-scientific issues) used to design the new Cypriot science curriculum aiming at ensuring a democratic and human society, (b) we use the previous framework to explore the…

  18. An improved semi-implicit method for structural dynamics analysis

    NASA Technical Reports Server (NTRS)

    Park, K. C.

    1982-01-01

    A semi-implicit algorithm is presented for direct time integration of the structural dynamics equations. The algorithm avoids the factoring of the implicit difference solution matrix and mitigates the unacceptable accuracy losses which plagued previous semi-implicit algorithms. This substantial accuracy improvement is achieved by augmenting the solution matrix with two simple diagonal matrices of the order of the integration truncation error.
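    To make the "avoids the factoring of the implicit difference solution matrix" idea concrete: each implicit time step leads to a linear system in the new accelerations, and a semi-implicit flavour replaces the factored solve with operations on a diagonal approximation. The sketch below is a generic illustration of that trade-off (a Jacobi-style diagonal sweep on a toy spring-mass system), not Park's algorithm; the matrices, step size, and iteration count are assumptions for the demo.

    ```python
    import numpy as np

    # Toy 3-DOF spring-mass chain: an implicit (Newmark-type) step leads
    # to a linear system  (M + c*K) a = r  with c ~ dt^2 / 4.
    M = np.diag([2.0, 1.0, 1.0])
    K = np.array([[ 2., -1.,  0.],
                  [-1.,  2., -1.],
                  [ 0., -1.,  2.]])
    dt = 0.05
    c = dt**2 / 4.0
    A = M + c * K
    r = np.array([1.0, 0.0, -1.0])       # stand-in right-hand side

    # Fully implicit step: factor and solve the full matrix.
    a_direct = np.linalg.solve(A, r)

    # Semi-implicit flavour: never factor A.  Use only its diagonal D and
    # sweep a few Jacobi iterations; this converges here because A is
    # strongly diagonally dominant for small dt.
    D = np.diag(A)
    a = r / D                            # first guess: diagonal solve
    for _ in range(20):
        a = (r - (A @ a - D * a)) / D    # Jacobi update with off-diagonals

    print(np.max(np.abs(a - a_direct)))  # tiny gap vs. the factored solve
    ```

    The accuracy question the abstract raises lives exactly in that gap: how well the cheap diagonal-based update tracks the factored implicit solution as the step size grows.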

  19. Use of the forest vegetation simulator to quantify disturbance activities in state and transition models

    Treesearch

    Reuben Weisz; Don Vandendriesche

    2012-01-01

    The Forest Vegetation Simulator (FVS) has been used to provide rates of natural growth transitions under endemic conditions for use in State and Transition Models (STMs). This process has previously been presented. This paper expands on that work by describing the methods used to capture resultant vegetation states following disturbance activities, whether of natural causes...
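    The STM machinery the abstract refers to is, at its core, a Markov-style projection: each vegetation state has annual probabilities of growing into the next state or being reset by disturbance. A minimal sketch of that structure, with made-up illustrative rates (not FVS output):

    ```python
    import numpy as np

    # Three vegetation states: 0 = open, 1 = mid-seral, 2 = closed canopy.
    # Rows are current state, columns next state; the growth entries are
    # the kind of rates an FVS projection could supply, and the returns
    # to state 0 stand in for a disturbance schedule.  Values are
    # hypothetical placeholders for illustration.
    P = np.array([[0.90, 0.10, 0.00],    # open   -> mid-seral growth
                  [0.02, 0.93, 0.05],    # mid    -> closed, rare reset
                  [0.04, 0.00, 0.96]])   # closed -> reset by disturbance
    assert np.allclose(P.sum(axis=1), 1.0)   # each row is a distribution

    # Project the landscape's state distribution 100 years forward.
    state = np.array([1.0, 0.0, 0.0])    # start: all area in 'open'
    for _ in range(100):
        state = state @ P
    print(state)                          # long-run share of each state
    ```

    Quantifying disturbance activities then amounts to calibrating the off-diagonal "reset" entries of the matrix from simulated or observed post-disturbance states.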

  20. Prediction of Future High Caries Increments for Children in a School Dental Service and in Private Practice.

    ERIC Educational Resources Information Center

    Imfeld, Thomas N.; And Others

    1995-01-01

    A method for predicting high dental caries increments for children, based on previous research, is presented. Three clinical findings were identified as predictors: number of sound primary molars, number of discolored pits/fissures on first permanent molars, and number of buccal and lingual smooth surfaces of first permanent molars with white…
