Sample records for current analysis methods

  1. Assessment of current AASHTO LRFD methods for static pile capacity analysis in Rhode Island soils.

    DOT National Transportation Integrated Search

    2013-07-01

    This report presents an assessment of current AASHTO LRFD methods for static pile capacity analysis in Rhode Island soils. Current static capacity methods and associated resistance factors are based on pile load test data in sands and clays. Some...

  2. PROOF OF CONCEPT FOR A HUMAN RELIABILITY ANALYSIS METHOD FOR HEURISTIC USABILITY EVALUATION OF SOFTWARE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; David I. Gertman; Jeffrey C. Joe

    2005-09-01

    An ongoing issue within human-computer interaction (HCI) is the need for simplified or “discount” methods. The current economic slowdown has necessitated innovative methods that are results driven and cost effective. The myriad methods of design and usability are currently being cost-justified, and new techniques are actively being explored that meet current budgets and needs. Recent efforts in human reliability analysis (HRA) are highlighted by the ten-year development of the Standardized Plant Analysis Risk HRA (SPAR-H) method. The SPAR-H method has been used primarily for determining human-centered risk at nuclear power plants. The SPAR-H method, however, shares task analysis underpinnings with HCI. Despite this methodological overlap, there is currently no HRA approach deployed in heuristic usability evaluation. This paper presents an extension of the existing SPAR-H method to be used as part of heuristic usability evaluation in HCI.

  3. Motor current signature analysis method for diagnosing motor operated devices

    DOEpatents

    Haynes, Howard D.; Eissenberg, David M.

    1990-01-01

    A motor current noise signature analysis method and apparatus for remotely monitoring the operating characteristics of an electric motor-operated device such as a motor-operated valve. Frequency domain signal analysis techniques are applied to a conditioned motor current signal to distinctly identify various operating parameters of the motor driven device from the motor current signature. The signature may be recorded and compared with subsequent signatures to detect operating abnormalities and degradation of the device. This diagnostic method does not require special equipment to be installed on the motor-operated device, and the current sensing may be performed at remote control locations, e.g., where the motor-operated devices are used in inaccessible or hostile environments.
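    A minimal sketch of the frequency-domain idea described above, on synthetic data (the sampling rate, supply frequency, and load-modulation frequency are illustrative assumptions, not values from the patent):

    ```python
    import numpy as np

    # Synthetic stand-in for a conditioned motor current signal: a 60 Hz
    # supply component with a weak 8 Hz load-induced amplitude modulation.
    fs = 2000                                   # sampling rate (Hz), assumed
    t = np.arange(0, 2.0, 1 / fs)
    current = np.sin(2 * np.pi * 60 * t) * (1 + 0.02 * np.sin(2 * np.pi * 8 * t))

    # Frequency-domain signature: windowed magnitude spectrum of the current.
    spectrum = np.abs(np.fft.rfft(current * np.hanning(t.size)))
    freqs = np.fft.rfftfreq(t.size, 1 / fs)

    # Load-induced modulation appears as sidebands around the supply line.
    band = (freqs > 45) & (freqs < 75)
    top = freqs[band][np.argsort(spectrum[band])[-3:]]
    print("dominant components near the supply frequency (Hz):", np.sort(top))
    ```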

  4. Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems

    NASA Technical Reports Server (NTRS)

    Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.

    2005-01-01

    The current standards for handling uncertainty in control systems use interval bounds to define the uncertain parameters. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods of μ-analysis can lead to overly conservative controller design. With these methods, worst case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined, showing the strong areas of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method developed is applied to classical response analysis as well as analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the mean and variance of responses' cumulative distribution functions are shown. Results of the probabilistic analysis of a missile pitch control system, and a non-collocated mass spring system, show the added information provided by this hybrid analysis.
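    A minimal sketch of the probabilistic alternative to interval bounds, assuming a second-order SISO plant with normally distributed parameters (all distributions and thresholds are illustrative, not taken from the report):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10000

    # Uncertain plant parameters described by probability distributions
    # rather than interval bounds (all values are illustrative assumptions).
    wn = rng.normal(10.0, 0.5, n)                          # natural frequency (rad/s)
    zeta = np.clip(rng.normal(0.3, 0.05, n), 0.01, 0.99)   # damping ratio

    # Closed-form step-response metrics of the underdamped second-order system.
    overshoot = 100 * np.exp(-np.pi * zeta / np.sqrt(1 - zeta ** 2))  # percent
    settling = 4.0 / (zeta * wn)                            # 2% settling time (s)

    # Likelihood information that an interval-bound (worst-case) analysis
    # cannot provide.
    print("overshoot: mean %.1f%%, std %.1f%%" % (overshoot.mean(), overshoot.std()))
    print("P(settling time > 1.5 s) = %.3f" % (settling > 1.5).mean())
    ```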

  5. Nodal Analysis Optimization Based on the Use of Virtual Current Sources: A Powerful New Pedagogical Method

    ERIC Educational Resources Information Center

    Chatzarakis, G. E.

    2009-01-01

    This paper presents a new pedagogical method for nodal analysis optimization based on the use of virtual current sources, applicable to any linear electric circuit (LEC), regardless of its complexity. The proposed method leads to straightforward solutions, mostly arrived at by inspection. Furthermore, the method is easily adapted to computer…
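    For context, ordinary nodal analysis reduces a linear circuit to the matrix equation G·v = i; a small worked instance (a hypothetical two-node circuit, not the paper's virtual-current-source formulation) is:

    ```python
    import numpy as np

    # Nodal analysis: G @ v = i, with G the node conductance matrix and i
    # the injected source currents. Hypothetical circuit: R1 = 2 ohm from
    # node 1 to ground, R2 = 4 ohm between nodes 1 and 2, R3 = 8 ohm from
    # node 2 to ground, and a 1 A source injected into node 1.
    G = np.array([[1/2 + 1/4, -1/4],
                  [-1/4,      1/4 + 1/8]])
    i = np.array([1.0, 0.0])

    v = np.linalg.solve(G, i)
    print("node voltages (V):", v)   # v[0] at node 1, v[1] at node 2
    ```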

  6. Force analysis of magnetic bearings with power-saving controls

    NASA Technical Reports Server (NTRS)

    Johnson, Dexter; Brown, Gerald V.; Inman, Daniel J.

    1992-01-01

    Most magnetic bearing control schemes use a bias current with a superimposed control current to linearize the relationship between the control current and the force it delivers. For most operating conditions, the existence of the bias current requires more power than alternative methods that do not use conventional bias. Two such methods are examined which diminish or eliminate bias current. In the typical bias control scheme it is found that for a harmonic control force command into a voltage limited transconductance amplifier, the desired force output is obtained only up to certain combinations of force amplitude and frequency. Above these values, the force amplitude is reduced and a phase lag occurs. The power saving alternative control schemes typically exhibit such deficiencies at even lower command frequencies and amplitudes. To assess the severity of these effects, a time history analysis of the force output is performed for the bias method and the alternative methods. Results of the analysis show that the alternative approaches may be viable. The various control methods examined were mathematically modeled using nondimensionalized variables to facilitate comparison of the various methods.
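    The role of the bias current can be made concrete with the textbook opposed-electromagnet force relation (a standard derivation, not reproduced from the paper): for pole pairs carrying i_b + i_c and i_b - i_c across a gap g,

    ```latex
    F \;=\; k\,\frac{(i_b + i_c)^2}{g^2} \;-\; k\,\frac{(i_b - i_c)^2}{g^2}
      \;=\; \frac{4 k\, i_b}{g^2}\, i_c .
    ```

    The net force is linear in the control current i_c at a fixed gap, which is why removing or reducing i_b saves the standby power dissipated by the bias but sacrifices this built-in linearity.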

  7. Art, Meet Chemistry; Chemistry, Meet Art: Case Studies, Current Literature, and Instrumental Methods Combined to Create a Hands-On Experience for Nonmajors and Instrumental Analysis Students

    ERIC Educational Resources Information Center

    Nivens, Delana A.; Padgett, Clifford W.; Chase, Jeffery M.; Verges, Katie J.; Jamieson, Deborah S.

    2010-01-01

    Case studies and current literature are combined with spectroscopic analysis to provide a unique chemistry experience for art history students and to provide a unique inquiry-based laboratory experiment for analytical chemistry students. The XRF analysis method was used to demonstrate to nonscience majors (art history students) a powerful…

  8. Gait Analysis Using Wearable Sensors

    PubMed Central

    Tao, Weijun; Liu, Tao; Zheng, Rencheng; Feng, Hutian

    2012-01-01

    Gait analysis using wearable sensors is an inexpensive, convenient, and efficient manner of providing useful information for multiple health-related applications. As a clinical tool applied in the rehabilitation and diagnosis of medical conditions and sport activities, gait analysis using wearable sensors shows great prospects. The current paper reviews available wearable sensors and ambulatory gait analysis methods based on the various wearable sensors. After an introduction of the gait phases, the principles and features of wearable sensors used in gait analysis are provided. The gait analysis methods based on wearable sensors are divided into gait kinematics, gait kinetics, and electromyography. Studies on the current methods are reviewed, and applications in sports, rehabilitation, and clinical diagnosis are summarized separately. With the development of sensor technology and analysis methods, gait analysis using wearable sensors is expected to play an increasingly important role in clinical applications. PMID:22438763

  9. Demodulation circuit for AC motor current spectral analysis

    DOEpatents

    Hendrix, Donald E.; Smith, Stephen F.

    1990-12-18

    A motor current analysis method for the remote, noninvasive inspection of electric motor-operated systems. Synchronous amplitude demodulation and phase demodulation circuits are used singly and in combination along with a frequency analyzer to produce improved spectral analysis of load-induced frequencies present in the electric current flowing in a motor-driven system.

  10. Exploitation of SAR data for measurement of ocean currents and wave velocities

    NASA Technical Reports Server (NTRS)

    Shuchman, R. A.; Lyzenga, D. R.; Klooster, A., Jr.

    1981-01-01

    Methods of extracting information on ocean currents and wave orbital velocities from SAR data by an analysis of the Doppler frequency content of the data are discussed. The theory and data analysis methods are discussed, and results are presented for both aircraft and satellite (SEASAT) data sets. A method of measuring the phase velocity of a gravity wave field is also described. This method uses the shift in position of the wave crests on two images generated from the same data set using two separate Doppler bands. Results of the current measurements are presented for 11 aircraft data sets and 4 SEASAT data sets.

  11. An optimized rapid bisulfite conversion method with high recovery of cell-free DNA.

    PubMed

    Yi, Shaohua; Long, Fei; Cheng, Juanbo; Huang, Daixin

    2017-12-19

    Methylation analysis of cell-free DNA is an encouraging tool for tumor diagnosis, monitoring and prognosis. Sensitivity of methylation analysis is a very important issue because of the tiny amounts of cell-free DNA available in plasma. Most current methods of DNA methylation analysis are based on the difference in bisulfite-mediated deamination between cytosine and 5-methylcytosine. However, the recovery of bisulfite-converted DNA based on current methods is very poor for the methylation analysis of cell-free DNA. We optimized a rapid method for the crucial steps of bisulfite conversion with high recovery of cell-free DNA. A rapid deamination step and alkaline desulfonation were combined with the purification of DNA on a silica column. The conversion efficiency and recovery of bisulfite-treated DNA were investigated by droplet digital PCR. The optimization of the reaction results in complete cytosine conversion in 30 min at 70 °C and about 65% recovery of bisulfite-treated cell-free DNA, which is higher than current methods. The method allows high recovery from low levels of bisulfite-treated cell-free DNA, enhancing the sensitivity of methylation detection from cell-free DNA.

  12. Superstatistics analysis of the ion current distribution function: Met3PbCl influence study.

    PubMed

    Miśkiewicz, Janusz; Trela, Zenon; Przestalski, Stanisław; Karcz, Waldemar

    2010-09-01

    A novel analysis of ion current time series is proposed. It is shown that higher (second, third and fourth) statistical moments of the ion current probability distribution function (PDF) can yield new information about ion channel properties. The method is illustrated on a two-state model where the PDFs of the compound states are given by normal distributions. The proposed method was applied to the analysis of the SV cation channels of the vacuolar membrane of Beta vulgaris and the influence of trimethyllead chloride (Met3PbCl) on the ion current probability distribution. Ion currents were measured by the patch-clamp technique. It was shown that Met3PbCl influences the variance of the open-state ion current but does not alter the PDF of the closed-state ion current. Incorporation of higher statistical moments into the standard investigation of ion channel properties is proposed.
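    A minimal sketch of extracting higher statistical moments from a two-state ion current trace (the state levels, noise widths, and open probability are illustrative assumptions):

    ```python
    import numpy as np
    from scipy import stats

    # Illustrative two-state (closed/open) ion current trace with Gaussian
    # noise in each state.
    rng = np.random.default_rng(0)
    is_open = rng.random(50000) < 0.4
    current = np.where(is_open,
                       rng.normal(5.0, 0.8, 50000),    # open-state current (pA)
                       rng.normal(0.0, 0.3, 50000))    # closed-state current (pA)

    # Higher statistical moments of the current PDF, beyond the mean.
    print("variance:", np.var(current))
    print("skewness:", stats.skew(current))
    print("excess kurtosis:", stats.kurtosis(current))
    ```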

  13. 3D analysis of eddy current loss in the permanent magnet coupling.

    PubMed

    Zhu, Zina; Meng, Zhuo

    2016-07-01

    This paper first presents a 3D analytical model for analyzing the radial air-gap magnetic field between the inner and outer magnetic rotors of the permanent magnet couplings by using the Amperian current model. Based on the air-gap field analysis, the eddy current loss in the isolation cover is predicted according to the Maxwell's equations. A 3D finite element analysis model is constructed to analyze the magnetic field spatial distributions and vector eddy currents, and then the simulation results obtained are analyzed and compared with the analytical method. Finally, the current losses of two types of practical magnet couplings are measured in the experiment to compare with the theoretical results. It is concluded that the 3D analytical method of eddy current loss in the magnet coupling is viable and could be used for the eddy current loss prediction of magnet couplings.

  14. Direct-current arc and alternating-current spark emission spectrographic field methods for the semiquantitative analysis of geologic materials

    USGS Publications Warehouse

    Grimes, D.J.; Marranzino, A.P.

    1968-01-01

    Two spectrographic methods are used in mobile field laboratories of the U. S. Geological Survey. In the direct-current arc method, the ground sample is mixed with graphite powder, packed into an electrode crater, and burned to completion. Thirty elements are determined. In the spark method, the sample, ground to pass a 150-mesh screen, is digested in hydrofluoric acid followed by evaporation to dryness and dissolution in aqua regia. The solution is fed into the spark gap by means of a rotating-disk electrode arrangement and is excited with an alternating-current spark discharge. Fourteen elements are determined. In both techniques, light is recorded on Spectrum Analysis No. 1, 35-millimeter film, and the spectra are compared visually with those of standard films.

  15. Gas stream analysis using voltage-current time differential operation of electrochemical sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woo, Leta Yar-Li; Glass, Robert Scott; Fitzpatrick, Joseph Jay

    A method for analysis of a gas stream. The method includes identifying an affected region of an affected waveform signal corresponding to at least one characteristic of the gas stream. The method also includes calculating a voltage-current time differential between the affected region of the affected waveform signal and a corresponding region of an original waveform signal. The affected region and the corresponding region of the waveform signals have a sensitivity specific to the at least one characteristic of the gas stream. The method also includes generating a value for the at least one characteristic of the gas stream based on the calculated voltage-current time differential.

  16. Application of the pulsed fast/thermal neutron method for soil elemental analysis

    USDA-ARS?s Scientific Manuscript database

    Soil science is a research field where physics concepts and experimental methods are widely used, particularly in agro-chemistry and soil elemental analysis. Different methods of analysis are currently available. The evolution of nuclear physics (methodology and instrumentation) combined with the ava...

  17. Propfan experimental data analysis

    NASA Technical Reports Server (NTRS)

    Vernon, David F.; Page, Gregory S.; Welge, H. Robert

    1984-01-01

    A data reduction method, which is consistent with the performance prediction methods used for analysis of new aircraft designs, is defined and compared to the method currently used by NASA using data obtained from an Ames Research Center 11-foot transonic wind tunnel test. Pressure and flow visualization data from the Ames test for both the powered straight underwing nacelle and an unpowered contoured overwing nacelle installation are used to determine the flow phenomena present for a wing-mounted turboprop installation. The test data are compared to analytic methods, showing the analytic methods to be suitable for design and analysis of new configurations. The data analysis indicated that designs with zero interference drag levels are achievable with proper wing and nacelle tailoring. A new overwing contoured nacelle design and a modification to the wing leading edge extension for the current wind tunnel model design are evaluated. Hardware constraints of the current model parts prevent obtaining any significant performance improvement due to a modified nacelle contouring. A new aspect ratio wing design for an up-outboard rotation turboprop installation is defined, and an advanced contoured nacelle is provided.

  18. Comparison of Past, Present, and Future Volume Estimation Methods for Tennessee

    Treesearch

    Stanley J. Zarnoch; Alexander Clark; Ray A. Souter

    2003-01-01

    Forest Inventory and Analysis 1999 survey data for Tennessee were used to compare stem-volume estimates obtained using a previous method, the current method, and newly developed taper models that will be used in the future. Compared to the current method, individual tree volumes were consistently underestimated with the previous method, especially for the hardwoods....

  19. Analysis of Explosives in Soil Using Solid Phase Microextraction and Gas Chromatography: Environmental Analysis

    DTIC Science & Technology

    2006-01-01

    ENVIRONMENTAL ANALYSIS Analysis of Explosives in Soil Using Solid Phase Microextraction and Gas Chromatography Howard T. Mayfield Air Force Research...Abstract: Current methods for the analysis of explosives in soils utilize time consuming sample preparation workups and extractions. The method detection...chromatography/mass spectrometry to provide a convenient and sensitive analysis method for explosives in soil. Keywords: Explosives, TNT, solid phase

  20. Electron density and electron temperature measurement in a bi-Maxwellian electron distribution using a derivative method of Langmuir probes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Ikjin; Chung, ChinWook; Youn Moon, Se

    2013-08-15

    In plasma diagnostics with a single Langmuir probe, the electron temperature T_e is usually obtained from the slope of the logarithm of the electron current or from the electron energy probability function derived from the current-voltage (I-V) curve. Recently, Chen [F. F. Chen, Phys. Plasmas 8, 3029 (2001)] suggested a derivative analysis method to obtain T_e from the ratio between the probe current and the derivative of the probe current at the plasma potential, where the ion current becomes zero. Based on this method, electron temperatures and electron densities were measured and compared with those from the electron energy distribution function (EEDF) measurement in Maxwellian and bi-Maxwellian electron distribution conditions. In a bi-Maxwellian electron distribution, we found the electron temperature T_e obtained from the method is always lower than the effective temperature T_eff derived from EEDFs. The theoretical analysis for this is presented.
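    For a Maxwellian retardation current the derivative method attributed to Chen reduces to T_e = I/(dI/dV) evaluated near the plasma potential; a numerical sketch on synthetic data (ion current neglected, all parameters assumed):

    ```python
    import numpy as np

    # Synthetic retardation-region probe data: Maxwellian electron current
    # I = I0 * exp((V - Vs) / Te) for V below the plasma potential Vs
    # (Te in volts; ion current neglected; all parameters assumed).
    Te_true, Vs, I0 = 3.0, 10.0, 50e-3
    V = np.linspace(-10.0, 10.0, 400)
    I = I0 * np.exp((V - Vs) / Te_true)

    # Derivative method: Te = I / (dI/dV), evaluated near Vs where the
    # ion contribution vanishes.
    dIdV = np.gradient(I, V)
    Te_est = I[-2] / dIdV[-2]
    print("estimated Te = %.2f V (true %.2f V)" % (Te_est, Te_true))
    ```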

  1. Regression analysis of informative current status data with the additive hazards model.

    PubMed

    Zhao, Shishun; Hu, Tao; Ma, Ling; Wang, Peijie; Sun, Jianguo

    2015-04-01

    This paper discusses regression analysis of current status failure time data arising from the additive hazards model in the presence of informative censoring. Many methods have been developed for regression analysis of current status data under various regression models if the censoring is noninformative, and there also exists a large literature on parametric analysis of informative current status data in the context of tumorigenicity experiments. In this paper, a semiparametric maximum likelihood estimation procedure is presented; in the method, the copula model is employed to describe the relationship between the failure time of interest and the censoring time. Furthermore, I-splines are used to approximate the nonparametric functions involved, and the asymptotic consistency and normality of the proposed estimators are established. A simulation study is conducted and indicates that the proposed approach works well for practical situations. An illustrative example is also provided.

  2. Generalized Full-Information Item Bifactor Analysis

    ERIC Educational Resources Information Center

    Cai, Li; Yang, Ji Seung; Hansen, Mark

    2011-01-01

    Full-information item bifactor analysis is an important statistical method in psychological and educational measurement. Current methods are limited to single-group analysis and inflexible in the types of item response models supported. We propose a flexible multiple-group item bifactor analysis framework that supports a variety of…

  3. Developing a Self-Report-Based Sequential Analysis Method for Educational Technology Systems: A Process-Based Usability Evaluation

    ERIC Educational Resources Information Center

    Lin, Yi-Chun; Hsieh, Ya-Hui; Hou, Huei-Tse

    2015-01-01

    The development of a usability evaluation method for educational systems or applications, called the self-report-based sequential analysis, is described herein. The method aims to extend the current practice by proposing self-report-based sequential analysis as a new usability method, which integrates the advantages of self-report in survey…

  4. Numerical analysis method for linear induction machines.

    NASA Technical Reports Server (NTRS)

    Elliott, D. G.

    1972-01-01

    A numerical analysis method has been developed for linear induction machines such as liquid metal MHD pumps and generators and linear motors. Arbitrary phase currents or voltages can be specified and the moving conductor can have arbitrary velocity and conductivity variations from point to point. The moving conductor is divided into a mesh and coefficients are calculated for the voltage induced at each mesh point by unit current at every other mesh point. Combining the coefficients with the mesh resistances yields a set of simultaneous equations which are solved for the unknown currents.
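    A schematic sketch of the "coefficients plus resistances yields simultaneous equations" step described above (the coupling matrix and resistance values are placeholders, not a real machine model):

    ```python
    import numpy as np

    # Schematic mesh solution: voltage-coupling coefficients M (voltage
    # induced at each mesh point by unit current at every other point)
    # combined with mesh resistances R give (R + M) @ I = V.
    n = 4
    idx = np.arange(n)
    M = 0.1 / (1.0 + np.abs(np.subtract.outer(idx, idx)))   # mutual coupling
    R = np.diag([1.0, 1.2, 1.1, 0.9])                       # mesh resistances (ohm)
    V = np.array([1.0, 0.5, 0.0, 0.0])                      # driving voltages (V)

    I = np.linalg.solve(R + M, V)                           # unknown mesh currents
    print("mesh currents (A):", I)
    ```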

  5. Advantages of Social Network Analysis in Educational Research

    ERIC Educational Resources Information Center

    Ushakov, K. M.; Kukso, K. N.

    2015-01-01

    Currently one of the main tools for the large scale studies of schools is statistical analysis. Although it is the most common method and it offers greatest opportunities for analysis, there are other quantitative methods for studying schools, such as network analysis. We discuss the potential advantages that network analysis has for educational…

  6. Two dimensional distribution measurement of electric current generated in a polymer electrolyte fuel cell using 49 NMR surface coils.

    PubMed

    Ogawa, Kuniyasu; Sasaki, Tatsuyoshi; Yoneda, Shigeki; Tsujinaka, Kumiko; Asai, Ritsuko

    2018-05-17

    In order to increase the current density generated in a PEFC (polymer electrolyte fuel cell), a method for measuring the spatial distribution of both the current and the water content of the MEA (membrane electrode assembly) is necessary. Based on the frequency shifts of NMR (nuclear magnetic resonance) signals acquired from the water contained in the MEA using 49 NMR coils in a 7 × 7 arrangement inserted in the PEFC, a method for measuring the two-dimensional spatial distribution of electric current generated in a unit cell with a power generation area of 140 mm × 160 mm was devised. We also developed an inverse analysis method to determine the two-dimensional electric current distribution that can be applied to actual PEFC connections. Two analytical techniques, namely coarse graining of segments and stepwise search, were used to shorten the calculation time required for inverse analysis of the electric current map. Using this method and techniques, spatial distributions of electric current and water content in the MEA were obtained when the PEFC generated electric power at 100 A.

  7. Methods for Human Dehydration Measurement

    NASA Astrophysics Data System (ADS)

    Trenz, Florian; Weigel, Robert; Hagelauer, Amelie

    2018-03-01

    The aim of this article is to give a broad overview of current methods for the identification and quantification of the human dehydration level. Starting from the most common clinical setups, including vital parameters and general patient appearance, more quantifiable results from chemical laboratory and electromagnetic measurement methods will be reviewed. Different analysis methods throughout the electromagnetic spectrum, ranging from direct current (DC) conductivity measurements up to neutron activation analysis (NAA), are discussed on the basis of published results. Finally, promising technologies, which allow for an integration of a dehydration assessment system in a compact and portable way, will be spotted.

  8. CFD Methods and Tools for Multi-Element Airfoil Analysis

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; George, Michael W. (Technical Monitor)

    1995-01-01

    This lecture will discuss the computational tools currently available for high-lift multi-element airfoil analysis. It will present an overview of a number of different numerical approaches, their current capabilities, shortcomings, and computational costs. The lecture will be limited to viscous methods, including inviscid/boundary layer coupling methods, and incompressible and compressible Reynolds-averaged Navier-Stokes methods. Both structured and unstructured grid generation approaches will be presented. Two different structured grid procedures are outlined, one which uses multi-block patched grids, the other uses overset chimera grids. Turbulence and transition modeling will be discussed.

  9. Method for isolating nucleic acids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurt, Jr., Richard Ashley; Elias, Dwayne A.

    The current disclosure provides methods and kits for isolating nucleic acid from an environmental sample. The current methods and compositions further provide methods for isolating nucleic acids by reducing adsorption of nucleic acids by charged ions and particles within an environmental sample. The methods of the current disclosure provide methods for isolating nucleic acids by releasing adsorbed nucleic acids from charged particles during the nucleic acid isolation process. The current disclosure facilitates the isolation of nucleic acids of sufficient quality and quantity to enable one of ordinary skill in the art to utilize or analyze the isolated nucleic acids for a wide variety of applications, including sequencing or species population analysis.

  10. MAMA- User Feedback and Training Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, Reid B.; Ruggiero, Christy E.

    2014-05-21

    This document describes the current state of the MAMA (Morphological Analysis of Materials) software: user-identified bugs, issues, and requests for improvements. It also lists current users and current training methods.

  11. A Self-Directed Method for Cell-Type Identification and Separation of Gene Expression Microarrays

    PubMed Central

    Zuckerman, Neta S.; Noam, Yair; Goldsmith, Andrea J.; Lee, Peter P.

    2013-01-01

    Gene expression analysis is generally performed on heterogeneous tissue samples consisting of multiple cell types. Current methods developed to separate heterogeneous gene expression rely on prior knowledge of the cell-type composition and/or signatures - these are not available in most public datasets. We present a novel method to identify the cell-type composition, signatures and proportions per sample without need for a priori information. The method was successfully tested on controlled and semi-controlled datasets and performed as accurately as current methods that do require additional information. As such, this method enables the analysis of cell-type specific gene expression using existing large pools of publicly available microarray datasets. PMID:23990767

  12. Mesh-matrix analysis method for electromagnetic launchers

    NASA Technical Reports Server (NTRS)

    Elliott, David G.

    1989-01-01

    The mesh-matrix method is a procedure for calculating the current distribution in the conductors of electromagnetic launchers with coil or flat-plate geometry. Once the current distribution is known the launcher performance can be calculated. The method divides the conductors into parallel current paths, or meshes, and finds the current in each mesh by matrix inversion. The author presents procedures for writing equations for the current and voltage relations for a few meshes to serve as a pattern for writing the computer code. An available subroutine package provides routines for field and flux coefficients and equation solution.

  13. The Current State of Human Performance Technology: A Citation Network Analysis of "Performance Improvement Quarterly," 1988-2010

    ERIC Educational Resources Information Center

    Cho, Yonjoo; Jo, Sung Jun; Park, Sunyoung; Kang, Ingu; Chen, Zengguan

    2011-01-01

    This study conducted a citation network analysis (CNA) of human performance technology (HPT) to examine its current state of the field. Previous reviews of the field have used traditional research methods, such as content analysis, survey, Delphi, and citation analysis. The distinctive features of CNA come from using a social network analysis…

  14. Parameter Analysis for Arc Snubber of EAST Neutral Beam Injector

    NASA Astrophysics Data System (ADS)

    Wang, Haitian; Li, Ge; Cao, Liang; Dang, Xiaoqiang; Fu, Peng

    2010-08-01

    According to the B-H curve and structural dimensions of the snubber obtained by the Fink-Baker method, the inductive voltage and the eddy current of any core tape with the thickness of the saturated regions are derived for the case when an accelerator breakdown occurs. Using Ampère's law, in each core tape the eddy current of the core lamination is equal to the arc current, and the relation of the thickness of the saturated regions for different laminations can be deduced. The total equivalent resistance of the snubber can then be obtained. The transient eddy current model based on the stray capacitance and the equivalent resistance is analyzed, and the solving process is given in detail. The exponential time constant and the arc current are obtained. Then, the maximum width of the lamination and the minimum thickness of the core tape are determined. The experimental time constant of the eddy current, obtained with or without the bias current, is approximately the same as that given by the analytical method, which confirms the accuracy of the adopted assumptions and the analysis method.

  15. Analysis and numerical modelling of eddy current damper for vibration problems

    NASA Astrophysics Data System (ADS)

    Irazu, L.; Elejabarrieta, M. J.

    2018-07-01

    This work discusses a contactless eddy current damper, which is used to attenuate structural vibration. Eddy currents can remove energy from dynamic systems without any contact and, thus, without adding mass or modifying the rigidity of the structure. An experimental modal analysis of a cantilever beam in the absence of and under a partial magnetic field is conducted in the bandwidth of 0-1 kHz. The results show that the eddy current phenomenon can attenuate the vibration of the entire structure without modifying the natural frequencies or the mode shapes of the structure itself. In this study, a new inverse method to numerically determine the dynamic properties of the contactless eddy current damper is proposed. The proposed inverse method and the eddy current model based on a linear viscous force are validated by a practical application. The numerically obtained transfer function correlates with the experimental one, showing good agreement in the entire bandwidth of 0-1 kHz. The proposed method provides an easy and quick tool to model and predict the dynamic behaviour of the contactless eddy current damper, thereby avoiding the use of complex analytical models.

  16. Motor monitoring method and apparatus using high frequency current components

    DOEpatents

    Casada, D.A.

    1996-05-21

    A motor current analysis method and apparatus for monitoring electrical-motor-driven devices are disclosed. The method and apparatus utilize high frequency portions of the motor current spectra to evaluate the condition of the electric motor and the device driven by the electric motor. The motor current signal produced as a result of an electric motor is monitored and the low frequency components of the signal are removed by a high-pass filter. The signal is then analyzed to determine the condition of the electrical motor and the driven device. 16 figs.
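    A minimal sketch of the high-pass-then-analyze idea on synthetic data (the sampling rate, filter cutoff, and injected high-frequency component are illustrative assumptions, not values from the patent):

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 20000                                  # sampling rate (Hz), assumed
    t = np.arange(0, 1.0, 1 / fs)

    # Synthetic motor current: 60 Hz fundamental plus a weak high-frequency
    # component standing in for a condition-related signature.
    current = np.sin(2 * np.pi * 60 * t) + 0.01 * np.sin(2 * np.pi * 4200 * t)

    # Remove the low-frequency components so the high-frequency portion of
    # the spectrum can be analyzed (filter order and cutoff are assumptions).
    b, a = butter(4, 1000, btype="highpass", fs=fs)
    high = filtfilt(b, a, current)

    spectrum = np.abs(np.fft.rfft(high))
    freqs = np.fft.rfftfreq(high.size, 1 / fs)
    print("dominant residual component: %.0f Hz" % freqs[np.argmax(spectrum)])
    ```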

  17. Motor monitoring method and apparatus using high frequency current components

    DOEpatents

    Casada, Donald A.

    1996-01-01

    A motor current analysis method and apparatus for monitoring electrical-motor-driven devices. The method and apparatus utilize high frequency portions of the motor current spectra to evaluate the condition of the electric motor and the device driven by the electric motor. The motor current signal produced as a result of an electric motor is monitored and the low frequency components of the signal are removed by a high-pass filter. The signal is then analyzed to determine the condition of the electrical motor and the driven device.

  18. REPRESENTATIVE SAMPLING AND ANALYSIS OF HETEROGENEOUS SOILS

    EPA Science Inventory

    Standard sampling and analysis methods for hazardous substances in contaminated soils currently are available and routinely employed. Standard methods inherently assume a homogeneous soil matrix and contaminant distribution; therefore only small sample quantities typically are p...

  19. An Improved Manual Method for NOx Emission Measurement.

    ERIC Educational Resources Information Center

    Dee, L. A.; And Others

    The current manual NOx sampling and analysis method was evaluated. Improved time-integrated sampling and rapid analysis methods were developed. In the new method, the sample gas is drawn through a heated bed of uniquely active, crystalline PbO2, where NOx is quantitatively absorbed. Nitrate ion is later extracted with water and the…

  20. Motor current signature analysis for gearbox condition monitoring under transient speeds using wavelet analysis and dual-level time synchronous averaging

    NASA Astrophysics Data System (ADS)

    Bravo-Imaz, Inaki; Davari Ardakani, Hossein; Liu, Zongchang; García-Arribas, Alfredo; Arnaiz, Aitor; Lee, Jay

    2017-09-01

    This paper focuses on analyzing the motor current signature for fault diagnosis of gearboxes operating under transient speed regimes. Two different strategies are evaluated, extensively tested and compared to analyze the motor current signature in order to implement a condition monitoring system for gearboxes in industrial machinery. A specially designed test bench is used, thoroughly monitored to fully characterize the experiments, in which gears in different health states are tested. The measured signals are analyzed using discrete wavelet decomposition, at different decomposition levels using a range of mother wavelets. Moreover, a dual-level time synchronous averaging analysis is performed on the same signal to compare the performance of the two methods. From both analyses, the relevant features of the signals are extracted and cataloged using a self-organizing map, which allows for easy detection and classification of the diverse health states of the gears. The results demonstrate the effectiveness of both methods for diagnosing gearbox faults. A slightly better performance was observed for the dual-level time synchronous averaging method. Based on the obtained results, the proposed methods can be used as effective and reliable condition monitoring procedures for gearboxes using only the motor current signature.
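    A minimal sketch of the discrete wavelet step using PyWavelets, producing per-band energy features of the kind a self-organizing map could catalog (the test signal, mother wavelet, and decomposition level are illustrative choices):

    ```python
    import numpy as np
    import pywt  # PyWavelets

    rng = np.random.default_rng(0)
    fs = 10000                                  # sampling rate (Hz), assumed
    t = np.arange(0, 1.0, 1 / fs)

    # Stand-in motor current under a transient speed ramp, plus noise.
    current = (np.sin(2 * np.pi * (50 + 20 * t) * t)
               + 0.05 * rng.standard_normal(t.size))

    # Discrete wavelet decomposition; the mother wavelet and level are the
    # kind of choices the paper sweeps over.
    coeffs = pywt.wavedec(current, "db4", level=4)

    # Per-band energies as simple features for a downstream classifier
    # (e.g., a self-organizing map).
    energies = [float(np.sum(c ** 2)) for c in coeffs]
    print("band energies (approximation + details):", energies)
    ```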

  1. Variability of Currents in Great South Channel and Over Georges Bank: Observation and Modeling

    DTIC Science & Technology

    1992-06-01

    Rizzoli motivated me to study the driving mechanism of stratified tidal rectification using diagnostic analysis methods. Conversations with Glen...drifter trajectories in the 1988 and 1989 surveys give further encouragement that the analysis method yields an accurate picture of the nontidal flow...harmonic truncation method. Scaling analysis argues that this method is not appropriate for a step topography because it is valid only when the

  2. Ion beam activation for materials analysis: Methods and application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conlon, T.W.

    1981-04-01

    A number of ion beam methods for materials analysis have been developed using Harwell's high voltage accelerators and these are currently being exploited for applications 'in house' and in industry. Ion beam activation is a relatively new area which has exhibited exceptional growth over the last few years. Activation by ion beams to produce a single dominant radioisotope as a surface label (thin layer activation or TLA) is becoming a mature technology offering ever increasing sensitivity for surface loss measurement (currently better than 0.1 μm or 10⁻⁷ cm³ depending on the method of measurement) and remote monitoring of inaccessible components during studies of wear/erosion/corrosion/sputtering and the like. With the increasingly established credibility of the method has come the realisation that: (i) more complex and even multiple activation profiles can be used to extract more information on the characteristics of the surface loss process; (ii) an analogous method can be used even on radiation sensitive materials through the newly established indirect recoil implantation process; (iii) there is scope for treatment of truly immovable objects through the implantation of fission fragments; (iv) there is vast potential in the area of activation analysis. The current state of development of these methods, which greatly extend the scope of conventional TLA, will be briefly reviewed. Current applications of these and TLA in industry are discussed.

  3. Estimation of hyper-parameters for a hierarchical model of combined cortical and extra-brain current sources in the MEG inverse problem.

    PubMed

    Morishige, Ken-ichi; Yoshioka, Taku; Kawawaki, Dai; Hiroe, Nobuo; Sato, Masa-aki; Kawato, Mitsuo

    2014-11-01

    One of the major obstacles in estimating cortical currents from MEG signals is the disturbance caused by magnetic artifacts derived from extra-cortical current sources such as heartbeats and eye movements. To remove the effect of such extra-brain sources, we improved the hybrid hierarchical variational Bayesian method (hyVBED) proposed by Fujiwara et al. (NeuroImage, 2009). hyVBED simultaneously estimates cortical and extra-brain source currents by placing dipoles on cortical surfaces as well as extra-brain sources. This method requires EOG data for an EOG forward model that describes the relationship between eye dipoles and electric potentials. In contrast, our improved approach requires no EOG and less a priori knowledge about the current variance of extra-brain sources. We propose a new method, "extra-dipole," that optimally selects hyper-parameter values regarding current variances of the cortical surface and extra-brain source dipoles. With the selected parameter values, the cortical and extra-brain dipole currents were accurately estimated from the simulated MEG data. The performance of this method was demonstrated to be better than conventional approaches, such as principal component analysis and independent component analysis, which use only statistical properties of MEG signals. Furthermore, we applied our proposed method to measured MEG data during covert pursuit of a smoothly moving target and confirmed its effectiveness.

  4. Eddy current analysis of cracks grown from surface defects and non-metallic particles

    NASA Astrophysics Data System (ADS)

    Cherry, Matthew R.; Hutson, Alisha; Aldrin, John C.; Shank, Jared

    2018-04-01

    Eddy current methods are sensitive to any discrete change in conductivity. Traditionally this has been used to determine the presence of a crack. However, other features that are not cracks, such as non-metallic inclusions, carbide stringers and surface voids, can cause an eddy current indication that could potentially lead to rejection of an in-service component. These features may not actually be life-limiting, meaning NDE methods could reject components with remaining useful life. In-depth analysis of signals from eddy current sensors could provide a means of sorting between rejectable indications and false calls from geometric and non-conductive features. In this project, cracks were grown from voids and non-metallic inclusions in a nickel-based super-alloy, and eddy current analysis was performed at multiple intermediate steps of fatigue. Data were collected with multiple ECT probes at multiple frequencies, and the results were analyzed. The results show how cracks growing from non-metallic features can skew eddy current signals and make characterization a challenge. Modeling and simulation were performed with multiple analysis codes, and the models were found to be in good agreement with the data for cracks growing away from voids and non-metallic inclusions.

  5. Eddy current loss analysis of open-slot fault-tolerant permanent-magnet machines based on conformal mapping method

    NASA Astrophysics Data System (ADS)

    Ji, Jinghua; Luo, Jianhua; Lei, Qian; Bian, Fangfang

    2017-05-01

    This paper proposes an analytical method, based on the conformal mapping (CM) method, for the accurate evaluation of the magnetic field and eddy current (EC) loss in fault-tolerant permanent-magnet (FTPM) machines. The modulation function applied in the CM method transforms the open-slot structure into a fully closed-slot structure, whose air-gap flux density is easy to calculate analytically. Therefore, with the help of the Matlab Schwarz-Christoffel (SC) Toolbox, both the magnetic flux density and the EC density of the FTPM machine are obtained accurately. Finally, a time-stepped transient finite-element method (FEM) is used to verify the theoretical analysis, showing that the proposed method is able to predict the magnetic flux density and EC loss precisely.

  6. Using multi-scale entropy and principal component analysis to monitor gears degradation via the motor current signature analysis

    NASA Astrophysics Data System (ADS)

    Aouabdi, Salim; Taibi, Mahmoud; Bouras, Slimane; Boutasseta, Nadir

    2017-06-01

    This paper describes an approach for identifying localized gear tooth defects, such as pitting, using phase currents measured from an induction machine driving the gearbox. A new anomaly-detection tool based on the multi-scale entropy (MSE) algorithm SampEn allows correlations in signals to be identified over multiple time scales. Motor current signature analysis (MCSA) is used in conjunction with principal component analysis (PCA), and observed values are compared with those predicted from a model built using nominally healthy data. The simulation results show that the proposed method is able to detect gear tooth pitting in current signals.
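    A simplified sketch of multi-scale entropy: coarse-grain the series at each scale, then compute SampEn (the tolerance and embedding dimension follow common defaults; this is not the authors' exact implementation):

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        """Simplified SampEn: -log of the ratio of template matches of
        length m+1 to matches of length m (tolerance r times the std)."""
        x = np.asarray(x, dtype=float)
        tol = r * x.std()
        def matches(mm):
            w = np.lib.stride_tricks.sliding_window_view(x, mm)
            d = np.abs(w[:, None, :] - w[None, :, :]).max(axis=2)
            return (np.count_nonzero(d <= tol) - len(w)) / 2  # drop self-matches
        return -np.log(matches(m + 1) / matches(m))

    def multiscale_entropy(x, scales=(1, 2, 4, 8)):
        """Coarse-grain the series at each scale, then compute SampEn."""
        return [sample_entropy(x[: len(x) // s * s].reshape(-1, s).mean(axis=1))
                for s in scales]

    rng = np.random.default_rng(0)
    print(multiscale_entropy(rng.standard_normal(1000)))
    ```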

  7. Harmonics analysis of the ITER poloidal field converter based on a piecewise method

    NASA Astrophysics Data System (ADS)

    Xudong, WANG; Liuwei, XU; Peng, FU; Ji, LI; Yanan, WU

    2017-12-01

    Poloidal field (PF) converters provide controlled DC voltage and current to PF coils. The many harmonics generated by the PF converter flow into the power grid and seriously affect power systems and electric equipment. Due to the complexity of the system, the traditional integral operation in Fourier analysis is complicated and inaccurate. This paper presents a piecewise method to calculate the harmonics of the ITER PF converter. The relationship between the grid input current and the DC output current of the ITER PF converter is deduced. The grid current is decomposed into the sum of some simple functions. By calculating simple function harmonics based on the piecewise method, the harmonics of the PF converter under different operation modes are obtained. In order to examine the validity of the method, a simulation model is established based on Matlab/Simulink and a relevant experiment is implemented in the ITER PF integration test platform. Comparative results are given. The calculated results are found to be consistent with simulation and experiment. The piecewise method is proved correct and valid for calculating the system harmonics.
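    The piecewise idea can be illustrated on an idealized six-pulse line current, where each constant segment contributes a closed-form term to the Fourier coefficients (the segment boundaries and amplitude are illustrative, not ITER PF parameters):

    ```python
    import numpy as np

    # Idealized six-pulse line current: +Id for 30..150 degrees, -Id for
    # 210..330 degrees, zero elsewhere (piecewise constant in angle).
    Id = 1.0
    segments = [(np.pi / 6, 5 * np.pi / 6, Id),
                (7 * np.pi / 6, 11 * np.pi / 6, -Id)]

    def harmonic(n):
        # b_n = (1/pi) * integral of i(theta)*sin(n*theta); each constant
        # segment contributes value * (cos(n*a) - cos(n*b)) / n in closed form.
        return sum(v * (np.cos(n * a) - np.cos(n * b)) / n
                   for a, b, v in segments) / np.pi

    # Characteristic harmonics 6k +/- 1 carry amplitude ~1/n of the fundamental.
    for n in (1, 5, 7, 11, 13):
        print("harmonic %2d: %+.4f" % (n, harmonic(n)))
    ```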

  8. Scalable Kernel Methods and Algorithms for General Sequence Analysis

    ERIC Educational Resources Information Center

    Kuksa, Pavel

    2011-01-01

    Analysis of large-scale sequential data has become an important task in machine learning and pattern recognition, inspired in part by numerous scientific and technological applications such as the document and text classification or the analysis of biological sequences. However, current computational methods for sequence comparison still lack…

  9. Changing the Latitudes and Attitudes about Content Analysis Research

    ERIC Educational Resources Information Center

    Brank, Eve M.; Fox, Kathleen A.; Youstin, Tasha J.; Boeppler, Lee C.

    2008-01-01

    The current research uses content analysis to teach research methods concepts to students enrolled in an upper-division research methods course. Students coded and analyzed Jimmy Buffett song lyrics rather than using a downloadable database or collecting survey data. Students' knowledge of content analysis concepts increased after…

  10. Approaching the Limit in Atomic Spectrochemical Analysis.

    ERIC Educational Resources Information Center

    Hieftje, Gary M.

    1982-01-01

    To assess the ability of current analytical methods to approach the single-atom detection level, theoretical and experimentally determined detection levels are presented for several chemical elements. A comparison of these methods shows that the most sensitive atomic spectrochemical technique currently available is based on emission from…

  11. Heat analysis of thermal overload relays using 3-D finite element method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawase, Yoshihiro; Ichihashi, Takayuki; Ito, Shokichi

    1999-05-01

    In designing a thermal overload relay, it is necessary to analyze thermal characteristics of several trial models. Up to now, this has been done by measuring the temperatures on a number of positions in the trial models. This experimental method is undoubtedly expensive. In this paper, the temperature distribution of a thermal overload relay is obtained by using 3-D finite element analysis taking into account the current distribution in current-carrying conductors. It is shown that the 3-D analysis is capable of evaluating a new design of thermal overload relays.

  12. Application of ECT inspection to the first wall of a fusion reactor with wavelet analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, G.; Yoshida, Y.; Miya, K.

    1994-12-31

    The first wall of a fusion reactor will be subjected to intensive loads during fusion operations. Since these loads may cause defects in the first wall, nondestructive evaluation techniques for the first wall should be developed. In this paper, we apply the eddy current testing (ECT) technique to the inspection of the first wall. A method based on the current vector potential and wavelet analysis is proposed. Owing to the use of wavelet analysis, a recently developed theory, the accuracy of the present method is shown to be better than that of a conventional one.

  13. Correlation between Wavelength Dispersive X-ray Fluorescence (WDXRF) analysis of hardened concrete for chlorides vs. Atomic Absorption (AA) analysis in accordance with AASHTO T- 260; sampling and testing for chloride ion in concrete and concrete raw mater

    DOT National Transportation Integrated Search

    2014-04-01

    A correlation between Wavelength Dispersive X-ray Fluorescence (WDXRF) analysis of Hardened Concrete for Chlorides and Atomic Absorption (AA) analysis (current method AASHTO T-260, procedure B) has been found and a new method of analysis has been ...

  14. Bearing failure detection of micro wind turbine via power spectral density analysis for stator current signals spectrum

    NASA Astrophysics Data System (ADS)

    Mahmood, Faleh H.; Kadhim, Hussein T.; Resen, Ali K.; Shaban, Auday H.

    2018-05-01

    Failures such as air-gap eccentricity, rubbing, and scraping between the stator and rotor of a generator arise unavoidably and may cause extremely serious consequences for a wind turbine. Therefore, more attention should be paid to detecting and identifying bearing failures in wind turbines to improve operational reliability. The current paper uses the power spectral density analysis method to detect inner-race and outer-race bearing failures in a micro wind turbine by analyzing the stator current signal of the generator. The results show that the method is well suited and effective for bearing failure detection.
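    A minimal sketch of the PSD approach on a synthetic stator current, where a race defect appears as sidebands around the supply frequency (the supply and defect frequencies are illustrative assumptions):

    ```python
    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(0)
    fs = 5000                                   # sampling rate (Hz), assumed
    t = np.arange(0, 4.0, 1 / fs)

    # Synthetic stator current: 50 Hz supply plus weak sidebands at
    # 50 +/- 30 Hz standing in for a bearing race defect signature.
    current = (np.sin(2 * np.pi * 50 * t)
               + 0.01 * np.sin(2 * np.pi * 80 * t)
               + 0.01 * np.sin(2 * np.pi * 20 * t)
               + 0.02 * rng.standard_normal(t.size))

    # Power spectral density estimate; the defect shows up as sideband peaks.
    f, pxx = welch(current, fs=fs, nperseg=4096)
    for target in (20, 80):
        k = np.argmin(np.abs(f - target))
        print("PSD near %2.0f Hz: %.2e" % (f[k], pxx[k]))
    ```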

  15. Application of the superposition principle to solar-cell analysis

    NASA Technical Reports Server (NTRS)

    Lindholm, F. A.; Fossum, J. G.; Burgess, E. L.

    1979-01-01

    The superposition principle of differential-equation theory - which applies if and only if the relevant boundary-value problems are linear - is used to derive the widely used shifting approximation that the current-voltage characteristic of an illuminated solar cell is the dark current-voltage characteristic shifted by the short-circuit photocurrent. Analytical methods are presented to treat cases where shifting is not strictly valid. Well-defined conditions necessary for superposition to apply are established. For high injection in the base region, the method of analysis accurately yields the dependence of the open-circuit voltage on the short-circuit current (or the illumination level).
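    For reference, the shifting approximation in its usual ideal-diode form (a textbook statement, not the paper's derivation):

    ```latex
    I(V) \;=\; I_{\mathrm{dark}}(V) - I_{sc}
         \;=\; I_0\left(e^{qV/kT} - 1\right) - I_{sc},
    \qquad
    V_{oc} \;=\; \frac{kT}{q}\,\ln\!\left(\frac{I_{sc}}{I_0} + 1\right).
    ```

    Superposition holds only while the boundary-value problem stays linear, which is why high-injection conditions in the base require the separate analytical treatment described above.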

  16. Report to the Congress on depreciation recovery periods and methods

    DOT National Transportation Integrated Search

    2000-07-01

    This report provides the results of Treasury's analysis of depreciation recovery periods and methods under section 168. As discussed in this introduction and in more detail in the report, an analysis of the current U.S. depreciation system invo...

  17. A Document Analysis of Teacher Evaluation Systems Specific to Physical Education

    ERIC Educational Resources Information Center

    Norris, Jason M.; van der Mars, Hans; Kulinna, Pamela; Kwon, Jayoun; Amrein-Beardsley, Audrey

    2017-01-01

    Purpose: The purpose of this document analysis study was to examine current teacher evaluation systems, understand current practices, and determine whether the instrumentation is a valid measure of teaching quality as reflected in teacher behavior and effectiveness specific to physical education (PE). Method: An interpretive document analysis…

  18. Compositional Analysis of Lignocellulosic Feedstocks. 1. Review and Description of Methods

    PubMed Central

    2010-01-01

    As interest in lignocellulosic biomass feedstocks for conversion into transportation fuels grows, the summative compositional analysis of biomass, or plant-derived material, becomes ever more important. The sulfuric acid hydrolysis of biomass has been used to measure lignin and structural carbohydrate content for more than 100 years. Researchers have applied these methods to measure the lignin and structural carbohydrate contents of woody materials, estimate the nutritional value of animal feed, analyze the dietary fiber content of human food, compare potential biofuels feedstocks, and measure the efficiency of biomass-to-biofuels processes. The purpose of this paper is to review the history and lineage of biomass compositional analysis methods based on a sulfuric acid hydrolysis. These methods have become the de facto procedure for biomass compositional analysis. The paper traces changes to the biomass compositional analysis methods through time to the biomass methods currently used at the National Renewable Energy Laboratory (NREL). The current suite of laboratory analytical procedures (LAPs) offered by NREL is described, including an overview of the procedures and methodologies and some common pitfalls. Suggestions are made for continuing improvement to the suite of analyses. PMID:20669951

  19. The Shock and Vibration Bulletin. Part 2. Invited Papers, Structural Dynamics

    DTIC Science & Technology

    1974-08-01

    VIKING LANDER DYNAMICS 41 Mr. Joseph C. Pohlen, Martin Marietta Aerospace, Denver, Colorado Structural Dynamics PERFORMANCE OF STATISTICAL ENERGY ANALYSIS 47...aerospace structures. Analytical prediction of these environments is beyond the current scope of classical modal techniques. Statistical energy analysis methods...have been developed that circumvent the difficulties of high-frequency modal analysis. These statistical energy analysis methods are evaluated

  20. SEU System Analysis: Not Just the Sum of All Parts

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; Label, Kenneth

    2014-01-01

    Single event upset (SEU) analysis of complex systems is challenging. Currently, system SEU analysis is performed by component level partitioning and then either: the most dominant SEU cross-sections (SEUs) are used in system error rate calculations; or the partition SEUs are summed to eventually obtain a system error rate. In many cases, system error rates are overestimated because these methods generally overlook system level derating factors. The problem with overestimating is that it can cause overdesign and consequently negatively affect the following: cost, schedule, functionality, and validation/verification. The scope of this presentation is to discuss the risks involved with our current scheme of SEU analysis for complex systems; and to provide alternative methods for improvement.

  1. The holy grail of soil metal contamination site assessment: reducing risk and increasing confidence of decision making using infield portable X-ray Fluorescence (pXRF) technology

    NASA Astrophysics Data System (ADS)

    Rouillon, M.; Taylor, M. P.; Dong, C.

    2016-12-01

    This research assesses the advantages of integrating field portable X-ray Fluorescence (pXRF) technology to reduce the risk and increase the confidence of decision making for metal-contaminated site assessments. Metal-contaminated sites are often highly heterogeneous and require a high sampling density to accurately characterize the distribution and concentration of contaminants. The current regulatory assessment approaches rely on a small number of samples processed using standard wet-chemistry methods. In New South Wales (NSW), Australia, the current notification trigger for characterizing metal-contaminated sites requires the upper 95% confidence interval of the site mean to equal or exceed the relevant guidelines. The method's low 'minimum' sampling requirements can misclassify sites due to the heterogeneous nature of soil contamination, leading to inaccurate decision making. To address this issue, we propose integrating infield pXRF analysis with the established sampling method to overcome sampling limitations. This approach increases the minimum sampling resolution and reduces the 95% CI of the site mean. Infield pXRF analysis at contamination hotspots enhances sample resolution efficiently and without the need to return to the site. In this study, the current and proposed pXRF site assessment methods are compared at five heterogeneous metal-contaminated sites by analysing the spatial distribution of contaminants, the 95% confidence intervals of site means, and the sampling and analysis uncertainty associated with each method. Finally, an analysis of costs associated with both the current and proposed methods is presented to demonstrate the advantages of incorporating pXRF into metal-contaminated site assessments. The data show that pXRF-integrated site assessments allow for faster, cost-efficient characterisation of metal-contaminated sites with greater confidence for decision making.
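    The notification trigger described above compares an upper confidence limit of the site mean against a guideline; a minimal sketch of that statistic (hypothetical concentrations, one-sided t interval assumed):

    ```python
    import numpy as np
    from scipy import stats

    # Illustrative soil Pb concentrations (mg/kg) from a heterogeneous site.
    conc = np.array([120, 340, 95, 410, 800, 150, 260, 1200, 90, 310], float)

    # Upper 95% confidence limit of the site mean, the quantity compared
    # against the guideline (one-sided t interval assumed).
    n = conc.size
    ucl = conc.mean() + stats.t.ppf(0.95, n - 1) * conc.std(ddof=1) / np.sqrt(n)
    print("mean = %.0f mg/kg, 95%% UCL = %.0f mg/kg" % (conc.mean(), ucl))

    # Denser pXRF sampling shrinks the standard error and hence the UCL,
    # which is the statistical payoff the abstract describes.
    ```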

  2. Chapter A5. Section 6.1.F. Wastewater, Pharmaceutical, and Antibiotic Compounds

    USGS Publications Warehouse

    Lewis, Michael Edward; Zaugg, Steven D.

    2003-01-01

    The USGS differentiates between samples collected for analysis of wastewater compounds and those collected for analysis of pharmaceutical and antibiotic compounds, based on the analytical schedule for the laboratory method. Currently, only the wastewater laboratory method for field-filtered samples (SH1433) is an approved, routine (production) method. (The unfiltered wastewater method LC 8033 also is available but requires a proposal for custom analysis.) At this time, analysis of samples for pharmaceutical and antibiotic compounds is confined to research studies and is available only on a custom basis.

  3. Nested PCR and RFLP analysis based on the 16S rRNA gene

    USDA-ARS?s Scientific Manuscript database

    Current phytoplasma detection and identification method is primarily based on nested PCR followed by restriction fragment length polymorphism analysis and gel electrophoresis. This method can potentially detect and differentiate all phytoplasmas including those previously not described. The present ...

  4. Living systematic reviews: 3. Statistical methods for updating meta-analyses.

    PubMed

    Simmonds, Mark; Salanti, Georgia; McKenzie, Joanne; Elliott, Julian

    2017-11-01

    A living systematic review (LSR) should keep the review current as new research evidence emerges. Any meta-analyses included in the review will also need updating as new material is identified. If the aim of the review is solely to present the best current evidence, standard meta-analysis may be sufficient, provided reviewers are aware that results may change at later updates. If the review is used in a decision-making context, more caution may be needed. When using standard meta-analysis methods, the chance of incorrectly concluding that any updated meta-analysis is statistically significant when there is no effect (the type I error) increases rapidly as more updates are performed. Inaccurate estimation of any heterogeneity across studies may also lead to inappropriate conclusions. This paper considers four methods to avoid some of these statistical problems when updating meta-analyses: two methods (the law of the iterated logarithm and the Shuster method) control primarily for inflation of type I error, and two others (trial sequential analysis and sequential meta-analysis) control for both type I and type II errors (failing to detect a genuine effect) and take account of heterogeneity. This paper compares the methods and considers how they could be applied to LSRs. Copyright © 2017 Elsevier Inc. All rights reserved.
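    The type I error inflation described above can be demonstrated with a short simulation (a sketch, not from the paper): a fixed-effect meta-analysis is re-tested at the nominal 5% level after each of ten "new studies", even though the true effect is zero.

        import numpy as np

        rng = np.random.default_rng(0)
        n_reviews, n_updates, n_per_study = 2000, 10, 50
        false_positives = 0

        for _ in range(n_reviews):
            effects, ses = [], []
            ever_significant = False
            for _ in range(n_updates):
                study = rng.normal(0.0, 1.0, n_per_study)        # true effect is zero
                effects.append(study.mean())
                ses.append(study.std(ddof=1) / np.sqrt(n_per_study))
                w = 1.0 / np.square(ses)                         # inverse-variance weights
                pooled = np.dot(w, effects) / w.sum()            # fixed-effect estimate
                z = pooled * np.sqrt(w.sum())
                if abs(z) > 1.96:                                # nominal two-sided 5% test
                    ever_significant = True
            false_positives += ever_significant

        print(f"type I error after {n_updates} updates: {false_positives / n_reviews:.1%}")

    The simulated error rate comes out well above the nominal 5%, which is precisely what the sequential methods listed above are designed to control.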

  5. Nonclinical dose formulation analysis method validation and sample analysis.

    PubMed

    Whitmire, Monica Lee; Bryan, Peter; Henry, Teresa R; Holbrook, John; Lehmann, Paul; Mollitor, Thomas; Ohorodnik, Susan; Reed, David; Wietgrefe, Holly D

    2010-12-01

    Nonclinical dose formulation analysis methods are used to confirm test article concentration and homogeneity in formulations and determine formulation stability in support of regulated nonclinical studies. There is currently no regulatory guidance for nonclinical dose formulation analysis method validation or sample analysis. Regulatory guidance for the validation of analytical procedures has been developed for drug product/formulation testing; however, verification of the formulation concentrations falls under the framework of GLP regulations (not GMP). The only current related regulatory guidance is the bioanalytical guidance for method validation. The fundamental parameters that overlap between bioanalysis and formulation analysis validations include: recovery, accuracy, precision, specificity, selectivity, carryover, sensitivity, and stability. Divergence between bioanalytical and drug product validations typically centers on the acceptance criteria used. Because dose formulation samples are not true "unknowns", quality control samples that cover the entire range of the standard curve, serving to indicate confidence in the data generated from the "unknown" study samples, may not always be necessary. Also, the standard bioanalytical acceptance criteria may not be directly applicable, especially when the determined concentration does not match the target concentration. This paper attempts to reconcile the different practices being performed in the community and to provide recommendations of best practices and proposed acceptance criteria for nonclinical dose formulation method validation and sample analysis.

  6. Electromotive force analysis of current transformer during lightning surge inflow using Fourier series expansion

    NASA Astrophysics Data System (ADS)

    Kim, Youngsun

    2017-05-01

    The most common structure for a current transformer (CT) consists of secondary windings on a ferromagnetic core through which the primary current being measured passes. A CT used as a surge protection device (SPD) may experience large current inrushes, such as surges. However, when a large current flows into the primary winding, measuring its magnitude is difficult because the ferromagnetic core becomes magnetically saturated. Several approaches to reduce the saturation effect are described in the literature. The Rogowski coil, an electrical device that measures alternating current (AC) or high-frequency current, is representative of the devices used to measure large currents; however, such devices are expensive in practice. In addition, the volume of a CT must be increased to measure sufficiently large currents, so other methods are needed where installation space is limited. To solve this problem, it is necessary to analyze the magnetic field and electromotive force (EMF) characteristics when designing a CT. We therefore propose an analysis method for the CT under an inrush current using the time-domain finite element method (TDFEM). The input source current of a surge waveform is expanded in a Fourier series to obtain instantaneous values. An FEM model of the device is derived in a two-dimensional system and coupled with EMF circuits. The time-derivative term in the differential equation is solved at each time step by the finite difference method. It is concluded that the proposed algorithm is useful for analyzing CT characteristics, including the field distribution, and yields a reference for assessing the effects of design parameters and magnetic materials for special shapes and sizes before the CT is designed and manufactured.
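    The Fourier-series expansion step can be sketched as follows; the double-exponential surge constants and window length below are illustrative assumptions, not values from the paper.

        import numpy as np

        T = 100e-6                                   # analysis window (s), assumed
        t = np.linspace(0.0, T, 4096, endpoint=False)
        surge = np.exp(-t / 68.2e-6) - np.exp(-t / 0.405e-6)   # unit double-exponential surge

        N = 50                                       # harmonics retained
        recon = np.full_like(t, surge.mean())        # a0 term
        for k in range(1, N + 1):
            c, s = np.cos(2 * np.pi * k * t / T), np.sin(2 * np.pi * k * t / T)
            ak = 2.0 / T * np.trapz(surge * c, t)
            bk = 2.0 / T * np.trapz(surge * s, t)
            recon += ak * c + bk * s                 # instantaneous contribution of harmonic k

        print("max reconstruction error:", np.abs(recon - surge).max())

    Each retained harmonic can then be applied as a source term in the time-domain finite element solution.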

  7. A novel method to predict current voltage characteristics of positive corona discharges based on a perturbation technique. I. Local analysis

    NASA Astrophysics Data System (ADS)

    Shibata, Hisaichi; Takaki, Ryoji

    2017-11-01

    A novel method to compute current-voltage characteristics (CVCs) of direct current positive corona discharges is formulated based on a perturbation technique. We use linearized fluid equations coupled with the linearized Poisson's equation. The Townsend relation is assumed in order to predict CVCs away from the linearization point. We choose coaxial cylinders as a test problem and successfully predict the parameters that determine CVCs for arbitrary inner and outer radii. It is also confirmed that the proposed method essentially does not induce numerical instabilities.
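    For reference, the Townsend relation invoked above is the classical quadratic corona approximation (a standard form, not quoted from the paper):

        \[ I = C\,V\,(V - V_0), \]

    where \(I\) is the corona current, \(V\) the applied voltage, \(V_0\) the corona onset voltage, and \(C\) a constant fixed by the electrode geometry and ion mobility.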

  8. A method for data base management and analysis for wind tunnel data

    NASA Technical Reports Server (NTRS)

    Biser, Aileen O.

    1987-01-01

    To respond to the need for improved data base management and analysis capabilities for wind-tunnel data at the Langley 16-Foot Transonic Tunnel, research was conducted into current methods of managing wind-tunnel data and a method was developed as a solution to this need. This paper describes the development of the data base management and analysis method for wind-tunnel data. The design and implementation of the software system are discussed and examples of its use are shown.

  9. A Meta-Analytic Review of Research on Gender Differences in Sexuality, 1993-2007

    ERIC Educational Resources Information Center

    Petersen, Jennifer L.; Hyde, Janet Shibley

    2010-01-01

    In 1993 Oliver and Hyde conducted a meta-analysis on gender differences in sexuality. The current study updated that analysis with current research and methods. Evolutionary psychology, cognitive social learning theory, social structural theory, and the gender similarities hypothesis provided predictions about gender differences in sexuality. We…

  10. Temperature analysis with voltage-current time differential operation of electrochemical sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woo, Leta Yar-Li; Glass, Robert Scott; Fitzpatrick, Joseph Jay

    A method for temperature analysis of a gas stream. The method includes identifying a temperature parameter of an affected waveform signal. The method also includes calculating a change in the temperature parameter by comparing the affected waveform signal with an original waveform signal. The method also includes generating a value from the calculated change which corresponds to the temperature of the gas stream.

  11. A Comparison of Measurement Equivalence Methods Based on Confirmatory Factor Analysis and Item Response Theory.

    ERIC Educational Resources Information Center

    Flowers, Claudia P.; Raju, Nambury S.; Oshima, T. C.

    Current interest in the assessment of measurement equivalence emphasizes two methods of analysis, linear, and nonlinear procedures. This study simulated data using the graded response model to examine the performance of linear (confirmatory factor analysis or CFA) and nonlinear (item-response-theory-based differential item function or IRT-Based…

  12. A New Approach to Aircraft Robust Performance Analysis

    NASA Technical Reports Server (NTRS)

    Gregory, Irene M.; Tierno, Jorge E.

    2004-01-01

    A recently developed algorithm for nonlinear system performance analysis has been applied to an F16 aircraft to begin evaluating the suitability of the method for aerospace problems. The algorithm has a potential to be much more efficient than the current methods in performance analysis for aircraft. This paper is the initial step in evaluating this potential.

  13. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion.

    PubMed

    Zafar, Raheel; Dass, Sarat C; Malik, Aamir Saeed

    2017-01-01

    Electroencephalogram (EEG)-based decoding human brain activity is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time-series, which is also known as multivariate pattern analysis. Comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with current recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method is the most popular currently used feature extraction and prediction method. This method showed an accuracy of 65.7%. However, the proposed method predicts the novel data with improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction method.
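    Two stages named above, t-test feature selection and likelihood-ratio score fusion, can be illustrated with a short sketch; the data shapes and the Gaussian likelihood model are assumptions, not the authors' implementation.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        X_a = rng.normal(0.0, 1.0, (40, 200))    # class-A trials x extracted features
        X_b = rng.normal(0.3, 1.0, (40, 200))    # class-B trials x extracted features

        # Keep features whose class means differ significantly (two-sample t-test)
        _, p = stats.ttest_ind(X_a, X_b, axis=0)
        A, B = X_a[:, p < 0.05], X_b[:, p < 0.05]

        def fused_log_lr(x):
            """Sum per-feature Gaussian log-likelihood ratios; > 0 favours class A."""
            la = stats.norm.logpdf(x, A.mean(0), A.std(0) + 1e-9).sum()
            lb = stats.norm.logpdf(x, B.mean(0), B.std(0) + 1e-9).sum()
            return la - lb

        print("fused score for one class-A trial:", fused_log_lr(X_a[0, p < 0.05]))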

  14. Evaluation of Analysis Techniques for Fluted-Core Sandwich Cylinders

    NASA Technical Reports Server (NTRS)

    Lovejoy, Andrew E.; Schultz, Marc R.

    2012-01-01

    Buckling-critical launch-vehicle structures require structural concepts that have high bending stiffness and low mass. Fluted-core, also known as truss-core, sandwich construction is one such concept. In an effort to identify an analysis method appropriate for the preliminary design of fluted-core cylinders, the current paper presents and compares results from several analysis techniques applied to a specific composite fluted-core test article. The analysis techniques are evaluated in terms of their ease of use and for their appropriateness at certain stages throughout a design analysis cycle (DAC). Current analysis techniques that provide accurate determination of the global buckling load are not readily applicable early in the DAC, such as during preliminary design, because they are too costly to run. An analytical approach that neglects transverse-shear deformation is easily applied during preliminary design, but the lack of transverse-shear deformation results in global buckling load predictions that are significantly higher than those from more detailed analysis methods. The current state of the art is either too complex to be applied for preliminary design, or is incapable of the accuracy required to determine global buckling loads for fluted-core cylinders. Therefore, it is necessary to develop an analytical method for calculating global buckling loads of fluted-core cylinders that includes transverse-shear deformations, and that can be easily incorporated in preliminary design.
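    A textbook relation (not taken from the paper) shows why neglecting transverse shear inflates the prediction: for a shear-flexible member the buckling load \(P_{cr}\) satisfies

        \[ \frac{1}{P_{cr}} = \frac{1}{P_E} + \frac{1}{kGA}, \]

    where \(P_E\) is the Euler load from a shear-rigid analysis and \(kGA\) is the transverse-shear stiffness. Because \(P_{cr} < P_E\) whenever \(kGA\) is finite, a shear-rigid preliminary analysis necessarily overpredicts the global buckling load, consistent with the comparison reported above.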

  15. Magnetic force microscopy method and apparatus to detect and image currents in integrated circuits

    DOEpatents

    Campbell, Ann. N.; Anderson, Richard E.; Cole, Jr., Edward I.

    1995-01-01

    A magnetic force microscopy method and improved magnetic tip for detecting and quantifying internal magnetic fields resulting from currents in integrated circuits. Detection of the current is used for failure analysis, design verification, and model validation. The interaction of the current in the integrated chip with a magnetic field can be detected using a cantilevered magnetic tip. Enhanced sensitivity for both ac and dc current and voltage detection is achieved by ac coupling or a heterodyne technique. The techniques can be used to extract information from analog circuits.

  16. Magnetic force microscopy method and apparatus to detect and image currents in integrated circuits

    DOEpatents

    Campbell, A.N.; Anderson, R.E.; Cole, E.I. Jr.

    1995-11-07

    A magnetic force microscopy method and improved magnetic tip for detecting and quantifying internal magnetic fields resulting from currents in integrated circuits are disclosed. Detection of the current is used for failure analysis, design verification, and model validation. The interaction of the current in the integrated chip with a magnetic field can be detected using a cantilevered magnetic tip. Enhanced sensitivity for both ac and dc current and voltage detection is achieved by ac coupling or a heterodyne technique. The techniques can be used to extract information from analog circuits. 17 figs.

  17. Alternative Internal Standard Calibration of an Indirect Enzymatic Analytical Method for 2-MCPD Fatty Acid Esters.

    PubMed

    Koyama, Kazuo; Miyazaki, Kinuko; Abe, Kousuke; Egawa, Yoshitsugu; Fukazawa, Toru; Kitta, Tadashi; Miyashita, Takashi; Nezu, Toru; Nohara, Hidenori; Sano, Takashi; Takahashi, Yukinari; Taniguchi, Hideji; Yada, Hiroshi; Yamazaki, Kumiko; Watanabe, Yomi

    2017-06-01

    An indirect enzymatic analysis method for the quantification of fatty acid esters of 2-/3-monochloro-1,2-propanediol (2/3-MCPD) and glycidol was developed, using the deuterated internal standard of each free-form component. Because 2-MCPD-d5 is difficult to obtain, a statistical method was used to substitute 3-MCPD-d5, the internal standard used for the calculation of 3-MCPD, in the calibration and quantification of 2-MCPD. Using data from a previous collaborative study, the current method for the determination of 2-MCPD content using 2-MCPD-d5 was compared to three alternative new methods using 3-MCPD-d5. The regression analysis showed that the alternative methods were unbiased compared to the current method. The relative standard deviation (RSDR) among the testing laboratories was ≤ 15% and the Horwitz ratio was ≤ 1.0, a satisfactory value.

  18. CHI during an ohmic discharge in HIT-II

    NASA Astrophysics Data System (ADS)

    Mueller, Dennis; Nelson, Brian A.; Redd, Aaron J.; Hamp, William T.

    2004-11-01

    Coaxial Helicity Injection (CHI) has been used on the National Spherical Torus Experiment (NSTX), the Helicity Injected Torus (HIT) and HIT-II to initiate plasma and to drive up to 400 kA of toroidal current. The primary goal of the CHI systems is to provide a start-up plasma with substantial toroidal current that can be heated and sustained with other methods. We have investigated the use of CHI systems to add current to an established, inductively driven plasma. This may be an attractive method to add edge current that may modify the stability characteristics of the discharge or modify the particle and energy transport in a spherical torus. For example, divertor biasing experiments have been successful in modifying particle and energy transport in the scrape-off layer of tokamaks. Use of IGBT power supplies to modulate the injector current makes analysis of current penetration feasible by comparisons of before and after CHI using EFIT analysis of the data.

  19. Next generation system modeling of NTR systems

    NASA Technical Reports Server (NTRS)

    Buksa, John J.; Rider, William J.

    1993-01-01

    The topics are presented in viewgraph form and include the following: nuclear thermal rocket (NTR) modeling challenges; current approaches; shortcomings of current analysis method; future needs; and present steps to these goals.

  20. Buckling analysis and test correlation of hat stiffened panels for hypersonic vehicles

    NASA Technical Reports Server (NTRS)

    Percy, Wendy C.; Fields, Roger A.

    1990-01-01

    The paper discusses the design, analysis, and test of hat stiffened panels subjected to a variety of thermal and mechanical load conditions. The panels were designed using data from structural optimization computer codes and finite element analysis. Test methods included the grid shadow moire method and a single gage force stiffness method. The agreement between the test data and analysis provides confidence in the methods that are currently being used to design structures for hypersonic vehicles. The agreement also indicates that post buckled strength may potentially be used to reduce the vehicle weight.

  1. Mycotoxin analysis: an update.

    PubMed

    Krska, Rudolf; Schubert-Ullrich, Patricia; Molinelli, Alexandra; Sulyok, Michael; MacDonald, Susan; Crews, Colin

    2008-02-01

    Mycotoxin contamination of cereals and related products used for feed can cause intoxication, especially in farm animals. Therefore, efficient analytical tools for the qualitative and quantitative analysis of toxic fungal metabolites in feed are required. Current methods usually include an extraction step, a clean-up step to reduce or eliminate unwanted co-extracted matrix components and a separation step with suitably specific detection ability. Quantitative methods of analysis for most mycotoxins use immunoaffinity clean-up with high-performance liquid chromatography (HPLC) separation in combination with UV and/or fluorescence detection. Screening of samples contaminated with mycotoxins is frequently performed by thin layer chromatography (TLC), which yields qualitative or semi-quantitative results. Nowadays, enzyme-linked immunosorbent assays (ELISA) are often used for rapid screening. A number of promising methods, such as fluorescence polarization immunoassays, dipsticks, and even newer methods such as biosensors and non-invasive techniques based on infrared spectroscopy, have shown great potential for mycotoxin analysis. Currently, there is a strong trend towards the use of multi-mycotoxin methods for the simultaneous analysis of several of the important Fusarium mycotoxins, which is best achieved by LC-MS/MS (liquid chromatography with tandem mass spectrometry). This review focuses on recent developments in the determination of mycotoxins with a special emphasis on LC-MS/MS and emerging rapid methods.

  2. Three-dimensional Numerical Analysis on Blade Response of Vertical Axis Tidal Current Turbine Under Operational Condition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ye; Karri, Naveen K.; Wang, Qi

    Tidal power as a large-scale renewable source of energy has been receiving significant attention recently because of its advantages over wind and other renewable energy sources. The technology used to harvest energy from tidal currents is called a tidal current turbine. Though some of the principles of wind turbine design are applicable to tidal current turbines, the design of the latter needs additional considerations, such as cavitation damage and corrosion, for long-term reliability. Depending upon the orientation of the axis, tidal current turbines can be classified as vertical axis or horizontal axis turbines. Existing studies on vertical axis tidal current turbines focus more on the hydrodynamic aspects of the turbine than on the structural aspects. This paper summarizes our recent efforts to study the integrated hydrodynamic and structural aspects of vertical axis tidal current turbines. After reviewing existing methods of modeling tidal current turbines, we developed a hybrid approach that combines the discrete vortex method with the finite element method to simulate the integrated hydrodynamic and structural response of a vertical axis turbine. This hybrid method was initially employed to analyze a typical three-blade vertical axis turbine. The power coefficient was used to evaluate hydrodynamic performance, and critical deflection was considered to evaluate structural reliability. A sensitivity analysis was also conducted with various turbine height-to-radius ratios. The results indicate that both the power output and failure probability increase with the turbine height, suggesting a necessity for optimal design. An attempt to optimize a three-blade vertical axis turbine design with the hybrid method yielded a ratio of turbine height to radius (H/R) of about 3.0 for reliable maximum power output.
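    For reference, the power coefficient used above as the hydrodynamic performance metric is conventionally defined (standard definition, not specific to this paper) as

        \[ C_P = \frac{P}{\tfrac{1}{2}\,\rho A V^3}, \]

    where \(P\) is the extracted power, \(\rho\) the water density, \(A\) the turbine swept area, and \(V\) the free-stream current speed.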

  3. Improvement of calculation method for electrical parameters of short network of ore-thermal furnaces

    NASA Astrophysics Data System (ADS)

    Aliferov, A. I.; Bikeev, R. A.; Goreva, L. P.

    2017-10-01

    The paper describes a new method for calculating the active and inductive resistances of split interleaved current-lead packages in ore-thermal electric furnaces. The method is developed on the basis of regression analysis of the dependencies of the packages' active and inductive resistances on their geometrical parameters, mutual disposition, and interleaving pattern. These multi-parametric calculations were performed with ANSYS software. The proposed method allows the electrical-parameter minimization and balancing problems for split current leads in ore-thermal furnaces to be solved.
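    The regression step can be sketched with ordinary least squares; the predictor names and synthetic data below are illustrative assumptions, not the paper's ANSYS results.

        import numpy as np

        rng = np.random.default_rng(2)
        length = rng.uniform(5.0, 20.0, 50)     # current-lead length, m (assumed predictor)
        spacing = rng.uniform(0.1, 0.5, 50)     # package spacing, m (assumed predictor)
        R = 2e-4 * length + 1e-3 * spacing + rng.normal(0.0, 1e-5, 50)  # synthetic resistance

        X = np.column_stack([np.ones_like(length), length, spacing])
        coef, *_ = np.linalg.lstsq(X, R, rcond=None)    # fit R ~ geometry
        print("intercept, dR/dlength, dR/dspacing:", coef)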

  4. Summary of research in applied mathematics, numerical analysis, and computer sciences

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The major categories of current ICASE research programs addressed include: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effective numerical methods; computational problems in engineering and physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and computer systems and software, especially vector and parallel computers.

  5. Annual Book of ASTM Standards, Part 23: Water; Atmospheric Analysis.

    ERIC Educational Resources Information Center

    American Society for Testing and Materials, Philadelphia, PA.

    Standards for water and atmospheric analysis are compiled in this segment, Part 23, of the American Society for Testing and Materials (ASTM) annual book of standards. It contains all current formally approved ASTM standard and tentative test methods, definitions, recommended practices, proposed methods, classifications, and specifications. One…

  6. A phantom-based JAFROC observer study of two CT reconstruction methods: the search for optimisation of lesion detection and effective dose

    NASA Astrophysics Data System (ADS)

    Thompson, John D.; Chakraborty, Dev P.; Szczepura, Katy; Vamvakas, Ioannis; Tootell, Andrew; Manning, David J.; Hogg, Peter

    2015-03-01

    Purpose: To investigate the dose saving potential of iterative reconstruction (IR) in a computed tomography (CT) examination of the thorax. Materials and Methods: An anthropomorphic chest phantom containing various configurations of simulated lesions (5, 8, 10 and 12mm; +100, -630 and -800 Hounsfield Units, HU) was imaged on a modern CT system over a tube current range (20, 40, 60 and 80mA). Images were reconstructed with (IR) and filtered back projection (FBP). An ATOM 701D (CIRS, Norfolk, VA) dosimetry phantom was used to measure organ dose. Effective dose was calculated. Eleven observers (15.11+/-8.75 years of experience) completed a free response study, localizing lesions in 544 single CT image slices. A modified jackknife alternative free-response receiver operating characteristic (JAFROC) analysis was completed to look for a significant effect of two factors: reconstruction method and tube current. Alpha was set at 0.05 to control the Type I error in this study. Results: For modified JAFROC analysis of reconstruction method there was no statistically significant difference in lesion detection performance between FBP and IR when figures-of-merit were averaged over tube current (F(1,10)=0.08, p = 0.789). For tube current analysis, significant differences were revealed between multiple pairs of tube current settings (F(3,10) = 16.96, p<0.001) when averaged over image reconstruction method. Conclusion: The free-response study suggests that lesion detection can be optimized at 40mA in this phantom model, a measured effective dose of 0.97mSv. In high-contrast regions the diagnostic value of IR, compared to FBP, is less clear.
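    For context, the effective dose reported above follows from the measured organ doses via the standard ICRP tissue-weighting sum:

        \[ E = \sum_T w_T H_T, \]

    where \(H_T\) is the equivalent dose to tissue \(T\) (here obtained from the ATOM dosimetry phantom) and \(w_T\) is the corresponding tissue weighting factor.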

  7. PSF mapping-based correction of eddy-current-induced distortions in diffusion-weighted echo-planar imaging.

    PubMed

    In, Myung-Ho; Posnansky, Oleg; Speck, Oliver

    2016-05-01

    To accurately correct diffusion-encoding direction-dependent eddy-current-induced geometric distortions in diffusion-weighted echo-planar imaging (DW-EPI) and to minimize the calibration time at 7 Tesla (T). A point spread function (PSF) mapping based eddy-current calibration method is newly presented to determine eddy-current-induced geometric distortions even including nonlinear eddy-current effects within the readout acquisition window. To evaluate the temporal stability of eddy-current maps, calibration was performed four times within 3 months. Furthermore, spatial variations of measured eddy-current maps versus their linear superposition were investigated to enable correction in DW-EPIs with arbitrary diffusion directions without direct calibration. For comparison, an image-based eddy-current correction method was additionally applied. Finally, this method was combined with a PSF-based susceptibility-induced distortion correction approach proposed previously to correct both susceptibility and eddy-current-induced distortions in DW-EPIs. Very fast eddy-current calibration in a three-dimensional volume is possible with the proposed method. The measured eddy-current maps are very stable over time and very similar maps can be obtained by linear superposition of principal-axes eddy-current maps. High resolution in vivo brain results demonstrate that the proposed method allows more efficient eddy-current correction than the image-based method. The combination of both PSF-based approaches allows distortion-free images, which permit reliable analysis in diffusion tensor imaging applications at 7T. © 2015 Wiley Periodicals, Inc.
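    The linear-superposition result can be sketched as follows: a distortion map for an arbitrary diffusion-gradient direction is approximated as a weighted sum of the three principal-axes eddy-current maps. The arrays are placeholders, not measured calibration data.

        import numpy as np

        rng = np.random.default_rng(3)
        map_x, map_y, map_z = rng.normal(size=(3, 64, 64))  # placeholder principal-axes maps

        g = np.array([0.5, 0.5, np.sqrt(0.5)])              # unit diffusion direction
        # Assumed: eddy-current effects scale linearly with each gradient component
        combined_map = g[0] * map_x + g[1] * map_y + g[2] * map_z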

  8. Current Applications of Chromatographic Methods in the Study of Human Body Fluids for Diagnosing Disorders.

    PubMed

    Jóźwik, Jagoda; Kałużna-Czaplińska, Joanna

    2016-01-01

    Currently, analysis of various human body fluids is one of the most essential and promising approaches to enable the discovery of biomarkers or pathophysiological mechanisms for disorders and diseases. Analysis of these fluids is challenging due to their complex composition and unique characteristics. Development of new analytical methods in this field has made it possible to analyze body fluids with higher selectivity, sensitivity, and precision. The composition and concentration of analytes in body fluids are most often determined by chromatography-based techniques. There is no doubt that proper use of knowledge that comes from a better understanding of the role of body fluids requires the cooperation of scientists of diverse specializations, including analytical chemists, biologists, and physicians. This article summarizes current knowledge about the application of different chromatographic methods in analyses of a wide range of compounds in human body fluids in order to diagnose certain diseases and disorders.

  9. A new technique for Auger analysis of surface species subject to electron-induced desorption.

    NASA Technical Reports Server (NTRS)

    Pepper, S. V.

    1973-01-01

    A method is presented to observe surface species subject to electron-induced desorption by Auger electron spectroscopy. The surface to be examined is moved under the electron beam at constant velocity, establishing a time-independent condition and eliminating the time response of the electron spectrometer as a limiting factor. The dependence of the Auger signal on the sample velocity, incident electron current, beam diameter, and desorption cross section is analyzed. It is shown that it is advantageous to analyze the moving sample with a high beam current, in contrast to the usual practice of using a low beam current to minimize desorption from a stationary sample. The method is illustrated by the analysis of a friction transfer film of PTFE, in which the fluorine is removed by electron-induced desorption. The method is relevant to surface studies in the field of lubrication and catalysis.
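    The competing effects analyzed above can be summarized with the standard first-order desorption relation (a textbook form, not necessarily the paper's exact expression): for a sample moving at velocity \(v\) under a beam of diameter \(d\) and current density \(J\),

        \[ \frac{N}{N_0} = \exp\!\left(-\frac{\sigma J}{e}\cdot\frac{d}{v}\right), \]

    where \(\sigma\) is the desorption cross section, \(e\) the electronic charge, and \(d/v\) the beam dwell time; increasing the velocity offsets the desorption caused by a higher beam current.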

  10. Integrative Analysis of “-Omics” Data Using Penalty Functions

    PubMed Central

    Zhao, Qing; Shi, Xingjie; Huang, Jian; Liu, Jin; Li, Yang; Ma, Shuangge

    2014-01-01

    In the analysis of omics data, integrative analysis provides an effective way of pooling information across multiple datasets or multiple correlated responses, and can be more effective than single-dataset (response) analysis. Multiple families of integrative analysis methods have been proposed in the literature. The current review focuses on the penalization methods. Special attention is paid to sparse meta-analysis methods that pool summary statistics across datasets, and integrative analysis methods that pool raw data across datasets. We discuss their formulation and rationale. Beyond “standard” penalized selection, we also review contrasted penalization and Laplacian penalization which accommodate finer data structures. The computational aspects, including computational algorithms and tuning parameter selection, are examined. This review concludes with possible limitations and extensions. PMID:25691921
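    A toy sketch of the raw-data pooling flavour discussed above: two datasets sharing the same predictors are stacked and a single sparse (lasso) model is fitted. This is only an illustration, not one of the reviewed methods.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(4)
        X1, X2 = rng.normal(size=(100, 500)), rng.normal(size=(80, 500))
        beta = np.zeros(500)
        beta[:5] = 1.0                                   # five truly active features
        y1 = X1 @ beta + rng.normal(size=100)
        y2 = X2 @ beta + rng.normal(size=80)

        # Pool raw data across datasets, then select features jointly
        model = Lasso(alpha=0.1).fit(np.vstack([X1, X2]), np.concatenate([y1, y2]))
        print("selected features:", np.flatnonzero(model.coef_))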

  11. Finite element modeling of truss structures with frequency-dependent material damping

    NASA Technical Reports Server (NTRS)

    Lesieutre, George A.

    1991-01-01

    A physically motivated modelling technique for structural dynamic analysis that accommodates frequency dependent material damping was developed. Key features of the technique are the introduction of augmenting thermodynamic fields (AFT) to interact with the usual mechanical displacement field, and the treatment of the resulting coupled governing equations using finite element analysis methods. The AFT method is fully compatible with current structural finite element analysis techniques. The method is demonstrated in the dynamic analysis of a 10-bay planar truss structure, a structure representative of those contemplated for use in future space systems.

  12. Evaluation of use of MPAD trajectory tape and number of orbit points for orbiter mission thermal predictions

    NASA Technical Reports Server (NTRS)

    Vogt, R. A.

    1979-01-01

    The application of using the mission planning and analysis division (MPAD) common-format trajectory data tape to predict temperatures for preflight and postflight mission analysis is presented and evaluated. All of the analyses utilized the latest Space Transportation System 1 flight (STS-1) MPAD trajectory tape and the simplified '136 node' midsection/payload bay thermal math model. For the first 6.7 hours of the STS-1 flight profile, transient temperatures are presented for selected nodal locations with the current standard method and the trajectory tape method. Whether the differences are considered significant or not depends upon the viewpoint. Other transient temperature predictions are also presented. These results were obtained to investigate an initial concern that the predicted temperature differences between the two methods would be caused not only by the inaccuracies of the current method's assumed nominal attitude profile but also by a lack of a sufficient number of orbit points in the current method. Comparison between 6, 12, and 24 orbit point parameters showed a surprising insensitivity to the number of orbit points.

  13. Mixed-methods research in nursing - a critical review.

    PubMed

    Bressan, Valentina; Bagnasco, Annamaria; Aleo, Giuseppe; Timmins, Fiona; Barisone, Michela; Bianchi, Monica; Pellegrini, Ramona; Sasso, Loredana

    2017-10-01

    To review the use of mixed-methods research in nursing, with a particular focus on the extent to which current practice informs nurse researchers. It also aimed to highlight gaps in current knowledge, understanding and reporting of this type of research. Mixed-methods research is becoming increasingly popular among nurses and healthcare professionals. Emergent findings from this type of research are very useful for nurses in practice. The combination of quantitative and qualitative methods provides a scientific base for practice as well as richness from the qualitative enquiry. At the same time, however, mixed-methods research is underdeveloped. This study identified mixed-methods research papers and critically evaluated their usefulness for research practice. To support the analysis, we performed a two-stage search using CINAHL to find papers with titles that included the key term 'mixed method'. An analysis of studies that used mixed-methods research revealed some inconsistencies in application and reporting. Attempts to use two distinct research methods in these studies often meant that one or both aspects had limitations. Overall, methods were applied in a less rigorous way. This has implications for providing somewhat limited direction for novice researchers. There is also the potential for application of evidence in healthcare practice that has limited validity. This study highlights current gaps in knowledge, understanding and reporting of mixed-methods research. While these methods are useful for gaining insight into clinical problems, nurses lack guidance with this type of research. This study revealed that the guidance provided by current mixed-methods research is inconsistent and incomplete, which compounds the lack of available direction. There is an urgent need to develop robust guidelines for using mixed-methods research so that findings may be critically implemented in practice. © 2016 John Wiley & Sons Ltd.

  14. Microfluidic approaches to malaria detection

    PubMed Central

    Gascoyne, Peter; Satayavivad, Jutamaad; Ruchirawat, Mathuros

    2009-01-01

    Microfluidic systems are under development to address a variety of medical problems. Key advantages of micrototal analysis systems based on microfluidic technology are the promise of small size and the integration of sample handling and measurement functions within a single, automated device having low mass-production costs. Here, we review the spectrum of methods currently used to detect malaria, consider their advantages and disadvantages, and discuss their adaptability towards integration into small, automated micro total analysis systems. Molecular amplification methods emerge as leading candidates for chip-based systems because they offer extremely high sensitivity, the ability to recognize malaria species and strain, and they will be adaptable to the detection of new genotypic signatures that will emerge from current genomic-based research of the disease. Current approaches to the development of chip-based molecular amplification are considered with special emphasis on flow-through PCR, and we present for the first time the method of malaria specimen preparation by dielectrophoretic field-flow-fractionation. Although many challenges must be addressed to realize a micrototal analysis system for malaria diagnosis, it is concluded that the potential benefits of the approach are well worth pursuing. PMID:14744562

  15. Ion diffusion may introduce spurious current sources in current-source density (CSD) analysis.

    PubMed

    Halnes, Geir; Mäki-Marttunen, Tuomo; Pettersen, Klas H; Andreassen, Ole A; Einevoll, Gaute T

    2017-07-01

    Current-source density (CSD) analysis is a well-established method for analyzing recorded local field potentials (LFPs), that is, the low-frequency part of extracellular potentials. Standard CSD theory is based on the assumption that all extracellular currents are purely ohmic, and thus neglects the possible impact from ionic diffusion on recorded potentials. However, it has previously been shown that in physiological conditions with large ion-concentration gradients, diffusive currents can evoke slow shifts in extracellular potentials. Using computer simulations, we here show that diffusion-evoked potential shifts can introduce errors in standard CSD analysis, and can lead to prediction of spurious current sources. Further, we here show that the diffusion-evoked prediction errors can be removed by using an improved CSD estimator which accounts for concentration-dependent effects. NEW & NOTEWORTHY Standard CSD analysis does not account for ionic diffusion. Using biophysically realistic computer simulations, we show that unaccounted-for diffusive currents can lead to the prediction of spurious current sources. This finding may be of strong interest for in vivo electrophysiologists doing extracellular recordings in general, and CSD analysis in particular. Copyright © 2017 the American Physiological Society.
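    For comparison, the standard (purely ohmic) CSD estimator that the paper improves upon is the second spatial difference of the potential along the electrode axis:

        import numpy as np

        sigma = 0.3                                  # extracellular conductivity, S/m (typical)
        h = 100e-6                                   # electrode spacing, m
        lfp = np.sin(np.linspace(0.0, np.pi, 16))    # placeholder LFP depth profile (V)

        # CSD = -sigma * d^2(phi)/dz^2, discretized with a central difference
        csd = -sigma * (lfp[2:] - 2.0 * lfp[1:-1] + lfp[:-2]) / h**2   # A/m^3
        print(csd[:3])

    It is exactly this estimator's ohmic assumption that lets diffusive currents masquerade as spurious sources.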

  16. Decoding of Ankle Flexion and Extension from Cortical Current Sources Estimated from Non-invasive Brain Activity Recording Methods.

    PubMed

    Mejia Tobar, Alejandra; Hyoudou, Rikiya; Kita, Kahori; Nakamura, Tatsuhiro; Kambara, Hiroyuki; Ogata, Yousuke; Hanakawa, Takashi; Koike, Yasuharu; Yoshimura, Natsue

    2017-01-01

    The classification of ankle movements from non-invasive brain recordings can be applied to a brain-computer interface (BCI) to control exoskeletons, prosthesis, and functional electrical stimulators for the benefit of patients with walking impairments. In this research, ankle flexion and extension tasks at two force levels in both legs, were classified from cortical current sources estimated by a hierarchical variational Bayesian method, using electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) recordings. The hierarchical prior for the current source estimation from EEG was obtained from activated brain areas and their intensities from an fMRI group (second-level) analysis. The fMRI group analysis was performed on regions of interest defined over the primary motor cortex, the supplementary motor area, and the somatosensory area, which are well-known to contribute to movement control. A sparse logistic regression method was applied for a nine-class classification (eight active tasks and a resting control task) obtaining a mean accuracy of 65.64% for time series of current sources, estimated from the EEG and the fMRI signals using a variational Bayesian method, and a mean accuracy of 22.19% for the classification of the pre-processed of EEG sensor signals, with a chance level of 11.11%. The higher classification accuracy of current sources, when compared to EEG classification accuracy, was attributed to the high number of sources and the different signal patterns obtained in the same vertex for different motor tasks. Since the inverse filter estimation for current sources can be done offline with the present method, the present method is applicable to real-time BCIs. Finally, due to the highly enhanced spatial distribution of current sources over the brain cortex, this method has the potential to identify activation patterns to design BCIs for the control of an affected limb in patients with stroke, or BCIs from motor imagery in patients with spinal cord injury.
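    The classification stage named above can be sketched with an L1-penalized (sparse) multinomial logistic regression; the data shapes are placeholders, not the authors' pipeline.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(5)
        X = rng.normal(size=(180, 1000))   # trials x current-source features
        y = rng.integers(0, 9, size=180)   # nine classes: eight tasks plus rest

        # L1 penalty drives most weights to zero, selecting sparse source features
        clf = LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=2000)
        clf.fit(X, y)
        print("nonzero weights:", np.count_nonzero(clf.coef_))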

  17. Concept analysis and the building blocks of theory: misconceptions regarding theory development.

    PubMed

    Bergdahl, Elisabeth; Berterö, Carina M

    2016-10-01

    The purpose of this article is to discuss the attempts to justify concept analysis as a way to construct theory - a notion often advocated in nursing. The notion that concepts are the building blocks or threads from which theory is constructed is often repeated. It can be found in many articles and well-known textbooks. However, this notion is seldom explained or defended. The notion of concepts as building blocks has also been questioned by several authors, although most of them seem to agree to some degree that concepts are essential components from which theory is built. Discussion paper. Literature was reviewed to synthesize and debate current knowledge. Our point is that theory is not built by concept analysis or clarification, and we show that this notion rests on some serious misunderstandings. We argue that concept analysis is not part of sound scientific method and should be abandoned. The current methods of concept analysis in nursing have no foundation in the philosophy of science or the philosophy of language. The type of concept analysis performed in nursing is not a way to 'construct' theory. Rather, theories are formed by a creative endeavour to propose a solution to a scientific and/or practical problem. The bottom line is that the current style and form of concept analysis in nursing should be abandoned in favour of methods in line with the modern theory of science. © 2016 John Wiley & Sons Ltd.

  18. The Application of Deterministic Spectral Domain Method to the Analysis of Planar Circuit Discontinuities on Open Substrates

    DTIC Science & Technology

    1990-08-01

    PROFESSOR: Tatsuo Itoh. A deterministic formulation of the method of moments carried out in the spectral domain is extended to include the effects of two-dimensional, two-component current flow in planar transmission line discontinuities on open substrates. The method includes the effects of space...

  19. Review of life-cycle approaches coupled with data envelopment analysis: launching the CFP + DEA method for energy policy making.

    PubMed

    Vázquez-Rowe, Ian; Iribarren, Diego

    2015-01-01

    Life-cycle (LC) approaches play a significant role in energy policy making to determine the environmental impacts associated with the choice of energy source. Data envelopment analysis (DEA) can be combined with LC approaches to provide quantitative benchmarks that orientate the performance of energy systems towards environmental sustainability, with different implications depending on the selected LC + DEA method. The present paper examines currently available LC + DEA methods and develops a novel method combining carbon footprinting (CFP) and DEA. Thus, the CFP + DEA method is proposed, a five-step structure including data collection for multiple homogeneous entities, calculation of target operating points, evaluation of current and target carbon footprints, and result interpretation. As the current context for energy policy implies an anthropocentric perspective with focus on the global warming impact of energy systems, the CFP + DEA method is foreseen to be the most consistent LC + DEA approach to provide benchmarks for energy policy making. The fact that this method relies on the definition of operating points with optimised resource intensity helps to moderate the concerns about the omission of other environmental impacts. Moreover, the CFP + DEA method benefits from CFP specifications in terms of flexibility, understanding, and reporting.

  20. Review of Life-Cycle Approaches Coupled with Data Envelopment Analysis: Launching the CFP + DEA Method for Energy Policy Making

    PubMed Central

    Vázquez-Rowe, Ian

    2015-01-01

    Life-cycle (LC) approaches play a significant role in energy policy making to determine the environmental impacts associated with the choice of energy source. Data envelopment analysis (DEA) can be combined with LC approaches to provide quantitative benchmarks that orientate the performance of energy systems towards environmental sustainability, with different implications depending on the selected LC + DEA method. The present paper examines currently available LC + DEA methods and develops a novel method combining carbon footprinting (CFP) and DEA. Thus, the CFP + DEA method is proposed, a five-step structure including data collection for multiple homogeneous entities, calculation of target operating points, evaluation of current and target carbon footprints, and result interpretation. As the current context for energy policy implies an anthropocentric perspective with focus on the global warming impact of energy systems, the CFP + DEA method is foreseen to be the most consistent LC + DEA approach to provide benchmarks for energy policy making. The fact that this method relies on the definition of operating points with optimised resource intensity helps to moderate the concerns about the omission of other environmental impacts. Moreover, the CFP + DEA method benefits from CFP specifications in terms of flexibility, understanding, and reporting. PMID:25654136

  1. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion

    PubMed Central

    2017-01-01

    Electroencephalogram (EEG)-based decoding human brain activity is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain–computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time-series, which is also known as multivariate pattern analysis. Comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with current recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method is the most popular currently used feature extraction and prediction method. This method showed an accuracy of 65.7%. However, the proposed method predicts the novel data with improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction method. PMID:28558002

  2. Analysis and Countermeasure Study on DC Bias of Main Transformer in a City

    NASA Astrophysics Data System (ADS)

    Wang, PengChao; Wang, Hongtao; Song, Xinpu; Gu, Jun; Liu, yong; Wu, weili

    2017-07-01

    Following the DC magnetic bias phenomenon observed at the Guohua Beijing thermal power plant transformer in December 2015, 24 hours of monitored direct-current data are analyzed. We find that the maximum DC current is up to 25, with a trend cycle of about 30 s. On this basis, the mechanisms of the candidate causes (geomagnetic storms, HVDC operation, and subway operation) are compared, together with a comprehensive analysis of the thermal power plant's geographical location, surrounding environment, and electrical contacts. The results show that the main cause of the DC bias of the Guohua thermal power transformer is the operation of the subway, and that the change of the DC bias current is periodic. Finally, a control method for the DC magnetic bias of the Guohua thermal power transformer is studied; the simulation results show that connecting the neutral point through a small resistance or capacitance can effectively suppress the neutral-point current of the main transformer.

  3. A combined experimental and finite element analysis method for the estimation of eddy-current loss in NdFeB magnets.

    PubMed

    Fratila, Radu; Benabou, Abdelkader; Tounzi, Abdelmounaïm; Mipo, Jean-Claude

    2014-05-14

    NdFeB permanent magnets (PMs) are widely used in high performance electrical machines, but their relatively high conductivity subjects them to eddy current losses that can lead to magnetization loss. The Finite Element (FE) method is generally used to quantify the eddy current loss of PMs, but it remains quite difficult to validate the accuracy of the results with complex devices. In this paper, an experimental test device is used in order to extract the eddy current losses that are then compared with those of a 3D FE model.

  4. Error minimization algorithm for comparative quantitative PCR analysis: Q-Anal.

    PubMed

    O'Connor, William; Runquist, Elizabeth A

    2008-07-01

    Current methods for comparative quantitative polymerase chain reaction (qPCR) analysis, the threshold and extrapolation methods, either make assumptions about PCR efficiency that require an arbitrary threshold selection process or extrapolate to estimate relative levels of messenger RNA (mRNA) transcripts. Here we describe an algorithm, Q-Anal, that blends elements from current methods to by-pass assumptions regarding PCR efficiency and improve the threshold selection process to minimize error in comparative qPCR analysis. This algorithm uses iterative linear regression to identify the exponential phase for both target and reference amplicons and then selects, by minimizing linear regression error, a fluorescence threshold where efficiencies for both amplicons have been defined. From this defined fluorescence threshold, cycle time (Ct) and the error for both amplicons are calculated and used to determine the expression ratio. Ratios in complementary DNA (cDNA) dilution assays from qPCR data were analyzed by the Q-Anal method and compared with the threshold method and an extrapolation method. Dilution ratios determined by the Q-Anal and threshold methods were 86 to 118% of the expected cDNA ratios, but relative errors for the Q-Anal method were 4 to 10% in comparison with 4 to 34% for the threshold method. In contrast, ratios determined by an extrapolation method were 32 to 242% of the expected cDNA ratios, with relative errors of 67 to 193%. Q-Anal will be a valuable and quick method for minimizing error in comparative qPCR analysis.
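    The comparative-quantification arithmetic underlying methods of this kind can be sketched as follows; the fluorescence values, efficiencies, and Ct shifts are synthetic placeholders, not Q-Anal itself.

        import numpy as np

        cycles = np.arange(10, 16)
        log_fluor = -3.0 + 0.301 * cycles            # log10 fluorescence over the exponential phase
        slope = np.polyfit(cycles, log_fluor, 1)[0]  # slope of the log-linear fit
        E_target = 10 ** slope                       # per-cycle amplification factor (~2.0)
        E_ref = 1.98                                 # reference-amplicon efficiency (assumed)

        dCt_target, dCt_ref = 3.2, 1.1               # control-minus-treated Ct shifts (assumed)
        ratio = E_target ** dCt_target / E_ref ** dCt_ref   # efficiency-corrected expression ratio
        print(f"E_target = {E_target:.2f}, ratio = {ratio:.2f}")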

  5. The influence of eddy currents on magnetic actuator performance

    NASA Technical Reports Server (NTRS)

    Zmood, R. B.; Anand, D. K.; Kirk, J. A.

    1987-01-01

    The present investigation of the effects of eddy currents on EM actuators' transient performance notes that a transfer function representation encompassing a first-order model of the eddy current influence can be useful in control system analysis. The method can be extended to represent the higher-order effects of eddy currents for actuators that cannot be represented by semi-infinite planes.

  6. Measurement of edge residual stresses in glass by the phase-shifting method

    NASA Astrophysics Data System (ADS)

    Ajovalasit, A.; Petrucci, G.; Scafidi, M.

    2011-05-01

    Control and measurement of residual stress in glass is of great importance in the industrial field. Since glass is a birefringent material, the residual stress analysis is based mainly on the photoelastic method. This paper considers two methods of automated analysis of membrane residual stress in glass sheets, based on the phase-shifting concept in monochromatic light. In particular these methods are the automated versions of goniometric compensation methods of Tardy and Sénarmont. The proposed methods can effectively replace manual methods of compensation (goniometric compensation of Tardy and Sénarmont, Babinet and Babinet-Soleil compensators) provided by current standards on the analysis of residual stresses in glasses.
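    For context, both compensation schemes rest on the stress-optic law (standard relation, not specific to this paper), which links the measured fringe order to the membrane stresses:

        \[ \sigma_1 - \sigma_2 = \frac{N f_\sigma}{t}, \]

    where \(N\) is the fringe order obtained from the phase-shifted compensation, \(f_\sigma\) the material fringe value of the glass, and \(t\) its thickness.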

  7. Review of Railgun Modeling Techniques: The Computation of Railgun Force and Other Key Factors

    NASA Astrophysics Data System (ADS)

    Eckert, Nathan James

    Currently, railgun force modeling either uses the simple "railgun force equation" or finite element methods. It is proposed here that a middle ground exists that does not require the solution of partial differential equations, is more readily implemented than finite element methods, and is more accurate than the traditional force equation. To develop this method, it is necessary to examine the core railgun factors: power supply mechanisms, the distribution of current in the rails and in the projectile which slides between them (called the armature), the magnetic field created by the current flowing through these rails, the inductance gradient (a key factor in simplifying railgun analysis, referred to as L'), the resultant Lorentz force, and the heating which accompanies this action. Common power supply technologies are investigated, and the shape of their current pulses are modeled. The main causes of current concentration are described, and a rudimentary method for computing current distribution in solid rails and a rectangular armature is shown to have promising accuracy with respect to outside finite element results. The magnetic field is modeled with two methods using the Biot-Savart law, and generally good agreement is obtained with respect to finite element methods (5.8% error on average). To get this agreement, a factor of 2 is added to the original formulation after seeing a reliable offset with FEM results. Three inductance gradient calculations are assessed, and though all agree with FEM results, the Kerrisk method and a regression analysis method developed by Murugan et al. (referred to as the LRM here) perform the best. Six railgun force computation methods are investigated, including the traditional railgun force equation, an equation produced by Waindok and Piekielny, and four methods inspired by the work of Xu et al. Overall, good agreement between the models and outside data is found, but each model's accuracy varies significantly between comparisons. Lastly, an approximation of the temperature profile in railgun rails originally presented by McCorkle and Bahder is replicated. In total, this work describes railgun technology and moderately complex railgun modeling methods, but is inconclusive about the presence of a middle-ground modeling method.
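    The "railgun force equation" referred to above is the standard lumped-parameter result

        \[ F = \tfrac{1}{2}\,L'\,I^2, \]

    where \(L'\) is the inductance gradient (H/m) and \(I\) the rail current; the methods surveyed in this work aim to improve on it without resorting to full finite element solutions.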

  8. Computer-Aided Design of Low-Noise Microwave Circuits

    NASA Astrophysics Data System (ADS)

    Wedge, Scott William

    1991-02-01

    Devoid of most natural and manmade noise, microwave frequencies have detection sensitivities limited by internally generated receiver noise. Low-noise amplifiers are therefore critical components in radio astronomical antennas, communications links, radar systems, and even home satellite dishes. A general technique to accurately predict the noise performance of microwave circuits has been lacking. Current noise analysis methods have been limited to specific circuit topologies or neglect correlation, a strong effect in microwave devices. Presented here are generalized methods, developed for computer-aided design implementation, for the analysis of linear noisy microwave circuits comprised of arbitrarily interconnected components. Included are descriptions of efficient algorithms for the simultaneous analysis of noisy and deterministic circuit parameters based on a wave variable approach. The methods are therefore particularly suited to microwave and millimeter-wave circuits. Noise contributions from lossy passive components and active components with electronic noise are considered. Also presented is a new technique for the measurement of device noise characteristics that offers several advantages over current measurement methods.

  9. An onboard data analysis method to track the seasonal polar caps on Mars

    USGS Publications Warehouse

    Wagstaff, K.L.; Castano, R.; Chien, S.; Ivanov, A.B.; Pounders, E.; Titus, T.N.; ,

    2005-01-01

    The Martian seasonal CO2 ice caps advance and retreat each year. They are currently studied using instruments such as the THermal EMission Imaging System (THEMIS), a visible and infra-red camera on the Mars Odyssey spacecraft [1]. However, each image must be downlinked to Earth prior to analysis. In contrast, we have developed the Bimodal Image Temperature (BIT) histogram analysis method for onboard detection of the cap edge, before transmission. In downlink-limited scenarios when the entire image cannot be transmitted, the location of the cap edge can still be identified and sent to Earth. In this paper, we evaluate our method on uncalibrated THEMIS data and find 1) agreement with manual cap edge identifications to within 28.2 km, and 2) high accuracy even with a smaller analysis window, yielding large reductions in memory requirements. This algorithm is currently being considered as a capability enhancement for the Odyssey second extended mission, beginning in fall 2006.
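
    The BIT algorithm itself is detailed in the paper; as a generic illustration of a bimodal histogram split, the sketch below applies an Otsu-style threshold to brightness temperatures, separating CO2-ice pixels from defrosted ground so the cap edge can be located where pixels cross the threshold. The temperature values are synthetic placeholders.

        import numpy as np

        def otsu_threshold(values, bins=128):
            """Split a bimodal sample by maximizing between-class variance."""
            hist, edges = np.histogram(values, bins=bins)
            p = hist.astype(float) / hist.sum()
            centers = 0.5 * (edges[:-1] + edges[1:])
            w0 = np.cumsum(p)                    # class-0 weight at each cut
            mu = np.cumsum(p * centers)          # cumulative mean
            mu_t, w1 = mu[-1], 1.0 - np.cumsum(p)
            valid = (w0 > 0) & (w1 > 0)
            between = np.zeros_like(w0)
            between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
            return centers[np.argmax(between)]

        # Synthetic per-pixel brightness temperatures from one analysis window
        temps = np.concatenate([np.random.normal(150, 5, 2000),    # CO2 ice ~150 K
                                np.random.normal(220, 10, 2000)])  # defrosted ground
        print(otsu_threshold(temps))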

  10. Comparison between the analysis of the loudness dependency of the auditory N1/P2 component with LORETA and dipole source analysis in the prediction of treatment response to the selective serotonin reuptake inhibitor citalopram in major depression.

    PubMed

    Mulert, C; Juckel, G; Augustin, H; Hegerl, U

    2002-10-01

    The loudness dependency of the auditory evoked potentials (LDAEP) is used as an indicator of the central serotonergic system and predicts clinical response to serotonin agonists. So far, LDAEP has typically been investigated with dipole source analysis, because with this method the primary and secondary auditory cortex (with high versus low serotonergic innervation) can be separated, at least in part. We have developed a new analysis procedure that uses an MRI probabilistic map of the primary auditory cortex in Talairach space and analyzed the current density in this region of interest with low resolution electromagnetic tomography (LORETA). LORETA is a tomographic localization method that calculates the current density distribution in Talairach space. In a group of patients with major depression (n=15), this new method predicted the response to a selective serotonin reuptake inhibitor (citalopram) at least as well as the traditional dipole source analysis method (P=0.019 vs. P=0.028). Improvement on the Hamilton Scale correlated significantly with the LORETA LDAEP values (r=0.56; P=0.031) but not with the dipole source analysis LDAEP values (r=0.43; P=0.11). The new tomographic LDAEP analysis is a promising tool in the analysis of the central serotonergic system.

  11. Current Directions in Mediation Analysis

    PubMed Central

    MacKinnon, David P.; Fairchild, Amanda J.

    2010-01-01

    Mediating variables continue to play an important role in psychological theory and research. A mediating variable transmits the effect of an antecedent variable to a dependent variable, thereby providing a more detailed understanding of relations among variables. Methods to assess mediation have been an active area of research for the last two decades. This paper describes the current state of methods to investigate mediating variables. PMID:20157637
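
    The workhorse in this literature is the product-of-coefficients estimate of the mediated (indirect) effect: regress M on X to get a, then Y on X and M to get b and the direct effect c'; the indirect effect is a*b. A minimal sketch on simulated data (all coefficients below are hypothetical):

        import numpy as np

        def indirect_effect(x, m, y):
            """Product-of-coefficients mediation estimate a*b, where a is the
            X->M slope and b is the M->Y slope adjusting for X."""
            X1 = np.column_stack([np.ones_like(x), x])
            a = np.linalg.lstsq(X1, m, rcond=None)[0][1]        # M = i1 + a*X
            X2 = np.column_stack([np.ones_like(x), x, m])
            coef = np.linalg.lstsq(X2, y, rcond=None)[0]        # Y = i2 + c'*X + b*M
            c_prime, b = coef[1], coef[2]
            return a * b, c_prime

        rng = np.random.default_rng(0)
        x = rng.normal(size=500)
        m = 0.5 * x + rng.normal(size=500)             # true a = 0.5
        y = 0.4 * m + 0.1 * x + rng.normal(size=500)   # true b = 0.4, c' = 0.1
        print(indirect_effect(x, m, y))                # ~ (0.20, 0.10)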

  12. MNE software for processing MEG and EEG data

    PubMed Central

    Gramfort, A.; Luessi, M.; Larson, E.; Engemann, D.; Strohmeier, D.; Brodbeck, C.; Parkkonen, L.; Hämäläinen, M.

    2013-01-01

    Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals originating from neural currents in the brain. Using these signals to characterize and locate brain activity is a challenging task, as evidenced by several decades of methodological contributions. MNE, whose name stems from its capability to compute cortically-constrained minimum-norm current estimates from M/EEG data, is a software package that provides comprehensive analysis tools and workflows including preprocessing, source estimation, time–frequency analysis, statistical analysis, and several methods to estimate functional connectivity between distributed brain regions. The present paper gives detailed information about the MNE package and describes typical use cases while also warning about potential caveats in analysis. The MNE package is a collaborative effort of multiple institutes striving to implement and share best methods and to facilitate distribution of analysis pipelines to advance reproducibility of research. Full documentation is available at http://martinos.org/mne. PMID:24161808
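
    A minimal MNE-Python pipeline sketch consistent with the workflow described above, from raw data to a minimum-norm source estimate; the file names are placeholders and the event/epoch parameters are illustrative only.

        import mne

        # Placeholder file names; substitute your own recording and forward model.
        raw = mne.io.read_raw_fif("sample_raw.fif", preload=True)
        raw.filter(1.0, 40.0)                          # band-pass for evoked analysis

        events = mne.find_events(raw)
        epochs = mne.Epochs(raw, events, event_id=1, tmin=-0.2, tmax=0.5,
                            baseline=(None, 0), preload=True)
        evoked = epochs.average()

        noise_cov = mne.compute_covariance(epochs, tmax=0.0)   # pre-stimulus noise
        fwd = mne.read_forward_solution("sample_fwd.fif")
        inv = mne.minimum_norm.make_inverse_operator(evoked.info, fwd, noise_cov)
        stc = mne.minimum_norm.apply_inverse(evoked, inv, lambda2=1.0 / 9.0,
                                             method="MNE")     # the namesake estimate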

  13. The comparative analysis of the current-meter method and the pressure-time method used for discharge measurements in the Kaplan turbine penstocks

    NASA Astrophysics Data System (ADS)

    Adamkowski, A.; Krzemianowski, Z.

    2012-11-01

    The paper presents experiences gathered during many years of using the current-meter and pressure-time methods for flow rate measurements in many hydropower plants. The integration techniques used in both methods differ from the recommendations contained in the relevant international standards, mainly from the graphical and arithmetical ones. The results of a comparative analysis of both methods, applied at the same time during the hydraulic performance tests of two Kaplan turbines in one of the Polish hydropower plants, are presented in the final part of the paper. For the pressure-time method, the concrete penstocks of the tested turbines required installing special measuring instrumentation inside the penstock. The comparison has shown satisfactory agreement between the results of discharge measurements executed using the two methods. Maximum differences between the discharge values did not exceed 1.0 % and the average differences were not greater than 0.5 %.
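
    The pressure-time (Gibson) method rests on the momentum balance rho*(L/A)*dQ/dt = -dp during gate closure, so the pre-closure discharge follows from time-integrating the measured pressure difference. A simplified sketch that omits the friction-loss and leakage corrections applied in practice; the penstock geometry and pressure record below are hypothetical.

        import numpy as np

        def pressure_time_discharge(t, dp, area, length, rho=1000.0):
            """Initial discharge from the simplified pressure-time relation
            Q0 = (A / (rho * L)) * integral(dp dt) over the closure event.
            Friction-loss and leakage corrections are omitted here."""
            integral = np.sum(0.5 * (dp[1:] + dp[:-1]) * np.diff(t))  # trapezoid rule
            return area / (rho * length) * integral

        # Hypothetical 2 s closure record on a penstock segment (A = 7 m^2, L = 25 m)
        t = np.linspace(0.0, 2.0, 2001)
        dp = 8.0e4 * np.sin(np.pi * t / 2.0) ** 2    # measured pressure rise, Pa
        print(pressure_time_discharge(t, dp, area=7.0, length=25.0))  # m^3/s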

  14. Relevant Feature Set Estimation with a Knock-out Strategy and Random Forests

    PubMed Central

    Ganz, Melanie; Greve, Douglas N.; Fischl, Bruce; Konukoglu, Ender

    2015-01-01

    Group analysis of neuroimaging data is a vital tool for identifying anatomical and functional variations related to diseases as well as normal biological processes. The analyses are often performed on a large number of highly correlated measurements using a relatively smaller number of samples. Despite the correlation structure, the most widely used approach is to analyze the data using univariate methods followed by post-hoc corrections that try to account for the data’s multivariate nature. Although widely used, this approach may fail to recover from the adverse effects of the initial analysis when local effects are not strong. Multivariate pattern analysis (MVPA) is a powerful alternative to the univariate approach for identifying relevant variations. Jointly analyzing all the measures, MVPA techniques can detect global effects even when individual local effects are too weak to detect with univariate analysis. Current approaches are successful in identifying variations that yield highly predictive and compact models. However, they suffer from lessened sensitivity and instabilities in identification of relevant variations. Furthermore, current methods’ user-defined parameters are often unintuitive and difficult to determine. In this article, we propose a novel MVPA method for group analysis of high-dimensional data that overcomes the drawbacks of the current techniques. Our approach explicitly aims to identify all relevant variations using a “knock-out” strategy and the Random Forest algorithm. In evaluations with synthetic datasets the proposed method achieved substantially higher sensitivity and accuracy than the state-of-the-art MVPA methods, and outperformed the univariate approach when the effect size is low. In experiments with real datasets the proposed method identified regions beyond the univariate approach, while other MVPA methods failed to replicate the univariate results. More importantly, in a reproducibility study with the well-known ADNI dataset the proposed method yielded higher stability and power than the univariate approach. PMID:26272728
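
    A much-simplified sketch of a knock-out loop built on scikit-learn Random Forests: repeatedly mark the most important features as relevant, remove ("knock out") them, and stop once cross-validated accuracy drops to chance. The stopping criterion and parameters here are illustrative, not the paper's exact procedure.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        def knock_out_features(X, y, k=5, chance=0.5, seed=0):
            """Iteratively knock out the top-k most important features while
            cross-validated accuracy remains above chance (simplified rule)."""
            remaining = list(range(X.shape[1]))
            relevant = []
            while remaining:
                rf = RandomForestClassifier(n_estimators=200, random_state=seed)
                score = cross_val_score(rf, X[:, remaining], y, cv=5).mean()
                if score <= chance + 0.02:       # no detectable signal left
                    break
                rf.fit(X[:, remaining], y)
                top = np.argsort(rf.feature_importances_)[::-1][:k]
                relevant += [remaining[i] for i in top]
                remaining = [f for i, f in enumerate(remaining) if i not in set(top)]
            return relevant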

  15. Comparative policy analysis for alcohol and drugs: Current state of the field.

    PubMed

    Ritter, Alison; Livingston, Michael; Chalmers, Jenny; Berends, Lynda; Reuter, Peter

    2016-05-01

    A central policy research question concerns the extent to which specific policies produce certain effects, and cross-national (or between-state/province) comparisons appear to be an ideal way to answer such a question. This paper explores the current state of comparative policy analysis (CPA) with respect to alcohol and drug policies. We created a database of journal articles published between 2010 and 2014 as the body of CPA work for analysis. We used this database of 57 articles to clarify, extract and analyse the ways in which CPA has been defined. Quantitative and qualitative analyses of the CPA methods employed, the policy areas that have been studied, and differences between alcohol CPA and drug CPA are explored. There is a lack of clear definition as to what counts as a CPA. The two criteria for a CPA (explicit study of a policy, and comparison across two or more geographic locations) exclude descriptive epidemiology and single-state comparisons. With the strict definition, most CPAs were with reference to alcohol (42%), although the most common policy to be analysed was medical cannabis (23%). The vast majority of papers undertook quantitative data analysis, with a variety of advanced statistical methods. We identified five approaches to policy specification: classification or categorical coding of policy as present or absent; the use of an index; implied policy differences; described policy differences; and data-driven policy coding. Each of these has limitations, but perhaps the most common limitation was the inability of the method to account for differences between policy-as-stated and policy-as-implemented. There is significant diversity in CPA methods for the analysis of alcohol and drug policy, and some substantial challenges with the currently employed methods. The absence of clear boundaries to a definition of what counts as a 'comparative policy analysis' may account for the methodological plurality but also appears to stand in the way of advancing the techniques. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Controlling electrode gap during vacuum arc remelting at low melting current

    DOEpatents

    Williamson, Rodney L.; Zanner, Frank J.; Grose, Stephen M.

    1997-01-01

    An apparatus and method for controlling electrode gap in a vacuum arc remelting furnace, particularly at low melting currents. Spectrographic analysis is performed of the metal vapor plasma, from which estimates of electrode gap are derived.

  17. Simulation of realistic abnormal SPECT brain perfusion images: application in semi-quantitative analysis

    NASA Astrophysics Data System (ADS)

    Ward, T.; Fleming, J. S.; Hoffmann, S. M. A.; Kemp, P. M.

    2005-11-01

    Simulation is useful in the validation of functional image analysis methods, particularly given the number of analysis techniques currently available that lack thorough validation. Problems exist with current simulation methods due to long run times or unrealistic results, making it difficult to generate complete datasets. A method is presented for simulating known abnormalities within normal brain SPECT images using a measured point spread function (PSF) and incorporating a stereotactic atlas of the brain for anatomical positioning. This allows for the simulation of realistic images through the use of prior information regarding disease progression. SPECT images of cerebral perfusion have been generated consisting of a control database and a group of simulated abnormal subjects that are to be used in a UK audit of analysis methods. The abnormality is defined in the stereotactic space, then transformed to the individual subject space, convolved with a measured PSF and removed from the normal subject image. The dataset was analysed using SPM99 (Wellcome Department of Imaging Neuroscience, University College, London) and the MarsBaR volume of interest (VOI) analysis toolbox. The results were evaluated by comparison with the known ground truth. The analysis showed improvement when using a smoothing kernel equal to system resolution over the slightly larger kernel used routinely. Significant correlation was found between the effective volume of a simulated abnormality and the detected size using SPM99. Improvements in VOI analysis sensitivity were found when using the region median over the region mean. The method and dataset provide an efficient methodology for use in the comparison and cross validation of semi-quantitative analysis methods in brain SPECT, and allow the optimization of analysis parameters.
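
    The core simulation step, stripped of the atlas registration, can be sketched as: define an ideal deficit, blur it with the measured PSF (approximated below by a Gaussian), and remove it from a normal image. A sketch under those stated simplifications; the lesion position, size, severity, and PSF width are placeholders.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def simulate_abnormality(normal_img, center, radius, severity, psf_sigma):
            """Remove a PSF-blurred spherical perfusion deficit from a normal scan.
            severity: fractional count reduction inside the lesion (0..1).
            psf_sigma: Gaussian approximation of the measured PSF, in voxels.
            Atlas-to-subject spatial normalization is omitted in this sketch."""
            zz, yy, xx = np.indices(normal_img.shape)
            dist2 = ((zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2)
            lesion = severity * (dist2 <= radius**2)      # ideal (sharp) deficit
            lesion = gaussian_filter(lesion.astype(float), psf_sigma)
            return normal_img * (1.0 - lesion)

        normal = np.random.poisson(100, size=(64, 64, 64)).astype(float)
        abnormal = simulate_abnormality(normal, center=(32, 20, 40),
                                        radius=6, severity=0.3, psf_sigma=2.5)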

  18. Numerical approach for ECT by using boundary element method with Laplace transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enokizono, M.; Todaka, T.; Shibao, K.

    1997-03-01

    This paper presents an inverse analysis using BEM with a Laplace transform. The method is applied to a simple problem in eddy current testing (ECT). Some crack shapes in a conductive specimen are estimated from distributions of the transient eddy current on its sensing surface and the magnetic flux density in the liftoff space. Because the transient behavior includes information on various frequency components, the method is applicable to the shape estimation of comparatively small cracks.

  19. Identification of functional modules that correlate with phenotypic difference: the influence of network topology

    PubMed Central

    2010-01-01

    One of the important challenges to post-genomic biology is relating observed phenotypic alterations to the underlying collective alterations in genes. Current inferential methods, however, invariably omit large bodies of information on the relationships between genes. We present a method that takes account of such information - expressed in terms of the topology of a correlation network - and we apply the method in the context of current procedures for gene set enrichment analysis. PMID:20187943

  20. Recent developments of the NESSUS probabilistic structural analysis computer program

    NASA Technical Reports Server (NTRS)

    Millwater, H.; Wu, Y.-T.; Torng, T.; Thacker, B.; Riha, D.; Leung, C. P.

    1992-01-01

    The NESSUS probabilistic structural analysis computer program combines state-of-the-art probabilistic algorithms with general purpose structural analysis methods to compute the probabilistic response and the reliability of engineering structures. Uncertainty in loading, material properties, geometry, boundary conditions and initial conditions can be simulated. The structural analysis methods include nonlinear finite element and boundary element methods. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. The scope of the code has recently been expanded to include probabilistic life and fatigue prediction of structures in terms of component and system reliability and risk analysis of structures considering cost of failure. The code is currently being extended to structural reliability considering progressive crack propagation. Several examples are presented to demonstrate the new capabilities.
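
    Of the probabilistic algorithms named above, the simplest baseline is crude Monte Carlo estimation of the failure probability P_f = P(g(X) < 0) for a limit state g; the advanced mean value and adaptive importance sampling methods exist precisely because this baseline needs very many samples when P_f is small. A toy sketch with a hypothetical strength-minus-load limit state:

        import numpy as np

        def mc_failure_probability(limit_state, sample, n=200_000, seed=0):
            """Crude Monte Carlo estimate of P_f = P(g(X) < 0)."""
            rng = np.random.default_rng(seed)
            g = limit_state(sample(rng, n))
            pf = np.mean(g < 0.0)
            se = np.sqrt(pf * (1.0 - pf) / n)   # binomial standard error
            return pf, se

        # Toy limit state g = R - S: strength R and load S, both lognormal.
        def sample(rng, n):
            R = rng.lognormal(mean=np.log(500.0), sigma=0.10, size=n)
            S = rng.lognormal(mean=np.log(300.0), sigma=0.25, size=n)
            return R, S

        print(mc_failure_probability(lambda xs: xs[0] - xs[1], sample))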

  1. General methodology: Costing, budgeting, and techniques for benefit-cost and cost-effectiveness analysis

    NASA Technical Reports Server (NTRS)

    Stretchberry, D. M.; Hein, G. F.

    1972-01-01

    The general concepts of costing, budgeting, and benefit-cost ratio and cost-effectiveness analysis are discussed. The three common methods of costing are presented. Budgeting distributions are discussed. The use of discounting procedures is outlined. The benefit-cost ratio and cost-effectiveness analysis are defined and their current application to NASA planning is pointed out. Specific practices and techniques are discussed, and actual costing and budgeting procedures are outlined. The recommended method of calculating benefit-cost ratios is described. A standardized method of cost-effectiveness analysis and long-range planning are also discussed.
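
    The discounting step reduces to present-value arithmetic: BCR = sum over t of B_t/(1+r)^t divided by sum over t of C_t/(1+r)^t. A minimal sketch with hypothetical cash flows (the report's recommended procedure may differ in detail):

        def benefit_cost_ratio(benefits, costs, rate):
            """Discounted benefit-cost ratio:
            sum(B_t / (1+r)^t) / sum(C_t / (1+r)^t)."""
            pv = lambda flows: sum(f / (1.0 + rate) ** t for t, f in enumerate(flows))
            return pv(benefits) / pv(costs)

        # Hypothetical 5-year program at a 10% discount rate
        print(benefit_cost_ratio(benefits=[0, 30, 40, 50, 50],
                                 costs=[100, 10, 10, 10, 10], rate=0.10))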

  2. Mode-Stirred Method Implementation for HIRF Susceptibility Testing and Results Comparison with Anechoic Method

    NASA Technical Reports Server (NTRS)

    Nguyen, Truong X.; Ely, Jay J.; Koppen, Sandra V.

    2001-01-01

    This paper describes the implementation of the mode-stirred method for susceptibility testing according to the current DO-160D standard. Test results on an Engine Data Processor using the implemented procedure, and comparisons with standard anechoic test results, are presented. The comparison experimentally shows that the susceptibility thresholds found with the mode-stirred method are consistently higher than those found with the anechoic method. This is consistent with the recent statistical analysis finding by NIST that the current calibration procedure overstates field strength by a fixed amount. Once the test results are adjusted for this value, the comparisons with the anechoic results are excellent. The results also show that the test method has excellent chamber-to-chamber repeatability. Several areas for improvement to the current procedure are also identified and implemented.

  3. Fast determination of the current loss mechanisms in textured crystalline Si-based solar cells

    NASA Astrophysics Data System (ADS)

    Nakane, Akihiro; Fujimoto, Shohei; Fujiwara, Hiroyuki

    2017-11-01

    A quite general device analysis method that allows the direct evaluation of optical and recombination losses in crystalline silicon (c-Si)-based solar cells has been developed. By applying this technique, the current loss mechanisms of the state-of-the-art solar cells with ~20% efficiencies have been revealed. In the established method, the optical and electrical losses are characterized from the analysis of an experimental external quantum efficiency (EQE) spectrum with very low computational cost. In particular, we have performed the EQE analyses of textured c-Si solar cells by employing the experimental reflectance spectra obtained directly from the actual devices while using flat optical models without any fitting parameters. We find that the developed method provides almost perfect fitting to EQE spectra reported for various textured c-Si solar cells, including c-Si heterojunction solar cells, a dopant-free c-Si solar cell with a MoOx layer, and an n-type passivated emitter with rear locally diffused solar cell. The modeling of the recombination loss further allows the extraction of the minority carrier diffusion length and surface recombination velocity from the EQE analysis. Based on the EQE analysis results, the current loss mechanisms in different types of c-Si solar cells are discussed.
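
    The optical-loss bookkeeping in any EQE analysis rests on the standard relation J_sc = q * integral(EQE(lambda) * photon_flux(lambda) d lambda) over the solar spectrum. A sketch of that integral; the EQE curve and AM1.5G array below are coarse placeholders, and in practice the tabulated AM1.5G data and a measured EQE would be substituted.

        import numpy as np

        Q = 1.602176634e-19   # elementary charge, C
        H = 6.62607015e-34    # Planck constant, J*s
        C = 2.99792458e8      # speed of light, m/s

        def jsc_from_eqe(wavelength_nm, eqe, irradiance_w_m2_nm):
            """Short-circuit current density (mA/cm^2) from an EQE spectrum:
            J_sc = q * integral(EQE(lambda) * photon_flux(lambda) d lambda)."""
            lam = wavelength_nm * 1e-9
            flux = irradiance_w_m2_nm * lam / (H * C)   # photons / (m^2 s nm)
            f = eqe * flux
            j = Q * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(wavelength_nm))  # A/m^2
            return j * 0.1                              # A/m^2 -> mA/cm^2

        # Placeholder spectra for illustration only.
        lam = np.linspace(300, 1200, 901)
        eqe = np.clip(1.1 - np.abs(lam - 700) / 500, 0, 0.95)
        am15g = np.interp(lam, [300, 500, 1000, 1200], [0.5, 1.5, 0.7, 0.4])
        print(jsc_from_eqe(lam, eqe, am15g))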

  4. Assessment of the transportation route of oversize and excessive loads in relation to the load-bearing capacity of existing bridges

    NASA Astrophysics Data System (ADS)

    Doležel, Jiří; Novák, Drahomír; Petrů, Jan

    2017-09-01

    Transportation routes of oversize and excessive loads are currently planned so as to ensure the transit of a vehicle through critical points on the road, such as level intersections and bridges. This article presents a comprehensive procedure for determining the reliability and load-bearing capacity level of existing bridges on highways and roads using advanced methods of reliability analysis, based on Monte Carlo-type simulation techniques in combination with nonlinear finite element analysis. The safety index is considered the main criterion of the reliability level of existing structures, and the index is described in current structural design standards, e.g. ISO and the Eurocodes. As an example, the load-bearing capacity of a 60-year-old single-span slab bridge made of precast prestressed concrete girders is determined for the ultimate limit state and the serviceability limit state. The structure's design load capacity was estimated by a fully probabilistic nonlinear FEM analysis using the Latin Hypercube Sampling (LHS) simulation technique. Load-bearing capacity values based on the fully probabilistic analysis are compared with the load-bearing capacity levels estimated by deterministic methods for a critical section of the most heavily loaded girders.
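
    LHS, the sampling scheme named above, stratifies each input dimension into N equal-probability bins and samples each bin exactly once, giving better space coverage than plain Monte Carlo at the same N. A minimal numpy sketch; the mapping onto physical variables via inverse CDFs uses hypothetical distribution parameters.

        import numpy as np
        from scipy.stats import norm

        def latin_hypercube(n_samples, n_dims, seed=None):
            """One LHS draw on the unit hypercube: each dimension is cut into
            n_samples equal strata, sampled once per stratum, then shuffled."""
            rng = np.random.default_rng(seed)
            u = (np.arange(n_samples)[:, None]
                 + rng.random((n_samples, n_dims))) / n_samples
            for d in range(n_dims):
                rng.shuffle(u[:, d])
            return u

        u = latin_hypercube(1000, 2, seed=42)
        fc = norm.ppf(u[:, 0], loc=38.0, scale=4.0)      # concrete strength, MPa (hypothetical)
        load = norm.ppf(u[:, 1], loc=250.0, scale=40.0)  # girder load, kN (hypothetical)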

  5. Stirling Analysis Comparison of Commercial vs. High-Order Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako

    2007-01-01

    Recently, three-dimensional Stirling engine simulations have been accomplished utilizing commercial Computational Fluid Dynamics software. The validations reported can be somewhat inconclusive due to the lack of precise time accurate experimental results from engines, export control/proprietary concerns, and the lack of variation in the methods utilized. The last issue may be addressed by solving the same flow problem with alternate methods. In this work, a comprehensive examination of the methods utilized in the commercial codes is compared with more recently developed high-order methods. Specifically, Lele's compact scheme and Dyson's Ultra Hi-Fi method will be compared with the SIMPLE and PISO methods currently employed in CFD-ACE, FLUENT, CFX, and STAR-CD (all commercial codes which can in theory solve a three-dimensional Stirling model although sliding interfaces and their moving grids limit the effective time accuracy). We will initially look at one-dimensional flows since the current standard practice is to design and optimize Stirling engines with empirically corrected friction and heat transfer coefficients in an overall one-dimensional model. This comparison provides an idea of the range in which commercial CFD software for modeling Stirling engines may be expected to provide accurate results. In addition, this work provides a framework for improving current one-dimensional analysis codes.

  6. Stirling Analysis Comparison of Commercial Versus High-Order Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako

    2005-01-01

    Recently, three-dimensional Stirling engine simulations have been accomplished utilizing commercial Computational Fluid Dynamics software. The validations reported can be somewhat inconclusive due to the lack of precise time accurate experimental results from engines, export control/proprietary concerns, and the lack of variation in the methods utilized. The last issue may be addressed by solving the same flow problem with alternate methods. In this work, a comprehensive examination of the methods utilized in the commercial codes is compared with more recently developed high-order methods. Specifically, Lele's compact scheme and Dyson's Ultra Hi-Fi method will be compared with the SIMPLE and PISO methods currently employed in CFD-ACE, FLUENT, CFX, and STAR-CD (all commercial codes which can in theory solve a three-dimensional Stirling model, although sliding interfaces and their moving grids limit the effective time accuracy). We will initially look at one-dimensional flows since the current standard practice is to design and optimize Stirling engines with empirically corrected friction and heat transfer coefficients in an overall one-dimensional model. This comparison provides an idea of the range in which commercial CFD software for modeling Stirling engines may be expected to provide accurate results. In addition, this work provides a framework for improving current one-dimensional analysis codes.

  7. Mechanistic flexible pavement overlay design program : tech summary.

    DOT National Transportation Integrated Search

    2009-07-01

    The Louisiana Department of Transportation and Development (LADOTD) currently follows the 1993 AASHTO pavement design guide's component analysis method in its flexible pavement overlay thickness design. Such an overlay design method, how...

  8. Analytical procedures for water-soluble vitamins in foods and dietary supplements: a review.

    PubMed

    Blake, Christopher J

    2007-09-01

    Water-soluble vitamins include the B-group vitamins and vitamin C. In order to correctly monitor water-soluble vitamin content in fortified foods for compliance monitoring, as well as to establish accurate data banks, an accurate and precise analytical method is a prerequisite. For many years microbiological assays have been used for the analysis of B vitamins. However, they are no longer considered the gold standard in vitamin analysis, as many studies have revealed their deficiencies. This review describes the current status of analytical methods, including microbiological assays and spectrophotometric, biosensor and chromatographic techniques. In particular, it describes the current status of the official methods and highlights some new developments in chromatographic procedures and detection methods. An overview is given of multivitamin extractions and analyses for foods and supplements.

  9. Simplified methods for evaluating road prism stability

    Treesearch

    William J. Elliot; Mark Ballerini; David Hall

    2003-01-01

    Mass failure is one of the most common failures of low-volume roads in mountainous terrain. Current methods for evaluating stability of these roads require a geotechnical specialist. A stability analysis program, XSTABL, was used to estimate the stability of 3,696 combinations of road geometry, soil, and groundwater conditions. A sensitivity analysis was carried out to...

  10. Electric Fuel Pump Condition Monitor System Using Electricalsignature Analysis

    DOEpatents

    Haynes, Howard D [Knoxville, TN; Cox, Daryl F [Knoxville, TN; Welch, Donald E [Oak Ridge, TN

    2005-09-13

    A pump diagnostic system and method comprising current sensing probes clamped on electrical motor leads of a pump for sensing only current signals on incoming motor power, a signal processor having a means for buffering and anti-aliasing current signals into a pump motor current signal, and a computer having a means for analyzing, displaying, and reporting motor current signatures from the motor current signal to determine pump health using integrated motor and pump diagnostic parameters.

  11. A Review on the Nonlinear Dynamical System Analysis of Electrocardiogram Signal

    PubMed Central

    Mohapatra, Biswajit

    2018-01-01

    Electrocardiogram (ECG) signal analysis has received special attention of the researchers in the recent past because of its ability to divulge crucial information about the electrophysiology of the heart and the autonomic nervous system activity in a noninvasive manner. Analysis of the ECG signals has been explored using both linear and nonlinear methods. However, the nonlinear methods of ECG signal analysis are gaining popularity because of their robustness in feature extraction and classification. The current study presents a review of the nonlinear signal analysis methods, namely, reconstructed phase space analysis, Lyapunov exponents, correlation dimension, detrended fluctuation analysis (DFA), recurrence plot, Poincaré plot, approximate entropy, and sample entropy along with their recent applications in the ECG signal analysis. PMID:29854361
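
    Of the measures listed, sample entropy is the most compact to illustrate: it is -ln(A/B), where B counts pairs of length-m templates within a tolerance r (Chebyshev distance) and A counts the same for length m+1. A straightforward O(N^2) reference implementation is sketched below; production code would typically use an optimized library.

        import numpy as np

        def sample_entropy(x, m=2, r_frac=0.2):
            """Reference sample entropy: -ln(A/B), where B and A count template
            matches of length m and m+1 (Chebyshev distance <= r)."""
            x = np.asarray(x, dtype=float)
            r = r_frac * np.std(x)
            def matches(mm):
                templ = np.lib.stride_tricks.sliding_window_view(x, mm)[:len(x) - m]
                count = 0
                for i in range(len(templ) - 1):
                    d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
                    count += np.sum(d <= r)
                return count
            B, A = matches(m), matches(m + 1)
            return -np.log(A / B) if A > 0 and B > 0 else np.inf

        rng = np.random.default_rng(1)
        print(sample_entropy(rng.normal(size=1000)))           # white noise: high
        print(sample_entropy(np.sin(np.arange(1000) * 0.05)))  # regular signal: low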

  12. A Review on the Nonlinear Dynamical System Analysis of Electrocardiogram Signal.

    PubMed

    Nayak, Suraj K; Bit, Arindam; Dey, Anilesh; Mohapatra, Biswajit; Pal, Kunal

    2018-01-01

    Electrocardiogram (ECG) signal analysis has received special attention of the researchers in the recent past because of its ability to divulge crucial information about the electrophysiology of the heart and the autonomic nervous system activity in a noninvasive manner. Analysis of the ECG signals has been explored using both linear and nonlinear methods. However, the nonlinear methods of ECG signal analysis are gaining popularity because of their robustness in feature extraction and classification. The current study presents a review of the nonlinear signal analysis methods, namely, reconstructed phase space analysis, Lyapunov exponents, correlation dimension, detrended fluctuation analysis (DFA), recurrence plot, Poincaré plot, approximate entropy, and sample entropy along with their recent applications in the ECG signal analysis.

  13. Rapid structural analysis of nanomaterials in aqueous solutions

    NASA Astrophysics Data System (ADS)

    Ryuzaki, Sou; Tsutsui, Makusu; He, Yuhui; Yokota, Kazumichi; Arima, Akihide; Morikawa, Takanori; Taniguchi, Masateru; Kawai, Tomoji

    2017-04-01

    Rapid structural analysis of nanoscale matter in a liquid environment is an enabling technology for revealing the identities and functions of biologically important molecules. However, there is currently no method with high spatio-temporal resolution that can scan individual particles in solution to obtain structural information. Here we report the development of a nanopore platform realizing quantitative structural analysis of suspended nanomaterials in solution with high z-axis and xy-plane spatial resolutions of 35.8 ± 1.1 and 12 nm, respectively. We used a low thickness-to-diameter aspect ratio pore architecture to obtain cross-sectional areas of the analyte (i.e. tomograms). Combining this with multiphysics simulation methods to translate ionic current data into tomograms, we demonstrated rapid structural analysis of single polystyrene (Pst) beads and single dumbbell-like Pst beads in aqueous solutions.

  14. Using a Knowledge Representations Approach to Cognitive Task Analysis.

    ERIC Educational Resources Information Center

    Black, John B.; And Others

    Task analyses have traditionally been framed in terms of overt behaviors performed in accomplishing tasks and goals. Pioneering work at the Learning Research and Development Center looked at what contribution a cognitive analysis might make to current task analysis procedures, since traditional task analysis methods neither elicit nor capture…

  15. The Application of Social Network Analysis to Team Sports

    ERIC Educational Resources Information Center

    Lusher, Dean; Robins, Garry; Kremer, Peter

    2010-01-01

    This article reviews how current social network analysis might be used to investigate individual and group behavior in sporting teams. Social network analysis methods permit researchers to explore social relations between team members and their individual-level qualities simultaneously. As such, social network analysis can be seen as augmenting…

  16. Decision Support Methods and Tools

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Alexandrov, Natalia M.; Brown, Sherilyn A.; Cerro, Jeffrey A.; Gumbert, Clyde r.; Sorokach, Michael R.; Burg, Cecile M.

    2006-01-01

    This paper is one of a set of papers, developed simultaneously and presented within a single conference session, that are intended to highlight systems analysis and design capabilities within the Systems Analysis and Concepts Directorate (SACD) of the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC). This paper focuses on the specific capabilities of uncertainty/risk analysis, quantification, propagation, decomposition, and management, robust/reliability design methods, and extensions of these capabilities into decision analysis methods within SACD. These disciplines are discussed together herein under the name of Decision Support Methods and Tools. Several examples are discussed which highlight the application of these methods within current or recent aerospace research at NASA LaRC. Where applicable, commercially available or government-developed software tools are also discussed.

  17. Current trends in endotoxin detection and analysis of endotoxin-protein interactions.

    PubMed

    Dullah, Elvina Clarie; Ongkudon, Clarence M

    2017-03-01

    Endotoxin is a type of pyrogen found in Gram-negative bacteria. Endotoxin can form stable interactions with other biomolecules, making its removal difficult, especially during the production of biopharmaceutical drugs. Preventing endotoxins from contaminating biopharmaceutical products is paramount, as endotoxin contamination, even in small quantities, can result in fever, inflammation, sepsis, and tissue damage, and can even lead to death. Highly sensitive and accurate detection of endotoxins is key to the development of biopharmaceutical products derived from Gram-negative bacteria. It facilitates the study of the intermolecular interaction of an endotoxin with other biomolecules, and hence the selection of appropriate endotoxin removal strategies. Currently, most researchers rely on the conventional LAL-based endotoxin detection method. However, new methods have been and are being developed to overcome the problems associated with the LAL-based method. This review paper highlights current research trends in endotoxin detection, from conventional methods to newly developed biosensors. Additionally, it provides an overview of the use of electron microscopy, dynamic light scattering (DLS), fluorescence resonance energy transfer (FRET) and docking programs in endotoxin-protein analysis.

  18. Local structure studies of materials using pair distribution function analysis

    NASA Astrophysics Data System (ADS)

    Peterson, Joseph W.

    A collection of pair distribution function studies on various materials is presented in this dissertation. In each case, the local structure information of interest pushes the current limits of what these studies can accomplish. The goal is to provide insight into the individual material behaviors as well as to investigate ways to expand the current limits of PDF analysis. Where possible, I provide a framework for how PDF analysis might be applied to a wider set of material phenomena. Throughout the dissertation, I discuss i) the capabilities of the PDF method to provide information pertaining to a material's structure and properties, ii) current limitations in the conventional approach to PDF analysis, iii) possible solutions to overcome certain limitations in PDF analysis, and iv) suggestions for future work to expand and improve the capabilities of PDF analysis.

  19. Batch mode grid generation: An endangered species

    NASA Technical Reports Server (NTRS)

    Schuster, David M.

    1992-01-01

    Non-interactive grid generation schemes should thrive as emphasis shifts from development of numerical analysis and design methods to application of these tools to real engineering problems. A strong case is presented for the continued development and application of non-interactive geometry modeling methods. Guidelines, strategies, and techniques for developing and implementing these tools are presented using current non-interactive grid generation methods as examples. These schemes play an important role in the development of multidisciplinary analysis methods and some of these applications are also discussed.

  20. On-line Monitoring Device for High-voltage Switch Cabinet Partial Discharge Based on Pulse Current Method

    NASA Astrophysics Data System (ADS)

    Y Tao, S.; Zhang, X. Z.; Cai, H. W.; Li, P.; Feng, Y.; Zhang, T. C.; Li, J.; Wang, W. S.; Zhang, X. K.

    2017-12-01

    The pulse current method for partial discharge detection is generally applied in type testing and other off-line tests of electrical equipment at delivery. After an intensive analysis of the present situation and existing problems of partial discharge detection in switch cabinets, this paper designs the circuit principle and signal extraction method for partial discharge on-line detection based on a high-voltage presence indicating system (VPIS), establishes a high-voltage switch cabinet partial discharge on-line detection circuit based on the pulse current method, develops background software integrating real-time monitoring, judging and analyzing functions, carries out a real discharge simulation test on a real-type partial discharge defect simulation platform of a 10 kV switch cabinet, and verifies the sensitivity and validity of the high-voltage switch cabinet partial discharge on-line monitoring device based on the pulse current method. The study presented in this paper is of great significance for switch cabinet maintenance and for theoretical study of pulse current method on-line detection, and provides a good implementation approach for partial discharge on-line monitoring devices for 10 kV distribution network equipment.

  1. Heading in the right direction: thermodynamics-based network analysis and pathway engineering.

    PubMed

    Ataman, Meric; Hatzimanikatis, Vassily

    2015-12-01

    Thermodynamics-based network analysis through the introduction of thermodynamic constraints in metabolic models allows a deeper analysis of metabolism and guides pathway engineering. The number and the areas of application of thermodynamics-based network analysis methods have been increasing in the last ten years. We review recent applications of these methods, identify the areas where such analysis can contribute significantly, and outline needs for future developments. We find that organisms with multiple compartments and extremophiles present challenges for modeling and thermodynamics-based flux analysis. The evolution of current and new methods must also address the issues of multiple alternative flux directionalities and the uncertainties and partial information from analytical methods. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
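
    The thermodynamic constraint these methods impose is, at its core, dG' = dG'0 + RT*ln(Q) < 0 in the direction of flux. A toy feasibility check is sketched below; the reaction and concentrations are hypothetical, and real implementations additionally handle compartments, ionic strength, and concentration ranges.

        import numpy as np

        R = 8.314e-3   # gas constant, kJ/(mol*K)

        def reaction_dg(dg0_prime, concentrations, stoich, temp=298.15):
            """Transformed Gibbs energy of reaction: dG' = dG'0 + RT*ln(Q),
            with Q built from metabolite concentrations (M) and stoichiometry."""
            q = np.prod(np.asarray(concentrations, float) ** np.asarray(stoich, float))
            return dg0_prime + R * temp * np.log(q)

        # Hypothetical reaction A -> B with dG'0 = +5 kJ/mol: feasible forward
        # only if the A/B concentration ratio is high enough.
        dg = reaction_dg(5.0, concentrations=[5e-3, 1e-5], stoich=[-1, 1])
        print(dg, "kJ/mol ->", "forward flux allowed" if dg < 0 else "infeasible")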

  2. The morphology of flare phenomena, magnetic fields, and electric currents in active regions. I - Introduction and methods

    NASA Technical Reports Server (NTRS)

    Canfield, Richard C.; De La Beaujardiere, J.-F.; Fan, Yuhong; Leka, K. D.; Mcclymont, A. N.; Metcalf, Thomas R.; Mickey, Donald L.; Wuelser, Jean-Pierre; Lites, Bruce W.

    1993-01-01

    Electric current systems in solar active regions and their spatial relationship to sites of electron precipitation and high-pressure in flares were studied with the purpose of providing observational evidence for or against the flare models commonly discussed in the literature. The paper describes the instrumentation, the data used, and the data analysis methods, as well as improvements made upon earlier studies. Several flare models are overviewed, and the predictions yielded by each model for the relationships of flares to the vertical current systems are discussed.

  3. Inter-Disciplinary Collaboration in Support of the Post-Standby TREAT Mission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeHart, Mark; Baker, Benjamin; Ortensi, Javier

    Although analysis methods have advanced significantly in the last two decades, high fidelity multi-physics methods for reactor systems have been under development for only a few years and are not presently mature nor deployed. Furthermore, very few methods provide the ability to simulate rapid transients in three dimensions. Data for validation of advanced time-dependent multi-physics is sparse; at TREAT, historical data were not collected for the purpose of validating three-dimensional methods, let alone multi-physics simulations. Existing data continues to be collected to attempt to simulate the behavior of experiments and calibration transients, but it will be insufficient for the complete validation of analysis methods used for TREAT transient simulations. Hence, a 2018 restart will most likely occur without the direct application of advanced modeling and simulation methods. At present, the current INL modeling and simulation team plans to work with TREAT operations staff in performing reactor simulations with MAMMOTH, in parallel with the software packages currently being used in preparation for core restart (e.g., MCNP5, RELAP5, ABAQUS). The TREAT team has also requested specific measurements to be performed during startup testing, currently scheduled to run from February to August of 2018. These startup measurements will be crucial in validating the new analysis methods in preparation for ultimate application to TREAT operations and experiment design. This document describes the collaboration between modeling and simulation staff and the restart, operations, instrumentation and experiment development teams needed to interact effectively and achieve successful validation work during restart testing.

  4. Applications of FEM and BEM in two-dimensional fracture mechanics problems

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Steeve, B. E.; Swanson, G. R.

    1992-01-01

    A comparison of the finite element method (FEM) and the boundary element method (BEM) for the solution of two-dimensional plane strain problems in fracture mechanics is presented in this paper. Stress intensity factors (SIFs) were calculated using both methods for elastic plates with either a single-edge crack or an inclined-edge crack. In particular, two currently available programs, ANSYS for finite element analysis and BEASY for boundary element analysis, were used.
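
    For the single-edge crack case, FEM/BEM stress intensity factors are conventionally benchmarked against the handbook solution K_I = sigma*sqrt(pi*a)*f(a/W). The sketch below uses the standard polynomial fit for a single-edge-cracked plate in remote tension (valid for a/W up to about 0.6); the load and geometry are illustrative only.

        import math

        def sif_single_edge_tension(sigma, a, W):
            """K_I = sigma * sqrt(pi*a) * f(a/W) for a single-edge crack in a
            finite plate under remote tension (handbook fit, a/W <= ~0.6)."""
            r = a / W
            f = 1.12 - 0.231*r + 10.55*r**2 - 21.72*r**3 + 30.39*r**4
            return sigma * math.sqrt(math.pi * a) * f

        # 100 MPa remote stress, 5 mm crack in a 50 mm wide plate
        print(sif_single_edge_tension(100e6, 0.005, 0.05) / 1e6, "MPa*sqrt(m)")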

  5. Microbial Monitoring of Common Opportunistic Pathogens by Comparing Multiple Real-time PCR Platforms for Potential Space Applications

    NASA Technical Reports Server (NTRS)

    Roman, Monserrate C.; Jones, Kathy U.; Oubre, Cherie M.; Castro, Victoria; Ott, Mark C.; Birmele, Michele; Venkateswaran, Kasthuri J.; Vaishampayan, Parag A.

    2013-01-01

    Current methods for microbial detection: a) labor- and time-intensive cultivation-based approaches that can fail to detect or characterize all cells present; b) collection of samples on orbit and transportation back to the ground for analysis. Disadvantages of current detection methods: a) unable to perform quick and reliable detection on orbit; b) lengthy sampling intervals; c) no microbe identification.

  6. Direct mapping of local redox current density on a monolith electrode by laser scanning.

    PubMed

    Lee, Seung-Woo; Lopez, Jeffrey; Saraf, Ravi F

    2013-09-15

    An optical method of mapping local redox reaction over a monolith electrode using simple laser scanning is described. As the optical signal is linearly proportional to the maximum redox current that is measured concomitantly by voltammetry, the optical signal quantitatively maps the local redox current density distribution. The method is demonstrated on two types of reactions: (1) a reversible reaction where the redox moieties are ionic, and (2) an irreversible reaction on two different types of enzymes immobilized on the electrode where the reaction moieties are nonionic. To demonstrate the scanning capability, the local redox behavior on a "V-shaped" electrode is studied where the local length scale and, hence, the local current density, is nonuniform. The ability to measure the current density distribution by this method will pave the way for multianalyte analysis on a monolith electrode using a standard three-electrode configuration. The method is called Scanning Electrometer for Electrical Double-layer (SEED). Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Operations planning and analysis handbook for NASA/MSFC phase B development projects

    NASA Technical Reports Server (NTRS)

    Batson, Robert C.

    1986-01-01

    Current operations planning and analysis practices on NASA/MSFC Phase B projects were investigated with the objectives of (1) formalizing these practices into a handbook and (2) suggesting improvements. The study focused on how Science and Engineering (S&E) operational personnel support Program Development (PD) task teams. The intimate relationship between systems engineering and operations analysis was examined. Methods identified for use by operations analysts during Phase B include functional analysis, interface analysis, and methods to calculate and allocate such criteria as reliability, maintainability, and operations and support cost.

  8. Fracture analysis of a transversely isotropic high temperature superconductor strip based on real fundamental solutions

    NASA Astrophysics Data System (ADS)

    Gao, Zhiwen; Zhou, Youhe

    2015-04-01

    A real fundamental solution for the fracture problem of a transversely isotropic high temperature superconductor (HTS) strip is obtained. The superconductor E-J constitutive law is characterized by the Bean model, in which the critical current density is independent of the flux density. Fracture analysis is performed by the method of singular integral equations, which are solved numerically by the Gauss-Lobatto-Chebyshev collocation method. To guarantee satisfactory accuracy, the convergence behavior of the kernel function is investigated. Numerical results for the fracture parameters are obtained, and the effects of the geometric characteristics, applied magnetic field and critical current density on the stress intensity factors (SIFs) are discussed.

  9. Method and apparatus for generating motor current spectra to enhance motor system fault detection

    DOEpatents

    Linehan, Daniel J.; Bunch, Stanley L.; Lyster, Carl T.

    1995-01-01

    A method and circuitry for sampling periodic amplitude modulations in a nonstationary periodic carrier wave to determine frequencies in the amplitude modulations. The method and circuit are described in terms of an improved motor current signature analysis. The method ensures that the sampled data set contains an exact whole number of carrier wave cycles by defining the rate at which samples of motor current data are collected. The circuitry ensures that a sampled data set containing stationary carrier waves is recreated from the analog motor current signal containing nonstationary carrier waves by conditioning the actual sampling rate to adjust with the frequency variations in the carrier wave. After the sampled data is transformed to the frequency domain via the Discrete Fourier Transform, the frequency distribution in the discrete spectra of those components due to the carrier wave and its harmonics will be minimized so that signals of interest are more easily analyzed.
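
    The patent's central point, that the record should span an exact whole number of carrier cycles so the carrier energy lands on a single DFT bin, can be sketched in a few lines. The example below uses a synthetic 60 Hz current with weak amplitude-modulation sidebands; the frequency-tracking hardware described above is not reproduced.

        import numpy as np

        def coherent_spectrum(current, fs, f_line, n_cycles=200):
            """FFT of a record trimmed to a whole number of carrier cycles so the
            line frequency falls exactly on an FFT bin (no spectral leakage)."""
            n = int(round(n_cycles * fs / f_line))   # samples spanning n_cycles periods
            seg = current[:n] - np.mean(current[:n])
            spec = np.abs(np.fft.rfft(seg)) / n
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            return freqs, spec

        # Synthetic 60 Hz motor current with 7 Hz amplitude-modulation sidebands
        fs = 6000.0
        t = np.arange(0, 5, 1 / fs)
        i_mot = (1 + 0.02 * np.cos(2 * np.pi * 7 * t)) * np.sin(2 * np.pi * 60 * t)
        freqs, spec = coherent_spectrum(i_mot, fs, f_line=60.0)
        # modulation sidebands appear at 60 +/- 7 Hz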

  10. Quantitative Analysis of Homogeneous Electrocatalytic Reactions at IDA Electrodes: The Example of [Ni(PPh2NBn2)2]2+

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Fei; Parkinson, B. A.; Divan, Ralu

    Interdigitated array (IDA) electrodes have been applied to study EC' (electron transfer followed by a catalytic reaction) reactions, and a new method of quantitative analysis of IDA results was developed. In this new method, currents on the IDA generator and collector electrodes for an EC' mechanism are derived from the number of redox cycles and the contribution of non-catalytic current, and the fractions of bipotential recycling species and catalytically active species are calculated, which helps in understanding the catalytic reaction mechanism. The homogeneous hydrogen evolution reaction catalyzed by the [Ni(PPh2NBn2)2]2+ (where PPh2NBn2 is 1,5-dibenzyl-3,7-diphenyl-1,5-diaza-3,7-diphosphacyclooctane) electrocatalyst was examined and analyzed with IDA electrodes. In addition, the existence of reaction intermediates in the catalytic cycle is inferred from the electrochemical behavior of glassy carbon disk electrodes and carbon IDA electrodes. This quantitative analysis of IDA electrode cyclic voltammetry currents can be used as a simple and straightforward method for determining reaction mechanisms in other catalytic systems as well.

  11. Controlling electrode gap during vacuum arc remelting at low melting current

    DOEpatents

    Williamson, R.L.; Zanner, F.J.; Grose, S.M.

    1997-04-15

    An apparatus and method are disclosed for controlling electrode gap in a vacuum arc remelting furnace, particularly at low melting currents. Spectrographic analysis is performed of the metal vapor plasma, from which estimates of electrode gap are derived. 5 figs.

  12. HOLDING TIME STUDY FOR FECALS/SALMONELLA & CONNECTING LANGUAGE FOR 503 REGULATIONS

    EPA Science Inventory

    Current federal regulations require monitoring for fecal coliforms or Salmonella in biosolids destined for land application. Methods used for analysis of fecal coliforms and Salmonella have been developed and are currently in use for quantification of these organisms. Recently c...

  13. Alternative methods for safety analysis and intervention for contracting commercial vehicles and drivers in Connecticut

    DOT National Transportation Integrated Search

    2012-06-01

    This study evaluated Connecticut's current system for qualifying contractors for the use of commercial vehicles on state contracts, identified its impacts, and made recommendations on how the state should revise the current system. The prima...

  14. Analysis of Proportional Integral and Optimized Proportional Integral Controllers for Resistance Spot Welding System (RSWS) - A Performance Perspective

    NASA Astrophysics Data System (ADS)

    Rama Subbanna, S.; Suryakalavathi, M., Dr.

    2017-08-01

    This paper presents a performance analysis of different control techniques for spike reduction in a medium-frequency-transformer-based DC spot welding system. Spike reduction is an important factor to consider where spot welding systems are concerned. During normal RSWS operation, the welding transformer's magnetic core can become saturated due to the unbalanced resistances of the two transformer secondary windings and the differing characteristics of the output rectifier diodes, which causes current spikes and over-current protection switch-off of the entire system. The current control technique is a piecewise linear technique, inspired by DC-DC converter control algorithms, that constitutes a novel spike reduction method for MFDC spot welding applications. Two controllers were used for the spike reduction portion of the overall application: the traditional PI controller and an optimized PI controller. Care is taken that the current control technique maintains reduced spikes in the primary current of the transformer while it reduces the total harmonic distortion. The performance parameters involved in the spike reduction technique are the THD and the percentage of current spike reduction for both techniques. Matlab/Simulink-based simulation is carried out for the MFDC RSWS, results are tabulated for the PI and optimized PI controllers, and a tradeoff analysis is carried out.
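
    The baseline controller in this comparison is the discrete PI law u[k] = Kp*e[k] + Ki*Ts*sum(e). A minimal sketch with a simple anti-windup clamp follows; the gains, sample time, and output limits are hypothetical, and the optimization of the second controller is not reproduced.

        def pi_controller(kp, ki, ts, u_min, u_max):
            """Discrete PI controller with anti-windup clamping:
            u[k] = Kp*e[k] + Ki*Ts*sum(e). Gains here are hypothetical."""
            integral = 0.0
            def step(error):
                nonlocal integral
                u = kp * error + ki * (integral + ts * error)
                if u_min < u < u_max:           # integrate only when unsaturated
                    integral += ts * error
                return min(max(u, u_min), u_max)
            return step

        ctrl = pi_controller(kp=0.8, ki=120.0, ts=1e-4, u_min=0.0, u_max=1.0)
        duty = ctrl(error=0.15)   # e.g. welding-current error in per-unit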

  15. A Two-State Study of Family Child Care Engagement in Quality Rating and Improvement Systems: A Mixed-Methods Analysis

    ERIC Educational Resources Information Center

    Hallam, Rena; Hooper, Alison; Bargreen, Kaitlin; Buell, Martha; Han, Myae

    2017-01-01

    Research Findings: The current study is a mixed-methods investigation of family child care provider participation in voluntary Quality Rating and Improvement Systems (QRIS) in 2 states. Study 1 is an analysis of matched QRIS and child care licensing administrative data extracted from both states in May, 2014. Poverty and population density…

  16. Comparative analysis of the current payment system for hospital services in Serbia and projected payments under diagnostic related groups system in urology.

    PubMed

    Babić, Uroš; Soldatović, Ivan; Vuković, Dejana; Milićević, Milena Šantrić; Stjepanović, Mihailo; Kojić, Dejan; Argirović, Aleksandar; Vukotić, Vinka

    2015-03-01

    A global budget per calendar year is the traditional method of funding hospitals in Serbia. Diagnosis-related groups (DRG) is a method of hospital payment based on the classification of patients into groups with clinically similar problems and similar utilization of hospital resources. The aim of this study was to compare the current method of hospital services payment with the costs projected by the DRG payment method in urology. The data were obtained from the information system used in the Clinical Hospital Center "Dr. Dragiša Mišović"--Dedinje in Belgrade, Serbia. The implemented hospital information system was the main criterion for selection of healthcare institutions. The study included 994 randomly selected patients treated surgically and conservatively in 2012. Average costs under the current payment method were slightly higher than those projected by DRG, however, the variability was twice as high (54,111 ± 69,789 compared to 53,434 ± 32,509, p < 0.001). The univariate analysis showed that the highest correlation with the current payment method, as well as with the projected DRG method, was observed in relation to the number of days of hospitalization (ρ = 0.842, p < 0.001, and ρ = 0.637, p < 0.001, respectively). Multivariate regression models confirmed the influence of the number of hospitalization days on costs under the current payment system (β = 0.843, p < 0.001) as well as under the projected DRG payment system (β = 0.737, p < 0.001). The same predictor was crucial for the difference between the current and the projected DRG payment methods (β = 0.501, p < 0.001). Payment under the DRG system is administratively more complex because it requires detailed and standardized coding of diagnoses and procedures, as well as information on the average consumption of resources (costs) per DRG. Given that the aggregate costs of treatment under the two hospital payment methods compared in the study are not significantly different, a focus on minor surgeries, both under the current hospital payment method and under the introduced DRG system, would be far more cost-effective for a hospital, as large variations in treatment performance (reductions in days of hospitalization and complications) and, consequently, invoiced amounts would be reduced.

  17. Large scale analysis of the mutational landscape in HT-SELEX improves aptamer discovery

    PubMed Central

    Hoinka, Jan; Berezhnoy, Alexey; Dao, Phuong; Sauna, Zuben E.; Gilboa, Eli; Przytycka, Teresa M.

    2015-01-01

    High-Throughput (HT) SELEX combines SELEX (Systematic Evolution of Ligands by EXponential Enrichment), a method for aptamer discovery, with massively parallel sequencing technologies. This emerging technology provides data for a global analysis of the selection process and for simultaneous discovery of a large number of candidates but currently lacks dedicated computational approaches for their analysis. To close this gap, we developed novel in-silico methods to analyze HT-SELEX data and utilized them to study the emergence of polymerase errors during HT-SELEX. Rather than considering these errors as a nuisance, we demonstrated their utility for guiding aptamer discovery. Our approach builds on two main advancements in aptamer analysis: AptaMut—a novel technique allowing for the identification of polymerase errors conferring an improved binding affinity relative to the ‘parent’ sequence and AptaCluster—an aptamer clustering algorithm which is, to the best of our knowledge, the only currently available tool capable of efficiently clustering entire aptamer pools. We applied these methods to an HT-SELEX experiment developing aptamers against Interleukin 10 receptor alpha chain (IL-10RA) and experimentally confirmed our predictions thus validating our computational methods. PMID:25870409

  18. Multivariate analysis and extraction of parameters in resistive RAMs using the Quantum Point Contact model

    NASA Astrophysics Data System (ADS)

    Roldán, J. B.; Miranda, E.; González-Cordero, G.; García-Fernández, P.; Romero-Zaliz, R.; González-Rodelas, P.; Aguilera, A. M.; González, M. B.; Jiménez-Molinos, F.

    2018-01-01

    A multivariate analysis of the parameters that characterize the reset process in Resistive Random Access Memory (RRAM) has been performed. The different correlations obtained can help to shed light on the current components that contribute in the Low Resistance State (LRS) of the technology considered. In addition, a screening method for the Quantum Point Contact (QPC) current component is presented. For this purpose, the second derivative of the current has been obtained using a novel numerical method which allows determining the QPC model parameters. Once the procedure is completed, a whole Resistive Switching (RS) series of thousands of curves is studied by means of a genetic algorithm. The extracted QPC parameter distributions are characterized in depth to get information about the filamentary pathways associated with LRS in the low voltage conduction regime.
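
    The abstract does not detail the paper's numerical method for the second derivative of the current; a common, noise-tolerant way to obtain a smoothed d2I/dV2 from a measured I-V sweep is a Savitzky-Golay derivative filter, sketched below as one plausible alternative (the window, polynomial order, and toy I-V curve are hypothetical).

        import numpy as np
        from scipy.signal import savgol_filter

        def second_derivative(v, i, window=31, polyorder=3):
            """Smoothed d2I/dV2 of an I-V sweep via a Savitzky-Golay derivative
            filter (a common noise-tolerant choice; not the paper's exact method)."""
            dv = v[1] - v[0]                  # assumes a uniform voltage grid
            return savgol_filter(i, window, polyorder, deriv=2, delta=dv)

        v = np.linspace(0.0, 0.8, 801)
        i = 1e-4 * np.sinh(6.0 * v) + 5e-7 * np.random.randn(v.size)  # toy noisy I-V
        d2 = second_derivative(v, i)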

  19. Trends in Mediation Analysis in Nursing Research: Improving Current Practice.

    PubMed

    Hertzog, Melody

    2018-06-01

    The purpose of this study was to describe common approaches used by nursing researchers to test mediation models and evaluate them within the context of current methodological advances. MEDLINE was used to locate studies testing a mediation model and published from 2004 to 2015 in nursing journals. Design (experimental/correlation, cross-sectional/longitudinal, model complexity) and analysis (method, inclusion of test of mediated effect, violations/discussion of assumptions, sample size/power) characteristics were coded for 456 studies. General trends were identified using descriptive statistics. Consistent with findings of reviews in other disciplines, evidence was found that nursing researchers may not be aware of the strong assumptions and serious limitations of their analyses. Suggestions for strengthening the rigor of such studies and an overview of current methods for testing more complex models, including longitudinal mediation processes, are presented.
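
    As an illustration of the current methods the review points toward, the following hypothetical sketch tests the indirect (mediated) effect with a percentile bootstrap on the product of coefficients rather than causal-steps logic; all data are simulated and the model is deliberately minimal:

```python
# Bootstrap test of the indirect effect a*b in a simple X -> M -> Y model.
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)                       # treatment/exposure
m = 0.5 * x + rng.normal(size=n)             # mediator
y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # outcome

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]               # X -> M path
    # M -> Y path, adjusting for X (coefficient of m in [m, x, 1] regression)
    b = np.linalg.lstsq(np.c_[m, x, np.ones_like(x)], y, rcond=None)[0][0]
    return a * b

boot = [indirect(*(arr[idx] for arr in (x, m, y)))
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```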

  20. Hybrid PV/diesel solar power system design using multi-level factor analysis optimization

    NASA Astrophysics Data System (ADS)

    Drake, Joshua P.

    Solar power systems represent a large area of interest across a spectrum of organizations at a global level. It was determined that a clear understanding of current state-of-the-art software and design methods, as well as optimization methods, could be used to improve the design methodology. Solar power design literature was reviewed for an in-depth understanding of solar power system design methods and algorithms. Multiple software packages for the design and optimization of solar power systems were analyzed for a critical understanding of their design workflow. In addition, several methods of optimization were studied, including brute force, Pareto analysis, Monte Carlo, linear and nonlinear programming, and multi-way factor analysis. Factor analysis was selected as the most efficient optimization method for engineering design as applied to solar power system design. The solar power design algorithms, software workflow analysis, and factor analysis optimization were combined to develop a solar power system design optimization software package called FireDrake. This software was used for the design of multiple solar power systems in conjunction with an energy audit case study performed in seven Tibetan refugee camps located in Mainpat, India. A report of solar system designs for the camps, as well as a proposed schedule for future installations, was generated. It was determined that there are several improvements that could be made to the state of the art in modern solar power system design, though the complexity of current applications is significant.

  1. A Short-Circuit Method for Networks.

    ERIC Educational Resources Information Center

    Ong, P. P.

    1983-01-01

    Describes a method of network analysis that allows avoidance of Kirchhoff's laws (provided the network is symmetrical) by reduction to simple series/parallel resistances. The method can be extended to symmetrical alternating-current, capacitance, or inductance networks if the corresponding theorems are used. A symmetric cubic network serves as an example. (JM)
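
    A worked instance of the reduction, under the classic assumption (not stated in the abstract) of twelve equal resistors R on the edges of a cube measured across the body diagonal: symmetry makes two triples of nodes equipotential, so the cube collapses to three series stages of parallel resistors:

```python
# Body-diagonal resistance of a cube of twelve equal resistors R.
# Symmetry reduction: 3 parallel (R/3), then 6 parallel (R/6), then 3 parallel
# (R/3) in series, giving R_eq = R/3 + R/6 + R/3 = 5R/6.
R = 1.0

def parallel(r, n):
    # n equal resistors of value r in parallel
    return r / n

r_eq = parallel(R, 3) + parallel(R, 6) + parallel(R, 3)
print(r_eq)  # 0.8333... = 5R/6
```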

  2. Determination of the pure silicon monocarbide content of silicon carbide and products based on silicon carbide

    NASA Technical Reports Server (NTRS)

    Prost, L.; Pauillac, A.

    1978-01-01

    Experience has shown that different methods of analysis of SiC products give different results. Methods identified as AFNOR, FEPA, and manufacturer P, currently used to detect SiC, free C, free Si, free Fe, and SiO2 are reviewed. The AFNOR method gives lower SiC content, attributed to destruction of SiC by grinding. Two products sent to independent labs for analysis by the AFNOR and FEPA methods showed somewhat different results, especially for SiC, SiO2, and Al2O3 content, whereas an X-ray analysis showed a SiC content approximately 10 points lower than by chemical methods.

  3. Alternating steady state free precession for estimation of current-induced magnetic flux density: A feasibility study.

    PubMed

    Lee, Hyunyeol; Jeong, Woo Chul; Kim, Hyung Joong; Woo, Eung Je; Park, Jaeseok

    2016-05-01

    To develop a novel, current-controlled alternating steady-state free precession (SSFP)-based conductivity imaging method and corresponding MR signal models to estimate current-induced magnetic flux density (Bz) and conductivity distribution. In the proposed method, an SSFP pulse sequence, which is in sync with alternating current pulses, produces dual oscillating steady states while yielding a nonlinear relation between signal phase and Bz. A ratiometric signal model between the states was analytically derived using the Bloch equation, wherein Bz was estimated by solving a nonlinear inverse problem for conductivity estimation. A theoretical analysis of the signal-to-noise ratio of Bz was given. Numerical and experimental studies were performed using SSFP-FID and SSFP-ECHO with current pulses positioned either before or after signal encoding to investigate the feasibility of the proposed method for conductivity estimation. Of all the SSFP variants considered, SSFP-FID with alternating current pulses applied before signal encoding exhibits the highest Bz signal-to-noise ratio and conductivity contrast. Additionally, compared with conventional conductivity imaging, the proposed method benefits from rapid SSFP acquisition without apparent loss of conductivity contrast. We successfully demonstrated the feasibility of the proposed method in estimating current-induced Bz and conductivity distribution. It can be a promising, rapid imaging strategy for quantitative conductivity imaging. © 2015 Wiley Periodicals, Inc.

  4. Comparison between laser terahertz emission microscope and conventional methods for analysis of polycrystalline silicon solar cell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakanishi, Hidetoshi, E-mail: nakanisi@screen.co.jp; Ito, Akira, E-mail: a.ito@screen.co.jp; Takayama, Kazuhisa, E-mail: takayama.k0123@gmail.com

    2015-11-15

    A laser terahertz emission microscope (LTEM) can be used for noncontact inspection to detect the waveforms of photoinduced terahertz emissions from material devices. In this study, we experimentally compared the performance of LTEM with conventional analysis methods, e.g., electroluminescence (EL), photoluminescence (PL), and laser beam induced current (LBIC), as an inspection method for solar cells. The results showed that LTEM was more sensitive to the characteristics of the depletion layer of the polycrystalline solar cell compared with EL, PL, and LBIC and that it could be used as a complementary tool to the conventional analysis methods for a solar cell.

  5. The Use of Propensity Scores in Mediation Analysis

    ERIC Educational Resources Information Center

    Jo, Booil; Stuart, Elizabeth A.; MacKinnon, David P.; Vinokur, Amiram D.

    2011-01-01

    Mediation analysis uses measures of hypothesized mediating variables to test theory for how a treatment achieves effects on outcomes and to improve subsequent treatments by identifying the most efficient treatment components. Most current mediation analysis methods rely on untested distributional and functional form assumptions for valid…

  6. Qualitative and quantitative analysis of lignocellulosic biomass using infrared techniques: A mini-review

    USDA-ARS?s Scientific Manuscript database

    Current wet chemical methods for biomass composition analysis using two-step sulfuric acid hydrolysis are time-consuming, labor-intensive, and unable to provide structural information about biomass. Infrared techniques provide fast, low-cost analysis, are non-destructive, and have shown promising re...

  7. RICH detectors: Analysis methods and their impact on physics

    NASA Astrophysics Data System (ADS)

    Križan, Peter

    2017-12-01

    The paper discusses the importance of particle identification in particle physics experiments, and reviews the impact of ring imaging Cherenkov (RICH) counters in experiments that are currently running, or are under construction. Several analysis methods are discussed that are needed to calibrate a RICH counter, and to align its components with the rest of the detector. Finally, methods are reviewed on how to employ the collected data to efficiently separate one particle species from the other.

  8. The Generation of Novel MR Imaging Techniques to Visualize Inflammatory/Degenerative Mechanisms and the Correlation of MR Data with 3D Microscopic Changes

    DTIC Science & Technology

    2013-09-01

    ...existing MR scanning systems, providing the ability to visualize structures that are impossible with current methods. Using techniques to concurrently stain ... a unique system for analysis of affected brain regions, coupled with other imaging techniques and molecular measurements, holds significant...

  9. Computerized Spiral Analysis Using the iPad

    PubMed Central

    Sisti, Jonathan A.; Christophe, Brandon; Seville, Audrey Rakovich; Garton, Andrew L.A.; Gupta, Vivek P.; Bandin, Alexander J.; Yu, Qiping; Pullman, Seth L.

    2017-01-01

    Background: Digital analysis of writing and drawing has become a valuable research and clinical tool for the study of upper limb motor dysfunction in patients with essential tremor, Parkinson's disease, dystonia, and related disorders. We developed a validated method of computerized spiral analysis of hand-drawn Archimedean spirals, using a Wacom graphics tablet, that provides insight into movement dynamics beyond subjective visual assessment. While the Wacom tablet method provides robust data, more widely available mobile technology platforms exist. New Method: We introduce a novel adaptation of the Wacom-based method for the collection of hand-drawn kinematic data using an Apple iPad. This iPad-based system is stand-alone, easy to use, precise, and potentially ubiquitous, and it can capture drawing data with either a finger or a capacitive stylus. Results: The iPad-based system acquires position and time data that are fully compatible with our original spiral analysis program. All of the important indices, including degree of severity, speed, presence of tremor, tremor amplitude, tremor frequency, variability of pressure, and tightness, are calculated from the digital spiral data, which the application is able to transmit. Comparison with Existing Method: While the iPad method is limited by current touch screen technology, it does collect data with acceptable congruence compared to the current Wacom-based method while providing the advantages of accessibility and ease of use. Conclusions: The iPad is capable of capturing precise digital spiral data for analysis of motor dysfunction while also providing a convenient, easy-to-use modality in clinics and potentially at home. PMID:27840146
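
    One of the listed indices, tremor frequency, can be sketched from first principles: remove the slow Archimedean trend from the sampled radius and read the dominant frequency of the residual from an FFT. The sampling rate, spiral parameters, and the 5 Hz tremor below are invented; this is not the authors' program:

```python
# Estimate tremor frequency from the radial deviation of a drawn spiral.
import numpy as np

fs = 120.0                                   # digitizer sample rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
theta = 2 * np.pi * 0.5 * t                  # spiral unwinds at 0.5 rev/s
r = 0.2 * theta + 0.05 * np.sin(2 * np.pi * 5 * t)  # 5 Hz tremor superimposed

# remove the linear Archimedean trend r = a*theta + b, keep the oscillation
residual = r - np.polyval(np.polyfit(theta, r, 1), theta)

spec = np.abs(np.fft.rfft(residual))
freqs = np.fft.rfftfreq(residual.size, 1 / fs)
print("tremor frequency ~", freqs[np.argmax(spec[1:]) + 1], "Hz")
```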

  10. A convenient method for X-ray analysis in TEM that measures mass thickness and composition

    NASA Astrophysics Data System (ADS)

    Statham, P.; Sagar, J.; Holland, J.; Pinard, P.; Lozano-Perez, S.

    2018-01-01

    We consider a new approach for quantitative analysis in transmission electron microscopy (TEM) that offers the same convenience as single-standard quantitative analysis in scanning electron microscopy (SEM). Instead of a bulk standard, a thin film with known mass thickness is used as a reference. The procedure involves recording an X-ray spectrum from the reference film for each session of acquisitions on real specimens. There is no need to measure the beam current; the current only needs to be stable for the duration of the session. A new reference standard with a large (1 mm × 1 mm) area of silicon nitride of uniform 100 nm thickness is used to reveal regions of X-ray detector occlusion that would give misleading results for any X-ray method that measures thickness. Unlike previous methods, the new X-ray method does not require an accurate beam current monitor but delivers equivalent accuracy in mass thickness measurement. Quantitative compositional results are also automatically corrected for specimen self-absorption. The new method is tested using a wedge specimen of Inconel 600 that is used to calibrate the high-angle annular dark field (HAADF) signal to provide a thickness reference, and results are compared with electron energy-loss spectrometry (EELS) measurements. For the new X-ray method, element composition results are consistent with the expected composition for the alloy, and the mass thickness measurement is shown to provide an accurate alternative to EELS for thickness determination in TEM without the uncertainty associated with mean free path estimates.

  11. Evaluation of a cost-effective loads approach. [shock spectra/impedance method for Viking Orbiter

    NASA Technical Reports Server (NTRS)

    Garba, J. A.; Wada, B. K.; Bamford, R.; Trubert, M. R.

    1976-01-01

    A shock spectra/impedance method for loads predictions is used to estimate member loads for the Viking Orbiter, a 7800-lb interplanetary spacecraft that has been designed using transient loads analysis techniques. The transient loads analysis approach leads to a lightweight structure but requires complex and costly analyses. To reduce complexity and cost, a shock spectra/impedance method is currently being used to design the Mariner Jupiter Saturn spacecraft. This method has the advantage of using low-cost in-house loads analysis techniques and typically results in more conservative structural loads. The method is evaluated by comparing the increase in Viking member loads to the loads obtained by the transient loads analysis approach. An estimate of the weight penalty incurred by using this method is presented. The paper also compares the calculated flight loads from the transient loads analyses and the shock spectra/impedance method to measured flight data.

  12. A multiclass multiresidue LC-MS/MS method for analysis of veterinary drugs in bovine kidney

    USDA-ARS?s Scientific Manuscript database

    The increased efficiency permitted by multiclass, multiresidue methods has made such approaches very attractive to laboratories involved in monitoring veterinary drug residues in animal tissues. In this current work, evaluation of a multiclass multiresidue LC-MS/MS method in bovine kidney is describ...

  13. Topological data analysis as a morphometric method: using persistent homology to demarcate a leaf morphospace

    USDA-ARS?s Scientific Manuscript database

    Current morphometric methods that comprehensively measure shape cannot compare the disparate leaf shapes found in flowering plants and are sensitive to processing artifacts. Here we describe a persistent homology approach to measuring shape. Persistent homology is a topological method (concerned wit...

  14. METHOD DEVELOPMENT, EVALUATION, REFINEMENT, AND ANALYSIS FOR FIELD STUDIES

    EPA Science Inventory

    Manufacturers routinely introduce new pesticides into the marketplace and discontinue manufacturing older pesticides that may be more toxic to humans. Analytical methods and environmental data are needed for current use residential pesticides (e.g., pyrethrins, synthetic pyrethr...

  15. Impacts of potential seismic landslides on lifeline corridors.

    DOT National Transportation Integrated Search

    2015-02-01

    This report presents a fully probabilistic method for regional seismically induced landslide hazard analysis and mapping. The method considers the most current predictions for strong ground motions and seismic sources through use of the U.S.G.S. ...

  16. Effects of Computer-Based Training on Procedural Modifications to Standard Functional Analyses

    ERIC Educational Resources Information Center

    Schnell, Lauren K.; Sidener, Tina M.; DeBar, Ruth M.; Vladescu, Jason C.; Kahng, SungWoo

    2018-01-01

    Few studies have evaluated methods for training decision-making when functional analysis data are undifferentiated. The current study evaluated computer-based training to teach 20 graduate students to arrange functional analysis conditions, analyze functional analysis data, and implement procedural modifications. Participants were exposed to…

  17. Advanced bridge safety initiative, task 3 : slab bridge load rating using AASHTO methodology and finite element analysis - an analysis of 20 bridges.

    DOT National Transportation Integrated Search

    2011-12-01

    Current AASHTO provisions for the conventional load rating of flat slab bridges rely on the equivalent strip method of analysis for determining live load effects, which is generally regarded as overly conservative by many professional engineers. A...

  18. Targeted methods for quantitative analysis of protein glycosylation

    PubMed Central

    Goldman, Radoslav; Sanda, Miloslav

    2018-01-01

    Quantification of proteins by LC-MS/MS-MRM has become a standard method with broad projected clinical applicability. MRM quantification of protein modifications is, however, far less utilized, especially in the case of glycoproteins. This review summarizes current methods for quantitative analysis of protein glycosylation with a focus on MRM methods. We describe advantages of this quantitative approach, analytical parameters that need to be optimized to achieve reliable measurements, and point out the limitations. Differences between major classes of N- and O-glycopeptides are described and class-specific glycopeptide assays are demonstrated. PMID:25522218

  19. Analysis of drugs in human tissues by supercritical fluid extraction/immunoassay

    NASA Astrophysics Data System (ADS)

    Furton, Kenneth G.; Sabucedo, Alberta; Rein, Joseph; Hearn, W. L.

    1997-02-01

    A rapid, readily automated method has been developed for the quantitative analysis of phenobarbital from human liver tissues based on supercritical carbon dioxide extraction followed by fluorescence enzyme immunoassay. The method developed significantly reduces sample handling and utilizes the entire liver homogenate. The current method yields comparable recoveries and precision and does not require the use of an internal standard, although traditional GC/MS confirmation can still be performed on sample extracts. Additionally, the proposed method uses non-toxic, inexpensive carbon dioxide, thus eliminating the use of halogenated organic solvents.

  20. Application of probabilistic analysis/design methods in space programs - The approaches, the status, and the needs

    NASA Technical Reports Server (NTRS)

    Ryan, Robert S.; Townsend, John S.

    1993-01-01

    The prospective improvement of probabilistic methods for space program analysis/design entails the further development of theories, codes, and tools which match specific areas of application, the drawing of lessons from previous uses of probability and statistics data bases, the enlargement of data bases (especially in the field of structural failures), and the education of engineers and managers on the advantages of these methods. An evaluation is presently made of the current limitations of probabilistic engineering methods. Recommendations are made for specific applications.

  1. In-Depth Analysis of the JACK Model.

    DOT National Transportation Integrated Search

    2009-04-30

    Recently, as part of a comprehensive analysis of budget and funding options, a TxDOT special task force has examined the agency's current financial forecasting methods and has developed a model designed to estimate future State Highway Fund rev...

  2. ESEA: Discovering the Dysregulated Pathways based on Edge Set Enrichment Analysis

    PubMed Central

    Han, Junwei; Shi, Xinrui; Zhang, Yunpeng; Xu, Yanjun; Jiang, Ying; Zhang, Chunlong; Feng, Li; Yang, Haixiu; Shang, Desi; Sun, Zeguo; Su, Fei; Li, Chunquan; Li, Xia

    2015-01-01

    Pathway analyses are playing an increasingly important role in understanding biological mechanisms, cellular function, and disease states. Current pathway-identification methods generally focus only on changes in gene expression levels; however, the biological relationships among genes are also fundamental components of pathways, and dysregulated relationships may likewise alter pathway activities. We propose a powerful computational method, Edge Set Enrichment Analysis (ESEA), for the identification of dysregulated pathways. This provides a novel way of pathway analysis by investigating the changes of biological relationships of pathways in the context of gene expression data. Simulation studies illustrate the power and performance of ESEA under various simulated conditions. Using real datasets from p53 mutation, Type 2 diabetes, and lung cancer, we validate the effectiveness of ESEA in identifying dysregulated pathways. We further compare our results with five other pathway enrichment analysis methods. With these analyses, we show that ESEA is able to help uncover dysregulated biological pathways underlying complex traits and human diseases via specific use of the dysregulated biological relationships. We developed a freely available R-based tool for ESEA. Currently, ESEA supports pathway analysis using seven public databases (KEGG, Reactome, Biocarta, NCI, SPIKE, HumanCyc, Panther). PMID:26267116
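
    The core edge-scoring idea can be sketched as follows (a toy stand-in, not the authors' R tool): score each pathway edge by the change in gene-gene correlation between two phenotypes and sum the scores over the pathway's edge set:

```python
# Toy edge-set scoring: edges whose gene-gene correlation changes between
# conditions contribute most to the pathway score.
import numpy as np

rng = np.random.default_rng(7)
expr_a = rng.normal(size=(30, 5))            # 30 samples x 5 genes, condition A
expr_b = rng.normal(size=(30, 5))
expr_b[:, 1] = expr_b[:, 0] + 0.1 * rng.normal(size=30)  # edge (0,1) rewired in B

edges = [(0, 1), (1, 2), (3, 4)]             # hypothetical pathway edge set

def edge_score(g1, g2):
    ca = np.corrcoef(expr_a[:, g1], expr_a[:, g2])[0, 1]
    cb = np.corrcoef(expr_b[:, g1], expr_b[:, g2])[0, 1]
    return abs(cb - ca)                      # correlation change across conditions

pathway_score = sum(edge_score(*e) for e in edges)
print({e: round(edge_score(*e), 2) for e in edges}, "pathway:", round(pathway_score, 2))
```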

  3. The role of finite-difference methods in design and analysis for supersonic cruise

    NASA Technical Reports Server (NTRS)

    Townsend, J. C.

    1976-01-01

    Finite-difference methods for analysis of steady, inviscid supersonic flows are described, and their present state of development is assessed with particular attention to their applicability to vehicles designed for efficient cruise flight. Current work is described which will allow greater geometric latitude, improve treatment of embedded shock waves, and relax the requirement that the axial velocity must be supersonic.

  4. Turning up the heat on aircraft structures. [design and analysis for high-temperature conditions

    NASA Technical Reports Server (NTRS)

    Dobyns, Alan; Saff, Charles; Johns, Robert

    1992-01-01

    An overview is presented of the current effort in design and development of aircraft structures to achieve the lowest cost for best performance. Enhancements in this area are focused on integrated design, improved design analysis tools, low-cost fabrication techniques, and more sophisticated test methods. 3D CAD/CAM data are becoming the method through which design, manufacturing, and engineering communicate.

  5. Rapid quantification of underivatized amino acids in plasma by hydrophilic interaction liquid chromatography (HILIC) coupled with tandem mass-spectrometry.

    PubMed

    Prinsen, Hubertus C M T; Schiebergen-Bronkhorst, B G M; Roeleveld, M W; Jans, J J M; de Sain-van der Velden, M G M; Visser, G; van Hasselt, P M; Verhoeven-Duif, N M

    2016-09-01

    Amino acidopathies are a class of inborn errors of metabolism (IEM) that can be diagnosed by analysis of amino acids (AA) in plasma. Current strategies for AA analysis include cation exchange HPLC with post-column ninhydrin derivatization, GC-MS, and LC-MS/MS-related methods. Major drawbacks of the current methods are time-consuming procedures, derivatization problems, problems with retention, and limited MS sensitivity. The use of hydrophilic interaction liquid chromatography (HILIC) columns is an ideal separation mode for hydrophilic compounds like AA. Here we report a HILIC method for analysis of 36 underivatized AA in plasma to detect defects in AA metabolism that overcomes the major drawbacks of other methods. A rapid, sensitive, and specific method was developed for the analysis of AA in plasma without derivatization using HILIC coupled with tandem mass spectrometry (Xevo TQ, Waters). Excellent separation of 36 AA (24 quantitative/12 qualitative) in plasma was achieved on an Acquity BEH Amide column (2.1×100 mm, 1.7 μm) in a single MS run of 18 min. Plasma of patients with a known IEM in AA metabolism was analyzed, and all patients were correctly identified. The reported method analyzes 36 AA in plasma within 18 min and provides baseline separation of isomeric AA such as leucine and isoleucine. No separation was obtained for isoleucine and allo-isoleucine. The method is applicable to the study of defects in AA metabolism in plasma.

  6. Quantification of Microbial Phenotypes

    PubMed Central

    Martínez, Verónica S.; Krömer, Jens O.

    2016-01-01

    Metabolite profiling technologies have improved to generate close to quantitative metabolomics data, which can be employed to quantitatively describe the metabolic phenotype of an organism. Here, we review the current technologies available for quantitative metabolomics, present their advantages and drawbacks, and the current challenges to generate fully quantitative metabolomics data. Metabolomics data can be integrated into metabolic networks using thermodynamic principles to constrain the directionality of reactions. Here we explain how to estimate Gibbs energy under physiological conditions, including examples of the estimations, and the different methods for thermodynamics-based network analysis. The fundamentals of the methods and how to perform the analyses are described. Finally, an example applying quantitative metabolomics to a yeast model by 13C fluxomics and thermodynamics-based network analysis is presented. The example shows that (1) these two methods are complementary to each other; and (2) there is a need to take into account Gibbs energy errors. Better estimations of metabolic phenotypes will be obtained when further constraints are included in the analysis. PMID:27941694
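
    The thermodynamic constraint mentioned above reduces, for a single reaction, to checking the sign of the transformed Gibbs energy, dG' = dG0' + RT ln Q. A worked example with illustrative numbers (the standard energy and metabolite concentrations are assumptions, not values from the review):

```python
# Feasibility check for one reaction: it can proceed forward only if dG' < 0.
import math

R = 8.314e-3      # gas constant, kJ/(mol*K)
T = 310.15        # physiological temperature, K

dG0 = -20.0       # assumed standard transformed Gibbs energy, kJ/mol
Q = (2e-3) / (5e-4)   # assumed product/substrate concentration ratio

dG = dG0 + R * T * math.log(Q)
print(f"dG' = {dG:.2f} kJ/mol -> reaction {'feasible' if dG < 0 else 'infeasible'}")
```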

  7. Analytical modeling and analysis of magnetic field and torque for novel axial flux eddy current couplers with PM excitation

    NASA Astrophysics Data System (ADS)

    Li, Zhao; Wang, Dazhi; Zheng, Di; Yu, Linxin

    2017-10-01

    Rotational permanent magnet eddy current couplers are promising devices for torque and speed transmission without any mechanical contact. In this study, flux-concentration disk-type permanent magnet eddy current couplers with double conductor rotor are investigated. Given the drawback of the accurate three-dimensional finite element method, this paper proposes a mixed two-dimensional analytical modeling approach. Based on this approach, the closed-form expressions of magnetic field, eddy current, electromagnetic force and torque for such devices are obtained. Finally, a three-dimensional finite element method is employed to validate the analytical results. Besides, a prototype is manufactured and tested for the torque-speed characteristic.

  8. Significance of the model considering mixed grain-size for inverse analysis of turbidites

    NASA Astrophysics Data System (ADS)

    Nakao, K.; Naruse, H.; Tokuhashi, S., Sr.

    2016-12-01

    A method for inverse analysis of turbidity currents is proposed for application to field observations. Estimating the initial conditions of such catastrophic events from field observations has long been important in sedimentological research. For instance, inverse analyses have been used to estimate hydraulic conditions from topography observations of pyroclastic flows (Rossano et al., 1996), real-time monitored debris-flow events (Fraccarollo and Papa, 2000), tsunami deposits (Jaffe and Gelfenbaum, 2007) and ancient turbidites (Falcini et al., 2009). Such inverse analyses need forward models, and most turbidity current models employ particles of uniform grain size. Turbidity currents, however, are best characterized by variation in grain-size distribution. Although numerical models with mixed grain sizes exist, their computational cost makes application to natural examples difficult (Lesshaft et al., 2011). Here we extend a turbidity current model based on the non-steady 1D shallow-water equation to mixed grain-size particles at low calculation cost and apply the model to inverse analysis. In this study, we compared two forward models considering uniform and mixed grain-size particles, respectively. We adopted an inverse analysis based on the Simplex method, which optimizes the initial conditions (thickness, depth-averaged velocity and depth-averaged volumetric concentration of a turbidity current) with multi-point starts, and employed the result of the forward model [h: 2.0 m, U: 5.0 m/s, C: 0.01%] as reference data. The results show that the inverse analysis using the mixed grain-size model recovers the known initial condition of the reference data even when the optimization starts far from the true solution, whereas the inverse analysis using the uniform grain-size model requires starting parameters within a quite narrow range near the solution. The uniform grain-size model often converges to a local optimum that differs significantly from the true solution. In conclusion, we propose an optimization method based on a model considering mixed grain-size particles, and show its application to examples of turbidites in the Kiyosumi Formation, Boso Peninsula, Japan.
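
    The inversion strategy can be sketched with a trivially cheap stand-in for the shallow-water forward model: Nelder-Mead (a Simplex method) adjusts the flow parameters until the predicted deposit profile matches the observed one. The forward model, grid, and starting point below are invented, and for brevity only U and C are optimized:

```python
# Simplex-style inversion of a toy deposit model (not the authors' model).
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 10e3, 20)               # downstream positions, m

def forward(U, C, h=2.0):
    # toy stand-in: an exponentially thinning deposit whose amplitude and
    # decay length depend on the flow state (illustrative physics only)
    return C * h * U * np.exp(-x / (U * 1e3))

observed = forward(5.0, 1e-4)                # "reference data" (U=5 m/s, C=0.01%)

def misfit(p):
    # residuals scaled up so default convergence tolerances behave sensibly
    return np.sum(((forward(*p) - observed) * 1e3) ** 2)

res = minimize(misfit, x0=[3.0, 5e-5], method="Nelder-Mead")
print("recovered U, C:", res.x)
```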

  9. Development of an SPE/CE method for analyzing HAAs

    USGS Publications Warehouse

    Zhang, L.; Capel, P.D.; Hozalski, R.M.

    2007-01-01

    The haloacetic acid (HAA) analysis methods approved by the US Environmental Protection Agency involve extraction and derivatization of HAAs (typically to their methyl ester form) and analysis by gas chromatography (GC) with electron capture detection (ECD). Concerns associated with these methods include the time and effort of the derivatization process, use of potentially hazardous chemicals or conditions during methylation, poor recoveries because of low extraction efficiencies for some HAAs or matrix effects from sulfate, and loss of tribromoacetic acid because of decarboxylation. The HAA analysis method introduced here uses solid-phase extraction (SPE) followed by capillary electrophoresis (CE) analysis. The method is accurate, reproducible, sensitive, relatively safe, and easy to perform, and avoids the use of large amounts of solvent for liquid-liquid extraction and the potential hazards and hassles of derivatization. The cost of analyzing HAAs using this method should be lower than the currently approved methods, and utilities with a GC/ECD can perform the analysis in-house.

  10. An analysis of the temperature dependence of the gate current in complementary heterojunction field-effect transistors

    NASA Technical Reports Server (NTRS)

    Cunningham, Thomas J.; Fossum, Eric R.; Baier, Steven M.

    1992-01-01

    The temperature dependence of the gate current versus the gate voltage in complementary heterojunction field-effect transistors (CHFETs) is examined. An analysis indicates that the gate conduction is due to a combination of thermionic emission, thermionic-field emission, and conduction through a temperature-activated resistance. The thermionic-field emission is consistent with tunneling through the AlGaAs insulator. The activation energy of the resistance is consistent with the ionization energy associated with the DX center in the AlGaAs. Methods for reducing the gate current are discussed.

  11. Rip current evidence by hydrodynamic simulations, bathymetric surveys and UAV observation

    NASA Astrophysics Data System (ADS)

    Benassai, Guido; Aucelli, Pietro; Budillon, Giorgio; De Stefano, Massimo; Di Luccio, Diana; Di Paola, Gianluigi; Montella, Raffaele; Mucerino, Luigi; Sica, Mario; Pennetta, Micla

    2017-09-01

    The prediction of the formation, spacing and location of rip currents is a scientific challenge that can be achieved by means of different complementary methods. In this paper the analysis of numerical and experimental data, including RPAS (remotely piloted aircraft systems) observations, allowed us to detect the presence of rip currents and rip channels at the mouth of the Sele River, in the Gulf of Salerno, southern Italy. The dataset used to analyze these phenomena consisted of two different bathymetric surveys, a detailed sediment analysis, and a set of high-resolution wave numerical simulations, completed with Google Earth™ images and RPAS observations. The grain size trend analysis and the numerical simulations allowed us to identify the rip current occurrence, forced by topographically constrained channels incised on the seabed, which were compared with observations.

  12. Advances in Optical Fiber-Based Faraday Rotation Diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, A D; McHale, G B; Goerz, D A

    2009-07-27

    In the past two years, we have used optical fiber-based Faraday Rotation Diagnostics (FRDs) to measure pulsed currents on several dozen capacitively driven and explosively driven pulsed power experiments. We have made simplifications to the necessary hardware for quadrature-encoded polarization analysis, including development of an all-fiber analysis scheme. We have developed a numerical model that is useful for predicting and quantifying deviations from the ideal diagnostic response. We have developed a method of analyzing quadrature-encoded FRD data that is simple to perform and offers numerous advantages over several existing methods. When comparison has been possible, we have seen good agreement between our FRDs and other current sensors.

  13. Microarray technology for major chemical contaminants analysis in food: current status and prospects.

    PubMed

    Zhang, Zhaowei; Li, Peiwu; Hu, Xiaofeng; Zhang, Qi; Ding, Xiaoxia; Zhang, Wen

    2012-01-01

    Chemical contaminants in food have caused serious health issues in both humans and animals. Microarray technology is an advanced technique suitable for the analysis of chemical contaminates. In particular, immuno-microarray approach is one of the most promising methods for chemical contaminants analysis. The use of microarrays for the analysis of chemical contaminants is the subject of this review. Fabrication strategies and detection methods for chemical contaminants are discussed in detail. Application to the analysis of mycotoxins, biotoxins, pesticide residues, and pharmaceutical residues is also described. Finally, future challenges and opportunities are discussed.

  14. Segmentation and Image Analysis of Abnormal Lungs at CT: Current Approaches, Challenges, and Future Trends

    PubMed Central

    Mansoor, Awais; Foster, Brent; Xu, Ziyue; Papadakis, Georgios Z.; Folio, Les R.; Udupa, Jayaram K.; Mollura, Daniel J.

    2015-01-01

    The computer-based process of identifying the boundaries of lung from surrounding thoracic tissue on computed tomographic (CT) images, which is called segmentation, is a vital first step in radiologic pulmonary image analysis. Many algorithms and software platforms provide image segmentation routines for quantification of lung abnormalities; however, nearly all of the current image segmentation approaches apply well only if the lungs exhibit minimal or no pathologic conditions. When moderate to high amounts of disease or abnormalities with a challenging shape or appearance exist in the lungs, computer-aided detection systems may be highly likely to fail to depict those abnormal regions because of inaccurate segmentation methods. In particular, abnormalities such as pleural effusions, consolidations, and masses often cause inaccurate lung segmentation, which greatly limits the use of image processing methods in clinical and research contexts. In this review, a critical summary of the current methods for lung segmentation on CT images is provided, with special emphasis on the accuracy and performance of the methods in cases with abnormalities and cases with exemplary pathologic findings. The currently available segmentation methods can be divided into five major classes: (a) thresholding-based, (b) region-based, (c) shape-based, (d) neighboring anatomy–guided, and (e) machine learning–based methods. The feasibility of each class and its shortcomings are explained and illustrated with the most common lung abnormalities observed on CT images. In an overview, practical applications and evolving technologies combining the presented approaches for the practicing radiologist are detailed. ©RSNA, 2015 PMID:26172351
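
    Class (a) is easy to make concrete. A minimal thresholding-based sketch follows (illustrative only: the HU threshold, morphology, and toy volume are assumptions, and real pipelines add airway removal, body masking, and much more):

```python
# Thresholding-based lung segmentation: keep air-like voxels below an HU
# threshold, drop the background air touching the volume border, clean up.
import numpy as np
from scipy import ndimage

def segment_lungs(ct_hu, threshold=-320.0):
    air = ct_hu < threshold                   # air-like voxels (lungs + outside)
    labels, _ = ndimage.label(air)
    # collect labels of air components touching any face of the volume
    border = np.concatenate([labels[0].ravel(), labels[-1].ravel(),
                             labels[:, 0].ravel(), labels[:, -1].ravel(),
                             labels[:, :, 0].ravel(), labels[:, :, -1].ravel()])
    mask = air & ~np.isin(labels, np.unique(border[border > 0]))
    return ndimage.binary_closing(mask, iterations=2)

# toy CT volume: scanner air, a soft-tissue "body", and two air-filled "lungs"
vol = np.full((40, 64, 64), -1000.0)
vol[5:35, 8:56, 8:56] = 40.0
vol[10:30, 16:28, 16:48] = -800.0
vol[10:30, 36:48, 16:48] = -800.0
print(segment_lungs(vol).sum(), "lung voxels found")
```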

  15. A study of the limitations of linear theory methods as applied to sonic boom calculations

    NASA Technical Reports Server (NTRS)

    Darden, Christine M.

    1990-01-01

    Current sonic boom minimization theories have been reviewed to emphasize the capabilities and flexibilities of the methods. Flexibility is important because the designer must meet optimized area constraints while reducing the impact on vehicle aerodynamic performance. Preliminary comparisons of sonic booms predicted for two Mach 3 concepts illustrate the benefits of shaping. Finally, for very simple bodies of revolution, sonic boom predictions were made using two methods, a modified linear theory method and a nonlinear method, for signature shapes that were both farfield N-waves and midfield waves. Preliminary analysis of these simple bodies verified that current modified linear theory prediction methods become inadequate for predicting midfield signatures at Mach numbers above 3. The importance of impulse in sonic boom disturbance, and of three-dimensional effects that could not be simulated with the bodies of revolution, will determine the validity of current modified linear theory methods in predicting midfield signatures at lower Mach numbers.

  16. Frequency-Modulated Continuous Flow Analysis Electrospray Ionization Mass Spectrometry (FM-CFA-ESI-MS) for Sample Multiplexing.

    PubMed

    Filla, Robert T; Schrell, Adrian M; Coulton, John B; Edwards, James L; Roper, Michael G

    2018-02-20

    A method for multiplexed sample analysis by mass spectrometry without the need for chemical tagging is presented. In this new method, each sample is pulsed at a unique frequency, mixed, and delivered to the mass spectrometer while maintaining a constant total flow rate. Reconstructed ion currents are then a time-dependent signal consisting of the sum of the ion currents from the various samples. Spectral deconvolution of each reconstructed ion current reveals the identity of each sample, encoded by its unique frequency, and its concentration, encoded by the peak height in the frequency domain. This technique differs from other approaches that have been described, which have used modulation techniques to increase the signal-to-noise ratio of a single sample. As proof of concept of this new method, two samples containing up to 9 analytes were multiplexed. The linear dynamic range of the calibration curve increased with extended acquisition times and longer oscillation periods of the samples. Because the samples are combined, salt had little effect on the ability of this method to achieve relative quantitation. Continued development of this method is expected to allow increased numbers of samples to be multiplexed.
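
    The encoding and deconvolution can be sketched in a few lines: two samples modulated at distinct frequencies contribute to one summed ion current, and an FFT recovers each contribution as a peak at its assigned frequency. All rates, frequencies, and amplitudes below are invented:

```python
# Frequency-encoded multiplexing demo: demodulate a summed ion current by FFT.
import numpy as np

fs, T = 10.0, 200.0                      # spectra per second, seconds acquired
t = np.arange(0, T, 1 / fs)

f1, f2 = 0.10, 0.23                      # modulation frequencies, Hz (assumed)
c1, c2 = 3.0, 1.5                        # relative analyte amounts (assumed)
ion_current = (c1 * (1 + np.sin(2 * np.pi * f1 * t))
               + c2 * (1 + np.sin(2 * np.pi * f2 * t)))

spec = np.abs(np.fft.rfft(ion_current)) * 2 / t.size   # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for f in (f1, f2):
    k = np.argmin(np.abs(freqs - f))
    print(f"peak at {freqs[k]:.2f} Hz -> relative amount {spec[k]:.2f}")
```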

  17. Current antiviral drugs and their analysis in biological materials - Part II: Antivirals against hepatitis and HIV viruses.

    PubMed

    Nováková, Lucie; Pavlík, Jakub; Chrenková, Lucia; Martinec, Ondřej; Červený, Lukáš

    2018-01-05

    This review is a Part II of the series aiming to provide comprehensive overview of currently used antiviral drugs and to show modern approaches to their analysis. While in the Part I antivirals against herpes viruses and antivirals against respiratory viruses were addressed, this part concerns antivirals against hepatitis viruses (B and C) and human immunodeficiency virus (HIV). Many novel antivirals against hepatitis C virus (HCV) and HIV have been introduced into the clinical practice over the last decade. The recent broadening portfolio of these groups of antivirals is reflected in increasing number of developed analytical methods required to meet the needs of clinical terrain. Part II summarizes the mechanisms of action of antivirals against hepatitis B virus (HBV), HCV, and HIV, their use in clinical practice, and analytical methods for individual classes. It also provides expert opinion on state of art in the field of bioanalysis of these drugs. Analytical methods reflect novelty of these chemical structures and use by far the most current approaches, such as simple and high-throughput sample preparation and fast separation, often by means of UHPLC-MS/MS. Proper method validation based on requirements of bioanalytical guidelines is an inherent part of the developed methods. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. AmO2 Analysis for Analytical Method Testing and Assessment: Analysis Support for AmO2 Production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhn, Kevin John; Bland, Galey Jean; Fulwyler, James Brent

    Americium oxide samples will be measured for various analytes to support AmO2 production. The key analytes currently requested by the Am production customer at LANL include total Am content, Am isotopics, Pu assay, Pu isotopics, and trace element content, including 237Np content. Multiple analytical methods will be utilized depending on the sensitivity, accuracy, and precision needs of the Am matrix. Traceability to the National Institute of Standards and Technology (NIST) will be achieved, where applicable, by running NIST-traceable quality control materials, given that there are no suitable AmO2 reference materials currently available for the requested analytes. The primary objective is to demonstrate the suitability of actinide analytical chemistry methods to support AmO2 production operations.

  19. Measurement of toroidal vessel eddy current during plasma disruption on J-TEXT.

    PubMed

    Liu, L J; Yu, K X; Zhang, M; Zhuang, G; Li, X; Yuan, T; Rao, B; Zhao, Q

    2016-01-01

    In this paper, we have employed a thin, printed circuit board eddy current array in order to determine the radial distribution of the azimuthal component of the eddy current density at the surface of a steel plate. The eddy current in the steel plate can be calculated by analytical methods under the simplifying assumptions that the steel plate is infinitely large and the exciting current is of uniform distribution. The measurement on the steel plate shows that this method has high spatial resolution. Then, we extended this methodology to a toroidal geometry with the objective of determining the poloidal distribution of the toroidal component of the eddy current density associated with plasma disruption in a fusion reactor called J-TEXT. The preliminary measured result is consistent with the analysis and calculation results on the J-TEXT vacuum vessel.

  20. Methods and approaches in the topology-based analysis of biological pathways

    PubMed Central

    Mitrea, Cristina; Taghavi, Zeinab; Bokanizad, Behzad; Hanoudi, Samer; Tagett, Rebecca; Donato, Michele; Voichiţa, Călin; Drăghici, Sorin

    2013-01-01

    The goal of pathway analysis is to identify the pathways significantly impacted in a given phenotype. Many current methods are based on algorithms that consider pathways as simple gene lists, dramatically under-utilizing the knowledge that such pathways are meant to capture. During the past few years, a plethora of methods claiming to incorporate various aspects of the pathway topology have been proposed. These topology-based methods, sometimes referred to as “third generation,” have the potential to better model the phenomena described by pathways. Although there is now a large variety of approaches used for this purpose, no review is currently available to offer guidance for potential users and developers. This review covers 22 such topology-based pathway analysis methods published in the last decade. We compare these methods based on: type of pathways analyzed (e.g., signaling or metabolic), input (subset of genes, all genes, fold changes, gene p-values, etc.), mathematical models, pathway scoring approaches, output (one or more pathway scores, p-values, etc.) and implementation (web-based, standalone, etc.). We identify and discuss challenges, arising both in methodology and in pathway representation, including inconsistent terminology, different data formats, lack of meaningful benchmarks, and the lack of tissue and condition specificity. PMID:24133454

  1. Parametric study of variation in cargo-airplane performance related to progression from current to spanloader designs

    NASA Technical Reports Server (NTRS)

    Toll, T. A.

    1980-01-01

    A parametric analysis was made to investigate the relationship between current cargo airplanes and possible future designs that may differ greatly in both size and configuration. The method makes use of empirical scaling laws developed from statistical studies of data from current and advanced airplanes and, in addition, accounts for payload density, effects of span distributed load, and variations in tail area ratio. The method is believed to be particularly useful for exploratory studies of design and technology options for large airplanes. The analysis predicts somewhat more favorable variations of the ratios of payload to gross weight and block fuel to payload as the airplane size is increased than has been generally understood from interpretations of the cube-square law. In terms of these same ratios, large all wing (spanloader) designs show an advantage over wing-fuselage designs.

  2. Digital fabrication of textiles: an analysis of electrical networks in 3D knitted functional fabrics

    NASA Astrophysics Data System (ADS)

    Vallett, Richard; Knittel, Chelsea; Christe, Daniel; Castaneda, Nestor; Kara, Christina D.; Mazur, Krzysztof; Liu, Dani; Kontsos, Antonios; Kim, Youngmoo; Dion, Genevieve

    2017-05-01

    Digital fabrication methods are reshaping design and manufacturing processes through the adoption of pre-production visualization and analysis tools, which help minimize waste of materials and time. Despite the increasingly widespread use of digital fabrication techniques, comparatively few of these advances have benefited the design and fabrication of textiles. The development of functional fabrics such as knitted touch sensors, antennas, capacitors, and other electronic textiles could benefit from the same advances in electrical network modeling that revolutionized the design of integrated circuits. In this paper, the efficacy of using current state-of-the-art digital fabrication tools over the more common trial-and-error methods currently used in textile design is demonstrated. Gaps are then identified in the current state-of-the-art tools that must be resolved to further develop and streamline the rapidly growing field of smart textiles and devices, bringing textile production into the realm of 21st century manufacturing.

  3. Role of regression analysis and variation of rheological data in calculation of pressure drop for sludge pipelines.

    PubMed

    Farno, E; Coventry, K; Slatter, P; Eshtiaghi, N

    2018-06-15

    Sludge pumps in wastewater treatment plants are often oversized due to uncertainty in the calculation of pressure drop. This costs industry millions of dollars in purchasing and operating oversized pumps. Beyond cost, the higher electricity consumption entails extra CO2 emissions, which carry a substantial environmental impact. Calculating pressure drop via current pipe flow theory requires fitting a model to flow curve data; the fitted model depends on the regression analysis and also varies with the natural variation of rheological data. This study investigates the impact of the variation of rheological data and of the regression analysis on the variation of pressure drop calculated via current pipe flow theories. The results compare the variation of calculated pressure drop between different models and regression methods and indicate the suitability of each method. Copyright © 2018 Elsevier Ltd. All rights reserved.
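
    The regression step at issue can be sketched by fitting a Herschel-Bulkley flow curve, tau = tau_y + K*gamma_dot^n, a model commonly used for sludge (the model choice and the synthetic data here are assumptions); the parameter covariance returned by the fit is what propagates into pressure-drop uncertainty:

```python
# Fit a Herschel-Bulkley model to noisy rheometer data and report 1-sigma
# parameter uncertainties from the covariance matrix.
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau_y, K, n):
    return tau_y + K * gamma_dot ** n

gamma_dot = np.linspace(1, 300, 30)                       # shear rate, 1/s
tau = herschel_bulkley(gamma_dot, 5.0, 0.8, 0.5)          # "true" sludge curve
tau += np.random.default_rng(3).normal(0, 0.3, tau.size)  # measurement noise

popt, pcov = curve_fit(herschel_bulkley, gamma_dot, tau, p0=[1.0, 1.0, 1.0])
perr = np.sqrt(np.diag(pcov))
print("tau_y, K, n =", popt.round(3), "+/-", perr.round(3))
```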

  4. A technique for setting analytical thresholds in massively parallel sequencing-based forensic DNA analysis

    PubMed Central

    2017-01-01

    Amplicon (targeted) sequencing by massively parallel sequencing (PCR-MPS) is a potential method for use in forensic DNA analyses. In this application, PCR-MPS may supplement or replace other instrumental analysis methods such as capillary electrophoresis and Sanger sequencing for STR and mitochondrial DNA typing, respectively. PCR-MPS also may enable the expansion of forensic DNA analysis methods to include new marker systems such as single nucleotide polymorphisms (SNPs) and insertion/deletions (indels) that currently are assayable using various instrumental analysis methods including microarray and quantitative PCR. Acceptance of PCR-MPS as a forensic method will depend in part upon developing protocols and criteria that define the limitations of a method, including a defensible analytical threshold or method detection limit. This paper describes an approach to establish objective analytical thresholds suitable for multiplexed PCR-MPS methods. A definition is proposed for PCR-MPS method background noise, and an analytical threshold based on background noise is described. PMID:28542338

  5. A technique for setting analytical thresholds in massively parallel sequencing-based forensic DNA analysis.

    PubMed

    Young, Brian; King, Jonathan L; Budowle, Bruce; Armogida, Luigi

    2017-01-01

    Amplicon (targeted) sequencing by massively parallel sequencing (PCR-MPS) is a potential method for use in forensic DNA analyses. In this application, PCR-MPS may supplement or replace other instrumental analysis methods such as capillary electrophoresis and Sanger sequencing for STR and mitochondrial DNA typing, respectively. PCR-MPS also may enable the expansion of forensic DNA analysis methods to include new marker systems such as single nucleotide polymorphisms (SNPs) and insertion/deletions (indels) that currently are assayable using various instrumental analysis methods including microarray and quantitative PCR. Acceptance of PCR-MPS as a forensic method will depend in part upon developing protocols and criteria that define the limitations of a method, including a defensible analytical threshold or method detection limit. This paper describes an approach to establish objective analytical thresholds suitable for multiplexed PCR-MPS methods. A definition is proposed for PCR-MPS method background noise, and an analytical threshold based on background noise is described.
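
    One simple form such a noise-based threshold can take (a hedged sketch, not the paper's exact statistic) is the mean background read count plus k standard deviations; the counts and coverage factor below are invented:

```python
# Noise-based analytical threshold for sequencing read counts.
import numpy as np

rng = np.random.default_rng(11)
noise_reads = rng.poisson(4, size=5000)      # simulated background reads/locus
k = 3.0                                      # coverage factor (assumed)

threshold = noise_reads.mean() + k * noise_reads.std()
print(f"analytical threshold ~ {threshold:.1f} reads")

calls = np.array([3, 7, 25, 110])            # observed variant read counts
print("called as signal:", calls[calls > threshold])
```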

  6. Evaluation of a cost-effective loads approach. [for Viking Orbiter light weight structural design

    NASA Technical Reports Server (NTRS)

    Garba, J. A.; Wada, B. K.; Bamford, R.; Trubert, M. R.

    1976-01-01

    A shock spectra/impedance method for loads prediction is used to estimate member loads for the Viking Orbiter, a 7800-lb interplanetary spacecraft that has been designed using transient loads analysis techniques. The transient loads analysis approach leads to a lightweight structure but requires complex and costly analyses. To reduce complexity and cost a shock spectra/impedance method is currently being used to design the Mariner Jupiter Saturn spacecraft. This method has the advantage of using low-cost in-house loads analysis techniques and typically results in more conservative structural loads. The method is evaluated by comparing the increase in Viking member loads to the loads obtained by the transient loads analysis approach. An estimate of the weight penalty incurred by using this method is presented. The paper also compares the calculated flight loads from the transient loads analyses and the shock spectra/impedance method to measured flight data.

  7. Data and methodological problems in establishing state gasoline-conservation targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greene, D.L.; Walton, G.H.

    The Emergency Energy Conservation Act of 1979 gives the President the authority to set gasoline-conservation targets for states in the event of a supply shortage. This paper examines data and methodological problems associated with setting state gasoline-conservation targets. The target-setting method currently used is examined and found to have some flaws. Ways of correcting these deficiencies through the use of Box-Jenkins time-series analysis are investigated. A successful estimation of Box-Jenkins models for all states included the estimation of the magnitude of the supply shortages of 1979 in each state and a preliminary estimation of state short-run price elasticities, which were found to vary about a median value of -0.16. The time-series models identified were very simple in structure and lent support to the simple consumption growth model assumed by the current target method. The authors conclude that the flaws in the current method can be remedied either by replacing the current procedures with time-series models or by using the models in conjunction with minor modifications of the current method.

  8. Validation of next generation sequencing technologies in comparison to current diagnostic gold standards for BRAF, EGFR and KRAS mutational analysis.

    PubMed

    McCourt, Clare M; McArt, Darragh G; Mills, Ken; Catherwood, Mark A; Maxwell, Perry; Waugh, David J; Hamilton, Peter; O'Sullivan, Joe M; Salto-Tellez, Manuel

    2013-01-01

    Next Generation Sequencing (NGS) has the potential to become an important tool in clinical diagnosis and therapeutic decision-making in oncology owing to its enhanced sensitivity in DNA mutation detection, fast turnaround of samples in comparison to current gold standard methods, and the potential to sequence a large number of cancer-driving genes at one time. We aim to test the diagnostic accuracy of current NGS technology in the analysis of mutations that represent current standard-of-care, and its reliability in generating concomitant information on other key genes in human oncogenesis. Thirteen clinical samples (8 lung adenocarcinomas, 3 colon carcinomas and 2 malignant melanomas), already genotyped for EGFR, KRAS and BRAF mutations by current standard-of-care methods (Sanger sequencing and q-PCR), were analysed for mutations in the same three genes using two NGS platforms, and for an additional 43 genes with one of these platforms. The results were analysed using closed platform-specific proprietary bioinformatics software as well as open third-party applications. Our results indicate that the existing format of the NGS technology performed well in detecting the clinically relevant mutations stated above but may not be reliable for a broader unsupervised analysis of the wider genome in its current design. Our study represents a diagnostically led validation of the major strengths and weaknesses of this technology before consideration for diagnostic use.

  9. Distributed measurement of high electric current by means of polarimetric optical fiber sensor.

    PubMed

    Palmieri, Luca; Sarchi, Davide; Galtarossa, Andrea

    2015-05-04

    A novel distributed optical fiber sensor for spatially resolved monitoring of high direct electric current is proposed and analyzed. The sensor exploits Faraday rotation and is based on the polarization analysis of the Rayleigh backscattered light. Preliminary laboratory tests, performed on a section of electric cable for currents up to 2.5 kA, have confirmed the viability of the method.

  10. Detection of stator winding faults in induction motors using three-phase current monitoring.

    PubMed

    Sharifi, Rasool; Ebrahimi, Mohammad

    2011-01-01

    The objective of this paper is to propose a new method for the detection of inter-turn short circuits in the stator windings of induction motors. In previously reported methods, supply voltage unbalance was the major difficulty; it was mostly addressed using sequence-component impedance or current, which are difficult to implement. Other reported methods are essentially offline. The proposed method is based on motor current signature analysis and utilizes the three phase current spectra to overcome this problem. Simulation results indicate that under healthy conditions the rotor slot harmonics have the same magnitude in all three phase currents, while under even a 1-turn (0.3%) short circuit they differ from each other. Although the magnitude of these harmonics depends on the level of voltage unbalance, they retain the same magnitude across the three phases under such conditions. Experiments performed under various load, fault, and supply voltage conditions validate the simulation results and demonstrate the effectiveness of the proposed technique. It is shown that the detection of slight resistive short circuits is possible without sensitivity to supply voltage unbalance. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
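
    The three-phase comparison can be sketched as follows: estimate the magnitude of a chosen rotor-slot harmonic in each phase current spectrum and compare across phases. All frequencies and amplitudes are invented, and the fault is injected as a small harmonic asymmetry:

```python
# Compare a rotor-slot harmonic across three phase current spectra; a healthy
# machine gives near-equal magnitudes, an inter-turn fault breaks the symmetry.
import numpy as np

fs, f0, fh = 5000.0, 50.0, 750.0             # sampling, supply, harmonic (Hz)
t = np.arange(0, 2.0, 1 / fs)

def phase_current(phi, fault=0.0):
    return (np.sin(2 * np.pi * f0 * t + phi)
            + (0.02 + fault) * np.sin(2 * np.pi * fh * t + phi))

currents = [phase_current(0.0, fault=0.01),  # faulty phase: harmonic grows
            phase_current(-2 * np.pi / 3),
            phase_current(2 * np.pi / 3)]

freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = np.argmin(np.abs(freqs - fh))
for name, i in zip("ABC", currents):
    mag = 2 * np.abs(np.fft.rfft(i))[k] / t.size
    print(f"phase {name}: slot-harmonic magnitude {mag:.4f}")
```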

  11. Current distribution in a three-dimensional IC analyzed by a perturbation method. Part 1: A simple steady state theory

    NASA Technical Reports Server (NTRS)

    Edmonds, Larry D.

    1987-01-01

    The steady state current distribution in a three dimensional integrated circuit is presented. A device physics approach, based on a perturbation method rather than an equivalent lumped circuit approach, is used. The perturbation method allows the various currents to be expressed in terms of elementary solutions which are solutions to very simple boundary value problems. A Simple Steady State Theory is the subtitle because the most obvious limitation of the present version of the analysis is that all depletion region boundary surfaces are treated as equipotential surfaces. This may be an adequate approximation in some applications but it is an obvious weakness in the theory when applied to latched states. Examples that illustrate the use of these analytical methods are not given because they will be presented in detail in the future.

  12. Computational dosimetry for grounded and ungrounded human models due to contact current

    NASA Astrophysics Data System (ADS)

    Chan, Kwok Hung; Hattori, Junya; Laakso, Ilkka; Hirata, Akimasa; Taki, Masao

    2013-08-01

    This study presents the computational dosimetry of contact currents for grounded and ungrounded human models. The uncertainty of the quasi-static (QS) approximation of the in situ electric field induced in a grounded/ungrounded human body due to the contact current is first estimated. Different scenarios of cylindrical and anatomical human body models are considered, and the results are compared with the full-wave analysis. In the QS analysis, the induced field in the grounded cylindrical model is calculated by the QS finite-difference time-domain (QS-FDTD) method, and compared with the analytical solution. Because no analytical solution is available for the grounded/ungrounded anatomical human body model, the results of the QS-FDTD method are then compared with those of the conventional FDTD method. The upper frequency limit for the QS approximation in the contact current dosimetry is found to be 3 MHz, with a relative local error of less than 10%. The error increases above this frequency, which can be attributed to the neglect of the displacement current. The QS or conventional FDTD method is used for the dosimetry of induced electric field and/or specific absorption rate (SAR) for a contact current injected into the index finger of a human body model in the frequency range from 10 Hz to 100 MHz. The in situ electric fields or SAR are compared with the basic restrictions in the international guidelines/standards. The maximum electric field or the 99th percentile value of the electric fields appear not only in the fat and muscle tissues of the finger, but also around the wrist, forearm, and the upper arm. Some discrepancies are observed between the basic restrictions for the electric field and SAR and the reference levels for the contact current, especially in the extremities. These discrepancies are shown by an equation that relates the current density, tissue conductivity, and induced electric field in the finger with a cross-sectional area of 1 cm2.
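
    The closing relation among contact current, tissue conductivity, and induced field can be sketched directly. A minimal Python example follows; the muscle conductivity value is an illustrative assumption, not a figure from the study.

        # In situ field for a contact current I through cross-section A:
        #   J = I / A,  E = J / sigma
        def in_situ_field(current_a, area_m2, sigma_s_per_m):
            j = current_a / area_m2          # current density, A/m^2
            return j / sigma_s_per_m         # electric field, V/m

        # e.g. 10 mA through 1 cm^2 at an assumed low-frequency muscle
        # conductivity of 0.35 S/m (illustrative value):
        print(in_situ_field(10e-3, 1e-4, 0.35))   # ~285.7 V/m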

  13. School Foodservice Personnel's Struggle with Using Labels to Identify Whole-Grain Foods

    ERIC Educational Resources Information Center

    Chu, Yen Li; Orsted, Mary; Marquart, Len; Reicks, Marla

    2012-01-01

    Objective: To describe how school foodservice personnel use current labeling methods to identify whole-grain products and the influence on purchasing for school meals. Methods: Focus groups explored labeling methods to identify whole-grain products and barriers to incorporating whole-grain foods in school meals. Qualitative analysis procedures and…

  14. Rationale and Use of Content-Relevant Achievement Tests for the Evaluation of Instructional Programs.

    ERIC Educational Resources Information Center

    Patalino, Marianne

    Problems in current course evaluation methods are discussed and an alternative method is described for the construction, analysis, and interpretation of a test to evaluate instructional programs. The method presented represents a different approach to the traditional overreliance on standardized achievement tests and the total scores they provide.…

  15. Method and apparatus for generating motor current spectra to enhance motor system fault detection

    DOEpatents

    Linehan, D.J.; Bunch, S.L.; Lyster, C.T.

    1995-10-24

    A method and circuitry are disclosed for sampling periodic amplitude modulations in a nonstationary periodic carrier wave to determine frequencies in the amplitude modulations. The method and circuit are described in terms of an improved motor current signature analysis. The method ensures that the sampled data set contains an exact whole number of carrier wave cycles by defining the rate at which samples of motor current data are collected. The circuitry ensures that a sampled data set containing stationary carrier waves is recreated from the analog motor current signal containing nonstationary carrier waves by conditioning the actual sampling rate to adjust with the frequency variations in the carrier wave. After the sampled data is transformed to the frequency domain via the Discrete Fourier Transform, the frequency distribution in the discrete spectra of those components due to the carrier wave and its harmonics will be minimized so that signals of interest are more easily analyzed. 29 figs.
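
    A minimal Python sketch of the core idea, resampling the record so the analysis window spans an exact whole number of carrier cycles before the DFT. The real patent conditions the hardware sampling rate to track carrier-frequency variations, whereas this sketch assumes a fixed carrier-frequency estimate.

        import numpy as np

        def resample_integer_cycles(signal, t, f_carrier, samples_per_cycle=64):
            # Re-sample onto a grid locked to the carrier so the window spans an
            # exact whole number of carrier cycles.
            n_cycles = int(np.floor((t[-1] - t[0]) * f_carrier))
            n_samples = n_cycles * samples_per_cycle
            t_new = t[0] + np.arange(n_samples) / (f_carrier * samples_per_cycle)
            return np.interp(t_new, t, signal)

        # After this step the carrier and its harmonics fall on exact DFT bins,
        # so their spectral leakage into the modulation sidebands is minimized.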

  16. Analysis and Design of ITER 1 MV Core Snubber

    NASA Astrophysics Data System (ADS)

    Wang, Haitian; Li, Ge

    2012-11-01

    The core snubber, as a passive protection device, can suppress the arc current and absorb the energy stored in stray capacitance during electrical breakdown in the accelerating electrodes of the ITER NBI. In order to design the ITER core snubber, the control parameters of the arc peak current were first analyzed by the Fink-Baker-Owren (FBO) method, which was used to design the DIII-D 100 kV snubber. The B-H curve can be derived from the measured voltage and current waveforms, and the hysteresis loss of the core snubber can be obtained using the revised parallelogram method. The core snubber can be represented in simplified form as an equivalent parallel resistance and inductance, both of which are neglected by the FBO method. A simulation code including the parallel equivalent resistance and inductance has been set up. Simulations and experiments show that the parallel inductance effect results in dramatically larger arc shorting currents. The case shows that designing the core snubber with the FBO method gives a more compact design.

  17. A Review of Issues Related to Data Acquisition and Analysis in EEG/MEG Studies.

    PubMed

    Puce, Aina; Hämäläinen, Matti S

    2017-05-31

    Electroencephalography (EEG) and magnetoencephalography (MEG) are non-invasive electrophysiological methods, which record electric potentials and magnetic fields due to electric currents in synchronously-active neurons. With MEG being more sensitive to neural activity from tangential currents and EEG being able to detect both radial and tangential sources, the two methods are complementary. Over the years, neurophysiological studies have changed considerably: high-density recordings are becoming de rigueur; there is interest in both spontaneous and evoked activity; and sophisticated artifact detection and removal methods are available. Improved head models for source estimation have also increased the precision of the current estimates, particularly for EEG and combined EEG/MEG. Because of their complementarity, more investigators are beginning to perform simultaneous EEG/MEG studies to gain more complete information about neural activity. Given the increase in methodological complexity in EEG/MEG, it is important to gather data that are of high quality and that are as artifact free as possible. Here, we discuss some issues in data acquisition and analysis of EEG and MEG data. Practical considerations for different types of EEG and MEG studies are also discussed.

  18. Quantitative Detection of Cracks in Steel Using Eddy Current Pulsed Thermography.

    PubMed

    Shi, Zhanqun; Xu, Xiaoyu; Ma, Jiaojiao; Zhen, Dong; Zhang, Hao

    2018-04-02

    Small cracks are common defects in steel and often lead to catastrophic accidents in industrial applications. Various nondestructive testing methods have been investigated for crack detection; however, most current methods focus on qualitative crack identification and image processing. In this study, eddy current pulsed thermography (ECPT) was applied to quantitative crack detection based on derivative analysis of the temperature variation. The effects of the excitation parameters on the temperature variation were analyzed in a simulation study. The crack profile and position are identified in the thermal image using the Canny edge detection algorithm. One or more trajectories are then drawn through the crack profile in order to determine the crack boundary from its temperature distribution, and the slope curve along each trajectory is obtained. Finally, quantitative analysis of the crack sizes was performed by analyzing features of the slope curves. Experimental verification showed that crack sizes could be quantitatively detected with errors of less than 1%. The proposed ECPT method was thus demonstrated to be a feasible and effective nondestructive approach for quantitative crack detection.
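
    A minimal Python sketch of the slope-curve step, assuming a 2D thermal image and two endpoint pixels; the nearest-neighbour sampling and the pixel-based width estimate are simplifications of the paper's procedure.

        import numpy as np

        def slope_curve(thermal_image, p0, p1, n=200):
            # Sample temperature along the straight trajectory p0 -> p1 (pixel
            # coordinates) and return its spatial derivative, the "slope curve".
            rows = np.linspace(p0[0], p1[0], n).round().astype(int)
            cols = np.linspace(p0[1], p1[1], n).round().astype(int)
            return np.gradient(thermal_image[rows, cols].astype(float))

        # Crack edges appear as opposite-signed extrema of the slope curve; the
        # spacing between them estimates the crack width in pixels.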

  19. [Isolation and identification methods of enterobacteria group and its technological advancement].

    PubMed

    Furuta, Itaru

    2007-08-01

    In the last half-century, isolation and identification methods for enterobacteria groups have markedly improved through technological advancement. Clinical microbiology testing has changed over time from tube methods to commercial identification kits and automated identification. Tube methods are the original method for the identification of enterobacteria groups and remain essential for understanding bacterial fermentation and biochemical principles. In this paper, traditional tube tests are discussed, such as the utilization of carbohydrates and the indole, methyl red, citrate, and urease tests. Commercial identification kits and automated instruments based on computer analysis are also discussed as current methods; these provide rapidity and accuracy. Nonculture techniques, such as nucleic acid typing methods using PCR analysis and immunochemical methods using monoclonal antibodies, can be developed further.

  1. Reduction, Analysis, and Properties of Electric Current Systems in Solar Active Regions

    NASA Technical Reports Server (NTRS)

    Gary, G. Allen; Demoulin, Pascal

    1995-01-01

    The specific attraction and, in large part, the significance of solar vector magnetograms lie in the fact that they give the most important data on the electric currents and the nonpotentiality of active regions. Using the vector magnetograms from the Marshall Space Flight Center (MSFC), we employ a unique technique in the area of data analysis for resolving the 180 degree ambiguity in order to calculate the spatial structure of the vertical electric current density. The 180 degree ambiguity is resolved by applying concepts from the nonlinear multivariable optimization theory. The technique is shown to be of particular importance in very nonpotential active regions. The characterization of the vertical electric current density for a set of vector magnetograms using this method then gives the spatial scale, locations, and magnitude of these current systems. The method, which employs an intermediate parametric function which covers the magnetogram and which defines the local "preferred" direction, minimizes a specific functional of the observed transverse magnetic field. The specific functional that is successful is the integral of the square of the vertical current density. We find that the vertical electric current densities have common characteristics for the extended bipolar beta gamma delta-regions studied. The largest current systems have j(sub z)'s which maximize around 30 mA per square meter and have a linear decreasing distribution to a diameter of 30 Mm.
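
    Once the 180 degree ambiguity is resolved, the vertical current density follows from Ampere's law applied to the transverse field. A minimal NumPy sketch, in which the uniform grid spacing and array layout are assumptions:

        import numpy as np

        MU0 = 4e-7 * np.pi   # vacuum permeability, H/m

        def vertical_current_density(bx, by, dx, dy):
            # j_z = (dBy/dx - dBx/dy) / mu0 from the ambiguity-resolved
            # transverse components on a uniform grid (tesla, metres).
            dby_dx = np.gradient(by, dx, axis=1)   # x along columns
            dbx_dy = np.gradient(bx, dy, axis=0)   # y along rows
            return (dby_dx - dbx_dy) / MU0         # A/m^2

        # The functional minimized during ambiguity resolution is then simply
        # (jz**2).sum() * dx * dy for each candidate disambiguation.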

  2. A Review of Flow Analysis Methods for Determination of Radionuclides in Nuclear Wastes and Nuclear Reactor Coolants

    DOE PAGES

    Trojanowicz, Marek; Kolacinska, Kamila; Grate, Jay W.

    2018-02-13

    Here, the safety and security of nuclear power plant operations depend on the application of the most appropriate techniques and methods of chemical analysis, where modern flow analysis methods prevail. Nevertheless, the current status of the development of these methods is more limited than might be expected based on their genuine advantages. The main aim of this paper is to review the automated flow analysis procedures developed with various detection methods for the nuclear energy industry. The flow analysis methods for the determination of radionuclides reported to date are primarily focused on environmental applications. The benefits of the application of flow methods in both monitoring of the nuclear wastes and process analysis of the primary circuit coolants of light water nuclear reactors will also be discussed. The application of either continuous flow methods (CFA) or injection methods (FIA, SIA) of the flow analysis with the β-radiometric detection shortens the analysis time and improves the precision of determination due to mechanization of certain time-consuming operations of the sample processing. Compared to the radiometric detection, the mass spectrometry (MS) detection enables one to perform multicomponent analyses as well as the determination of transuranic isotopes with much better limits of detection.

  4. A review of flow analysis methods for determination of radionuclides in nuclear wastes and nuclear reactor coolants.

    PubMed

    Trojanowicz, Marek; Kołacińska, Kamila; Grate, Jay W

    2018-06-01

    The safety and security of nuclear power plant operations depend on the application of the most appropriate techniques and methods of chemical analysis, where modern flow analysis methods prevail. Nevertheless, the current status of the development of these methods is more limited than might be expected based on their genuine advantages. The main aim of this paper is to review the automated flow analysis procedures developed with various detection methods for the nuclear energy industry. The flow analysis methods for the determination of radionuclides reported to date are primarily focused on environmental applications. The benefits of the application of flow methods in both monitoring of the nuclear wastes and process analysis of the primary circuit coolants of light water nuclear reactors will also be discussed. The application of either continuous flow methods (CFA) or injection methods (FIA, SIA) of the flow analysis with the β-radiometric detection shortens the analysis time and improves the precision of determination due to mechanization of certain time-consuming operations of the sample processing. Compared to the radiometric detection, the mass spectrometry (MS) detection enables one to perform multicomponent analyses as well as the determination of transuranic isotopes with much better limits of detection. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Mega-Analysis of School Psychology Blueprint for Training and Practice Domains

    ERIC Educational Resources Information Center

    Burns, Matthew K.; Kanive, Rebecca; Zaslofsky, Anne F.; Parker, David C.

    2013-01-01

    Meta-analytic research is an effective method for synthesizing existing research and for informing practice and policy. Hattie (2009) suggested that meta-analytic procedures could be applied to existing meta-analyses to create a mega-analysis. The current mega-analysis examined a sample of 47 meta-analyses according to the "School…

  6. Item Factor Analysis: Current Approaches and Future Directions

    ERIC Educational Resources Information Center

    Wirth, R. J.; Edwards, Michael C.

    2007-01-01

    The rationale underlying factor analysis applies to continuous and categorical variables alike; however, the models and estimation methods for continuous (i.e., interval or ratio scale) data are not appropriate for item-level data that are categorical in nature. The authors provide a targeted review and synthesis of the item factor analysis (IFA)…

  7. 49 CFR Appendix D to Part 172 - Rail Risk Analysis Factors

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... nature of the rail system, each carrier must select and document the analysis method/model used and identify the routes to be analyzed. D. The safety and security risk analysis must consider current data and... curvature; 7. Presence or absence of signals and train control systems along the route (“dark” versus...

  8. Discovering and Analyzing Deviant Communities: Methods and Experiments

    DTIC Science & Technology

    2014-10-01

    analysis. Sinkholing. Sinkholing is the current method of choice for botnet analysis and defense [3]. In this approach, the analyst deceives bots into...from the bots to the botnet. There are several drawbacks to sinkholing and shutting down botnets. The biggest issue is the complexity and time involved in conducting a sinkholing campaign. Normally, sinkholing involves a coordinated effort from the analyst, ISPs, and law enforcement officials

  9. Cost-effectiveness analysis of acute kidney injury biomarkers in pediatric cardiac surgery.

    PubMed

    Petrovic, Stanislava; Bogavac-Stanojevic, Natasa; Lakic, Dragana; Peco-Antic, Amira; Vulicevic, Irena; Ivanisevic, Ivana; Kotur-Stevuljevic, Jelena; Jelic-Ivanovic, Zorana

    2015-01-01

    Acute kidney injury (AKI) is a significant problem in children with congenital heart disease (CHD) who undergo cardiac surgery. The economic impact of a biomarker-based diagnostic strategy for AKI in pediatric populations undergoing CHD surgery is unknown. The aim of this study was to perform a cost-effectiveness analysis of using serum cystatin C (sCysC), urine neutrophil gelatinase-associated lipocalin (uNGAL) and urine liver fatty acid-binding protein (uL-FABP) for the diagnosis of AKI in children after cardiac surgery, compared with the current diagnostic method (monitoring of serum creatinine (sCr) level). We developed a decision analytical model to estimate the incremental cost-effectiveness of different biomarker-based diagnostic strategies compared to the current diagnostic strategy. A Markov model was created to compare the lifetime cost associated with using sCysC, uNGAL, and uL-FABP against monitoring of the sCr level for the diagnosis of AKI. The utility measurement included in the analysis was quality-adjusted life years (QALY). The results of the analysis are presented as the incremental cost-effectiveness ratio (ICER). The analysed biomarker-based diagnostic strategies for AKI were cost-effective compared to the current diagnostic method. However, the uNGAL and sCysC strategies yielded higher costs and lower effectiveness than the uL-FABP strategy. uL-FABP added 1.43 QALYs compared to the current diagnostic method at an additional cost of $8521.87 per patient. Therefore, the ICER for uL-FABP compared to sCr was $5959.35/QALY. Our results suggest that the use of uL-FABP would represent a cost-effective strategy for early diagnosis of AKI in children after cardiac surgery.
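
    The headline figure follows directly from the ICER definition; a minimal Python check using the abstract's own numbers:

        # ICER = (cost_new - cost_current) / (QALY_new - QALY_current)
        delta_cost = 8521.87    # extra cost of the uL-FABP strategy, $ per patient
        delta_qaly = 1.43       # QALYs gained over serum-creatinine monitoring
        print(f"ICER = ${delta_cost / delta_qaly:,.2f}/QALY")   # $5,959.35/QALY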

  10. Temperature-dependent analysis of conduction mechanism of leakage current in thermally grown oxide on 4H-SiC

    NASA Astrophysics Data System (ADS)

    Sometani, Mitsuru; Okamoto, Dai; Harada, Shinsuke; Ishimori, Hitoshi; Takasu, Shinji; Hatakeyama, Tetsuo; Takei, Manabu; Yonezawa, Yoshiyuki; Fukuda, Kenji; Okumura, Hajime

    2015-01-01

    The conduction mechanism of the leakage current of a thermally grown oxide on 4H silicon carbide (4H-SiC) was investigated. The dominant carriers of the leakage current were found to be electrons by the carrier-separation current-voltage method. The current-voltage and capacitance-voltage characteristics, which were measured over a wide temperature range, revealed that the leakage current in SiO2/4H-SiC on the Si-face can be explained as the sum of the Fowler-Nordheim (FN) tunneling and Poole-Frenkel (PF) emission leakage currents. A rigorous FN analysis provided the true barrier height for the SiO2/4H-SiC interface. On the basis of Arrhenius plots of the PF current separated from the total leakage current, the existence of carbon-related defects and/or oxygen vacancy defects was suggested in thermally grown SiO2 films on the Si-face of 4H-SiC.
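
    A Python sketch of the standard linearizations used to separate the two conduction mechanisms named above; the quantitative slope-to-barrier-height conversion is omitted, and the coordinate forms are textbook results rather than this paper's specific fitting procedure.

        import numpy as np

        # Fowler-Nordheim tunneling: ln(J/E^2) is linear in 1/E
        # Poole-Frenkel emission:    ln(J/E)  is linear in sqrt(E)
        def fn_coordinates(e_field, j):
            return 1.0 / e_field, np.log(j / e_field**2)

        def pf_coordinates(e_field, j):
            return np.sqrt(e_field), np.log(j / e_field)

        # A straight-line fit (np.polyfit) in whichever coordinate system
        # linearizes the data indicates the dominant mechanism in that field
        # range; the FN slope encodes the barrier height, while the PF slope
        # and its temperature dependence relate to the trap depth.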

  12. The Chimera Method of Simulation for Unsteady Three-Dimensional Viscous Flow

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    1996-01-01

    The Chimera overset grid method is reviewed and discussed in the context of a method of solution and analysis of unsteady three-dimensional viscous flows. The state of maturity of the various pieces of support software required to use the approach is discussed. A variety of recent applications of the method is presented. Current limitations of the approach are defined.

  13. Experience report: Using formal methods for requirements analysis of critical spacecraft software

    NASA Technical Reports Server (NTRS)

    Lutz, Robyn R.; Ampo, Yoko

    1994-01-01

    Formal specification and analysis of requirements continues to gain support as a method for producing more reliable software. However, the introduction of formal methods to a large software project is difficult, due in part to the unfamiliarity of the specification languages and the lack of graphics. This paper reports results of an investigation into the effectiveness of formal methods as an aid to the requirements analysis of critical, system-level fault-protection software on a spacecraft currently under development. Our experience indicates that formal specification and analysis can enhance the accuracy of the requirements and add assurance prior to design development in this domain. The work described here is part of a larger, NASA-funded research project whose purpose is to use formal-methods techniques to improve the quality of software in space applications. The demonstration project described here is part of the effort to evaluate experimentally the effectiveness of supplementing traditional engineering approaches to requirements specification with the more rigorous specification and analysis available with formal methods.

  14. An investigation of improved airbag performance by vent control and gas injection

    NASA Astrophysics Data System (ADS)

    Lee, Calvin; Rosato, Nick; Lai, Francis

    Airbags are currently being investigated as impact energy absorbers for U.S. Army airdrop. Simple airbags with constant vent areas have been found unsatisfactory because they yield high G forces. In this paper, a method of controlling the vent area and a method of injecting gas into the airbag during its compression stroke are presented as means of improving airbag performance. Theoretical analysis of complex airbags using these two methods shows that they provide lower G forces than simple airbags. Vertical drop tests of a vent-control airbag confirm this result. Gas-injection airbags are currently being tested.

  15. Qualitative Analysis: The Current Status.

    ERIC Educational Resources Information Center

    Cole, G. Mattney, Jr.; Waggoner, William H.

    1983-01-01

    To assist in designing/implementing qualitative analysis courses, examines reliability/accuracy of several published separation schemes, notes methods where particular difficulties arise (focusing on Groups II/III), and presents alternative schemes for the separation of these groups. Only cation analyses are reviewed. Figures are presented in…

  16. Systematic Review and Meta-Analysis of Studies Evaluating Diagnostic Test Accuracy: A Practical Review for Clinical Researchers-Part II. Statistical Methods of Meta-Analysis

    PubMed Central

    Lee, Juneyoung; Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi

    2015-01-01

    Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that it is required to simultaneously analyze a pair of outcome measures, such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and can be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models, including the bivariate model and the hierarchical summary receiver operating characteristic model, are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies. PMID:26576107
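
    For intuition only, the Python sketch below pools a single accuracy measure (e.g. sensitivity) on the logit scale with DerSimonian-Laird random-effects weights. The article's point is precisely that such univariate pooling is a simplification: bivariate/HSROC models that pool sensitivity and specificity jointly are the recommended methods.

        import numpy as np

        def dl_pool_logit(events, totals):
            # Continuity-corrected logits and their approximate variances
            p = (events + 0.5) / (totals + 1.0)
            y = np.log(p / (1.0 - p))
            v = 1.0 / (events + 0.5) + 1.0 / (totals - events + 0.5)
            w = 1.0 / v
            y_fixed = (w * y).sum() / w.sum()
            q = (w * (y - y_fixed) ** 2).sum()            # Cochran's Q
            tau2 = max(0.0, (q - (len(y) - 1)) /
                       (w.sum() - (w ** 2).sum() / w.sum()))
            w_star = 1.0 / (v + tau2)                     # random-effects weights
            y_pooled = (w_star * y).sum() / w_star.sum()
            return 1.0 / (1.0 + np.exp(-y_pooled))        # back to a proportion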

  17. Commercial transport aircraft composite structures

    NASA Technical Reports Server (NTRS)

    Mccarty, J. E.

    1983-01-01

    The role that analysis plays in the development, production, and substantiation of aircraft structures is discussed. The types, elements, and applications of failure analysis that are used and needed; the current application of analysis methods to commercial aircraft advanced composite structures, along with a projection of future needs; and some personal thoughts on analysis development goals and the elements of an approach to analysis development are discussed.

  18. Coupled Electro-Magneto-Mechanical-Acoustic Analysis Method Developed by Using 2D Finite Element Method for Flat Panel Speaker Driven by Magnetostrictive-Material-Based Actuator

    NASA Astrophysics Data System (ADS)

    Yoo, Byungjin; Hirata, Katsuhiro; Oonishi, Atsurou

    In this study, a coupled analysis method for flat panel speakers driven by a giant magnetostrictive material (GMM) based actuator was developed. The sound field produced by a flat panel speaker driven by a GMM actuator depends on the vibration of the flat panel, which results from the magnetostriction property of the GMM. In this case, to predict the sound pressure level (SPL) in the audio-frequency range, it is necessary to take into account not only the magnetostriction property of the GMM but also the effect of eddy currents and the vibration characteristics of the actuator and the flat panel. In this paper, a coupled electromagnetic-structural-acoustic analysis method is presented; this method was developed using the finite element method (FEM) and is used to predict the performance of a flat panel speaker in the audio-frequency range. The validity of the analysis method is verified by comparison with measurement results from a prototype speaker.

  19. Instrumentation for motor-current signature analysis using synchronous sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castleberry, K.N.

    1996-07-01

    Personnel in the Instrumentation and Controls Division at Oak Ridge National Laboratory, in association with the United States Enrichment Corporation, the U.S. Navy, and various Department of Energy sponsors, have been involved in the development and application of motor-current signature analysis for several years. In that time, innovation in the field has resulted in major improvements in signal processing, analysis, and system performance and capabilities. Recent work has concentrated on industrial implementation of one of the most promising new techniques. This report describes the developed method and the instrumentation package that is being used to investigate and develop potential applications.

  20. Development of Theoretical and Numerical Techniques for Achieving Stability in Gyrotron Traveling-Wave Amplifiers.

    DTIC Science & Technology

    1989-02-01

    analysis methods diverge significantly. The electron current density found in Eq. 2.106 may be evaluated as the integral expression of Eq. (2.107) over 0 to Z0, where ... will be specified by the geometry and mode under consideration. It was noted earlier that the point of divergence between the two principal techniques lies in the methods used to calculate the current density. Actually, the divergence is present only in theory. Theoretically and numerically, Eq

  1. Application and experimental validation of an integral method for simulation of gradient-induced eddy currents on conducting surfaces during magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Harris, C. T.; Haw, D. W.; Handler, W. B.; Chronik, B. A.

    2013-06-01

    The time-varying magnetic fields created by the gradient coils in magnetic resonance imaging can produce negative effects on image quality and the system itself. Additionally, they can be a limiting factor to the introduction of non-MR devices such as cardiac pacemakers, orthopedic implants, and surgical robotics. The ability to model the currents induced by the switching gradient fields is key to developing methods for reducing these unwanted interactions. In this work, a framework for the calculation of induced currents on conducting surface geometries is summarized. This procedure is then compared against two separate experiments: (1) analysis of the decay of currents induced upon a conducting cylinder by an insert gradient set within a head-only 7 T MR scanner; and (2) analysis of the heat deposited into a small conductor by a uniform switching magnetic field at multiple frequencies and two distinct conductor thicknesses. The method was shown to accurately model the induced time-varying field decay in the first case, and to estimate the temperature rise in the second experiment to within 30% when the skin depth was greater than or equal to the thickness of the conductor.
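
    The thin-conductor condition in the second experiment can be checked with the classical skin-depth formula; a small Python helper follows (the copper example values are illustrative, not taken from the paper).

        import numpy as np

        def skin_depth(freq_hz, sigma, mu_r=1.0):
            # Classical skin depth: delta = sqrt(2 / (mu * sigma * omega))
            mu = mu_r * 4e-7 * np.pi
            return np.sqrt(2.0 / (mu * sigma * 2.0 * np.pi * freq_hz))

        # e.g. copper (sigma ~ 5.8e7 S/m) at 1 kHz gives delta ~ 2.1 mm, so a
        # 1 mm sheet would satisfy the thin-conductor condition above.
        print(skin_depth(1e3, 5.8e7))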

  2. Interactive-predictive detection of handwritten text blocks

    NASA Astrophysics Data System (ADS)

    Ramos Terrades, O.; Serrano, N.; Gordó, A.; Valveny, E.; Juan, A.

    2010-01-01

    A method for text block detection is introduced for old handwritten documents. The proposed method takes advantage of sequential book structure, taking into account layout information from pages previously transcribed. This glance at the past is used to predict the position of text blocks in the current page with the help of conventional layout analysis methods. The method is integrated into the GIDOC prototype: a first attempt to provide integrated support for interactive-predictive page layout analysis, text line detection and handwritten text transcription. Results are given in a transcription task on a 764-page Spanish manuscript from 1891.

  3. What’s in a game? A systems approach to enhancing performance analysis in football

    PubMed Central

    2017-01-01

    Purpose: Performance analysis (PA) in football is considered to be an integral component of understanding the requirements for optimal performance. Despite vast amounts of research in this area, key gaps remain, including what comprises PA in football, and methods to minimise research-practitioner gaps. The aim of this study was to develop a model of the football match system in order to better describe and understand the components of football performance. Such a model could inform the design of new PA methods. Method: Eight elite-level football Subject Matter Experts (SMEs) participated in two workshops to develop a systems model of the football match system. The model was developed using a first-of-its-kind application of Cognitive Work Analysis (CWA) in football. CWA has been used in many other non-sporting domains to analyse and understand complex systems. Result: Using CWA, a model of the football match 'system' was developed. The model enabled identification of several PA measures not currently utilised, including communication between team members, adaptability of teams, playing at the appropriate tempo, as well as attacking and defending related measures. Conclusion: The results indicate that football is characteristic of a complex sociotechnical system, and revealed potential new and unique PA measures regarded as important by SMEs, yet not currently measured. Importantly, these results have identified a gap between the current PA research and the information that is meaningful to football coaches and practitioners. PMID:28212392

  4. Measurement and statistical analysis of single-molecule current-voltage characteristics, transition voltage spectroscopy, and tunneling barrier height.

    PubMed

    Guo, Shaoyin; Hihath, Joshua; Díez-Pérez, Ismael; Tao, Nongjian

    2011-11-30

    We report on the measurement and statistical study of thousands of current-voltage characteristics and transition voltage spectra (TVS) of single-molecule junctions with different contact geometries that are rapidly acquired using a new break junction method at room temperature. This capability allows one to obtain current-voltage, conductance voltage, and transition voltage histograms, thus adding a new dimension to the previous conductance histogram analysis at a fixed low-bias voltage for single molecules. This method confirms the low-bias conductance values of alkanedithiols and biphenyldithiol reported in literature. However, at high biases the current shows large nonlinearity and asymmetry, and TVS allows for the determination of a critically important parameter, the tunneling barrier height or energy level alignment between the molecule and the electrodes of single-molecule junctions. The energy level alignment is found to depend on the molecule and also on the contact geometry, revealing the role of contact geometry in both the contact resistance and energy level alignment of a molecular junction. Detailed statistical analysis further reveals that, despite the dependence of the energy level alignment on contact geometry, the variation in single-molecule conductance is primarily due to contact resistance rather than variations in the energy level alignment.
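
    A minimal Python sketch of extracting the transition voltage from one I-V trace. In practice the paper builds histograms over thousands of traces; the single-trace minimum shown here is only the basic idea.

        import numpy as np

        def transition_voltage(v, i):
            # Bias at which the Fowler-Nordheim coordinate ln(I/V^2) versus 1/V
            # reaches its minimum, taken on the positive-bias branch.
            mask = v > 0
            v, i = v[mask], np.abs(i[mask])
            return v[np.argmin(np.log(i / v**2))]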

  5. Measurement of toroidal vessel eddy current during plasma disruption on J-TEXT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, L. J.; Yu, K. X.; Zhang, M., E-mail: zhangming@hust.edu.cn

    2016-01-15

    In this paper, we have employed a thin, printed-circuit-board eddy current array in order to determine the radial distribution of the azimuthal component of the eddy current density at the surface of a steel plate. The eddy current in the steel plate can be calculated by analytical methods under the simplifying assumptions that the steel plate is infinitely large and the exciting current is of uniform distribution. The measurement on the steel plate shows that this method has high spatial resolution. We then extended this methodology to a toroidal geometry with the objective of determining the poloidal distribution of the toroidal component of the eddy current density associated with plasma disruption in the J-TEXT tokamak. The preliminary measured result is consistent with the analysis and calculation results on the J-TEXT vacuum vessel.

  6. Method for removal of random noise in eddy-current testing system

    DOEpatents

    Levy, Arthur J.

    1995-01-01

    Eddy-current response voltages, generated during inspection of metallic structures for anomalies, are often replete with noise. Therefore, analysis of the inspection data and results is difficult or near impossible, resulting in inconsistent or unreliable evaluation of the structure. This invention processes the eddy-current response voltage, removing the effect of random noise, to allow proper identification of anomalies within and associated with the structure.

  7. Functional Group Analysis.

    ERIC Educational Resources Information Center

    Smith, Walter T., Jr.; Patterson, John M.

    1980-01-01

    Discusses analytical methods selected from current research articles. Groups information by topics of general interest, including acids, aldehydes and ketones, nitro compounds, phenols, and thiols. Cites 97 references. (CS)

  8. Evaluation of the tripolar electrode stimulation method by numerical analysis and animal experiments for cochlear implants.

    PubMed

    Miyoshi, S; Sakajiri, M; Ifukube, T; Matsushima, J

    1997-01-01

    We have proposed the Tripolar Electrode Stimulation Method (TESM), which may enable us to narrow the stimulation region and to move the stimulation site continuously for cochlear implants. We evaluated whether TESM works as predicted by theory, on the basis of numerical analysis using an auditory nerve fiber model. In this simulation, the sum of the excited model fibers was compared with the compound action potentials obtained from animal experiments. As a result, this experiment showed that TESM can narrow the stimulation region by controlling the sum of the currents emitted from the electrodes on both sides, and continuously move the stimulation site by changing the ratio of the currents emitted from the electrodes on both sides.

  9. Slope Stability Analysis of Waste Dump in Sandstone Open Pit Osielec

    NASA Astrophysics Data System (ADS)

    Adamczyk, Justyna; Cała, Marek; Flisiak, Jerzy; Kolano, Malwina; Kowalski, Michał

    2013-03-01

    This paper presents the slope stability analysis for the current and projected (final) geometry of the waste dump at the Sandstone Open Pit "Osielec". For the stability analysis, six sections were selected. The final geometry of the waste dump was then designed and its stability analysed. On the basis of the analysis results, opportunities to improve the stability of the object were identified. The next issue addressed in the paper was determining the proportions of a mixture of mining and processing wastes for which the waste dump remains stable. Stability calculations were carried out using the Janbu method, which belongs to the limit equilibrium methods.

  10. Development and Application of Fiber Bragg Grating Clinometer

    NASA Astrophysics Data System (ADS)

    Guo, Xin; Li, Wen; Wang, Wentao; Feng, Xiaoyu

    2017-06-01

    Using FBG (fiber Bragg grating) technology in clinometers can solve technological problems faced by wireless transmission devices, such as large data transfer volumes and poor stability, and has therefore been receiving more and more attention. This paper discusses a new clinometer designed by upgrading current clinometers, installing fiber grating strain gauges and fiber thermometers, and carrying out studies on equipment upgrading, on-site installation, and data acquisition and analysis. In addition, it presents a method of calculating displacement change from wavelength change; this method is used in safety monitoring of the right side slope of the Longyong Expressway ZK56+860 ~ ZK56+940 section. Data show that the device is operating well with high accuracy, and that the slope is currently in a steady state. Together, the equipment improvement and the method provide reference data for safety analysis of the side slope.
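
    A hedged Python sketch of the wavelength-to-displacement step: with a temperature-compensated FBG, strain follows from the relative Bragg wavelength shift, and displacement from the strain over the instrumented gauge length. The strain-sensitivity factor k ≈ 0.78 and the linear gauge model are generic assumptions, not the paper's calibration.

        def fbg_strain(d_lambda_nm, lambda0_nm, k=0.78):
            # Temperature-compensated FBG: d_lambda / lambda0 = k * strain,
            # with k = 1 - p_e ~ 0.78 for silica fiber (generic value).
            return d_lambda_nm / (lambda0_nm * k)

        def displacement_m(d_lambda_nm, lambda0_nm, gauge_length_m, k=0.78):
            # Displacement change accumulated over the instrumented gauge length
            return fbg_strain(d_lambda_nm, lambda0_nm, k) * gauge_length_m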

  11. Nozzle Initiative Industry Advisory Committee on Standardization of Carbon-Phenolic Test Methods and Specifications

    NASA Technical Reports Server (NTRS)

    Bull, William B. (Compiler); Pinoli, Pat C. (Compiler); Upton, Cindy G. (Compiler); Day, Tony (Compiler); Hill, Keith (Compiler); Stone, Frank (Compiler); Hall, William B.

    1994-01-01

    This report is a compendium of the presentations of the 12th biannual meeting of the Industry Advisory Committee under the Solid Propulsion Integrity Program. A complete transcript of the welcoming talks is provided. Presentation outlines and overheads are included for the other sessions: SPIP Overview, Past, Current and Future Activity; Test Methods Manual and Video Tape Library; Air Force Developed Computer Aided Cure Program and SPC/TQM Experience; Magneto-Optical mapper (MOM), Joint Army/NASA program to assess composite integrity; Permeability Testing; Moisture Effusion Testing by Karl Fischer Analysis; Statistical Analysis of Acceptance Test Data; NMR Phenolic Resin Advancement; Constituent Testing Highlights on the LDC Optimization Program; Carbon Sulfur Study, Performance Related Testing; Current Rayon Specifications and Future Availability; RSRM/SPC Implementation; SRM Test Methods, Delta/Titan/FBM/RSRM; and Open Forum on Performance Based Acceptance Testing -- Industry Experience.

  12. Pulsed Direct Current Electrospray: Enabling Systematic Analysis of Small Volume Sample by Boosting Sample Economy.

    PubMed

    Wei, Zhenwei; Xiong, Xingchuang; Guo, Chengan; Si, Xingyu; Zhao, Yaoyao; He, Muyi; Yang, Chengdui; Xu, Wei; Tang, Fei; Fang, Xiang; Zhang, Sichun; Zhang, Xinrong

    2015-11-17

    We developed pulsed direct current electrospray ionization mass spectrometry (pulsed-dc-ESI-MS) for systematically profiling and determining components in small-volume samples. Pulsed-dc-ESI utilizes a constant high voltage to induce the generation of single-polarity pulsed electrospray remotely. This method significantly boosts sample economy, making it possible to obtain several minutes of MS signal from a sample of merely picoliter volume. The elongated MS signal duration enables us to collect abundant MS(2) information on components of interest in a small-volume sample for systematic analysis. This method was successfully applied to single-cell metabolomics analysis. We obtained 2-D profiles of metabolites (including exact mass and MS(2) data) from single plant and mammalian cells, covering 1034 and 656 components for Allium cepa and HeLa cells, respectively. Further identification found 162 compounds and 28 different modification groups of 141 saccharides in a single Allium cepa cell, indicating that pulsed-dc-ESI is a powerful tool for the systematic analysis of small-volume samples.

  13. Analysis of Bose system in spin-orbit coupled Bose-Fermi mixture to induce a spin current of fermions

    NASA Astrophysics Data System (ADS)

    Sakamoto, R.; Ono, Y.; Hatsuda, R.; Shiina, K.; Arahata, E.; Mori, H.

    2018-03-01

    We found that a spin current of fermions can be induced in a spin-orbit coupled Bose-Fermi mixture at zero temperature. Since a spatial change in the spin structure of the bosons is necessary to induce the spin current of the fermions, we analyzed the ground state of the bosons in the mixture system using a variational method. The obtained phase diagram indicates the presence of a bosonic phase that allows the fermions to carry a spin current.

  14. Current sensing using bismuth rare-earth iron garnet films

    NASA Astrophysics Data System (ADS)

    Ko, Michael; Garmire, Elsa

    1995-04-01

    Ferrimagnetic iron garnet films are investigated as current-sensing elements. The Faraday effect within the films permits measurement of the magnetic field or current by a simple polarimetric technique. Polarized diffraction patterns from the films have been observed that arise from the presence of magnetic domains in the films. A physical model for the diffraction is discussed, and results from a mathematical analysis are in good agreement with the experimental observations. A method of current sensing that uses this polarized diffraction is demonstrated.
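
    A minimal Python sketch of the polarimetric readout: with the analyzer biased 45 degrees from the input polarization, the detected intensity is I/I0 = (1 + sin 2θ)/2, from which the Faraday rotation θ = V·B·L can be inverted. The function names and the biasing scheme are illustrative assumptions, not the paper's exact setup.

        import numpy as np

        def rotation_angle(i_detected, i_incident):
            # Analyzer biased 45 deg from the input polarization:
            #   I / I0 = (1 + sin(2 * theta)) / 2
            return 0.5 * np.arcsin(2.0 * i_detected / i_incident - 1.0)

        def field_from_rotation(theta_rad, verdet_rad_per_t_m, length_m):
            # Faraday rotation theta = V * B * L  =>  B = theta / (V * L);
            # the current then follows from Ampere's law for the geometry.
            return theta_rad / (verdet_rad_per_t_m * length_m)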

  15. Developing Tools for Research on School Leadership Development: An Illustrative Case of a Computer Simulation

    ERIC Educational Resources Information Center

    Showanasai, Parinya; Lu, Jiafang; Hallinger, Philip

    2013-01-01

    Purpose: The extant literature on school leadership development is dominated by conceptual analysis, descriptive studies of current practice, critiques of current practice, and prescriptions for better ways to approach practice. Relatively few studies have examined impact of leadership development using experimental methods, among which even fewer…

  16. A constant current charge technique for low Earth orbit life testing

    NASA Technical Reports Server (NTRS)

    Glueck, Peter

    1991-01-01

    A constant current charge technique for low earth orbit testing of nickel cadmium cells is presented. The method mimics the familiar taper charge of the constant potential technique while maintaining cell independence for statistical analysis. A detailed example application is provided and the advantages and disadvantages of this technique are discussed.

  17. The sweet tooth of biopharmaceuticals: importance of recombinant protein glycosylation analysis.

    PubMed

    Lingg, Nico; Zhang, Peiqing; Song, Zhiwei; Bardor, Muriel

    2012-12-01

    Biopharmaceuticals currently represent the fastest growing sector of the pharmaceutical industry, mainly driven by a rapid expansion in the manufacture of recombinant protein-based drugs. Glycosylation is the most prominent post-translational modification occurring on these protein drugs. It constitutes one of the critical quality attributes that requires thorough analysis for optimal efficacy and safety. This review examines the functional importance of glycosylation of recombinant protein drugs, illustrated using three examples of protein biopharmaceuticals: IgG antibodies, erythropoietin and glucocerebrosidase. Current analytical methods are reviewed as solutions for qualitative and quantitative measurements of glycosylation to monitor quality target product profiles of recombinant glycoprotein drugs. Finally, we propose a framework for designing the quality target product profile of recombinant glycoproteins and planning workflow for glycosylation analysis with the selection of available analytical methods and tools. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Potential artifacts associated with historical preparation of joint compound samples and reported airborne asbestos concentrations.

    PubMed

    Brorby, G P; Sheehan, P J; Berman, D W; Bogen, K T; Holm, S E

    2011-05-01

    Airborne samples collected in the 1970s for drywall workers using asbestos-containing joint compounds were likely prepared and analyzed according to National Institute of Occupational Safety and Health Method P&CAM 239, the historical precursor to current Method 7400. Experimentation with a re-created, chrysotile-containing, carbonate-based joint compound suggested that analysis following sample preparation by the historical vs. current method produces different fiber counts, likely because of an interaction between the different clearing and mounting chemicals used and the carbonate-based joint compound matrix. Differences were also observed during analysis using Method 7402, depending on whether acetic acid/dimethylformamide or acetone was used during preparation to collapse the filter. Specifically, air samples of sanded chrysotile-containing joint compound prepared by the historical method yielded fiber counts significantly greater (average of 1.7-fold, 95% confidence interval: 1.5- to 2.0-fold) than those obtained by the current method. In addition, air samples prepared by Method 7402 using acetic acid/dimethylformamide yielded fiber counts that were greater (2.8-fold, 95% confidence interval: 2.5- to 3.2-fold) than those prepared by this method using acetone. These results indicated (1) there is an interaction between Method P&CAM 239 preparation chemicals and the carbonate-based joint compound matrix that reveals fibers that were previously bound in the matrix, and (2) the same appeared to be true for Method 7402 preparation chemicals acetic acid/dimethylformamide. This difference in fiber counts is the opposite of what has been reported historically for samples of relatively pure chrysotile dusts prepared using the same chemicals. This preparation artifact should be considered when interpreting historical air samples for drywall workers prepared by Method P&CAM 239. Copyright © 2011 JOEH, LLC

  19. New method for designing serial resonant power converters

    NASA Astrophysics Data System (ADS)

    Hinov, Nikolay

    2017-12-01

    In the current work, a comprehensive method for the design of serial resonant energy converters is presented. The method is based on a new simplified approach to the analysis of this kind of power electronic device: it rests on supposing a resonant mode of operation when finding the relation between input and output voltage, regardless of the actual operating mode (controlling frequency below or above the resonant frequency). This approach is named the 'quasiresonant method of analysis', because it assumes that all operating modes are, in effect, resonant modes. The error introduced by this hypothesis was estimated and compared against the classic analysis. The quasiresonant method of analysis offers two main advantages, speed and ease, in the design of the presented power circuits. Hence it is very useful in practice and in teaching Power Electronics. Its applicability is proven with mathematical modelling and computer simulation.
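
    A Python sketch of the "supposing resonance" step, using the standard first-harmonic-approximation gain of a series resonant tank; the FHA form is a textbook result and an assumption here, not the paper's own derivation.

        import numpy as np

        def src_gain(f_switch, l, c, r_load):
            # First-harmonic approximation of a series resonant tank:
            #   |Vo/Vin| = 1 / sqrt(1 + Q^2 * (F - 1/F)^2)
            f0 = 1.0 / (2.0 * np.pi * np.sqrt(l * c))   # resonant frequency
            q = np.sqrt(l / c) / r_load                 # loaded quality factor
            fn = f_switch / f0                          # normalized frequency
            return 1.0 / np.sqrt(1.0 + q**2 * (fn - 1.0 / fn)**2)

        # At f_switch == f0 the gain is exactly 1 for any load -- the resonant
        # operating point that the quasiresonant analysis takes as its premise.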

  20. Computer-based quantitative computed tomography image analysis in idiopathic pulmonary fibrosis: A mini review.

    PubMed

    Ohkubo, Hirotsugu; Nakagawa, Hiroaki; Niimi, Akio

    2018-01-01

    Idiopathic pulmonary fibrosis (IPF) is the most common type of progressive idiopathic interstitial pneumonia in adults. Many computer-based image analysis methods of chest computed tomography (CT) used in patients with IPF include the mean CT value of the whole lungs, density histogram analysis, density mask technique, and texture classification methods. Most of these methods offer good assessment of pulmonary functions, disease progression, and mortality. Each method has merits that can be used in clinical practice. One of the texture classification methods is reported to be superior to visual CT scoring by radiologist for correlation with pulmonary function and prediction of mortality. In this mini review, we summarize the current literature on computer-based CT image analysis of IPF and discuss its limitations and several future directions. Copyright © 2017 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.
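
    A minimal Python sketch of the histogram-type metrics mentioned above, over an array of segmented lung CT values in Hounsfield units; the thresholds are conventional illustrative choices, not those of any specific study.

        import numpy as np

        def ct_density_metrics(lung_hu):
            # lung_hu: 1D array of Hounsfield units over segmented lung voxels
            x = np.asarray(lung_hu, dtype=float)
            return {
                "mean_ct_value": x.mean(),
                # excess kurtosis of the density histogram
                "kurtosis": ((x - x.mean()) ** 4).mean() / x.var() ** 2 - 3.0,
                # fibrosis-oriented high-attenuation band (illustrative cutoffs)
                "high_attenuation_pct": 100.0 * ((x >= -600) & (x <= -250)).mean(),
            }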

  1. Comparative analysis of targeted metabolomics: dominance-based rough set approach versus orthogonal partial least square-discriminant analysis.

    PubMed

    Blasco, H; Błaszczyński, J; Billaut, J C; Nadal-Desbarats, L; Pradat, P F; Devos, D; Moreau, C; Andres, C R; Emond, P; Corcia, P; Słowiński, R

    2015-02-01

    Metabolomics is an emerging field that includes ascertaining a metabolic profile from a combination of small molecules, and which has health applications. Metabolomic methods are currently applied to discover diagnostic biomarkers and to identify pathophysiological pathways involved in pathology. However, metabolomic data are complex and are usually analyzed by statistical methods. Although the methods have been widely described, most have not been either standardized or validated. Data analysis is the foundation of a robust methodology, so new mathematical methods need to be developed to assess and complement current methods. We therefore applied, for the first time, the dominance-based rough set approach (DRSA) to metabolomics data; we also assessed the complementarity of this method with standard statistical methods. Some attributes were transformed in a way allowing us to discover global and local monotonic relationships between condition and decision attributes. We used previously published metabolomics data (18 variables) for amyotrophic lateral sclerosis (ALS) and non-ALS patients. Principal Component Analysis (PCA) and Orthogonal Partial Least Square-Discriminant Analysis (OPLS-DA) allowed satisfactory discrimination (72.7%) between ALS and non-ALS patients. Some discriminant metabolites were identified: acetate, acetone, pyruvate and glutamine. The concentrations of acetate and pyruvate were also identified by univariate analysis as significantly different between ALS and non-ALS patients. DRSA correctly classified 68.7% of the cases and established rules involving some of the metabolites highlighted by OPLS-DA (acetate and acetone). Some rules identified potential biomarkers not revealed by OPLS-DA (beta-hydroxybutyrate). We also found a large number of common discriminating metabolites after Bayesian confirmation measures, particularly acetate, pyruvate, acetone and ascorbate, consistent with the pathophysiological pathways involved in ALS. DRSA provides a complementary method for improving the predictive performance of the multivariate data analysis usually used in metabolomics. This method could help in the identification of metabolites involved in disease pathogenesis. Interestingly, these different strategies mostly identified the same metabolites as being discriminant. The selection of strong decision rules with high value of Bayesian confirmation provides useful information about relevant condition-decision relationships not otherwise revealed in metabolomics data. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Compensation of hospital-based physicians.

    PubMed Central

    Steinwald, B

    1983-01-01

    This study is concerned with methods of compensating hospital-based physicians (HBPs) in five medical specialties: anesthesiology, pathology, radiology, cardiology, and emergency medicine. Data on 2232 nonfederal, short-term general hospitals came from a mail questionnaire survey conducted in Fall 1979. The data indicate that numerous compensation methods exist, but these methods, without much loss of precision, can be reduced to salary, percentage of department revenue, and fee-for-service. When HBPs are compensated by salary or percentage methods, most patient billing is conducted by the hospital. In contrast, most fee-for-service HBPs bill their patients directly. Determinants of HBP compensation methods are investigated via multinomial logit analysis. This analysis indicates that choice of HBP compensation methods is sensitive to a number of hospital characteristics and attributes of both the hospital and physicians' services markets. The empirical findings are discussed in light of past conceptual and empirical research on physician compensation, and current policy issues in the health services sector. PMID:6841112
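    For readers unfamiliar with the technique, a toy multinomial logit of compensation-method choice might look like the sketch below; the covariates, data, and model are invented for illustration and are not the study's actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
hospitals = pd.DataFrame({
    "beds": rng.normal(250, 80, 500),        # hospital size (invented)
    "teaching": rng.integers(0, 2, 500),     # teaching status (invented)
})
# 0 = salary, 1 = percentage of department revenue, 2 = fee-for-service
choice = rng.integers(0, 3, 500)

X = sm.add_constant(hospitals)
model = sm.MNLogit(choice, X).fit(disp=False)
print(model.summary())
```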

  3. A comparison of several techniques for imputing tree level data

    Treesearch

    David Gartner

    2002-01-01

    As Forest Inventory and Analysis (FIA) changes from periodic surveys to the multipanel annual survey, new analytical methods become available. The current official statistic is the moving average. One alternative is an updated moving average. Several methods of updating plot per acre volume have been discussed previously. However, these methods may not be appropriate...

  4. Range of sound levels in the outdoor environment

    Treesearch

    Lewis S. Goodfriend

    1977-01-01

    Current methods of measuring and rating noise in a metropolitan area are examined, including real-time spectrum analysis and sound-level integration, producing a single-number value representing the noise impact for each hour or each day. Methods of noise rating for metropolitan areas are reviewed, and the various measures from multidimensional rating methods such as...

  5. EVALUATION OF IODINE BASED IMPINGER SOLUTIONS FOR THE EFFICIENT CAPTURE OF HG USING DIRECT INJECTION NEBULIZATION INDUCTIVELY COUPLED PLASMA MASS SPECTROMETRY (DIN-ICP/MS) ANALYSIS

    EPA Science Inventory

    Currently there are no EPA reference sampling methods that have been promulgated for measuring stack emissions of Hg from coal combustion sources; however, EPA Method 29 is most commonly applied. The draft ASTM Ontario Hydro Method for measuring oxidized, elemental, particulate-b...

  6. Characterization of Low-Molecular-Weight Heparins by Strong Anion-Exchange Chromatography.

    PubMed

    Sadowski, Radosław; Gadzała-Kopciuch, Renata; Kowalkowski, Tomasz; Widomski, Paweł; Jujeczka, Ludwik; Buszewski, Bogusław

    2017-11-01

    Currently, detailed structural characterization of low-molecular-weight heparin (LMWH) products is an analytical subject of great interest. In this work, we carried out a comprehensive structural analysis of LMWHs and applied a modified pharmacopeial method, as well as methods developed by other researchers, to the analysis of novel biosimilar LMWH products; and, for the first time, compared the qualitative and quantitative composition of commercially available drugs (enoxaparin, nadroparin, and dalteparin). For this purpose, we used strong anion-exchange (SAX) chromatography with spectrophotometric detection because this method is more helpful, easier, and faster than other separation techniques for the detailed disaccharide analysis of new LMWH drugs. In addition, we subjected the obtained results to statistical analysis (factor analysis, t-test, and Newman-Keuls post hoc test).

  7. Measuring Melatonin in Humans

    PubMed Central

    Benloucif, Susan; Burgess, Helen J.; Klerman, Elizabeth B.; Lewy, Alfred J.; Middleton, Benita; Murphy, Patricia J.; Parry, Barbara L.; Revell, Victoria L.

    2008-01-01

    Study Objectives: To provide guidelines for collecting and analyzing urinary, salivary, and plasma melatonin, thereby assisting clinicians and researchers in determining which method of measuring melatonin is most appropriate for their particular needs and facilitating the comparison of data between laboratories. Methods: A modified RAND process was utilized to derive recommendations for methods of measuring melatonin in humans. Results: Consensus-based guidelines are presented for collecting and analyzing melatonin for studies that are conducted in the natural living environment, the clinical setting, and in-patient research facilities under controlled conditions. Conclusions: The benefits and disadvantages of current methods of collecting and analyzing melatonin are summarized. Although a single method of analysis would be the most effective way to compare studies, limitations of current methods preclude this possibility. Given that the best analysis method for use under multiple conditions is not established, it is recommended to include, in any published report, one of the established low threshold measures of dim light melatonin onset to facilitate comparison between studies. Citation: Benloucif S; Burgess HJ; Klerman EB; Lewy AJ; Middleton B; Murphy PJ; Parry BL; Revell VL. Measuring melatonin in humans. J Clin Sleep Med 2008;4(1):66-69. PMID:18350967

  8. Droplet Microarray Based on Superhydrophobic-Superhydrophilic Patterns for Single Cell Analysis.

    PubMed

    Jogia, Gabriella E; Tronser, Tina; Popova, Anna A; Levkin, Pavel A

    2016-12-09

    Single-cell analysis provides fundamental information on individual cell response to different environmental cues and is of growing interest in cancer and stem cell research. However, existing methods still face challenges in performing such analysis in a high-throughput yet cost-effective manner. Here we established the Droplet Microarray (DMA) as a miniaturized screening platform for high-throughput single-cell analysis. Using the method of limited dilution and varying cell density and seeding time, we optimized the distribution of single cells on the DMA. We established culturing conditions for single cells in individual droplets on the DMA, obtaining survival of nearly 100% of single cells and a doubling time comparable to that of cells cultured in bulk using conventional methods. Our results demonstrate that the DMA is a suitable platform for single-cell analysis, which carries a number of advantages compared with existing technologies, allowing for treatment, staining and spot-to-spot analysis of single cells over time using conventional analysis methods such as microscopy.

  9. Multi-residue method for the analysis of 85 current-use and legacy pesticides in bed and suspended sediments

    USGS Publications Warehouse

    Smalling, K.L.; Kuivila, K.M.

    2008-01-01

    A multi-residue method was developed for the simultaneous determination of 85 current-use and legacy organochlorine pesticides in a single sediment sample. After microwave-assisted extraction, clean-up of samples was optimized using gel permeation chromatography and either stacked carbon and alumina solid-phase extraction cartridges or a deactivated Florisil column. Analytes were determined by gas chromatography with ion-trap mass spectrometry and electron capture detection. Method detection limits ranged from 0.6 to 8.9 µg/kg dry weight. Bed and suspended sediments from a variety of locations were analyzed to validate the method, and 29 pesticides, including at least 1 from every class, were detected.

  10. Analysis of munitions constituents in groundwater using a field-portable GC-MS.

    PubMed

    Bednar, A J; Russell, A L; Hayes, C A; Jones, W T; Tackett, P; Splichal, D E; Georgian, T; Parker, L V; Kirgan, R A; MacMillan, D K

    2012-05-01

    The use of munitions constituents (MCs) at military installations can produce soil and groundwater contamination that requires periodic monitoring even after training or manufacturing activities have ceased. Traditional groundwater monitoring methods require large volumes of aqueous samples (e.g., 2-4 L) to be shipped under chain of custody, to fixed laboratories for analysis. The samples must also be packed on ice and shielded from light to minimize degradation that may occur during transport and storage. The laboratory's turn-around time for sample analysis and reporting can be as long as 45 d. This process hinders the reporting of data to customers in a timely manner; yields data that are not necessarily representative of current site conditions owing to the lag time between sample collection and reporting; and incurs significant shipping costs for samples. The current work compares a field portable Gas Chromatograph-Mass Spectrometer (GC-MS) for analysis of MCs on-site with traditional laboratory-based analysis using High Performance Liquid Chromatography with UV absorption detection. The field method provides near real-time (within ~1 h of sampling) concentrations of MCs in groundwater samples. Mass spectrometry provides reliable confirmation of MCs and a means to identify unknown compounds that are potential false positives for methods with UV and other non-selective detectors. Published by Elsevier Ltd.

  11. Semiannual report, 1 April - 30 September 1991

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The major categories of the current Institute for Computer Applications in Science and Engineering (ICASE) research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification problems, with emphasis on effective numerical methods; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software for parallel computers. Research in these areas is discussed.

  12. Current techniques for the real-time processing of complex radar signatures

    NASA Astrophysics Data System (ADS)

    Clay, E.

    A real-time processing technique has been developed for the microwave receiver of the Brahms radar station. The method allows such target signatures as the radar cross section (RCS) of the airframes and rotating parts, the one-dimensional tomography of aircraft, and the RCS of electromagnetic decoys to be characterized. The method allows optimization of experimental parameters including the analysis frequency band, the receiver gain, and the wavelength range of EM analysis.

  13. Evaluation of the direct and diffusion methods for the determination of fluoride content in table salt

    PubMed Central

    Martínez-Mier, E. Angeles; Soto-Rojas, Armando E.; Buckley, Christine M.; Margineda, Jorge; Zero, Domenick T.

    2010-01-01

    Objective: The aim of this study was to assess methods currently used for analyzing fluoridated salt in order to identify the most useful method for this type of analysis. Basic research design: Seventy-five fluoridated salt samples were obtained. Samples were analyzed for fluoride content, with and without pretreatment, using direct and diffusion methods. Element analysis was also conducted in selected samples. Fluoride was added to ultrapure NaCl and non-fluoridated commercial salt samples, and Ca and Mg were added to fluoride samples, in order to assess fluoride recoveries using modifications to the methods. Results: Larger amounts of fluoride were found and recovered using diffusion than direct methods (96%–100% for diffusion vs. 67%–90% for direct). Statistically significant differences were obtained between direct and diffusion methods using different ion strength adjusters. Pretreatment methods reduced the amount of recovered fluoride. Determination of fluoride content was influenced both by the presence of NaCl and by other ions in the salt. Conclusion: Direct and diffusion techniques for analysis of fluoridated salt are suitable methods for fluoride analysis. The choice of method should depend on the purpose of the analysis. PMID:20088217

  14. A study of commuter airplane design optimization

    NASA Technical Reports Server (NTRS)

    Keppel, B. V.; Eysink, H.; Hammer, J.; Hawley, K.; Meredith, P.; Roskam, J.

    1978-01-01

    The usability of the general aviation synthesis program (GASP) was enhanced by the development of separate computer subroutines which can be added as a package to this assembly of computerized design methods or used as a separate subroutine program to compute the dynamic longitudinal, lateral-directional stability characteristics for a given airplane. Currently available analysis methods were evaluated to ascertain those most appropriate for the design functions which the GASP computerized design program performs. Methods for providing proper constraint and/or analysis functions for GASP were developed as well as the appropriate subroutines.

  15. Analysis of improved criteria for mold growth in ASHRAE standard 160 by comparison with field observations

    Treesearch

    Samuel V. Glass; Stanley D. Gatland II; Kohta Ueno; Christopher J. Schumacher

    2017-01-01

    ASHRAE Standard 160, Criteria for Moisture-Control Design Analysis in Buildings, was published in 2009. The standard sets criteria for moisture design loads, hygrothermal analysis methods, and satisfactory moisture performance of the building envelope. One of the evaluation criteria specifies conditions necessary to avoid mold growth. The current standard requires that...

  16. A Pocock Approach to Sequential Meta-Analysis of Clinical Trials

    ERIC Educational Resources Information Center

    Shuster, Jonathan J.; Neu, Josef

    2013-01-01

    Three recent papers have provided sequential methods for meta-analysis of two-treatment randomized clinical trials. This paper provides an alternate approach that has three desirable features. First, when carried out prospectively (i.e., we only have the results up to the time of our current analysis), we do not require knowledge of the…

  17. Factor Analysis Methods and Validity Evidence: A Systematic Review of Instrument Development across the Continuum of Medical Education

    ERIC Educational Resources Information Center

    Wetzel, Angela Payne

    2011-01-01

    Previous systematic reviews indicate a lack of reporting of reliability and validity evidence in subsets of the medical education literature. Psychology and general education reviews of factor analysis also indicate gaps between current and best practices; yet, a comprehensive review of exploratory factor analysis in instrument development across…

  18. Family Early Literacy Practices Questionnaire: A Validation Study for a Spanish-Speaking Population

    ERIC Educational Resources Information Center

    Lewis, Kandia

    2012-01-01

    The purpose of the current study was to evaluate the psychometric validity of a Spanish translated version of a family involvement questionnaire (the FELP) using a mixed-methods design. Thus, statistical analyses (i.e., factor analysis, reliability analysis, and item analysis) and qualitative analyses (i.e., focus group data) were assessed.…

  19. Principal Component Analysis of Microbial Community Data from an Accelerated Decay Cellar Test

    Treesearch

    Grant T. Kirker; Patricia K. Lebow

    2014-01-01

    Analysis of microbial communities is a valuable tool for characterization and identification of microbes in a myriad of environments. We are currently using the molecular method terminal restriction fragment length polymorphism (T-RFLP) analysis to characterize changes in bacterial and fungal communities on treated and untreated wood in soil. T-RFLP uses fluorescently...

  20. Bridging the clinician/researcher gap with systemic research: the case for process research, dyadic, and sequential analysis.

    PubMed

    Oka, Megan; Whiting, Jason

    2013-01-01

    In Marriage and Family Therapy (MFT), as in many clinical disciplines, concern surfaces about the clinician/researcher gap. This gap includes a lack of accessible, practical research for clinicians. MFT clinical research often borrows from the medical tradition of randomized control trials, which typically use linear methods, or follow procedures distanced from "real-world" therapy. We review traditional research methods and their use in MFT and propose increased use of methods that are more systemic in nature and more applicable to MFTs: process research, dyadic data analysis, and sequential analysis. We will review current research employing these methods, as well as suggestions and directions for further research. © 2013 American Association for Marriage and Family Therapy.

  1. Probing of multiple magnetic responses in magnetic inductors using atomic force microscopy.

    PubMed

    Park, Seongjae; Seo, Hosung; Seol, Daehee; Yoon, Young-Hwan; Kim, Mi Yang; Kim, Yunseok

    2016-02-08

    Even though nanoscale analysis of magnetic properties is of significant interest, probing methods remain relatively underdeveloped compared to the significance of the technique, which has multiple potential applications. Here, we demonstrate an approach for probing various magnetic properties associated with eddy current, coil current and magnetic domains in magnetic inductors using multidimensional magnetic force microscopy (MMFM). The MMFM images provide combined magnetic responses from the three different origins; however, each contribution to the MMFM response can be differentiated through analysis based on the bias dependence of the response. In particular, the bias-dependent MMFM images show locally different eddy current behavior, with values dependent on the type of materials that comprise the magnetic inductor. This approach for probing magnetic responses can be further extended to the analysis of local physical features.

  2. Measurement of neoclassically predicted edge current density at ASDEX Upgrade

    NASA Astrophysics Data System (ADS)

    Dunne, M. G.; McCarthy, P. J.; Wolfrum, E.; Fischer, R.; Giannone, L.; Burckhart, A.; the ASDEX Upgrade Team

    2012-12-01

    Experimental confirmation of neoclassically predicted edge current density in an ELMy H-mode plasma is presented. Current density analysis using the CLISTE equilibrium code is outlined and the rationale for accuracy of the reconstructions is explained. Sample profiles and time traces from analysis of data at ASDEX Upgrade are presented. A high time resolution is possible due to the use of an ELM-synchronization technique. Additionally, the flux-surface-averaged current density is calculated using a neoclassical approach. Results from these two separate methods are then compared and are found to validate the theoretical formula. Finally, several discharges are compared as part of a fuelling study, showing that the size and width of the edge current density peak at the low-field side can be explained by the electron density and temperature drives and their respective collisionality modifications.

  3. Exploratory factor analysis in Rehabilitation Psychology: a content analysis.

    PubMed

    Roberson, Richard B; Elliott, Timothy R; Chang, Jessica E; Hill, Jessica N

    2014-11-01

    Our objective was to examine the use and quality of exploratory factor analysis (EFA) in articles published in Rehabilitation Psychology. Trained raters examined 66 separate exploratory factor analyses in 47 articles published between 1999 and April 2014. The raters recorded the aim of the EFAs, the distributional statistics, sample size, factor retention method(s), extraction and rotation method(s), and whether the pattern coefficients, structure coefficients, and the matrix of association were reported. The primary use of the EFAs was scale development, but the most widely used extraction method was principal component analysis, with varimax rotation. When determining how many factors to retain, multiple methods (e.g., scree plot, parallel analysis) were used most often. Many articles did not report enough information to allow for the duplication of their results. EFA relies on authors' choices (e.g., factor retention rules, extraction and rotation methods), and few articles adhered to all of the best practices. The current findings are compared to other empirical investigations into the use of EFA in published research. Recommendations for improving EFA reporting practices in rehabilitation psychology research are provided.
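    One retention method the review tallies, Horn's parallel analysis, is easy to state in code. The sketch below is a generic illustration on random data, not drawn from the reviewed articles.

```python
import numpy as np

def parallel_analysis(X, n_sims=100, seed=0):
    """Retain leading factors whose correlation-matrix eigenvalues exceed the
    95th percentile of eigenvalues from same-shaped random normal data."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    real = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        R = rng.normal(size=(n, p))
        sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))[::-1]
    thresh = np.percentile(sims, 95, axis=0)
    keep = 0
    while keep < p and real[keep] > thresh[keep]:
        keep += 1
    return keep

X = np.random.default_rng(2).normal(size=(300, 12))
print("factors to retain:", parallel_analysis(X))   # ~0 for pure noise
```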

  4. Simplified welding distortion analysis for fillet welding using composite shell elements

    NASA Astrophysics Data System (ADS)

    Kim, Mingyu; Kang, Minseok; Chung, Hyun

    2015-09-01

    This paper presents a simplified welding distortion analysis method to predict the welding deformation of both plate and stiffener in fillet welds. Currently, methods based on equivalent thermal strain, such as Strain as Direct Boundary (SDB), are widely used because they predict welding deformation effectively. For fillet welding, however, those methods cannot represent the deformation of both members at once, since the temperature degree of freedom is shared at the intersection nodes of both members. In this paper, we propose a new approach to simulate the deformation of both members. The method simulates fillet weld deformations by employing composite shell elements and using different thermal expansion coefficients along the thickness direction, with fixed temperature at the intersection nodes. For verification, we compare results from experiments, 3D thermo-elastic-plastic analysis, the SDB method and the proposed method. Compared with the experimental results, the proposed method can effectively predict welding deformation for fillet welds.

  5. Costing 'healthy' food baskets in Australia - a systematic review of food price and affordability monitoring tools, protocols and methods.

    PubMed

    Lewis, Meron; Lee, Amanda

    2016-11-01

    To undertake a systematic review to determine similarities and differences in metrics and results between recently and/or currently used tools, protocols and methods for monitoring Australian healthy food prices and affordability. Electronic databases of peer-reviewed literature and online grey literature were systematically searched using the PRISMA approach for articles and reports relating to healthy food and diet price assessment tools, protocols, methods and results that utilised retail pricing. National, state, regional and local areas of Australia from 1995 to 2015. Assessment tools, protocols and methods to measure the price of 'healthy' foods and diets. The search identified fifty-nine discrete surveys of 'healthy' food pricing incorporating six major food pricing tools (those used in multiple areas and time periods) and five minor food pricing tools (those used in a single survey area or time period). Analysis demonstrated methodological differences regarding: included foods; reference households; use of availability and/or quality measures; household income sources; store sampling methods; data collection protocols; analysis methods; and results. 'Healthy' food price assessment methods used in Australia lack comparability across all metrics and most do not fully align with a 'healthy' diet as recommended by the current Australian Dietary Guidelines. None have been applied nationally. Assessment of the price, price differential and affordability of healthy (recommended) and current (unhealthy) diets would provide more robust and meaningful data to inform health and fiscal policy in Australia. The INFORMAS 'optimal' approach provides a potential framework for development of these methods.

  6. Evaluation and analysis of current compaction methods for FDOT pipe trench backfills in areas of high water tables

    DOT National Transportation Integrated Search

    1999-01-01

    This research project was undertaken to examine the practicality and adequacy of the FDOT specifications regarding compaction methods for pipe trench backfills under high water tables. Given the difficulty of determining density and attaining the desired de...

  7. STANDARDIZATION AND VALIDATION OF METHODS FOR ENUMERATION OF FECAL COLIFORM AND SALMONELLA IN BIOSOLIDS

    EPA Science Inventory

    Current federal regulations require monitoring for fecal coliforms or Salmonella in biosolids destined for land application. Methods used for analysis of fecal coliforms and Salmonella were reviewed and a standard protocol was developed. The protocols were then...

  8. STANDARDIZATION AND VALIDATION OF METHODS FOR ENUMERATION OF FECAL COLIFORM AND SALMONELLA IN BIOSOLIDS

    EPA Science Inventory

    Current federal regulations require monitoring for fecal coliforms or Salmonella in biosolids destined for land application. Methods used for analysis of fecal coliforms and Salmonella were reviewed and a standard protocol was developed. The protocols were then evaluated by testi...

  9. Current Technical Approaches for the Early Detection of Foodborne Pathogens: Challenges and Opportunities.

    PubMed

    Cho, Il-Hoon; Ku, Seockmo

    2017-09-30

    The development of novel and high-tech solutions for rapid, accurate, and non-laborious microbial detection methods is imperative to improve the global food supply. Such solutions have begun to address the need for microbial detection that is faster and more sensitive than existing methodologies (e.g., classic culture enrichment methods). Multiple reviews report the technical functions and structures of conventional microbial detection tools. These tools, used to detect pathogens in food and food homogenates, were designed via qualitative analysis methods. The inherent disadvantage of these analytical methods is the necessity for specimen preparation, which is a time-consuming process. While some literature describes the challenges and opportunities to overcome the technical issues related to food industry legal guidelines, there is a lack of reviews of the current trials to overcome technological limitations related to sample preparation and microbial detection via nano and micro technologies. In this review, we primarily explore current analytical technologies, including metallic and magnetic nanomaterials, optics, electrochemistry, and spectroscopy. These techniques rely on the early detection of pathogens via enhanced analytical sensitivity and specificity. In order to introduce the potential combination and comparative analysis of various advanced methods, we also reference a novel sample preparation protocol that uses microbial concentration and recovery technologies. This technology has the potential to expedite the pre-enrichment step that precedes the detection process.

  10. Uncertainty in the analysis of the overall equipment effectiveness on the shop floor

    NASA Astrophysics Data System (ADS)

    Rößler, M. P.; Abele, E.

    2013-06-01

    This article presents an approach that improves transparency regarding the effectiveness of manufacturing equipment by combining fuzzy set theory with overall equipment effectiveness (OEE) analysis. One of the key principles of lean production, and a fundamental task in production optimization projects, is the prior analysis of the current state of a production system using key performance indicators to derive possible future states. Overall equipment effectiveness analysis is usually performed by cumulating different machine states through decentralized data collection, without consideration of uncertainty. In manual or semi-automated plant data collection, the quality of the derived data often diverges and leads optimization teams to distorted conclusions about the real optimization potential of manufacturing equipment. The method discussed in this paper helps practitioners obtain more reliable results in the analysis phase and thus better results in optimization projects. The results obtained are discussed in a case study.
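    The underlying index itself is the standard product OEE = availability x performance x quality. The sketch below shows the crisp calculation and a crude interval-valued variant standing in for the paper's fuzzy-set treatment; all numbers are invented.

```python
def oee(availability, performance, quality):
    # Overall equipment effectiveness as the product of its three factors.
    return availability * performance * quality

# Crisp estimate from (hypothetical) shop-floor data.
print(f"OEE = {oee(0.90, 0.85, 0.98):.3f}")

# Interval estimate reflecting uncertain machine-state logging:
# propagate the lower and upper bound of each factor.
lo = oee(0.86, 0.80, 0.97)
hi = oee(0.93, 0.88, 0.99)
print(f"OEE in [{lo:.3f}, {hi:.3f}]")
```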

  11. Assessment of current state of the art in modeling techniques and analysis methods for large space structures

    NASA Technical Reports Server (NTRS)

    Noor, A. K.

    1983-01-01

    Advances in continuum modeling, progress in reduction methods, and analysis and modeling needs for large space structures are covered with specific attention given to repetitive lattice trusses. As far as continuum modeling is concerned, an effective and verified analysis capability exists for linear thermoelastic stress, bifurcation buckling, and free vibration problems of repetitive lattices. However, application of continuum modeling to nonlinear analysis needs more development. Reduction methods are very effective for bifurcation buckling and static (steady-state) nonlinear analysis. However, more work is needed to realize their full potential for nonlinear dynamic and time-dependent problems. As far as analysis and modeling needs are concerned, three areas are identified: loads determination, modeling and nonclassical behavior characteristics, and computational algorithms. The impact of new advances in computer hardware, software, integrated analysis, CAD/CAM systems, and materials technology is also discussed.

  12. Reconstruction of human brain spontaneous activity based on frequency-pattern analysis of magnetoencephalography data

    PubMed Central

    Llinás, Rodolfo R.; Ustinin, Mikhail N.; Rykunov, Stanislav D.; Boyko, Anna I.; Sychev, Vyacheslav V.; Walton, Kerry D.; Rabello, Guilherme M.; Garcia, John

    2015-01-01

    A new method for the analysis and localization of brain activity has been developed, based on multichannel magnetic field recordings, over minutes, superimposed on the MRI of the individual. Here, a high resolution Fourier Transform is obtained over the entire recording period, leading to a detailed multi-frequency spectrum. Further analysis implements a total decomposition of the frequency components into functionally invariant entities, each having an invariant field pattern localizable in recording space. The method, addressed as functional tomography, makes it possible to find the distribution of magnetic field sources in space. Here, the method is applied to the analysis of simulated data, to oscillating signals activating a physical current-dipole phantom, and to recordings of spontaneous brain activity in 10 healthy adults. In the analysis of simulated data, 61 dipoles are localized with 0.7 mm precision. Concerning the physical phantom, the method is able to localize three simultaneously activated current dipoles with 1 mm precision. A spatial resolution of 3 mm was attained when localizing spontaneous alpha rhythm activity in 10 healthy adults, where the alpha peak was specified for each subject individually. Co-registration of the functional tomograms with each subject's head MRI localized alpha range activity to the occipital and/or posterior parietal brain region. This is the first application of this new functional tomography to human brain activity. The method successfully provides an overall view of brain electrical activity, a detailed spectral description and, combined with MRI, the localization of sources in anatomical brain space. PMID:26528119
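    The pipeline's first step, a high-resolution amplitude spectrum of a long recording, can be sketched on a synthetic single-channel signal as below; the sampling rate, duration, and alpha-band search window are assumptions for illustration.

```python
import numpy as np

fs = 500.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 300.0, 1 / fs)             # a 5-minute recording
rng = np.random.default_rng(3)
x = np.sin(2 * np.pi * 10.2 * t) + 0.5 * rng.normal(size=t.size)  # alpha-like peak

spectrum = np.abs(np.fft.rfft(x)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)   # ~0.0033 Hz resolution over 300 s

band = (freqs > 7) & (freqs < 13)           # individual alpha peak search window
alpha_peak = freqs[band][np.argmax(spectrum[band])]
print(f"alpha peak at {alpha_peak:.2f} Hz")
```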

  13. Computerized spiral analysis using the iPad.

    PubMed

    Sisti, Jonathan A; Christophe, Brandon; Seville, Audrey Rakovich; Garton, Andrew L A; Gupta, Vivek P; Bandin, Alexander J; Yu, Qiping; Pullman, Seth L

    2017-01-01

    Digital analysis of writing and drawing has become a valuable research and clinical tool for the study of upper limb motor dysfunction in patients with essential tremor, Parkinson's disease, dystonia, and related disorders. We developed a validated method, using a Wacom graphics tablet, of computerized spiral analysis of hand-drawn Archimedean spirals that provides insight into movement dynamics beyond subjective visual assessment. While the Wacom tablet method provides robust data, more widely available mobile technology platforms exist. We introduce a novel adaptation of the Wacom-based method for the collection of hand-drawn kinematic data using an Apple iPad. This iPad-based system is stand-alone and easy to use, can capture drawing data with either a finger or a capacitive stylus, and is precise and potentially ubiquitous. The iPad-based system acquires position and time data that are fully compatible with our original spiral analysis program. All of the important indices, including degree of severity, speed, presence of tremor, tremor amplitude, tremor frequency, variability of pressure, and tightness, are calculated from the digital spiral data, which the application is able to transmit. While the iPad method is limited by current touch screen technology, it does collect data with acceptable congruence compared to the current Wacom-based method while providing the advantages of accessibility and ease of use. The iPad is capable of capturing precise digital spiral data for analysis of motor dysfunction while also providing a convenient, easy-to-use modality in clinics and potentially at home. Copyright © 2016 Elsevier B.V. All rights reserved.
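    A hypothetical sketch of one such index: fit an Archimedean spiral r = a + b*theta to the digitized points and read a tremor frequency out of the radial residual. This mimics the spirit of the analysis, not the validated program itself.

```python
import numpy as np

def spiral_indices(x, y, fs):
    theta = np.unwrap(np.arctan2(y, x))           # cumulative drawing angle
    r = np.hypot(x, y)
    b, a = np.polyfit(theta, r, 1)                # tightness ~ radial growth rate b
    residual = r - (a + b * theta)                # oscillation riding on the spiral
    spec = np.abs(np.fft.rfft(residual - residual.mean()))
    freqs = np.fft.rfftfreq(residual.size, d=1 / fs)
    tremor_freq = freqs[np.argmax(spec[1:]) + 1]  # skip the DC bin
    return b, tremor_freq

# Synthetic spiral drawn over 10 s at 100 samples/s with a 5 Hz tremor.
t = np.arange(0, 10, 0.01)
th = 4 * np.pi * t / 10
r = 1 + 2 * th + 0.05 * np.sin(2 * np.pi * 5 * t)
print(spiral_indices(r * np.cos(th), r * np.sin(th), fs=100))
```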

  14. Functional Analyses and Treatment of Precursor Behavior

    ERIC Educational Resources Information Center

    Najdowski, Adel C.; Wallace, Michele D.; Ellsworth, Carrie L.; MacAleese, Alicia N.; Cleveland, Jackie

    2008-01-01

    Functional analysis has been demonstrated to be an effective method to identify environmental variables that maintain problem behavior. However, there are cases when conducting functional analyses of severe problem behavior may be contraindicated. The current study applied functional analysis procedures to a class of behavior that preceded severe…

  15. SAMPLING AND ANALYSIS OF NANOMATERIALS IN THE ENVIRONMENT: A STATE-OF-THE-SCIENCE REVIEW

    EPA Science Inventory

    This state-of-the-science review was undertaken to identify and assess currently available sampling and analysis methods to identify and quantify the occurrence of nanomaterials in the environment. The environmental and human health risks associated with nanomaterials are largely...

  16. Tracking B-Cell Repertoires and Clonal Histories in Normal and Malignant Lymphocytes.

    PubMed

    Weston-Bell, Nicola J; Cowan, Graeme; Sahota, Surinder S

    2017-01-01

    Methods for tracking B-cell repertoires and clonal history in normal and malignant B-cells based on immunoglobulin variable region (IGV) gene analysis have developed rapidly with the advent of massive parallel next-generation sequencing (mpNGS) protocols. mpNGS permits a depth of analysis of IGV genes not hitherto feasible, and presents challenges of bioinformatics analysis, which can be readily met by current pipelines. This strategy offers a potential resolution of B-cell usage at a depth that may capture fully the natural state, in a given biological setting. Conventional methods based on RT-PCR amplification and Sanger sequencing are also available where mpNGS is not accessible. Each method offers distinct advantages. Conventional methods for IGV gene sequencing are readily adaptable to most laboratories and provide an ease of analysis to capture salient features of B-cell use. This chapter describes two methods in detail for analysis of IGV genes, mpNGS and conventional RT-PCR with Sanger sequencing.

  17. The need for a usable assessment tool to analyse the efficacy of emergency care systems in developing countries: proposal to use the TEWS methodology.

    PubMed

    Sun, Jared H; Twomey, Michele; Tran, Jeffrey; Wallis, Lee A

    2012-11-01

    Ninety percent of emergency incidents occur in developing countries, and this burden is only expected to grow as these nations develop. As a result, governments in developing countries are establishing emergency care systems. However, there is currently no widely usable, objective method to monitor or research the rapid growth of emergency care in the developing world. Here we analyse current quantitative methods for assessing emergency care in developing countries and propose a more appropriate method. Currently accepted methods for quantitatively assessing the efficacy of emergency care systems cannot be applied in most developing countries because of weak record-keeping infrastructure and the inappropriateness of applying Western-derived coefficients to developing-country conditions. As a result, although emergency care in the developing world is growing rapidly, researchers and clinicians are unable to objectively measure its progress or determine which policies work best in their respective countries. We propose the TEWS methodology, a simple analytical tool that can be handled by low-resource developing countries. By relying on the most basic universal parameters, the simplest calculations and a straightforward protocol, the TEWS methodology allows for widespread analysis of emergency care in the developing world. This could become essential in the establishment and growth of new emergency care systems worldwide.

  18. Atomistic cluster alignment method for local order mining in liquids and glasses

    NASA Astrophysics Data System (ADS)

    Fang, X. W.; Wang, C. Z.; Yao, Y. X.; Ding, Z. J.; Ho, K. M.

    2010-11-01

    An atomistic cluster alignment method is developed to identify and characterize the local atomic structural order in liquids and glasses. With the “order mining” idea for structurally disordered systems, the method can detect the presence of any type of local order in the system and can quantify the structural similarity between a given set of templates and the aligned clusters in a systematic and unbiased manner. Moreover, population analysis can also be carried out for various types of clusters in the system. The advantages of the method in comparison with other previously developed analysis methods are illustrated by performing the structural analysis for four prototype systems (i.e., pure Al, pure Zr, Zr35Cu65 , and Zr36Ni64 ). The results show that the cluster alignment method can identify various types of short-range orders (SROs) in these systems correctly while some of these SROs are difficult to capture by most of the currently available analysis methods (e.g., Voronoi tessellation method). Such a full three-dimensional atomistic analysis method is generic and can be applied to describe the magnitude and nature of noncrystalline ordering in many disordered systems.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    I. W. Ginsberg

    Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analyses consist of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.
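    The two fingerprinting routes being compared can be sketched as follows; the signature is synthetic, and the filter scales and wavelet choice (db2, four levels) are illustrative assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(4)
spectrum = np.cumsum(rng.normal(size=1024))   # stand-in hyperspectral signature

# Current method: convolution with first-derivative Gaussians at several scales.
conv_fingerprint = [gaussian_filter1d(spectrum, sigma=s, order=1)
                    for s in (2, 4, 8, 16)]

# Wavelet route: detail coefficients of a multilevel DWT play the same role,
# at O(n) total cost rather than one full convolution per scale.
coeffs = pywt.wavedec(spectrum, "db2", level=4)
wavelet_fingerprint = coeffs[1:]              # detail bands, coarse to fine
```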

  20. Experimental setup for the measurement of induction motor cage currents

    NASA Astrophysics Data System (ADS)

    Bottauscio, Oriano; Chiampi, Mario; Donadio, Lorenzo; Zucca, Mauro

    2005-04-01

    An experimental setup for measuring the currents flowing in the rotor bars of induction motors during synchronous no-load tests is described in this paper. Experimental verification of the high-frequency phenomena in the rotor cage is fundamental for deeper insight into additional-loss estimation by numerical methods. Attention is mainly focused on the analysis and design of the transducers developed for the cage current measurement.

  1. Dual-Energy CT: New Horizon in Medical Imaging

    PubMed Central

    Goo, Jin Mo

    2017-01-01

    Dual-energy CT has remained underutilized over the past decade probably due to a cumbersome workflow issue and current technical limitations. Clinical radiologists should be made aware of the potential clinical benefits of dual-energy CT over single-energy CT. To accomplish this aim, the basic principle, current acquisition methods with advantages and disadvantages, and various material-specific imaging methods as clinical applications of dual-energy CT should be addressed in detail. Current dual-energy CT acquisition methods include dual tubes with or without beam filtration, rapid voltage switching, dual-layer detector, split filter technique, and sequential scanning. Dual-energy material-specific imaging methods include virtual monoenergetic or monochromatic imaging, effective atomic number map, virtual non-contrast or unenhanced imaging, virtual non-calcium imaging, iodine map, inhaled xenon map, uric acid imaging, automatic bone removal, and lung vessels analysis. In this review, we focus on dual-energy CT imaging including related issues of radiation exposure to patients, scanning and post-processing options, and potential clinical benefits mainly to improve the understanding of clinical radiologists and thus, expand the clinical use of dual-energy CT; in addition, we briefly describe the current technical limitations of dual-energy CT and the current developments of photon-counting detector. PMID:28670151

  2. Analysis of microstrip patch antennas using finite difference time domain method

    NASA Astrophysics Data System (ADS)

    Reineix, Alain; Jecko, Bernard

    1989-11-01

    The study of microstrip patch antennas is treated directly in the time domain, using a modified finite-difference time-domain (FDTD) method. Assuming an appropriate choice of excitation, the frequency dependence of the relevant parameters can readily be found from the Fourier transform of the transient current. The FDTD method allows a rigorous treatment of one or several dielectric interfaces. Different types of excitation can be taken into consideration (coaxial, microstrip lines, etc.). Plotting the spatial distribution of the current density gives information about the resonance modes. The usual frequency-dependent parameters (input impedance, radiation pattern) are given for several examples.
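    A minimal 1D FDTD loop illustrates the leapfrog updates and the FFT of a transient record used to recover frequency-dependent quantities; the paper's solver is 3D with dielectric interfaces and feed models, all omitted here (normalized units, Courant number 1, reflecting boundaries assumed).

```python
import numpy as np

nz, nt = 400, 2000
ez = np.zeros(nz)                 # electric field on integer grid points
hy = np.zeros(nz - 1)             # magnetic field on half grid points
probe = np.empty(nt)

for n in range(nt):
    hy += np.diff(ez)             # H leapfrog update
    ez[1:-1] += np.diff(hy)       # E leapfrog update (ends stay 0: PEC walls)
    ez[50] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source
    probe[n] = ez[300]            # transient at an observation point

spectrum = np.abs(np.fft.rfft(probe))   # broadband response from a single run
```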

  3. Defining Support Requirements During Conceptual Design of Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Morris, W. D.; White, N. H.; Davis, W. T.; Ebeling, C. E.

    1995-01-01

    Current methods for defining the operational support requirements of new systems are data intensive and require significant design information. Methods are being developed to aid in the analysis process of defining support requirements for new launch vehicles during their conceptual design phase that work with the level of information available during this phase. These methods will provide support assessments based on the vehicle design and the operating scenarios. The results can be used both to define expected support requirements for new launch vehicle designs and to help evaluate the benefits of using new technologies. This paper describes the models, their current status, and provides examples of their use.

  4. Comparison of Exoelectrogenic Bacteria Detected Using Two Different Methods: U-tube Microbial Fuel Cell and Plating Method

    PubMed Central

    Yu, Jaecheul; Cho, Sunja; Kim, Sunah; Cho, Haein; Lee, Taeho

    2012-01-01

    In a microbial fuel cell (MFC), exoelectrogens, which transfer electrons to the electrode, have been regarded as a key factor for electricity generation. In this study, U-tube MFC and plating methods were used to isolate exoelectrogens from the anode of an MFC. Disparate microorganisms were identified depending on isolation methods, despite the use of an identical source. Denaturing gel gradient electrophoresis (DGGE) analysis showed that certain microorganisms became dominant in the U-tube MFC. The predominant bacterium was similar to Ochrobactrum sp., belonging to the Alphaproteobacteria, which was shown to be able to function as an exoelectrogen in a previous study. Three isolates, one affiliated with Bacillus sp. and two with Paenibacillus sp., were identified using the plating method, which belonged to the Gram-positive bacteria, the Firmicutes. The U-tube MFCs were inoculated with the three isolates using the plating method, operated in the batch mode and the current was monitored. All of the U-tube MFCs inoculated with each isolate after isolation from plates produced lower current (peak current density: 3.6–16.3 mA/m2) than those in U-tube MFCs with mixed culture (48.3–62.6 mA/m2). Although the isolates produced low currents, various bacterial groups were found to be involved in current production. PMID:22129603

  5. Analysis of Electric Vehicle DC High Current Conversion Technology

    NASA Astrophysics Data System (ADS)

    Yang, Jing; Bai, Jing-fen; Lin, Fan-tao; Lu, Da

    2017-05-01

    Against the background of electric vehicles, this paper explains the need for accurate electric energy metering of EV power batteries and analyzes the batteries' charging and discharging characteristics. A DC high-current converter is needed to achieve accurate calibration of power-battery energy metering. Several measurement methods based on shunts and on the magnetic induction principle are analyzed in detail. A charge/discharge calibration system for power batteries is proposed, and the ripple and harmonic content on the AC and DC sides of the batteries are simulated and analyzed. By comparing the different measurement principles, suitable DC high-current measurement methods for power batteries are proposed, and an outlook on DC high-current measurement techniques is given.
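    A toy sketch of the shunt route and the ripple figure mentioned above; the shunt rating, waveform, and sampling are invented for illustration.

```python
import numpy as np

fs = 10_000.0
t = np.arange(0, 1.0, 1 / fs)

R_SHUNT = 0.75e-3                 # e.g. a 75 mV / 100 A class shunt (assumed)
v_shunt = 0.075 * (1 + 0.02 * np.sin(2 * np.pi * 100 * t))  # measured voltage
i = v_shunt / R_SHUNT             # Ohm's law gives the battery current

i_dc = i.mean()
ripple_factor = np.sqrt(np.mean((i - i_dc) ** 2)) / i_dc    # RMS ripple / DC
print(f"I_dc = {i_dc:.1f} A, ripple = {100 * ripple_factor:.2f}%")
```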

  6. Firefighting and Emergency Response Study of Advanced Composites Aircraft. Objective 2: Firefighting Effectiveness of Technologies and Agents on Composite Aircraft Fires

    DTIC Science & Technology

    2011-12-31

    current methods used for aluminum-skinned aircraft. To this end, a series of medium-scale fire experiments were performed on aerospace composite materials. [Table-of-contents fragments omitted; the report's sections include History; Methods, Assumptions; Agent Cost Analysis; and Conclusions.]

  7. 77 FR 14814 - Tobacco Product Analysis; Scientific Workshop; Request for Comments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-13

    ... work to develop tobacco reference products that are not currently available for laboratory use. Discuss... methods used to analyze tobacco products. FDA will invite speakers to address scientific and technical matters relating to the testing of tobacco reference products and the analytical methods used to measure...

  8. The SQL Server Database for Non Computer Professional Teaching Reform

    ERIC Educational Resources Information Center

    Liu, Xiangwei

    2012-01-01

    This paper summarizes the teaching methods of the SQL Server database course for non-computer majors and analyzes the current state of the course. Based on the characteristics of the curriculum for non-computer majors, it puts forward several teaching reform methods and puts them into practice, improving students' analysis ability, practical ability and…

  9. The Context Oriented Training Method.

    ERIC Educational Resources Information Center

    Cavrini, Andrea

    The Context Oriented Training (COT) method is introduced and explored in this paper. COT is a means of improving the training process, beginning with the observation and analysis of current corporate experiences in the field. The learning context lies between the development of professional competencies in training and the operational side in the…

  10. Advanced bridge safety initiative : recommended practices for live load testing of existing flat-slab concrete bridges - task 5.

    DOT National Transportation Integrated Search

    2012-12-01

    Current AASHTO provisions for load rating flat-slab concrete bridges use the equivalent strip width method, which is regarded as overly conservative compared to more advanced analysis methods and field live load testing. It has been shown that li...

  11. Migration monitoring with automated technology

    Treesearch

    Rhonda L. Millikin

    2005-01-01

    Automated technology can supplement ground-based methods of migration monitoring by providing: (1) unbiased and automated sampling; (2) independent validation of current methods; (3) a larger sample area for landscape-level analysis of habitat selection for stopover, and (4) an opportunity to study flight behavior. In particular, radar-acoustic sensor fusion can...

  12. Primary Geography Education in China: Past, Current and Future

    ERIC Educational Resources Information Center

    Xuan, Xiaowei; Duan, Yushan; Sun, Yue

    2015-01-01

    In China, geography education in primary schools (grades 1 to 6) has not been emphasized, although some scholars have done research in this area. In order to deepen the understanding of primary geography education in China, this paper examines its history, current situation, and future trends. The authors used the method of document analysis and…

  13. Protein arginine methylation: Cellular functions and methods of analysis.

    PubMed

    Pahlich, Steffen; Zakaryan, Rouzanna P; Gehring, Heinz

    2006-12-01

    During the last few years, new members of the growing family of protein arginine methyltransferases (PRMTs) have been identified and the role of arginine methylation in manifold cellular processes like signaling, RNA processing, transcription, and subcellular transport has been extensively investigated. In this review, we describe recent methods and findings that have yielded new insights into the cellular functions of arginine-methylated proteins, and we evaluate the currently used procedures for the detection and analysis of arginine methylation.

  14. A comparison of carbon stock estimates and projections for the northeastern United States

    Treesearch

    Richard G. MacLean; Mark J. Ducey; Coeli M. Hoover

    2014-01-01

    We conducted a comparison of carbon stock estimates produced by three different methods using regional data from the USDA Forest Service Forest Inventory and Analysis (FIA). Two methods incorporated by the Forest Vegetation Simulator (FVS) were compared to each other and to the current FIA component ratio method. We also examined the uncalibrated performance of FVS...

  15. Global flowfield about the V-22 Tiltrotor Aircraft

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    1995-01-01

    The Chimera overset grid method is reviewed and discussed in the context of a method of solution and analysis of unsteady three-dimensional viscous flows. The state of maturity of the various pieces of support software required to use the approach is discussed. A variety of recent applications of the method is presented. Current limitations of the approach are identified.

  16. A Noncentral "t" Regression Model for Meta-Analysis

    ERIC Educational Resources Information Center

    Camilli, Gregory; de la Torre, Jimmy; Chiu, Chia-Yi

    2010-01-01

    In this article, three multilevel models for meta-analysis are examined. Hedges and Olkin suggested that effect sizes follow a noncentral "t" distribution and proposed several approximate methods. Raudenbush and Bryk further refined this model; however, this procedure is based on a normal approximation. In the current research literature, this…

  17. Transfer path analysis: Current practice, trade-offs and consideration of damping

    NASA Astrophysics Data System (ADS)

    Oktav, Akın; Yılmaz, Çetin; Anlaş, Günay

    2017-02-01

    Current practice of experimental transfer path analysis is discussed in the context of trade-offs between accuracy and time cost. An overview of methods proposing solutions for structure-borne noise is given, and the assumptions, drawbacks and advantages of the methods are stated theoretically. The applicability of the methods is also investigated, taking the engine-induced structure-borne noise of an automobile as a reference problem. For this particular problem, sources of measurement error, processing operations that affect results and physical obstacles faced in application are analysed. While an operational measurement is common to all of the stated methods, they differ in whether the source must be removed and whether an external excitation is needed. Depending on the chosen method, the promised outcomes, such as independent characterisation of the source or information about the mounts, also differ. Although many aspects of the problem are reported in the literature, damping and its effects are not considered. The damping effect is embedded in the measured complex frequency response functions and needs to be analysed in the post-processing step. The effects of damping, the reasons for them and methods to analyse them are discussed in detail. In this regard, a new procedure that increases the accuracy of results is also proposed.
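    The classical matrix-inversion step shared by several of the discussed methods can be sketched at a single frequency line as below; the FRFs and responses are random stand-ins for measured data.

```python
import numpy as np

rng = np.random.default_rng(5)
n_ind, n_paths = 12, 6            # indicator sensors, structural paths

# Measured FRFs from path forces to indicator responses (complex, one line).
H_if = rng.normal(size=(n_ind, n_paths)) + 1j * rng.normal(size=(n_ind, n_paths))
f_true = rng.normal(size=n_paths) + 1j * rng.normal(size=n_paths)
a_ind = H_if @ f_true             # operational indicator responses

# Least-squares (pseudo-inverse) identification of the operational forces.
f_est = np.linalg.pinv(H_if) @ a_ind

# Synthesize per-path contributions to the target via path-to-target FRFs.
H_pf = rng.normal(size=n_paths) + 1j * rng.normal(size=n_paths)
contributions = H_pf * f_est
total_response = contributions.sum()
```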

  18. New approaches to wipe sampling methods for antineoplastic and other hazardous drugs in healthcare settings.

    PubMed

    Connor, Thomas H; Smith, Jerome P

    2016-09-01

    At the present time, the method of choice to determine surface contamination of the workplace with antineoplastic and other hazardous drugs is surface wipe sampling and subsequent sample analysis with a variety of analytical techniques. The purpose of this article is to review current methodology for determining the level of surface contamination with hazardous drugs in healthcare settings and to discuss recent advances in this area. In addition it provides some guidance for conducting surface wipe sampling and sample analysis for these drugs in healthcare settings. Published studies on the use of wipe sampling to measure hazardous drugs on surfaces in healthcare settings were reviewed. These studies include the use of well-documented chromatographic techniques for sample analysis in addition to newly evolving technology that provides rapid analysis of specific antineoplastic drugs. Methodology for the analysis of surface wipe samples for hazardous drugs is reviewed, including the purposes, technical factors, sampling strategy, materials required, and limitations. The use of lateral flow immunoassay (LFIA) and fluorescence covalent microbead immunosorbent assay (FCMIA) for surface wipe sample evaluation is also discussed. Current recommendations are that all healthcare settings where antineoplastic and other hazardous drugs are handled include surface wipe sampling as part of a comprehensive hazardous drug safe-handling program. Surface wipe sampling may be used as a method to characterize potential occupational dermal exposure risk and to evaluate the effectiveness of implemented controls and the overall safety program. New technology, although currently limited in scope, may make wipe sampling for hazardous drugs more routine, less costly, and faster in response time than the classical analytical techniques now in use.

  19. Optical Design And Analysis Of Carbon Dioxide Laser Fusion Systems Using Interferometry And Fast Fourier Transform Techniques

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. K.

    1980-11-01

    The optical design and analysis of the LASL carbon dioxide laser fusion systems required techniques quite different from the methods currently used in conventional optical design problems. The reasons for this are explored, and the method that has been successfully used at Los Alamos to understand these systems is discussed with examples. This method involves characterizing the various optical components in their mounts by a Zernike polynomial set and using fast Fourier transform techniques to propagate the beam, taking into account diffraction and other nonlinear effects that occur in these types of systems. The various programs used for analysis are briefly discussed.
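    A hedged sketch of the FFT propagation step (a single Fresnel transfer-function hop in vacuum); a real chain model would add the Zernike-described component aberrations and the nonlinear effects the paper mentions.

```python
import numpy as np

n, dx = 256, 1e-4                 # grid points and spacing in meters (assumed)
wavelength, z = 10.6e-6, 5.0      # CO2 laser line, propagation distance

x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
field = np.exp(-(X**2 + Y**2) / (5e-3) ** 2)    # Gaussian beam, 5 mm waist

fx = np.fft.fftfreq(n, dx)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))  # Fresnel kernel

out = np.fft.ifft2(np.fft.fft2(field) * H)      # propagated complex field
intensity = np.abs(out) ** 2
```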

  20. Competitive intelligence and patent analysis in drug discovery.

    PubMed

    Grandjean, Nicolas; Charpiot, Brigitte; Pena, Carlos Andres; Peitsch, Manuel C

    2005-01-01

    Patents are a major source of information in drug discovery and, when properly processed and analyzed, can yield a wealth of information on competitors' activities, R&D trends, emerging fields, and collaborations, among others. This review discusses the current state of the art in textual data analysis and exploration methods as applied to patent analysis. © 2005 Elsevier Ltd. All rights reserved.

  1. A Review of Issues Related to Data Acquisition and Analysis in EEG/MEG Studies

    PubMed Central

    Puce, Aina; Hämäläinen, Matti S.

    2017-01-01

    Electroencephalography (EEG) and magnetoencephalography (MEG) are non-invasive electrophysiological methods, which record electric potentials and magnetic fields due to electric currents in synchronously-active neurons. With MEG being more sensitive to neural activity from tangential currents and EEG being able to detect both radial and tangential sources, the two methods are complementary. Over the years, neurophysiological studies have changed considerably: high-density recordings are becoming de rigueur; there is interest in both spontaneous and evoked activity; and sophisticated artifact detection and removal methods are available. Improved head models for source estimation have also increased the precision of the current estimates, particularly for EEG and combined EEG/MEG. Because of their complementarity, more investigators are beginning to perform simultaneous EEG/MEG studies to gain more complete information about neural activity. Given the increase in methodological complexity in EEG/MEG, it is important to gather data that are of high quality and that are as artifact free as possible. Here, we discuss some issues in data acquisition and analysis of EEG and MEG data. Practical considerations for different types of EEG and MEG studies are also discussed. PMID:28561761

  2. Adaptive Sampling using Support Vector Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. Mandelli; C. Smith

    2012-11-01

    Reliability/safety analysis of stochastic dynamic systems (e.g., nuclear power plants, airplanes, chemical plants) is currently performed through a combination of Event-Trees and Fault-Trees. However, these conventional methods suffer from certain drawbacks:
    • Timing of events is not explicitly modeled
    • Ordering of events is preset by the analyst
    • The modeling of complex accident scenarios is driven by expert judgment
    For these reasons, there is currently an increasing interest in the development of dynamic PRA methodologies, since they can be used to address the deficiencies of conventional methods listed above.

  3. Increasing conclusiveness of clinical breath analysis by improved baseline correction of multi capillary column - ion mobility spectrometry (MCC-IMS) data.

    PubMed

    Szymańska, Ewa; Tinnevelt, Gerjen H; Brodrick, Emma; Williams, Mark; Davies, Antony N; van Manen, Henk-Jan; Buydens, Lutgarde M C

    2016-08-05

    Current challenges of clinical breath analysis include large data size and non-clinically relevant variations observed in exhaled breath measurements, which should be urgently addressed with competent scientific data tools. In this study, three different baseline correction methods are evaluated within a previously developed data size reduction strategy for multi capillary column - ion mobility spectrometry (MCC-IMS) datasets. Introduced for the first time in breath data analysis, the Top-hat method is presented as the optimum baseline correction method. A refined data size reduction strategy is employed in the analysis of a large breathomic dataset on a healthy and respiratory disease population. New insights into MCC-IMS spectra differences associated with respiratory diseases are provided, demonstrating the additional value of the refined data analysis strategy in clinical breath analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
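
    The top-hat baseline correction named above is a standard morphological operation: subtract the grey-scale opening of the spectrum from the spectrum itself. A minimal sketch on a synthetic drifting baseline follows; the structuring-element width of 201 samples and the peak shapes are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import grey_opening

# synthetic spectrum: two narrow peaks riding on a slowly varying baseline
x = np.linspace(0, 10, 2000)
baseline = 2.0 + 0.3 * x
peaks = np.exp(-((x - 3) / 0.05)**2) + 0.7 * np.exp(-((x - 7) / 0.08)**2)
signal = baseline + peaks

# top-hat = signal minus its morphological opening; the structuring element
# must be wider than the peaks but narrower than the baseline drift
corrected = signal - grey_opening(signal, size=201)
print("residual baseline after correction:", corrected[:100].mean())
```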

  4. Moment method analysis of linearly tapered slot antennas

    NASA Technical Reports Server (NTRS)

    Koeksal, Adnan

    1993-01-01

    A method of moments (MOM) model for the analysis of the Linearly Tapered Slot Antenna (LTSA) is developed and implemented. The model employs unequal-size rectangular sectioning for the conducting parts of the antenna. Piecewise sinusoidal basis functions are used for the expansion of the conductor current. The effect of the dielectric is incorporated in the model by using an equivalent volume polarization current density and solving the equivalent problem in free space. The feed section of the antenna, including the microstripline, is handled rigorously in the MOM model by including the slotline short circuit and microstripline currents among the unknowns. Comparison with measurements is made to demonstrate the validity of the model for both the air case and the dielectric case. The validity of the model is also verified by extending it to the analysis of the skew-plate antenna and comparing the results with those of a skew-segmentation model of the same structure and with available data in the literature. Variation of the radiation pattern of the air LTSA with length, height, and taper angle is investigated, and the results are tabulated. Numerical results for the effect of the dielectric thickness and permittivity are presented.

  5. Effect of interjunction coupling on superconducting current and charge correlations in intrinsic Josephson junctions

    NASA Astrophysics Data System (ADS)

    Shukrinov, Yu. M.; Hamdipour, M.; Kolahchi, M. R.

    2009-07-01

    Charge formations on superconducting layers and the creation of the longitudinal plasma wave in a stack of intrinsic Josephson junctions crucially change the superconducting current through the stack. Investigation of the correlations of superconducting currents in neighboring Josephson junctions and of the charge correlations in neighboring superconducting layers allows us to predict additional features in the current-voltage characteristics. The charge autocorrelation functions clearly demonstrate the difference between harmonic and chaotic behavior in the breakpoint region. Use of the correlation functions gives us a powerful method for the analysis of the current-voltage characteristics of coupled Josephson junctions.
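
    A minimal sketch of how an autocorrelation function separates harmonic from chaotic behaviour, with a sine wave and a logistic map standing in for the junction dynamics (both signals are illustrative, not simulated junction currents):

```python
import numpy as np

def autocorr(x):
    """Normalized autocorrelation at all non-negative lags."""
    x = x - x.mean()
    c = np.correlate(x, x, mode="full")[x.size - 1:]
    return c / c[0]

t = np.linspace(0, 100, 5000)
harmonic = np.sin(2 * np.pi * 0.5 * t)
chaotic = np.empty(5000)
chaotic[0] = 0.4
for i in range(1, 5000):              # logistic map, fully chaotic at r = 4
    chaotic[i] = 4.0 * chaotic[i - 1] * (1.0 - chaotic[i - 1])

# the harmonic signal keeps |autocorrelation| near 1 at multiples of its
# period, while the chaotic one collapses toward zero after a few lags
print(autocorr(harmonic)[100], autocorr(chaotic)[100])
```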

  6. An Excel‐based implementation of the spectral method of action potential alternans analysis

    PubMed Central

    Pearman, Charles M.

    2014-01-01

    Abstract Action potential (AP) alternans has been well established as a mechanism of arrhythmogenesis and sudden cardiac death. Proper interpretation of AP alternans requires a robust method of alternans quantification. Traditional methods of alternans analysis neglect higher order periodicities that may have greater pro‐arrhythmic potential than classical 2:1 alternans. The spectral method of alternans analysis, already widely used in the related study of microvolt T‐wave alternans, has also been used to study AP alternans. Software to meet the specific needs of AP alternans analysis is not currently available in the public domain. An AP analysis tool is implemented here, written in Visual Basic for Applications and using Microsoft Excel as a shell. This performs a sophisticated analysis of alternans behavior allowing reliable distinction of alternans from random fluctuations, quantification of alternans magnitude, and identification of which phases of the AP are most affected. In addition, the spectral method has been adapted to allow detection and quantification of higher order regular oscillations. Analysis of action potential morphology is also performed. A simple user interface enables easy import, analysis, and export of collated results. PMID:25501439
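
    The core of the spectral method can be sketched in a few lines outside Excel/VBA: align the action potentials beat by beat, take the power spectrum along the beat axis at each time point, and compare the power at 0.5 cycles/beat against a spectral noise band. The noise-band limits (0.33-0.48 cycles/beat) follow common T-wave-alternans practice and are an assumption here, not the tool's exact settings.

```python
import numpy as np

def spectral_alternans(beats):
    """beats: (n_beats, n_samples) matrix of beat-aligned action potentials.
    Returns mean alternans power and mean k-score over the AP."""
    n = beats.shape[0]
    spec = np.abs(np.fft.rfft(beats - beats.mean(0), axis=0))**2 / n
    freqs = np.fft.rfftfreq(n)                     # cycles per beat
    alt = spec[np.argmin(np.abs(freqs - 0.5))]     # power at 0.5 cycles/beat
    noise = spec[(freqs > 0.33) & (freqs < 0.48)]  # spectral noise band
    k = (alt - noise.mean(0)) / noise.std(0)       # k-score per time point
    return alt.mean(), k.mean()

# demo: 64 beats with a 5% alternating component plus noise
rng = np.random.default_rng(1)
base = np.exp(-np.linspace(0, 8, 200))             # crude AP shape
beats = (1 + 0.05 * (-1.0) ** np.arange(64))[:, None] * base \
        + 0.01 * rng.normal(size=(64, 200))
print(spectral_alternans(beats))
```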

  7. Identification and Analysis of National Airspace System Resource Constraints

    NASA Technical Reports Server (NTRS)

    Smith, Jeremy C.; Marien, Ty V.; Viken, Jeffery K.; Neitzke, Kurt W.; Kwa, Tech-Seng; Dollyhigh, Samuel M.; Fenbert, James W.; Hinze, Nicolas K.

    2015-01-01

    This analysis is the deliverable for the Airspace Systems Program, Systems Analysis Integration and Evaluation Project Milestone for the Systems and Portfolio Analysis (SPA) focus area SPA.4.06 Identification and Analysis of National Airspace System (NAS) Resource Constraints and Mitigation Strategies. "Identify choke points in the current and future NAS. Choke points refer to any areas in the en route, terminal, oceanic, airport, and surface operations that constrain actual demand in current and projected future operations. Use the Common Scenarios based on Transportation Systems Analysis Model (TSAM) projections of future demand developed under SPA.4.04 Tools, Methods and Scenarios Development. Analyze causes, including operational and physical constraints." The NASA analysis is complementary to a NASA Research Announcement (NRA) "Development of Tools and Analysis to Evaluate Choke Points in the National Airspace System" Contract # NNA3AB95C awarded to Logistics Management Institute, Sept 2013.

  8. Circular current loops, magnetic dipoles and spherical harmonic analysis.

    USGS Publications Warehouse

    Alldredge, L.R.

    1980-01-01

    Spherical harmonic analysis (SHA) is the most used method of describing the Earth's magnetic field, even though spherical harmonic coefficients (SHC) almost completely defy interpretation in terms of real sources. Some moderately successful efforts have been made to represent the field in terms of dipoles placed in the core in an effort to have the model come closer to representing real sources. Dipole sources are only a first approximation to the real sources which are thought to be a very complicated network of electrical currents in the core of the Earth. -Author

  9. An improved adaptive weighting function method for State Estimation in Power Systems with VSC-MTDC

    NASA Astrophysics Data System (ADS)

    Zhao, Kun; Yang, Xiaonan; Lang, Yansheng; Song, Xuri; Wang, Minkun; Luo, Yadi; Wu, Lingyun; Liu, Peng

    2017-04-01

    This paper presents an effective approach for state estimation in power systems that include multi-terminal voltage source converter based high voltage direct current (VSC-MTDC), called the improved adaptive weighting function method. The proposed approach is simplified in that the VSC-MTDC system is solved first, followed by the AC system, because the new state estimation method only changes the weights and keeps the matrix dimension unchanged. Accurate and fast convergence of the AC/DC system can be achieved by the adaptive weighting function method, which also provides technical support for the simulation analysis and accurate regulation of AC/DC systems. Both theoretical analysis and numerical tests verify the practicability, validity and convergence of the new method.
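
    The paper's exact weighting function is not reproduced here, but the flavour of an adaptive-weight state estimator can be sketched as weighted least squares on a linear measurement model, with weights re-derived from the normalized residuals at each iteration. The 3-sigma reweighting rule below is an illustrative assumption, not the authors' function.

```python
import numpy as np

def wls_state_estimation(H, z, sigma, n_iter=10):
    """Linear WLS state estimation with an adaptive weight update: weights
    start at 1/sigma^2 and are reduced for measurements whose normalized
    residuals are large (a simple robust reweighting sketch)."""
    W = np.diag(1.0 / sigma**2)
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        x = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
        r = (z - H @ x) / sigma                       # normalized residuals
        w = np.where(np.abs(r) > 3, 1.0 / r**2, 1.0) / sigma**2
        W = np.diag(w)                                # adapted weights
    return x

# demo: four measurements of a 2-state system, one of them corrupted
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
z = np.array([1.02, 0.49, 1.55, 2.48])               # last value is bad
print(wls_state_estimation(H, z, sigma=np.full(4, 0.02)))
```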

  10. The impact of composite AUC estimates on the prediction of systemic exposure in toxicology experiments.

    PubMed

    Sahota, Tarjinder; Danhof, Meindert; Della Pasqua, Oscar

    2015-06-01

    Current toxicity protocols relate measures of systemic exposure (i.e. AUC, Cmax) as obtained by non-compartmental analysis to observed toxicity. A complicating factor in this practice is the potential bias in the estimates defining safe drug exposure. Moreover, it prevents the assessment of variability. The objective of the current investigation was therefore (a) to demonstrate the feasibility of applying nonlinear mixed effects modelling for the evaluation of toxicokinetics and (b) to assess the bias and accuracy in summary measures of systemic exposure for each method. Here, simulation scenarios were evaluated, which mimic toxicology protocols in rodents. To ensure differences in pharmacokinetic properties are accounted for, hypothetical drugs with varying disposition properties were considered. Data analysis was performed using non-compartmental methods and nonlinear mixed effects modelling. Exposure levels were expressed as area under the concentration versus time curve (AUC), peak concentrations (Cmax) and time above a predefined threshold (TAT). Results were then compared with the reference values to assess the bias and precision of parameter estimates. Higher accuracy and precision were observed for model-based estimates (i.e. AUC, Cmax and TAT), irrespective of group or treatment duration, as compared with non-compartmental analysis. Despite the focus of guidelines on establishing safety thresholds for the evaluation of new molecules in humans, current methods neglect uncertainty, lack of precision and bias in parameter estimates. The use of nonlinear mixed effects modelling for the analysis of toxicokinetics provides insight into variability and should be considered for predicting safe exposure in humans.
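
    For reference, the non-compartmental summary measures discussed above (AUC, Cmax, TAT) reduce to simple operations on a concentration-time profile. A sketch with invented sampling times and concentrations; the 5 ng/mL threshold for TAT is also illustrative.

```python
import numpy as np

t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 24.0])     # time [h]
c = np.array([0.0, 12.0, 18.0, 15.0, 9.0, 4.0, 0.5])   # concentration [ng/mL]

auc = ((c[1:] + c[:-1]) / 2 * np.diff(t)).sum()  # linear trapezoidal AUC(0-24)
cmax, tmax = c.max(), t[c.argmax()]
# crude time-above-threshold: counts only intervals whose endpoints both
# exceed the threshold, ignoring the exact crossing times
tat = (((c[1:] > 5) & (c[:-1] > 5)) * np.diff(t)).sum()
print(f"AUC = {auc:.1f} ng*h/mL, Cmax = {cmax} ng/mL at {tmax} h, TAT = {tat} h")
```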

  11. National survey on dose data analysis in computed tomography.

    PubMed

    Heilmaier, Christina; Treier, Reto; Merkle, Elmar Max; Alkhadi, Hatem; Weishaupt, Dominik; Schindera, Sebastian

    2018-05-28

    A nationwide survey was performed assessing current practice of dose data analysis in computed tomography (CT). All radiological departments in Switzerland were asked to participate in the on-line survey composed of 19 questions (16 multiple choice, 3 free text). It consisted of four sections: (1) general information on the department, (2) dose data analysis, (3) use of a dose management software (DMS) and (4) radiation protection activities. In total, 152 of 241 Swiss radiological departments filled in the whole questionnaire (return rate, 63%). Seventy-nine per cent of the departments (n = 120/152) analyse dose data on a regular basis, with considerable heterogeneity in the frequency (1-2 times per year, 45%, n = 54/120; every month, 35%, n = 42/120) and method of analysis. Manual analysis is carried out by 58% (n = 70/120), compared with 42% (n = 50/120) of departments using a DMS. Purchase of a DMS is planned by 43% (n = 30/70) of the departments with manual analysis. Real-time analysis of dose data is performed by 42% (n = 21/50) of the departments with a DMS; however, residents can access the DMS in clinical routine in only 20% (n = 10/50) of the departments. An interdisciplinary dose team, which among other things communicates dose data internally (63%, n = 76/120) and externally, is already implemented in 57% (n = 68/120) of the departments. Swiss radiological departments are committed to radiation safety. However, there is high heterogeneity among them regarding the frequency and method of dose data analysis as well as the use of DMS and radiation protection activities.
    • Swiss radiological departments are committed to and interested in radiation safety, as proven by a 63% return rate of the survey.
    • Seventy-nine per cent of departments analyse dose data on a regular basis, with differences in the frequency and method of analysis: 42% use a dose management software, while 58% currently perform manual dose data analysis. Of the latter, 43% plan to buy a dose management software.
    • Currently, only 25% of the departments add radiation exposure data to the final CT report.

  12. Failure Analysis of CCD Image Sensors Using SQUID and GMR Magnetic Current Imaging

    NASA Technical Reports Server (NTRS)

    Felt, Frederick S.

    2005-01-01

    During electrical testing of a Full Field CCD Image Sensor, electrical shorts were detected on three of six devices. These failures occurred after the parts were soldered to the PCB. Failure analysis was performed to determine the cause and locations of these failures on the devices. After removing the fiber optic faceplate, optical inspection was performed on the CCDs to understand the design and package layout. Optical inspection revealed that the device had a light shield ringing the CCD array. This structure complicated the failure analysis. Alternate methods of analysis were considered, including liquid crystal, light and thermal emission, LT/A, TT/A SQUID, and MP. Of these, SQUID and MP techniques were pursued for further analysis. Magnetoresistive current imaging technology is also discussed and compared to SQUID.

  13. An analysis of the ArcCHECK-MR diode array's performance for ViewRay quality assurance.

    PubMed

    Ellefson, Steven T; Culberson, Wesley S; Bednarz, Bryan P; DeWerd, Larry A; Bayouth, John E

    2017-07-01

    The ArcCHECK-MR diode array utilizes a correction system with a virtual inclinometer to correct the angular response dependencies of the diodes. However, this correction system cannot be applied to measurements on the ViewRay MR-IGRT system due to the virtual inclinometer's incompatibility with the ViewRay's multiple simultaneous beams. Additionally, the ArcCHECK's current correction factors were determined without magnetic field effects taken into account. In the course of performing ViewRay IMRT quality assurance with the ArcCHECK, measurements were observed to be consistently higher than the ViewRay TPS predictions. The goals of this study were to quantify the observed discrepancies and test whether applying the current factors improves the ArcCHECK's accuracy for measurements on the ViewRay. Gamma and frequency analysis were performed on 19 ViewRay patient plans. Ion chamber measurements were performed at a subset of diode locations using a PMMA phantom with the same dimensions as the ArcCHECK. A new method for applying directionally dependent factors utilizing beam information from the ViewRay TPS was developed in order to analyze the current ArcCHECK correction factors. To test the current factors, nine ViewRay plans were altered to be delivered with only a single simultaneous beam and were measured with the ArcCHECK. The current correction factors were applied using both the new and current methods. The new method was also used to apply corrections to the original 19 ViewRay plans. It was found the ArcCHECK systematically reports doses higher than those actually delivered by the ViewRay. Application of the current correction factors by either method did not consistently improve measurement accuracy. As dose deposition and diode response have both been shown to change under the influence of a magnetic field, it can be concluded the current ArcCHECK correction factors are invalid and/or inadequate to correct measurements on the ViewRay system. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  14. Analysis on the hot spot and trend of the foreign assembly building research

    NASA Astrophysics Data System (ADS)

    Bi, Xiaoqing; Luo, Yanbing

    2017-03-01

    First, the paper analyzes research fronts in assembly (prefabricated) building over the past 15 years. The study adopts the method of co-word analysis: a co-word matrix is constructed, converted into a correlation matrix and then into a dissimilarity matrix, and on this basis factor analysis, cluster analysis and multidimensional scaling are used to display the structure of the prefabricated construction field. Finally, the results of the analysis are discussed, and the current foci of foreign research on prefabricated construction are summarized in seven areas: embankment construction, wood construction, bridge construction, crane layout, PCM walls and glass systems, neural-network-based testing, and energy saving and recycling; the future development trend of the field is also forecast.

  15. Reliably detectable flaw size for NDE methods that use calibration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    Probability of detection (POD) analysis is used in assessing reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, where calibration is used. NDE calibration standards have known-size artificial flaws such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors which are used to set instrument sensitivity for detection of real flaws. Real flaws such as cracks and crack-like flaws are desired to be detected using these NDE methods. A reliably detectable crack size is required for safe life analysis of fracture critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses from artificial flaws used in the calibration process to determine reliably detectable flaw size.
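
    A hit/miss POD curve of the kind treated by MIL-HDBK-1823 is commonly modelled as a logistic function of log flaw size; a90 (the size detected with 90% probability) then follows from the fitted parameters. The sketch below uses least squares on invented hit rates for brevity, whereas the handbook's mh1823 software fits by maximum likelihood.

```python
import numpy as np
from scipy.optimize import curve_fit

size = np.array([0.2, 0.3, 0.4, 0.5, 0.7, 1.0, 1.4, 2.0])  # mm (invented)
hits = np.array([0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0, 1.0])  # observed hit rates

def pod(a, mu, sigma):
    """Logistic POD model in log flaw size."""
    return 1.0 / (1.0 + np.exp(-(np.log(a) - mu) / sigma))

(mu, sigma), _ = curve_fit(pod, size, hits, p0=(0.0, 0.5))
a90 = np.exp(mu + sigma * np.log(0.9 / 0.1))   # size with POD = 90%
print(f"a90 ~ {a90:.2f} mm")
```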

  16. Reliably Detectable Flaw Size for NDE Methods that Use Calibration

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used in assessing reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, where calibration is used. NDE calibration standards have known-size artificial flaws such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors which are used to set instrument sensitivity for detection of real flaws. Real flaws such as cracks and crack-like flaws are desired to be detected using these NDE methods. A reliably detectable crack size is required for safe life analysis of fracture critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses from artificial flaws used in the calibration process to determine reliably detectable flaw size.

  17. The U.S. Forest Service's analysis of cumulative effects to wildlife: A study of legal standards, current practice, and ongoing challenges on a National Forest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultz, Courtney A., E-mail: courtney.schultz@colostate.edu

    Cumulative effects analysis (CEA) allows natural resource managers to understand the status of resources in historical context, learn from past management actions, and adapt future activities accordingly. U.S. federal agencies are required to complete CEA as part of environmental impact assessment under the National Environmental Policy Act (NEPA). Past research on CEA as part of NEPA has identified significant deficiencies in CEA practice, suggested methodologies for handling difficult aspects of CEA, and analyzed the rise in litigation over CEA in U.S. courts. This article provides a review of the literature and legal standards related to CEA as it is done under NEPA and then examines current practice on a U.S. National Forest, utilizing qualitative methods in order to provide a detailed understanding of current approaches to CEA. Research objectives were to understand current practice, investigate ongoing challenges, and identify impediments to improvement. Methods included a systematic review of a set of NEPA documents and semi-structured interviews with practitioners, scientists, and members of the public. Findings indicate that the primary challenges associated with CEA include: issues of both geographic and temporal scale of analysis, confusion over the purpose of the requirement, the lack of monitoring data, and problems coordinating and disseminating data. Improved monitoring strategies and programmatic analyses could support improved CEA practice.

  18. Epidemiology of Major Depressive Disorder in Iran: a Systematic Review and Meta-Analysis

    PubMed Central

    Sadeghirad, Behnam; Haghdoost, Ali-Akbar; Amin-Esmaeili, Masoumeh; Ananloo, Esmaeil Shahsavand; Ghaeli, Padideh; Rahimi-Movaghar, Afarin; Talebian, Elham; Pourkhandani, Ali; Noorbala, Ahmad Ali; Barooti, Esmat

    2010-01-01

    Objectives: There are a large number of primary researches on the prevalence of major depressive disorder (MDD) in Iran; however, their findings vary considerably. A systematic review was performed in order to summarize the findings. Methods: Electronic and manual searches in international and Iranian journals were conducted to find relevant studies reporting MDD prevalence. To maximize the sensitivity of the search, the references of relevant papers were also explored. We explored the potential sources of heterogeneity, such as diagnostic tools, gender and other characteristics, using a meta-regression model. The combined mean prevalence rates were calculated for each gender, for studies using each type of instrument, and for each province using meta-analysis methods. Results: Of 44 articles included in the systematic review, 24 reported current prevalence and 20 reported lifetime prevalence of MDD. The overall estimate of the current prevalence of MDD was 4.1% (95% CI: 3.1-5.1). Women were 1.95 (95% CI: 1.55-2.45) times more likely to have MDD. The current prevalence of MDD in urban inhabitants was not significantly different from rural inhabitants. The analysis identified the variation in diagnostic tools as an important source of heterogeneity. Conclusions: Although there is not adequate information on MDD prevalence in some areas of Iran, the overall current prevalence of MDD in the country is high and females are at greater risk of the disease. PMID:21566767
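
    As a reminder of the pooling step, a fixed-effect inverse-variance combination of prevalence estimates looks as follows; the study prevalences and sample sizes are invented, and the paper's actual analysis used meta-analysis and meta-regression models that also handle heterogeneity.

```python
import numpy as np

p = np.array([0.031, 0.045, 0.052, 0.038])  # study prevalences (invented)
n = np.array([1200, 800, 950, 2100])        # study sample sizes (invented)

w = n / (p * (1 - p))                       # inverse-variance weights
pooled = (w * p).sum() / w.sum()
se = np.sqrt(1.0 / w.sum())
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled prevalence = {pooled:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```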

  19. Elaboration and formalization of current scientific knowledge of risks and preventive measures illustrated by colorectal cancer.

    PubMed

    Giorgi, R; Gouvernet, J; Dufour, J; Degoulet, P; Laugier, R; Quilichini, F; Fieschi, M

    2001-01-01

    To present the method used to elaborate and formalize current scientific knowledge to provide physicians with tools, available on the Internet, that enable them to evaluate individual patient risk and give personalized preventive recommendations or early screening measures. The approach suggested in this article is in line with medical procedures based on levels of evidence (evidence-based medicine). A cyclical process for developing recommendations allows us to quickly incorporate current scientific information. At each phase, the analysis is reevaluated by experts in the field collaborating on the project. The information is formalized through the use of levels of evidence and grades of recommendations. The GLIF model is used to implement recommendations for clinical practice guidelines. Incorporating the most current scientific evidence in a cyclical process includes several steps: critical analysis according to the evidence-based medicine method; identification of predictive factors; setting up risk levels; identification of prevention measures; and elaboration of personalized recommendations. The information technology implementation of the clinical practice guideline enables physicians to quickly obtain personalized information for their patients. Cases of colorectal cancer prevention illustrate our approach. Integration of current scientific knowledge is an important process. The delay between the moment new information arrives and the moment the practitioner applies it is thus reduced.

  20. Microencapsulation of Self-healing Concrete Properties

    DTIC Science & Technology

    2012-08-01

    ...and even by forms of organic matter (Ming Qiu Zhang et al 2011). All of these methods are currently undergoing testing and analysis in order to...dimensions at resolution near 0.1 nm. Using a tunneling current applied to a probe tip that is rastered across the surface, the electrons from the...

  1. Properties and behaviour of FAC currents in the inner magnetosphere

    NASA Astrophysics Data System (ADS)

    Yang, Junying; Dunlop, Malcolm; Yang, Yanyan; Xiong, Chao; Lühr, Hermann; Cao, Jinbin; Li, Liuyuan; Ma, Yuduan; Shen, Chao

    2017-04-01

    Cusp, region 1 and 2, and other large-scale field-aligned currents (FACs) are sampled in situ by both the four Cluster spacecraft and the three Swarm spacecraft at different altitudes, separated by a few to several Earth radii, and sometimes simultaneously. Here, the capability of Swarm-Cluster coordination for probing the behaviour of FACs at medium and low orbits is explored. Joint signatures of R1 and R2 FACs (as well as cusp, R0 and NBZ currents) can be found and compared in terms of the magnetic signatures, using multi-spacecraft analysis where possible. Using the Swarm configuration, statistical correlation analysis of the local time variation of R1/R2 FACs can be shown and compared to standard MVA analysis. For context, we identify the associated auroral boundaries through application of a method to determine the FAC intensity gradients in order to interpret and resolve the R1 and R2 FACs. We also explore the relation of R2 FACs to the ring current properties measured in situ.
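
    The standard MVA referred to above is an eigen-decomposition of the magnetic variance matrix; the eigenvector of the smallest eigenvalue estimates the current-sheet normal. A minimal sketch follows, with a synthetic current-sheet crossing standing in for real spacecraft data.

```python
import numpy as np

def minimum_variance_analysis(B):
    """Classic MVA: eigen-decompose the covariance of a magnetic field
    series B (n_samples x 3); the minimum-variance eigenvector estimates
    the current-sheet normal."""
    M = np.cov(B, rowvar=False)           # 3x3 magnetic variance matrix
    eigvals, eigvecs = np.linalg.eigh(M)  # eigenvalues in ascending order
    return eigvals, eigvecs[:, 0]         # minimum-variance direction

# synthetic crossing: reversing x-component, quiet y and z components
t = np.linspace(-1, 1, 400)
rng = np.random.default_rng(2)
B = np.column_stack([np.tanh(t / 0.2),                   # reversing component
                     0.3 + 0.1 * rng.normal(size=400),   # intermediate variance
                     0.05 + 0.01 * rng.normal(size=400)])# minimum variance
vals, n_hat = minimum_variance_analysis(B)
print("estimated normal:", n_hat)   # close to +/- z for this example
```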

  2. Genomics: The Science and Technology Behind the Human Genome Project (by Charles R. Cantor and Cassandra L. Smith)

    NASA Astrophysics Data System (ADS)

    Serra, Reviewed By Martin J.

    2000-01-01

    Genomics is one of the most rapidly expanding areas of science. This book is an outgrowth of a series of lectures given by one of the former heads (CRC) of the Human Genome Initiative. The book is designed to reach a wide audience, from biologists with little chemical or physical science background through engineers, computer scientists, and physicists with little current exposure to the chemical or biological principles of genetics. The text starts with a basic review of the chemical and biological properties of DNA. However, without either a biochemistry background or a supplemental biochemistry text, this chapter and much of the rest of the text would be difficult to digest. The second chapter is designed to put DNA into the context of the larger chromosomal unit. Specialized chromosomal structures and sequences (centromeres, telomeres) are introduced, leading to a section on chromosome organization and purification. The next 4 chapters cover the physical (hybridization, electrophoresis), chemical (polymerase chain reaction), and biological (genetic) techniques that provide the backbone of genomic analysis. These chapters cover in significant detail the fundamental principles underlying each technique and provide a firm background for the remainder of the text. Chapters 7-9 consider the need and methods for the development of physical maps. Chapter 7 primarily discusses chromosomal localization techniques, including in situ hybridization, FISH, and chromosome paintings. The next two chapters focus on the development of libraries and clones. In particular, Chapter 9 considers the limitations of current mapping and clone production. The current state and future of DNA sequencing are covered in the next three chapters. The first considers the current methods of DNA sequencing - especially gel-based methods of analysis, although other possible approaches (mass spectrometry) are introduced. Much of the chapter addresses the limitations of current methods, including analysis of error in sequencing and current bottlenecks in the sequencing effort. The next chapter describes the steps necessary to scale current technologies for the sequencing of entire genomes. Chapter 12 examines alternate methods for DNA sequencing. Initially, methods of single-molecule sequencing and sequencing by microscopy are introduced; the majority of the chapter is devoted to the development of DNA sequencing methods using chip microarrays and hybridization. The remaining chapters (13-15) consider the uses and analysis of DNA sequence information. The initial focus is on the identification of genes. Several examples are given of the use of DNA sequence information for diagnosis of inherited or infectious diseases. The sequence-specific manipulation of DNA is discussed in Chapter 14. The final chapter deals with the implications of large-scale sequencing, from methods for identifying genes and finding errors in DNA sequences to the development of computer algorithms for the interpretation of DNA sequence information. The text figures are black-and-white line drawings that, although clearly done, seem a bit primitive for 1999. While I appreciated the simplicity of the drawings, many students accustomed to more colorful presentations will find them wanting. The four color figures in the center of the text seem an afterthought and add little to the text's clarity. Each chapter has a set of additional reading sources, mostly primary sources.
Often, specialized topics are offset into boxes that provide clarification and amplification without cluttering the text. An appendix includes a list of the Web-based database resources. As an undergraduate instructor who has previously taught biochemistry, molecular biology, and a course on the human genome, I found many interesting tidbits and amplifications throughout the text. I would recommend this book as a text for an advanced undergraduate or beginning graduate course in genomics. Although the text works through several examples of genetic and genome analysis, additional problem/homework sets would need to be developed to ensure student comprehension. The text steers clear of the ethical implications of the Human Genome Initiative and remains true to its subtitle, The Science and Technology Behind the Human Genome Project.

  3. Dynamic Analysis Method for Electromagnetic Artificial Muscle Actuator under PID Control

    NASA Astrophysics Data System (ADS)

    Nakata, Yoshihiro; Ishiguro, Hiroshi; Hirata, Katsuhiro

    We have been studying an interior permanent magnet linear actuator for an artificial muscle. This actuator mainly consists of a mover and a stator. The mover is composed of permanent magnets, magnetic cores and a non-magnetic shaft. The stator is composed of 3-phase coils and a back yoke. In this paper, a dynamic analysis method under PID control is proposed, employing the 3-D finite element method (3-D FEM) to compute the dynamic response and current response when the positioning control is active. The computed results show good agreement with measurements of a prototype.
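
    A minimal sketch of the control side of such an analysis: a discrete PID positioning loop driving a toy one-degree-of-freedom mover model. In the paper the plant force comes from the 3-D FEM solution; here a mass with viscous damping stands in, and all gains and parameters are illustrative assumptions.

```python
# discrete PID positioning loop on a toy mover model (mass + viscous damping)
m, b, dt = 0.05, 0.5, 1e-4          # mass [kg], damping [N*s/m], step [s]
kp, ki, kd = 400.0, 2000.0, 8.0     # PID gains (illustrative)
x = v = integ = prev_err = 0.0
target = 0.01                       # 10 mm position step command

for _ in range(20000):              # 2 s of simulated time
    err = target - x
    integ += err * dt
    force = kp * err + ki * integ + kd * (err - prev_err) / dt
    prev_err = err
    a = (force - b * v) / m         # the FEM thrust model would replace this
    v += a * dt
    x += v * dt

print(f"final position: {x * 1e3:.3f} mm")
```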

  4. Comparative Validation of the Determination of Sofosbuvir in Pharmaceuticals by Several Inexpensive Ecofriendly Chromatographic, Electrophoretic, and Spectrophotometric Methods.

    PubMed

    El-Yazbi, Amira F

    2017-07-01

    Sofosbuvir (SOFO) was approved by the U.S. Food and Drug Administration in 2013 for the treatment of hepatitis C virus infection, with enhanced antiviral potency compared with earlier analogs. Nevertheless, current editions of the pharmacopeias still do not provide any analytical methods for the quantification of SOFO. Thus, rapid, simple, and ecofriendly methods for the routine analysis of commercial formulations of SOFO are desirable. In this study, five accurate methods for the determination of SOFO in pharmaceutical tablets were developed and validated: HPLC, capillary zone electrophoresis, HPTLC, UV spectrophotometry, and derivative spectrometry. The proposed methods proved to be rapid, simple, sensitive, selective, and accurate analytical procedures suitable for the reliable determination of SOFO in pharmaceutical tablets. An analysis of variance test (P > 0.05) indicated no significant differences between the proposed assays. Thus, any of these methods can be used for the routine analysis of SOFO in commercial tablets.
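
    The between-assay comparison reduces to a one-way analysis of variance. A sketch with invented recovery values for some of the assays; a P-value above 0.05 means no significant difference between methods is detected.

```python
from scipy.stats import f_oneway

# one-way ANOVA across assay results for the same batch (values invented)
hplc  = [99.1, 100.4, 98.7, 99.8]
cze   = [98.5, 99.9, 100.2, 99.0]
hptlc = [99.4, 98.8, 100.1, 99.6]
uv    = [99.0, 99.7, 100.3, 98.9]

stat, p = f_oneway(hplc, cze, hptlc, uv)
print(f"F = {stat:.2f}, p = {p:.2f}")  # p > 0.05: methods agree
```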

  5. Analysis of Instantaneous Attractive-Normal Force and Vertical Vibration Control of Combined-Levitation-and-Propulsion SLIM Vehicle

    NASA Astrophysics Data System (ADS)

    Yoshida, Takashi

    A combined-levitation-and-propulsion single-sided linear induction motor (SLIM) vehicle can be levitated without any additional levitation system. When the vehicle runs, the attractive-normal force varies depending on the phase of the primary current because of the short primary end effect. The ripple of the attractive-normal force causes vertical vibration of the vehicle. In this paper, the instantaneous attractive-normal force is analyzed by using the space harmonic analysis method, and based on this analysis a vertical vibration control is proposed. The validity of the proposed control method is verified by numerical simulation.

  6. CORNAS: coverage-dependent RNA-Seq analysis of gene expression data without biological replicates.

    PubMed

    Low, Joel Z B; Khang, Tsung Fei; Tammi, Martti T

    2017-12-28

    In current statistical methods for calling differentially expressed genes in RNA-Seq experiments, the assumption is that an adjusted observed gene count represents an unknown true gene count. This adjustment usually consists of a normalization step to account for heterogeneous sample library sizes, and the resulting normalized gene counts are then used as input for parametric or non-parametric differential gene expression tests. A distribution of true gene counts, each with a different probability, can result in the same observed gene count. Importantly, sequencing coverage information is currently not explicitly incorporated into any of the statistical models used for RNA-Seq analysis. We developed a fast Bayesian method which uses the sequencing coverage information, determined from the concentration of an RNA sample, to estimate the posterior distribution of a true gene count. Our method has performance better than or comparable to that of NOISeq and GFOLD, according to results from simulations and experiments with real unreplicated data. We incorporated a previously unused sequencing coverage parameter into a procedure for differential gene expression analysis with RNA-Seq data. Our results suggest that our method can be used to overcome analytical bottlenecks in experiments with a limited number of replicates and low sequencing coverage. The method is implemented in CORNAS (Coverage-dependent RNA-Seq), and is available at https://github.com/joel-lzb/CORNAS.
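
    To illustrate the idea of a coverage-dependent posterior (not CORNAS's exact model), one can assume the observed count is a binomial thinning of the true count with success probability equal to the coverage, and place a flat prior on the true count:

```python
import numpy as np
from scipy.stats import binom

def true_count_posterior(observed, coverage, k_max=None):
    """Posterior over the true gene count given an observed count, assuming
    observed ~ Binomial(true, coverage) and a flat prior on the true count.
    Illustrative only."""
    k_max = k_max or int(observed / coverage * 3) + 10
    k = np.arange(observed, k_max)          # candidate true counts
    lik = binom.pmf(observed, k, coverage)  # likelihood of the observation
    return k, lik / lik.sum()               # normalized posterior

k, post = true_count_posterior(observed=50, coverage=0.3)
print("posterior mean true count:", (k * post).sum())
```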

  7. Directional spatial frequency analysis of lipid distribution in atherosclerotic plaque

    NASA Astrophysics Data System (ADS)

    Korn, Clyde; Reese, Eric; Shi, Lingyan; Alfano, Robert; Russell, Stewart

    2016-04-01

    Atherosclerosis is characterized by the growth of fibrous plaques due to the retention of cholesterol and lipids within the artery wall, which can lead to vessel occlusion and cardiac events. One way to evaluate arterial disease is to quantify the amount of lipid present in these plaques, since a higher disease burden is characterized by a higher concentration of lipid. That therapeutic stimulation of reverse cholesterol transport to reduce cholesterol deposits in plaque has not produced significant results may be due to current image analysis methods, which use averaging techniques to calculate the total amount of lipid in the plaque without regard to spatial distribution, thereby discarding information that may have significance in marking response to therapy. Here we use Directional Fourier Spatial Frequency (DFSF) analysis to generate a characteristic spatial frequency spectrum for atherosclerotic plaques from C57 Black 6 mice, both treated and untreated with a cholesterol-scavenging nanoparticle. We then use the Cauchy product of these spectra to classify the images with a support vector machine (SVM). Our results indicate that treated plaque can be distinguished from untreated plaque using this method, where no difference is seen using the spatial averaging method. This work has the potential to increase the effectiveness of current in-vivo methods of plaque detection that also use averaging methods, such as laser speckle imaging and Raman spectroscopy.
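
    A directional spatial-frequency spectrum can be sketched by binning 2-D FFT power by orientation and feeding the resulting vectors to an SVM. The sketch below trains on synthetic oriented-stripe versus isotropic-noise images as stand-ins for the two plaque classes, and omits the Cauchy-product step used in the paper.

```python
import numpy as np
from sklearn.svm import SVC

def directional_spectrum(img, n_bins=36):
    """Bin 2-D FFT power by orientation to get a directional spectrum."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(img)))**2
    ny, nx = img.shape
    y, x = np.indices((ny, nx))
    ang = np.arctan2(y - ny // 2, x - nx // 2) % np.pi
    bins = np.clip((ang / np.pi * n_bins).astype(int), 0, n_bins - 1)
    spec = np.bincount(bins.ravel(), weights=F.ravel(), minlength=n_bins)
    return spec / spec.sum()

# synthetic stand-ins: oriented stripes vs isotropic noise
rng = np.random.default_rng(0)
stripes = [np.sin(0.8 * np.arange(64))[:, None] * np.ones((1, 64))
           + 0.3 * rng.normal(size=(64, 64)) for _ in range(20)]
noise = [rng.normal(size=(64, 64)) for _ in range(20)]

X = np.array([directional_spectrum(im) for im in stripes + noise])
y = np.array([1] * 20 + [0] * 20)
clf = SVC(kernel="linear").fit(X, y)
print("training accuracy:", clf.score(X, y))
```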

  8. The Shock and Vibration Digest. Volume 1, Number 12, December 1969.

    DTIC Science & Technology

    Contents: Reviews of meetings; Short courses; Abstracts from the current literature (analysis and design methods, excitation, phenomenology, experimentation, components, systems); Book reviews; Calendar; Author index; Subject index.

  9. Analysis strategies for longitudinal attachment loss data.

    PubMed

    Beck, J D; Elter, J R

    2000-02-01

    The purpose of this invited review is to describe and discuss methods currently in use to quantify the progression of attachment loss in epidemiological studies of periodontal disease, and to make recommendations for specific analytic methods based upon the particular design of the study and the structure of the data. The review concentrates on the definition of incident attachment loss (ALOSS) and its component parts; measurement issues, including thresholds and regression to the mean; methods of accounting for longitudinal change, including changes in means, changes in proportions of affected sites, incidence density, the effect of tooth loss and reversals, and repeated events; statistical models of longitudinal change, including the incorporation of the time element, use of linear, logistic or Poisson regression or survival analysis, and statistical tests; site vs person level of analysis, including statistical adjustment for correlated data; and the strengths and limitations of ALOSS data. Examples from the Piedmont 65+ Dental Study are used to illustrate specific concepts. We conclude that incidence density is the preferred methodology for periodontal studies with more than one period of follow-up, and that analyses not employing methods for dealing with complex samples, correlated data, and repeated measures do not take advantage of our current understanding of the site- and person-level variables important in periodontal disease and may generate biased results.
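
    For concreteness, incidence density is simply the number of events divided by the person-time at risk; the numbers below are invented, not Piedmont 65+ data.

```python
# incidence density = events / person-time at risk (invented numbers)
events = 34                     # e.g., sites progressing past the threshold
person_years = 412.5            # summed follow-up time while at risk
incidence_density = events / person_years
print(f"{incidence_density:.3f} events per person-year")
```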

  10. Assessment of Complement Activation by Nanoparticles: Development of a SPR Based Method and Comparison with Current High Throughput Methods.

    PubMed

    Coty, Jean-Baptiste; Noiray, Magali; Vauthier, Christine

    2018-04-26

    A surface plasmon resonance (SPR) chip was developed to study activation of the complement system triggered by nanomaterials in contact with human serum, an important concern today for warranting the safety of nanomedicines. The developed chip was tested for its specificity in complex medium and its longevity of use. It was then employed to assess the release of complement fragments upon incubation of nanoparticles in serum. A comparison was made with other current methods for assessing complement activation (μC-IE, ELISA). The SPR chip was found to give a consistent response for C3a release upon activation by nanoparticles. Results were similar to those obtained by μC-IE. However, ELISA detection of iC3b fragments showed an unexplained high non-specific background. The impact of sample preparation preceding the analysis was assessed with the newly developed SPR method. Removal of nanoparticles before analysis caused an important modification of the obtained response, possibly leading to false negative results. The SPR chip developed in this work allows for an automated assessment of complement activation triggered by nanoparticles, with the possibility of multiplexed analysis. The design of the chip proved to give consistent results for complement activation by nanoparticles.

  11. Particle sizing of pharmaceutical aerosols via direct imaging of particle settling velocities.

    PubMed

    Fishler, Rami; Verhoeven, Frank; de Kruijf, Wilbur; Sznitman, Josué

    2018-02-15

    We present a novel method for characterizing in near real-time the aerodynamic particle size distributions from pharmaceutical inhalers. The proposed method is based on direct imaging of airborne particles followed by a particle-by-particle measurement of settling velocities using image analysis and particle tracking algorithms. Due to the simplicity of the principle of operation, this method has the potential of circumventing potential biases of current real-time particle analyzers (e.g. Time of Flight analysis), while offering a cost effective solution. The simple device can also be constructed in laboratory settings from off-the-shelf materials for research purposes. To demonstrate the feasibility and robustness of the measurement technique, we have conducted benchmark experiments whereby aerodynamic particle size distributions are obtained from several commercially-available dry powder inhalers (DPIs). Our measurements yield size distributions (i.e. MMAD and GSD) that are closely in line with those obtained from Time of Flight analysis and cascade impactors suggesting that our imaging-based method may embody an attractive methodology for rapid inhaler testing and characterization. In a final step, we discuss some of the ongoing limitations of the current prototype and conceivable routes for improving the technique. Copyright © 2017 Elsevier B.V. All rights reserved.
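
    The conversion from a measured settling velocity to an aerodynamic diameter follows from Stokes settling of a unit-density sphere, v = rho0 * d^2 * g / (18 * eta). The sketch below neglects the Cunningham slip correction, which matters for micron and sub-micron particles, and the velocities are invented.

```python
import numpy as np

def aerodynamic_diameter(v_settle, eta=1.81e-5, rho0=1000.0, g=9.81):
    """Invert Stokes settling, v = rho0 * d^2 * g / (18 * eta), for the
    aerodynamic diameter (unit-density sphere, slip correction neglected)."""
    return np.sqrt(18.0 * eta * v_settle / (rho0 * g))

v = np.array([1.2e-3, 2.7e-3, 7.5e-3])       # measured settling speeds [m/s]
print(aerodynamic_diameter(v) * 1e6, "um")   # aerodynamic diameters [um]
```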

  12. Development of the Expert System Domain Advisor and Analysis Tool

    DTIC Science & Technology

    1991-09-01

    ...analysis. Typical of the current methods in use at this time is the "TAROT metric". This method defines a decision rule whose output is whether to go... B - TAROT METRIC. INTRODUCTION: The system chart of ESEM, Figure 1, shows the following three risk-based decision points: i. At project initiation... B-1. Evaluation Factors for ES Development: FACTORS, POSSIBLE VALUE RATINGS; TAROT metric (overall suitability): Poor, Fair

  13. Transonic propulsion system integration analysis at McDonnell Aircraft Company

    NASA Technical Reports Server (NTRS)

    Cosner, Raymond R.

    1989-01-01

    The technology of Computational Fluid Dynamics (CFD) is becoming an important tool in the development of aircraft propulsion systems. Two of the most valuable features of CFD are: (1) quick acquisition of flow field data; and (2) complete description of flow fields, allowing detailed investigation of interactions. Current analysis methods complement wind tunnel testing in several ways. Herein, the discussion is focused on CFD methods. However, aircraft design studies need data from both CFD and wind tunnel testing. Each approach complements the other.

  14. Computer graphics and cultural heritage, part 2: continuing inspiration for future tools.

    PubMed

    Arnold, David

    2014-01-01

    The availability of large quantities of cultural-heritage data will enable new, previously inconceivable, types of analysis and new applications. Currently, most emerging analysis methods are experimental research. It's likely to take many years before the research matures and provides cultural-heritage professionals with novel research methods that they use routinely. Indeed, we can expect further disruptive technologies to emerge in the foreseeable future and a "steady state" of continuing rapid change. Part 1 can be found at 10.1109/MCG.2014.47.

  15. A rapid hydrolysis method and DABS-Cl derivatization for complete amino acid analysis of octreotide acetate by reversed phase HPLC.

    PubMed

    Akhlaghi, Yousef; Ghaffari, Solmaz; Attar, Hossein; Alamir Hoor, Amir

    2015-11-01

    Octreotide as a synthetic cyclic octapeptide is a somatostatin analog with longer half-life and more selectivity for inhibition of the growth hormone. The acetate salt of octreotide is currently used for medical treatment of somatostatin-related disorders such as endocrine and carcinoid tumors, acromegaly, and gigantism. Octreotide contains both cysteine and tryptophan residues which make the hydrolysis part of its amino acid analysis procedure very challenging. The current paper introduces a fast and additive-free method which preserves tryptophan and cysteine residues during the hydrolysis. Using only 6 M HCl, this hydrolysis process is completed in 30 min at 150 °C. This fast hydrolysis method followed by pre-column derivatization of the released amino acids with 4-N,N-dimethylaminoazobenzene-4'-sulfonyl chloride (DABS-Cl) which takes only 20 min, makes it possible to do the complete amino acid analysis of an octreotide sample in a few hours. The highly stable-colored DABS-Cl derivatives can be detected in 436 nm in a reversed phase chromatographic system, which eliminates spectral interferences to a great extent. The amino acid analysis of octreotide acetate including hydrolysis, derivatization, and reversed phase HPLC determination was validated according to International Conference of Harmonization (ICH) guidelines.

  16. Tertiary structure-based analysis of microRNA–target interactions

    PubMed Central

    Gan, Hin Hark; Gunsalus, Kristin C.

    2013-01-01

    Current computational analysis of microRNA interactions is based largely on primary and secondary structure analysis. Computationally efficient tertiary structure-based methods are needed to enable more realistic modeling of the molecular interactions underlying miRNA-mediated translational repression. We incorporate algorithms for predicting duplex RNA structures, ionic strength effects, duplex entropy and free energy, and docking of duplex–Argonaute protein complexes into a pipeline to model and predict miRNA–target duplex binding energies. To ensure modeling accuracy and computational efficiency, we use an all-atom description of RNA and a continuum description of ionic interactions using the Poisson–Boltzmann equation. Our method predicts the conformations of two constructs of Caenorhabditis elegans let-7 miRNA–target duplexes to an accuracy of ∼3.8 Å root-mean-square distance from their NMR structures. We also show that the computed duplex formation enthalpies, entropies, and free energies for eight miRNA–target duplexes agree with titration calorimetry data. Analysis of duplex–Argonaute docking shows that structural distortions arising from single-base-pair mismatches in the seed region influence the activity of the complex by destabilizing both duplex hybridization and its association with Argonaute. Collectively, these results demonstrate that tertiary structure-based modeling of miRNA interactions can reveal structural mechanisms not accessible with current secondary structure-based methods. PMID:23417009

  17. ICan: An Optimized Ion-Current-Based Quantification Procedure with Enhanced Quantitative Accuracy and Sensitivity in Biomarker Discovery

    PubMed Central

    2015-01-01

    The rapidly expanding availability of high-resolution mass spectrometry has substantially enhanced ion-current-based relative quantification techniques. Despite the increasing interest in ion-current-based methods, quantitative sensitivity, accuracy, and false discovery rate remain the major concerns; consequently, comprehensive evaluation and development in these regards are urgently needed. Here we describe an integrated new procedure for data normalization and protein ratio estimation, termed ICan, for improved ion-current-based analysis of data generated by high-resolution mass spectrometry (MS). ICan achieved significantly better accuracy and precision, and a lower false-positive rate for discovering altered proteins, than current popular pipelines. A spiked-in experiment was used to evaluate the performance of ICan in detecting small changes. In this study E. coli extracts were spiked with moderate-abundance proteins from human plasma (MAP, enriched by an IgY14-SuperMix procedure) at two different levels to set a small change of 1.5-fold. Forty-five (92%, with an average ratio of 1.71 ± 0.13) of 49 identified MAP proteins (i.e., the true positives) and none of the reference proteins (1.0-fold) were determined to be significantly altered, with cutoff thresholds of ≥1.3-fold change and p ≤ 0.05. This is the first study to evaluate and prove competitive performance of the ion-current-based approach for assigning significance to proteins with small changes. By comparison, other methods showed remarkably inferior performance. ICan can be broadly applied to reliable and sensitive proteomic surveys of multiple biological samples with the use of high-resolution MS. Moreover, many key features evaluated and optimized here, such as normalization, protein ratio determination, and statistical analyses, are also valuable for data analysis by isotope-labeling methods. PMID:25285707

  18. A Structural and Correlational Analysis of Two Common Measures of Personal Epistemology

    ERIC Educational Resources Information Center

    Laster, Bonnie Bost

    2010-01-01

    Scope and Method of Study: The current inquiry is a factor analytic study which utilizes first and second order factor analytic methods to examine the internal structures of two measurements of personal epistemological beliefs: the Schommer Epistemological Questionnaire (SEQ) and Epistemic Belief Inventory (EBI). The study also examines the…

  19. The Aims, Methods, and Effects of Deliberative Civic Education through the National Issues Forums.

    ERIC Educational Resources Information Center

    Gastil, John; Dillard, James P.

    1999-01-01

    Examines the goals, methods, and effects of four current deliberative civic education programs, with an in-depth analysis of one: the National Issues Forums (NIF). Shows that NIF can bolster participants' political self-efficacy, refine their political judgments, broaden their political conversation networks, and reduce their conversational…

  20. Calibration of collection procedures for the determination of precipitation chemistry

    Treesearch

    James N. Galloway; Gene E. Likens

    1976-01-01

    Precipitation is currently collected by several methods, including several different designs of collection apparatus. We are investigating these differing methods and designs to determine which gives the most representative sample of precipitation for the analysis of some 25 chemical parameters. The experimental site, located in Ithaca, New York, has 22 collectors of...

  1. A Mixed-Methods Analysis of Achievement Disparities in Guatemalan Primary Schools

    ERIC Educational Resources Information Center

    Meade, Ben

    2012-01-01

    Although most Guatemalan rural students currently have access to primary school, there are large differences in the levels of learning that take place among different populations and in different contexts. This paper uses multiple data and methods to examine the interrelated factors underlying achievement disparities in Guatemalan primary schools.…

  2. Finite element methods and Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Cuvelier, C.; Segal, A.; van Steenhoven, A. A.

    This book is devoted to two- and three-dimensional FEM analysis of the Navier-Stokes (NS) equations describing the flow of a viscous incompressible fluid. Three different approaches to the NS equations are described: a direct method, a penalty method, and a method that constructs discrete solenoidal vector fields. Subjects of current research which are important from the industrial/technological viewpoint are considered, including capillary-free boundaries, nonisothermal flows, turbulence, and non-Newtonian fluids.

  3. Apparatus and method for controlling plating uniformity

    DOEpatents

    Hachman Jr., John T.; Kelly, James J.; West, Alan C.

    2004-10-12

    The use of an insulating shield for improving the current distribution in an electrochemical plating bath is disclosed. Numerical analysis is used to evaluate the influence of shield shape and position on plating uniformity. Simulation results are compared to experimental data for nickel deposition from a nickel-sulfamate bath. The shield is shown to improve the average current density at a plating surface.

  4. The Conceptual Landscape of iSchools: Examining Current Research Interests of Faculty Members

    ERIC Educational Resources Information Center

    Holmberg, Kim

    2013-01-01

    Introduction: This study describes the intellectual landscape of iSchools and examines how the various iSchools map on to these research areas. Method: The primary focus of the data collection process was on faculty members' current research interests as described by the individuals themselves. A co-word analysis of all iSchool faculty…

  5. The technique of entropy optimization in motor current signature analysis and its application in the fault diagnosis of gear transmission

    NASA Astrophysics Data System (ADS)

    Chen, Xiaoguang; Liang, Lin; Liu, Fei; Xu, Guanghua; Luo, Ailing; Zhang, Sicong

    2012-05-01

    Nowadays, Motor Current Signature Analysis (MCSA) is widely used in the fault diagnosis and condition monitoring of machine tools. However, because the current signal has a low SNR (signal-to-noise ratio), it is difficult to identify the feature frequencies of machine tools from the complex current spectrum with traditional signal processing methods such as the FFT, as the feature frequencies are often dense and overlapping. In studying MCSA, it is found that entropy, which is associated with the probability distribution of any random variable, is of importance for frequency identification and therefore plays an important role in signal processing. In order to solve the problem that the feature frequencies are difficult to identify, an entropy optimization technique based on the motor current signal is presented in this paper for extracting the typical feature frequencies of machine tools, which can effectively suppress disturbances. Some simulated current signals were generated in MATLAB, and a current signal was obtained from a complex gearbox of an ironworks in Luxembourg. In the diagnosis, MCSA is combined with entropy optimization. Both simulated and experimental results show that this technique is efficient, accurate and reliable enough to extract the feature frequencies of the current signal, which provides a new strategy for the fault diagnosis and condition monitoring of machine tools.
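
    While the paper's entropy optimization is more involved, the underlying quantity can be sketched as the Shannon entropy of a normalized current power spectrum, which drops when energy concentrates at discrete feature frequencies; the supply and sideband frequencies below are illustrative.

```python
import numpy as np

def spectral_entropy(x):
    """Shannon entropy of the normalized power spectrum: lower values flag
    energy concentrated in a few discrete feature frequencies."""
    psd = np.abs(np.fft.rfft(x - x.mean()))**2
    p = psd / psd.sum()
    return -(p * np.log(p + 1e-12)).sum()

fs = 5000.0
t = np.arange(0, 2, 1 / fs)
line = np.sin(2 * np.pi * 50 * t)            # supply-frequency component
fault = 0.05 * np.sin(2 * np.pi * 137 * t)   # weak sideband feature
noise = 0.5 * np.random.randn(t.size)

print(spectral_entropy(line + fault + noise))  # structured current signal
print(spectral_entropy(noise))                 # pure noise, higher entropy
```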

  6. Sensitivity of BRCA1/2 testing in high-risk breast/ovarian/male breast cancer families: little contribution of comprehensive RNA/NGS panel testing.

    PubMed

    Byers, Helen; Wallis, Yvonne; van Veen, Elke M; Lalloo, Fiona; Reay, Kim; Smith, Philip; Wallace, Andrew J; Bowers, Naomi; Newman, William G; Evans, D Gareth

    2016-11-01

    The sensitivity of testing BRCA1 and BRCA2 remains unresolved as the frequency of deep intronic splicing variants has not been defined in high-risk familial breast/ovarian cancer families. This variant category is reported at significant frequency in other tumour predisposition genes, including NF1 and MSH2. We carried out comprehensive whole gene RNA analysis on 45 high-risk breast/ovary and male breast cancer families with no identified pathogenic variant on exonic sequencing and copy number analysis of BRCA1/2. In addition, we undertook variant screening of a 10-gene high/moderate risk breast/ovarian cancer panel by next-generation sequencing. DNA testing identified the causative variant in 50/56 (89%) breast/ovarian/male breast cancer families with Manchester scores of ≥50 with two variants being confirmed to affect splicing on RNA analysis. RNA sequencing of BRCA1/BRCA2 on 45 individuals from high-risk families identified no deep intronic variants and did not suggest loss of RNA expression as a cause of lost sensitivity. Panel testing in 42 samples identified a known RAD51D variant, a high-risk ATM variant in another breast ovary family and a truncating CHEK2 mutation. Current exonic sequencing and copy number analysis variant detection methods of BRCA1/2 have high sensitivity in high-risk breast/ovarian cancer families. Sequence analysis of RNA does not identify any variants undetected by current analysis of BRCA1/2. However, RNA analysis clarified the pathogenicity of variants of unknown significance detected by current methods. The low diagnostic uplift achieved through sequence analysis of the other known breast/ovarian cancer susceptibility genes indicates that further high-risk genes remain to be identified.

  7. Resilience Metrics for the Electric Power System: A Performance-Based Approach.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vugrin, Eric D.; Castillo, Andrea R; Silva-Monroy, Cesar Augusto

    Grid resilience is a concept related to a power system's ability to continue operating and delivering power even in the event that low probability, high-consequence disruptions such as hurricanes, earthquakes, and cyber-attacks occur. Grid resilience objectives focus on managing and, ideally, minimizing potential consequences that occur as a result of these disruptions. Currently, no formal grid resilience definitions, metrics, or analysis methods have been universally accepted. This document describes an effort to develop and describe grid resilience metrics and analysis methods. The metrics and methods described herein extend upon the Resilience Analysis Process (RAP) developed by Watson et al. for the 2015 Quadrennial Energy Review. The extension allows for both outputs from system models and for historical data to serve as the basis for creating grid resilience metrics and informing grid resilience planning and response decision-making. Demonstration of the metrics and methods is shown through a set of illustrative use cases.

  8. Human Schedule Performance, Protocol Analysis, and the "Silent Dog" Methodology

    ERIC Educational Resources Information Center

    Cabello, Francisco; Luciano, Carmen; Gomez, Inmaculada; Barnes-Holmes, Dermot

    2004-01-01

    The purpose of the current experiment was to investigate the role of private verbal behavior on the operant performances of human adults, using a protocol analysis procedure with additional methodological controls (the "silent dog" method). Twelve subjects were exposed to fixed ratio 8 and differential reinforcement of low rate 3-s schedules. For…

  9. Show the Data, Don't Conceal Them

    ERIC Educational Resources Information Center

    Drummond, Gordon B.; Vowler, Sarah L.

    2011-01-01

    Current standards of data presentation and analysis in biological journals often fall short of ideal. This is the first of a planned series of short articles, to be published in a number of journals, aiming to highlight the principles of clear data presentation and appropriate statistical analysis. This article considers the methods used to show…

  10. Qualitative Research in Distance Education: An Analysis of Journal Literature 2005-2012

    ERIC Educational Resources Information Center

    Hauser, Laura

    2013-01-01

    This review study examines the current research literature in distance education for the years 2005 to 2012. The author found 382 research articles published during that time in four prominent peer-reviewed research journals. The articles were classified and coded as quantitative, qualitative, or mixed methods. Further analysis found another…

  11. The Use of Cognitive Task Analysis to Capture Expertise for Tracheal Extubation Training in Anesthesiology

    ERIC Educational Resources Information Center

    Embrey, Karen K.

    2012-01-01

    Cognitive task analysis (CTA) is a knowledge elicitation technique employed for acquiring expertise from domain specialists to support the effective instruction of novices. CTA guided instruction has proven effective in improving surgical skills training for medical students and surgical residents. The standard, current method of teaching clinical…

  12. Relations among Children's Use of Dialect and Literacy Skills: A Meta-Analysis

    ERIC Educational Resources Information Center

    Gatlin, Brandy; Wanzek, Jeanne

    2015-01-01

    Purpose: The current meta-analysis examines recent empirical research studies that have investigated relations among dialect use and the development and achievement of reading, spelling, and writing skills. Method: Studies published between 1998 and 2014 were selected if they: (a) included participants who were in Grades K-6 and were typically…

  13. A Content Analysis of Nine Literacy Journals, 2009-2014

    ERIC Educational Resources Information Center

    Parsons, Seth A.; Gallagher, Melissa A.

    2016-01-01

    The purpose of this study was to determine the topics being studied, theoretical perspectives being used, and methods being implemented in current literacy research. A research team completed a content analysis of nine journals from 2009 to 2014 to gather data. In the 1,238 articles analyzed, the topics, theoretical perspectives, research designs,…

  14. An Economic Analysis of College Scholarship Policy.

    ERIC Educational Resources Information Center

    Owen, John D.

    A national scholarship policy based on a cost-benefit analysis of the social value of education is proposed as one method for improving current patterns of allocating US college scholarships and tuition funds. A central college subsidy agency, operating on a limited budget, would be required to allocate funds according to the maximum overall…

  15. Consistent Visual Analyses of Intrasubject Data

    ERIC Educational Resources Information Center

    Kahng, SungWoo; Chung, Kyong-Mee; Gutshall, Katharine; Pitts, Steven C.; Kao, Joyce; Girolami, Kelli

    2010-01-01

    Visual inspection of single-case data is the primary method of interpretation of the effects of an independent variable on a dependent variable in applied behavior analysis. The purpose of the current study was to replicate and extend the results of DeProspero and Cohen (1979) by reexamining the consistency of visual analysis across raters. We…

  16. Seismic Hazard Analysis — Quo vadis?

    NASA Astrophysics Data System (ADS)

    Klügel, Jens-Uwe

    2008-05-01

    The paper is dedicated to the review of methods of seismic hazard analysis currently in use, analyzing the strengths and weaknesses of different approaches. The review is performed from the perspective of a user of the results of seismic hazard analysis for different applications, such as the design of critical and general (non-critical) civil infrastructures and technical and financial risk analysis. A set of criteria is developed for, and applied to, an objective assessment of the capabilities of different analysis methods. It is demonstrated that traditional probabilistic seismic hazard analysis (PSHA) methods have significant deficiencies that limit their practical applications. These deficiencies have their roots in the use of inadequate probabilistic models and an insufficient understanding of modern concepts of risk analysis, as revealed in some recent large-scale studies. They result in an inability to treat dependencies between physical parameters correctly and, finally, in an incorrect treatment of uncertainties. As a consequence, the results of PSHA studies have been found to be unrealistic in comparison with empirical information from the real world. The attempt to compensate for these problems by the systematic use of expert elicitation has, so far, not improved the situation. It is also shown that scenario earthquakes developed by disaggregation from the results of a traditional PSHA may not be conservative with respect to energy conservation and should not be used for the design of critical infrastructures without validation. Because the assessment of technical as well as financial risks associated with potential earthquake damage requires a risk analysis, the current method is based on a probabilistic approach, with its unsolved deficiencies. Traditional deterministic or scenario-based seismic hazard analysis methods provide a reliable and generally robust design basis for applications such as the design of critical infrastructures, especially with systematic sensitivity analyses based on validated phenomenological models. Deterministic seismic hazard analysis incorporates uncertainties in safety factors, which are derived from experience as well as from expert judgment. Deterministic methods associated with high safety factors may lead to overly conservative results, especially if applied to generally short-lived civil structures. Scenarios used in deterministic seismic hazard analysis have a clear physical basis: they are related to seismic sources discovered by geological, geomorphological, geodetic and seismological investigations or derived from historical references. Scenario-based methods can be expanded for risk analysis applications with an extended data analysis providing the frequency of seismic events. Such an extension provides a better informed risk model that is suitable for risk-informed decision making.

  17. Assessing and Valuing Historical Geospatial Data for Decisions

    NASA Astrophysics Data System (ADS)

    Sylak-Glassman, E.; Gallo, J.

    2016-12-01

    We will present a method for assessing the use and valuation of historical geospatial data and information products derived from Earth observations (EO). Historical data is widely used in the establishment of baseline reference cases, time-series analysis, and Earth system modeling. Historical geospatial data is used in diverse application areas, such as risk assessment in the insurance and reinsurance industry, disaster preparedness and response planning, historical demography, land-use change analysis, and paleoclimate research, among others. Establishing the current value of previously collected data, often from EO systems that are no longer operating, is difficult since the costs associated with their preservation, maintenance, and dissemination are current, while the costs associated with their original collection are sunk. Understanding their current use and value can aid in funding decisions about the data management infrastructure and workforce allocation required to maintain their availability. Using a value-tree framework to trace the application of data from EO systems, sensors, networks, and surveys to weighted key Federal objectives, we are able to estimate the relative contribution of individual EO systems, sensors, networks, and surveys to meeting those objectives. The analysis relies on a modified Delphi method to elicit relative levels of reliance on individual EO data inputs, including historical data, from subject matter experts. This results in the identification of a representative portfolio of all EO data used to meet key Federal objectives. Because historical data is collected in conjunction with all other EO data within a weighted framework, its contribution to meeting key Federal objectives can be specifically identified and evaluated in relationship to other EO data. The results of this method could be applied to better understand and project the long-term value of data from current and future EO systems.
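
    The weight-propagation step of the value-tree framework can be sketched in a few lines; all weights and reliance values below are invented for illustration, and the actual elicitation used a modified Delphi process:

        import numpy as np

        # Hypothetical value tree: weights of key objectives, and expert-elicited
        # reliance of each objective on each EO data source (rows: objectives;
        # columns: e.g. historical archive, current satellite, in-situ survey).
        objective_weights = np.array([0.5, 0.3, 0.2])   # sums to 1
        reliance = np.array([[0.6, 0.3, 0.1],           # each row sums to 1
                             [0.2, 0.5, 0.3],
                             [0.1, 0.1, 0.8]])

        # Relative contribution of each source = weight-propagated reliance
        contribution = objective_weights @ reliance
        print(contribution)  # e.g. the historical archive's share of total value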

  18. A Holarctic Biogeographical Analysis of the Collembola (Arthropoda, Hexapoda) Unravels Recent Post-Glacial Colonization Patterns

    PubMed Central

    Ávila-Jiménez, María Luisa; Coulson, Stephen James

    2011-01-01

    We aimed to describe the main Arctic biogeographical patterns of the Collembola, and to analyze the historical factors and current climatic regimes determining Arctic collembolan species distribution. Furthermore, we aimed to identify possible dispersal routes, colonization sources and glacial refugia for Arctic collembola. We implemented a Gaussian mixture clustering method on species distribution ranges and applied a distance-based parametric bootstrap test on presence-absence collembolan species distribution data. Additionally, multivariate analysis was performed considering species distributions, biodiversity, cluster distribution and environmental factors (temperature and precipitation). No clear relation was found between current climatic regimes and species distribution in the Arctic. Gaussian mixture clustering found common elements within Siberian areas, Atlantic areas, the Canadian Arctic, a mid-Siberian cluster and specific Beringian elements, following the same pattern previously described, using a variety of molecular methods, for Arctic plants. Species distributions hence indicate the influence of recent glacial history, as LGM glacial refugia (mid-Siberia and Beringia) and major dispersal routes to high Arctic island groups can be identified. Endemic species are found in the high Arctic, but no specific biogeographical pattern can be clearly identified as a sign of high Arctic glacial refugia. Ocean current patterns are suggested as an important factor shaping the distribution of Arctic Collembola, which is consistent with Antarctic studies in collembolan biogeography. The clear relations between cluster distribution and geographical areas in light of their recent glacial history, the lack of relationship between species distribution and current climatic regimes, and the consistency with previously described Arctic patterns in a series of organisms inferred using a variety of methods suggest that the historical phenomena shaping contemporary collembolan distribution can be inferred through biogeographical analysis. PMID:26467728
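
    As a rough, hypothetical sketch of the clustering step (the study's pipeline also included a distance-based parametric bootstrap test and multivariate analysis), Gaussian mixture clustering of presence-absence distribution data might look like:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # Hypothetical presence-absence matrix: rows = species, columns = Arctic regions
        X = rng.integers(0, 2, size=(120, 15)).astype(float)

        # Group species whose distribution ranges follow the same biogeographical pattern
        gmm = GaussianMixture(n_components=5, covariance_type="diag", random_state=0)
        labels = gmm.fit_predict(X)
        print(np.bincount(labels))  # species count per cluster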

  19. A Steel Ball Surface Quality Inspection Method Based on a Circumferential Eddy Current Array Sensor.

    PubMed

    Zhang, Huayu; Xie, Fengqin; Cao, Maoyong; Zhong, Mingming

    2017-07-01

    To efficiently inspect surface defects on steel ball bearings, a new method based on a circumferential eddy current array (CECA) sensor was proposed here. The best probe configuration, in terms of the coil quality factor (Q-factor), magnetic field intensity, and induced eddy current density on the surface of a sample steel ball, was determined by analyzing and comparing 3-, 4-, 5-, and 6-coil probes. The optimal lift-off from the measured steel ball, the number of probe coils, and the frequency of excitation current suitable for steel ball inspection were obtained. Using the resulting CECA sensor to inspect 46,126 steel balls showed a miss rate of ~0.02%. The sensor detected surface defects as small as 0.05 mm in width and 0.1 mm in depth.

  20. Investigation of Thermophysical Parameters Properties for Enhancing Overpressure Mechanism Estimation. Case Study: Miri Area, West Baram Delta

    NASA Astrophysics Data System (ADS)

    Adha, Kurniawan; Yusoff, Wan Ismail Wan; Almanna Lubis, Luluan

    2017-10-01

    Determining pore pressure data and the overpressure zone is a compulsory part of oil and gas exploration: the data can enhance safety and profit while preventing drilling hazards. Investigating thermophysical parameters such as temperature and thermal conductivity can enhance pore pressure estimation for determining the overpressure mechanism. Since these parameters depend on rock properties, the column of thermophysical parameters may reflect changes when there is an abnormality in pore pressure. The study was conducted in the "MRI 1" well offshore Sarawak, where a new approach was designed to determine the overpressure generation. The study assessed the contribution of thermophysical parameters to supporting the velocity analysis method, and petrophysical analyses were performed. Four thermal facies were identified along the well. Overpressure developed below thermal facies 4, where the pressure reached 38 MPa and the temperature increased significantly. Cross-plots of velocity and thermal conductivity show a linear relationship, since both parameters are mainly functions of rock compaction: as the rock becomes more compact, the particles are brought into closer contact, making the sound wave travel faster while the thermal conductivity increases. In addition, the increase in temperature and the high heat flow indicated the presence of a fluid expansion mechanism. Shale sonic velocity and density analysis are the common methods for overpressure mechanism and pore pressure estimation; as an additional parameter for determining the overpressure zone, thermophysical analysis enhances the current method, which is a single function of velocity analysis. Thermophysical analysis will improve the understanding of overpressure mechanism determination by providing new input parameters. Thus, integrating the thermophysical technique and velocity analysis is important in investigating overpressure mechanisms and pore pressure estimation during future oil and gas exploitation.

  1. Phospholipid Fatty Acid Analysis: Past, Present and Future

    NASA Astrophysics Data System (ADS)

    Findlay, R. H.

    2008-12-01

    With their 1980 publication, Bobbie and White initiated the use of phospholipid fatty acids for the study of microbial communities. This method, integrated with a previously published biomass assay based on the colorimetric detection of orthophosphate liberated from phospholipids, provided the first quantitative method for determining microbial community structure. The method is based on a quantitative extraction of lipids from the sample matrix, isolation of the phospholipids, conversion of the phospholipid fatty acids to their corresponding fatty acid methyl esters (known by the acronym FAME) and the separation, identification and quantification of the FAME by gas chromatography. Early laboratory and field studies focused on correlating individual fatty acids to particular groups of microorganisms. Subsequent improvements to the methodology include reduced solvent volumes for extractions, improved sensitivity in the detection of orthophosphate and the use of solid phase extraction technology. Improvements in the field of gas chromatography also increased the accessibility of the technique, and it has been widely applied to water, sediment, soil and aerosol samples. Whole cell fatty acid analysis, a related but not identical technique, is currently used for phenotypic characterization in bacterial species descriptions and is the basis for a commercial, rapid bacterial identification system. In the early 1990s the application of multivariate statistical analysis, first cluster analysis and then principal component analysis, further improved the usefulness of the technique and allowed the development of a functional group approach to the interpretation of phospholipid fatty acid profiles. Statistical techniques currently applied to the analysis of phospholipid fatty acid profiles include constrained ordinations and neural networks. Using redundancy analysis, a form of constrained ordination, we have recently shown that both cation concentration and dissolved organic matter (DOM) quality are determinants of microbial community structure in forested headwater streams. One of the most exciting recent developments in phospholipid fatty acid analysis is the application of compound-specific stable isotope analysis. We are currently applying this technique to stream sediments to help determine which microorganisms are involved in the initial processing of DOM, and the technique promises to be a useful tool for assigning ecological function to microbial populations.

  2. Mechanistic flexible pavement overlay design program.

    DOT National Transportation Integrated Search

    2009-07-01

    The current Louisiana Department of Transportation and Development (LADOTD) overlay thickness design method follows the Component : Analysis procedure provided in the 1993 AASHTO pavement design guide. Since neither field nor laboratory tests a...

  3. Decoder calibration with ultra small current sample set for intracortical brain-machine interface

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping

    2018-04-01

    Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and every recalibration requires a relatively large current sample set. The aim of this study is to develop an effective decoder calibration method that can achieve good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on movement and sensory paradigms. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data, and the decoding performance was compared with that of three other calibration methods for evaluation. Main results. The PDA method closed the gap between historical and current data effectively, making it possible to take advantage of large historical data for decoder recalibration in current data decoding. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated with the PDA method achieved much better and more robust performance in all sessions than with the other three calibration methods, in both monkeys. Significance. (1) This study brings transfer learning theory into iBMI decoder calibration for the first time. (2) Unlike most transfer learning studies, the target data in this study were an ultra-small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement paradigm and the sensory paradigm, indicating viable generalization. By reducing the demand for large current training data, this new method may facilitate the application of intracortical brain-machine interfaces in clinical practice.
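
    The paper's PDA algorithm is not reproduced in the abstract; the sketch below only illustrates the general idea of aligning an ultra-small current set to a large historical set through PCA subspaces, and the function name and alignment rule are assumptions:

        import numpy as np
        from sklearn.decomposition import PCA

        def pda_transform(historical, current, n_components=4):
            """Illustrative alignment: map the ultra-small current set into the
            historical PCA subspace so a decoder trained on historical data can
            be reused. The published PDA method may differ in detail."""
            pca_hist = PCA(n_components=n_components).fit(historical)
            pca_curr = PCA(n_components=n_components).fit(current)
            # Rotation carrying current principal axes onto historical ones
            R = pca_curr.components_.T @ pca_hist.components_
            centered = current - current.mean(axis=0)
            return centered @ R + historical.mean(axis=0)

        historical = np.random.randn(2000, 96)  # large historical neural data set
        current = np.random.randn(5, 96)        # five trials from the current session
        aligned = pda_transform(historical, current)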

  4. Measuring kinetics of complex single ion channel data using mean-variance histograms.

    PubMed

    Patlak, J B

    1993-07-01

    The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state; open channel noise and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produced open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance histogram technique provided a more credible analysis of the open, closed, and subconductance times for the patch. I also show that the method produces accurate results on simulated data in a wide variety of conditions, whereas the half-amplitude method, when applied to complex simulated data, shows the same errors as were apparent in the real data. The utility and the limitations of this new method are discussed.
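
    A minimal numerical sketch of the histogram construction described above, with windowed mean and variance assembled into a two-dimensional histogram, could read as follows (window width and binning are arbitrary choices here):

        import numpy as np

        def mean_variance_histogram(trace, window, bins=100):
            """Slide a window of N consecutive samples over the record, compute
            (mean, variance) at each position, and bin the pairs in 2-D.
            Defined current levels appear as low-variance regions."""
            kernel = np.ones(window) / window
            mean = np.convolve(trace, kernel, mode="valid")
            mean_sq = np.convolve(trace ** 2, kernel, mode="valid")
            var = mean_sq - mean ** 2
            return np.histogram2d(mean, var, bins=bins)

        rng = np.random.default_rng(1)
        # Toy record: closed (0 pA) and open (-2 pA) levels plus baseline noise
        levels = np.where(rng.random(5000) < 0.3, -2.0, 0.0)
        trace = levels + 0.2 * rng.standard_normal(5000)
        hist, mean_edges, var_edges = mean_variance_histogram(trace, window=10)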

  5. Analysis of vibration waveforms of electromechanical response to determine piezoelectric and electrostrictive coefficients.

    PubMed

    Izumi, Tatsuya; Hagiwara, Manabu; Hoshina, Takuya; Takeda, Hiroaki; Tsurumi, Takaaki

    2012-08-01

    We developed a method to determine the coefficients of piezoelectricity (d) and electrostriction (M) simultaneously by a waveform analysis of current and vibration velocity in the resonance state. The waveforms of the current and vibration velocity were described theoretically using the equations of motion and the piezoelectric constitutive equations, taking the dissipation effect into account. The dissipation factor for both the d and M coefficients is the dielectric loss tangent tan δ. The waveforms measured in all of the ceramics, namely Pb(Zr,Ti)O3 (PZT), Pb(Mg,Nb)O3 (PMN), and 0.8Pb(Mg1/3Nb2/3)O3-0.2PbTiO3 (PMN-PT), were well fitted with the calculated waveform. This fitting produced both the d and M coefficients, which agreed with those determined via the conventional methods. Moreover, the respective contributions of piezoelectricity and electrostriction to the d value determined by the resonance-antiresonance method were clarified.

  6. Advanced aircraft service life monitoring method via flight-by-flight load spectra

    NASA Astrophysics Data System (ADS)

    Lee, Hongchul

    This research is an effort to understand the current method and to propose an advanced method of Damage Tolerance Analysis (DTA) for the purpose of monitoring aircraft service life. As one of the tasks in the DTA, the current indirect Individual Aircraft Tracking (IAT) method for the F-16C/D Block 32 does not properly represent changes in flight usage severity affecting structural fatigue life. Therefore, an advanced aircraft service life monitoring method based on flight-by-flight load spectra is proposed and recommended for the IAT program to track consumed fatigue life, as an alternative to the current method based on the crack severity index (CSI) value. Damage tolerance is one of the aircraft design philosophies that ensure aging aircraft satisfy structural reliability in terms of fatigue failures throughout their service periods. The IAT program, one of the most important tasks of DTA, is able to track potential structural crack growth at critical areas in the major airframe structural components of individual aircraft. The F-16C/D aircraft is equipped with a flight data recorder to monitor flight usage and provide the data to support structural load analysis. However, the limited memory of the flight data recorder allows the user to monitor individual aircraft fatigue usage only in terms of the vertical inertia (NzW) data used for calculating the CSI value, which defines the relative maneuver severity. The current IAT method for the F-16C/D Block 32, based on the CSI value calculated from NzW, has been shown to be insufficiently accurate for monitoring individual aircraft fatigue usage, due to several problems. The proposed advanced aircraft service life monitoring method based on flight-by-flight load spectra is recommended as an improved method for the F-16C/D Block 32 aircraft. Flight-by-flight load spectra were generated from downloaded Crash Survival Flight Data Recorder (CSFDR) data by calculating loads for each time hack in selected flight data using loads equations. The comparison of the fatigue life interpolated using the CSI value with fatigue test results makes clear that the proposed advanced IAT method via flight-by-flight load spectra is more reliable and accurate than the current IAT method. Therefore, the advanced aircraft service life monitoring method based on flight-by-flight load spectra not only monitors the consumed fatigue life of individual aircraft for inspection but also ensures the structural reliability of aging aircraft throughout their service periods.

  7. Applied Graph-Mining Algorithms to Study Biomolecular Interaction Networks

    PubMed Central

    2014-01-01

    Protein-protein interaction (PPI) networks carry vital information on the organization of molecular interactions in cellular systems. The identification of functionally relevant modules in PPI networks is one of the most important applications of biological network analysis. Computational analysis is becoming an indispensable tool to understand large-scale biomolecular interaction networks. Several types of computational methods have been developed and employed for the analysis of PPI networks. Of these computational methods, graph comparison and module detection are the two most commonly used strategies. This review summarizes current literature on graph kernel and graph alignment methods for graph comparison strategies, as well as module detection approaches including seed-and-extend, hierarchical clustering, optimization-based, probabilistic, and frequent subgraph methods. Herein, we provide a comprehensive review of the major algorithms employed under each theme, including our recently published frequent subgraph method, for detecting functional modules commonly shared across multiple cancer PPI networks. PMID:24800226

  8. Semi-Supervised Marginal Fisher Analysis for Hyperspectral Image Classification

    NASA Astrophysics Data System (ADS)

    Huang, H.; Liu, J.; Pan, Y.

    2012-07-01

    The problem of learning with both labeled and unlabeled examples arises frequently in hyperspectral image (HSI) classification. Marginal Fisher analysis, however, is a supervised method and cannot be directly applied to semi-supervised classification. In this paper, we propose a novel method, called semi-supervised marginal Fisher analysis (SSMFA), to process HSI of natural scenes, using a combination of semi-supervised learning and manifold learning. In SSMFA, a new difference-based optimization objective function incorporating unlabeled samples has been designed. SSMFA preserves the manifold structure of labeled and unlabeled samples in addition to separating labeled samples in different classes from each other. The semi-supervised method has an analytic form of the globally optimal solution, which can be computed by eigendecomposition. Classification experiments on a challenging HSI task demonstrate that this method outperforms current state-of-the-art HSI-classification methods.

  9. An Analysis of Measured Pressure Signatures From Two Theory-Validation Low-Boom Models

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.

    2003-01-01

    Two wing/fuselage/nacelle/fin concepts were designed to check the validity and the applicability of sonic-boom minimization theory, sonic-boom analysis methods, and low-boom design methodology in use at the end of the 1980s. Models of these concepts were built, and the pressure signatures they generated were measured in the wind tunnel. The results of these measurements led to three conclusions: (1) the existing methods could adequately predict sonic-boom characteristics of wing/fuselage/fin(s) configurations if the equivalent area distributions of each component were smooth and continuous; (2) these methods needed revision so the engine-nacelle volume and the nacelle-wing interference lift disturbances could be accurately predicted; and (3) current nacelle-configuration integration methods had to be updated. With these changes in place, the existing sonic-boom analysis and minimization methods could be effectively applied to supersonic-cruise concepts for acceptable/tolerable sonic-boom overpressures during cruise.

  10. Probabilistic Structural Analysis Theory Development

    NASA Technical Reports Server (NTRS)

    Burnside, O. H.

    1985-01-01

    The objective of the Probabilistic Structural Analysis Methods (PSAM) project is to develop analysis techniques and computer programs for predicting the probabilistic response of critical structural components for current and future space propulsion systems. This technology will play a central role in establishing system performance and durability. The first year's technical activity concentrates on probabilistic finite element formulation strategy and code development. Work is also in progress to survey critical materials and Space Shuttle main engine components. The probabilistic finite element computer program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) is being developed. The final probabilistic code will have, in the general case, the capability of performing nonlinear dynamic analysis of stochastic structures. It is the goal of the approximate methods effort to increase problem-solving efficiency relative to finite element methods by using energy methods to generate trial solutions which satisfy the structural boundary conditions. These approximate methods will be less computer intensive than the finite element approach.

  11. Gaussian Elimination-Based Novel Canonical Correlation Analysis Method for EEG Motion Artifact Removal.

    PubMed

    Roy, Vandana; Shukla, Shailja; Shukla, Piyush Kumar; Rawat, Paresh

    2017-01-01

    Motion generated while capturing the electroencephalography (EEG) signal leads to artifacts, which may reduce the quality of the obtained information. Existing artifact removal methods use canonical correlation analysis (CCA) for removing artifacts along with ensemble empirical mode decomposition (EEMD) and the wavelet transform (WT). A new approach is proposed to further analyse and improve the filtering performance and reduce the filter computation time in highly noisy environments. This new approach to CCA is based on the Gaussian elimination method, which is used for calculating the correlation coefficients via the backslash operation, and is designed for EEG signal motion artifact removal. Gaussian elimination is used to solve the linear equations for the eigenvalues, which reduces the computational cost of the CCA method. The proposed method is tested against currently available artifact removal techniques using EEMD-CCA and the wavelet transform. The performance is tested on synthetic and real EEG signal data. The proposed artifact removal technique is evaluated using efficiency metrics such as del signal-to-noise ratio (DSNR), lambda (λ), root mean square error (RMSE), elapsed time, and ROC parameters. The results indicate the suitability of the proposed algorithm for use as a supplement to algorithms currently in use.
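
    The exact algorithm is not given in the abstract; the fragment below only sketches the backslash-style idea of replacing explicit covariance inversion in CCA with Gaussian-elimination solves (numpy's solve, the analogue of MATLAB's backslash operator):

        import numpy as np

        def cca_backslash(X, Y, ridge=1e-8):
            """Leading canonical correlations computed with linear solves
            (Gaussian elimination) instead of explicit matrix inversion."""
            Xc, Yc = X - X.mean(0), Y - Y.mean(0)
            n = len(X) - 1
            Cxx = Xc.T @ Xc / n + ridge * np.eye(X.shape[1])
            Cyy = Yc.T @ Yc / n + ridge * np.eye(Y.shape[1])
            Cxy = Xc.T @ Yc / n
            # "Backslash" solves: Cxx \ Cxy and Cyy \ Cxy.T
            M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
            eigvals = np.sort(np.linalg.eigvals(M).real)[::-1]
            return np.sqrt(np.clip(eigvals, 0.0, 1.0))  # canonical correlations

        X = np.random.randn(500, 8)  # e.g. noisy multichannel EEG segment
        Y = X @ np.random.randn(8, 6) + 0.5 * np.random.randn(500, 6)
        print(cca_backslash(X, Y)[:3])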

  12. New method for determining central axial orientation of flux rope embedded within current sheet using multipoint measurements

    NASA Astrophysics Data System (ADS)

    Li, ZhaoYu; Chen, Tao; Yan, GuangQing

    2016-10-01

    A new method for determining the central axial orientation of a two-dimensional coherent magnetic flux rope (MFR) via multipoint analysis of the magnetic-field structure is developed. The method is devised under the following geometrical assumptions: (1) on its cross section, the structure is left-right symmetric; (2) the projected structure velocity is vertical to the line of symmetry. The two conditions can be naturally satisfied for cylindrical MFRs and are expected to be satisfied for MFRs that are flattened within current sheets. The model test demonstrates that, for determining the axial orientation of such structures, the new method is more efficient and reliable than traditional techniques such as minimum-variance analysis of the magnetic field, Grad-Shafranov (GS) reconstruction, and the more recent method based on the cylindrically symmetric assumption. A total of five flux transfer events observed by Cluster are studied using the proposed approach, and the application results indicate that the observed structures, regardless of their actual physical properties, fit the assumed geometrical model well. For these events, the inferred axial orientations are all in excellent agreement with those obtained using the multi-GS reconstruction technique.

  13. Rigorous analysis of thick microstrip antennas and wire antennas embedded in a substrate

    NASA Astrophysics Data System (ADS)

    Smolders, A. B.

    1992-07-01

    An efficient and rigorous method for the analysis of electrically thick rectangular microstrip antennas and wire antennas with a dielectric cover is presented. The method of moments is used in combination with the exact spectral domain Green's function to find the unknown currents on the antenna. The microstrip antenna is fed by a coaxial cable, and a proper model of the feeding coaxial structure is used. In addition, a special attachment mode is applied to ensure continuity of current at the patch-coax transition. The efficiency of the method of moments is improved by using the so-called source term extraction technique, in which a great part of the infinite integrals involved in the method of moments formulation is calculated analytically. Computation time can be saved by selecting a set of basis functions that describes the current distribution on the patch and probe accurately using only a few terms. Thick microstrip antennas have broadband characteristics; however, a proper match to 50 Ohms is often difficult. This matching problem can be avoided by using a slightly different excitation structure in which the patch is electromagnetically coupled to the feeding probe. A bandwidth of more than 40% can easily be obtained for this type of microstrip antenna. The price to be paid is a degradation of the radiation characteristics.

  14. Clinical perspective of cell-free DNA testing for fetal aneuploidies.

    PubMed

    Gratacós, Eduard; Nicolaides, Kypros

    2014-01-01

    Cell-free DNA testing in maternal blood provides the most effective method of screening for trisomy 21, with a reported detection rate of 99% and a false positive rate of less than 0.1%. After many years of research, this method is now commercially available and is carried out in an increasing number of patients, and there is an expanding number of conditions that can be screened for. However, the application of these methods in clinical practice requires a careful analysis. Current first-trimester screening strategies are based on a complex combination of tests, aiming at detecting fetal defects and predicting the risk of main pregnancy complications. It is therefore necessary to define the optimal way of combining cell-free DNA testing with current first-trimester screening methods. In this concise review we describe the basis of cell-free DNA testing and discuss the potential approaches for its implementation in combination with current tests in the first trimester. © 2014 S. Karger AG, Basel.

  15. Detecting electroporation by assessing the time constants in the exponential response of human skin to voltage controlled impulse electrical stimulation.

    PubMed

    Bîrlea, Sinziana I; Corley, Gavin J; Bîrlea, Nicolae M; Breen, Paul P; Quondamatteo, Fabio; OLaighin, Gearóid

    2009-01-01

    We propose a new method for extracting the electrical properties of human skin based on time-constant analysis of its exponential response to impulse stimulation. This analysis yielded an additional finding: electroporation of the stratum corneum can be detected using it. We observed that a one-time-constant model is appropriate for describing the electrical properties of human skin at low applied voltages (<30 V), and a two-time-constant model best describes the skin's electrical properties at higher applied voltages (>30 V). Voltages higher than 30 V have been shown to create pores in the skin's stratum corneum, which offer a new, lower-resistance pathway for the passage of current through the skin. Our data show that when pores are formed in the stratum corneum, they can be detected in vivo, because a second time constant describes the current flow through them.
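
    A minimal sketch of the model-selection step, fitting one- and two-time-constant exponentials and comparing residuals, might look like this (synthetic data and parameter values are purely illustrative):

        import numpy as np
        from scipy.optimize import curve_fit

        def one_tau(t, a, tau, c):
            return a * np.exp(-t / tau) + c

        def two_tau(t, a1, tau1, a2, tau2, c):
            return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2) + c

        # Synthetic skin response: a slow component plus a faster "pore" component
        t = np.linspace(0.0, 0.05, 500)
        response = np.exp(-t / 0.010) + 0.5 * np.exp(-t / 0.001)

        p1, _ = curve_fit(one_tau, t, response, p0=(1.0, 0.01, 0.0))
        p2, _ = curve_fit(two_tau, t, response, p0=(1.0, 0.01, 0.5, 0.002, 0.0))
        sse1 = np.sum((response - one_tau(t, *p1)) ** 2)
        sse2 = np.sum((response - two_tau(t, *p2)) ** 2)
        print(sse1, sse2)  # a large drop flags the two-constant (electroporation) regime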

  16. Computational Fluid Dynamics Uncertainty Analysis Applied to Heat Transfer over a Flat Plate

    NASA Technical Reports Server (NTRS)

    Groves, Curtis Edward; Ilie, Marcel; Schallhorn, Paul A.

    2013-01-01

    There have been few discussions of using Computational Fluid Dynamics (CFD) without experimental validation. Pairing experimental data, uncertainty analysis, and analytical predictions provides a comprehensive approach to verification and is the current state of the art. With pressed budgets, collecting experimental data is rare or non-existent. This paper investigates and proposes a method to perform CFD uncertainty analysis from computational data alone. The method uses current CFD uncertainty techniques coupled with the Student-t distribution to predict the heat transfer coefficient over a flat plate. The inputs to the CFD model are varied within a specified tolerance or bias error, and the differences in the results are used to estimate the uncertainty. The variation in each input is ranked from least to greatest to determine the order of importance. The results are compared to heat transfer correlations, and conclusions are drawn about the feasibility of using CFD without experimental data. The results provide a tactic for analytically estimating the uncertainty in a CFD model when experimental data are unavailable.
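
    The coupling with the Student-t distribution can be illustrated with a minimal sketch; the coefficient values below are invented, and each of the N runs corresponds to one perturbed-input CFD solution:

        import numpy as np
        from scipy import stats

        # Heat transfer coefficients from N CFD runs with inputs perturbed
        # within their tolerance/bias bounds (values are invented)
        h = np.array([101.2, 98.7, 103.5, 99.9, 102.1, 100.4])
        n = len(h)
        mean, s = h.mean(), h.std(ddof=1)
        t_crit = stats.t.ppf(0.975, df=n - 1)  # two-sided 95% critical value
        half_width = t_crit * s / np.sqrt(n)
        print(f"h = {mean:.1f} +/- {half_width:.1f} (95% confidence)")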

  17. Evaluating disease management program effectiveness: an introduction to survival analysis.

    PubMed

    Linden, Ariel; Adams, John L; Roberts, Nancy

    2004-01-01

    Currently, the most widely used method in the disease management industry for evaluating program effectiveness is the "total population approach." This model is a pretest-posttest design whose most basic limitation is that, without a control group, there may be sources of bias and/or competing extraneous confounding factors that offer a plausible rationale explaining the change from baseline. Survival analysis allows for the inclusion of data from censored cases: those subjects who either "survived" the program without experiencing the event (e.g., achievement of target clinical levels, hospitalization), left the program prematurely due to disenrollment from the health plan or program, or were lost to follow-up. Additionally, independent variables may be included in the model to help explain the variability in the outcome measure. In order to maximize the potential of this statistical method, the validity of the model and research design must be assured. This paper reviews survival analysis as an alternative, and more appropriate, approach to evaluating DM program effectiveness than the current total population approach.
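
    For illustration, a bare-bones Kaplan-Meier product-limit estimate, the standard survival curve that accommodates censored cases, can be written as follows with hypothetical data:

        import numpy as np

        def kaplan_meier(times, event):
            """Product-limit survival estimate; event=1 if the outcome occurred,
            0 if the case was censored (disenrolled or lost to follow-up)."""
            order = np.argsort(times)
            times, event = times[order], event[order]
            at_risk, s, curve = len(times), 1.0, []
            for e in event:
                if e:                # event observed: survival steps down
                    s *= (at_risk - 1) / at_risk
                at_risk -= 1         # censored cases also leave the risk set
                curve.append(s)
            return times, np.array(curve)

        months = np.array([2, 3, 3, 5, 8, 12, 12, 14])    # time in program
        occurred = np.array([1, 0, 1, 1, 0, 1, 0, 0])     # 0 = censored
        print(kaplan_meier(months, occurred))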

  18. Handwriting Examination: Moving from Art to Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarman, K.H.; Hanlen, R.C.; Manzolillo, P.A.

    In this document, we present a method for validating the premises and methodology of forensic handwriting examination. This method is intuitively appealing because it relies on quantitative measurements currently used qualitatively by FDEs in making comparisons, and it is scientifically rigorous because it exploits the power of multivariate statistical analysis. This approach uses measures of both central tendency and variation to construct a profile for a given individual. (Central tendency and variation are important for characterizing an individual's writing, and both are currently used by FDEs in comparative analyses.) Once constructed, different profiles are then compared for individuality using cluster analysis; they are grouped so that profiles within a group cannot be differentiated from one another based on the measured characteristics, whereas profiles between groups can. The cluster analysis procedure used here exploits the power of multivariate hypothesis testing. The result is not only a profile grouping but also an indication of the statistical significance of the groups generated.

  19. Multiple Frequency Audio Signal Communication as a Mechanism for Neurophysiology and Video Data Synchronization

    PubMed Central

    Topper, Nicholas C.; Burke, S.N.; Maurer, A.P.

    2014-01-01

    BACKGROUND Current methods for aligning neurophysiology and video data are either prepackaged, requiring the additional purchase of a software suite, or use a blinking LED with a stationary pulse-width and frequency. These methods lack a significant user interface for adaptation, are expensive, or risk a misalignment of the two data streams. NEW METHOD A cost-effective means of obtaining high-precision alignment of behavioral and neurophysiological data is to generate an audio pulse embedded with two domains of information: a low-frequency binary-counting signal and a high, randomly changing frequency. This enables the derivation of temporal information while maintaining enough entropy in the system for algorithmic alignment. RESULTS The sample-to-frame index constructed using the audio input correlation method described in this paper enables video and data acquisition to be aligned at a sub-frame level of precision. COMPARISON WITH EXISTING METHODS Traditionally, a synchrony pulse is recorded on-screen via a flashing diode. The higher sampling rate of the audio input of the camcorder enables the timing of an event to be detected with greater precision. CONCLUSIONS While on-line analysis and synchronization using specialized equipment may be ideal in some cases, the method presented here is a viable, low-cost alternative and gives the flexibility to interface with custom off-line analysis tools. Moreover, the ease of constructing and implementing this set-up makes it applicable to a wide variety of applications that require video recording. PMID:25256648

  20. A feasibility study of altered spatial distribution of losses induced by eddy currents in body composition analysis

    PubMed Central

    2010-01-01

    Background Tomographic imaging has revealed that the body mass index does not give a reliable indication of overall fitness. However, high measurement costs make tomographic imaging unsuitable for large-scale studies or repeated individual use. This paper reports an experimental investigation of a new electromagnetic method, called body electrical loss analysis (BELA), and its feasibility for assessing body composition. Methods The BELA method uses a high-Q parallel resonant circuit to produce a time-varying magnetic field. The Q of the resonator changes when a sample is placed in its coil, owing to the eddy currents induced in the sample. The new idea in the BELA method is the altered spatial distribution of the electrical losses generated by these currents: the distribution of losses is varied using different excitation frequencies. The feasibility of the method was tested using simplified phantoms. Two of these phantoms were rough approximations of the human torso; one had fat in the middle of its volume and saline solution in the outer shell volume, and the other had the reversed conductivity distribution. The phantoms were placed in the resonator and the change in the losses was measured at five excitation frequencies from 100 kHz to 200 kHz. Results The rate of loss as a function of frequency was observed to be approximately three times larger for the phantom with fat in the middle of its volume than for the one with fat in its outer shell volume. Conclusions At higher frequencies the major signal contribution can be shifted toward the outer shell volume. This enables probing the conductivity distribution of the subject by weighting outer structural components. The authors expect that the rate of change of loss with frequency can be a potential index for body composition analysis. PMID:21047441

  1. Processor farming in two-level analysis of historical bridge

    NASA Astrophysics Data System (ADS)

    Krejčí, T.; Kruis, J.; Koudelka, T.; Šejnoha, M.

    2017-11-01

    This contribution presents a processor farming method in connection with a multi-scale analysis. In this method, each macroscopic integration point or each finite element is connected with a certain meso-scopic problem represented by an appropriate representative volume element (RVE). The solution of a meso-scale problem then provides the effective parameters needed on the macro-scale. Such an analysis is suitable for parallel computing because the meso-scale problems can be distributed among many processors. The application of the processor farming method to a real-world masonry structure is illustrated by an analysis of Charles Bridge in Prague. The three-dimensional numerical model simulates the coupled heat and moisture transfer of one half of arch No. 3, and it is part of a complex hygro-thermo-mechanical analysis developed to determine the influence of climatic loading on the current state of the bridge.
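
    A schematic of the farming pattern, distributing one RVE solve per macroscopic integration point across worker processes, might look like this; the RVE solve is a placeholder, not the authors' coupled heat-moisture model:

        from multiprocessing import Pool

        def solve_rve(macro_state):
            """Meso-scale task: solve the RVE attached to one macroscopic
            integration point and return its effective parameter(s).
            (Placeholder arithmetic, not the authors' coupled model.)"""
            temperature, moisture = macro_state
            return 1.5 + 0.01 * temperature - 0.2 * moisture  # effective conductivity

        if __name__ == "__main__":
            macro_points = [(10.0 + i, 0.3) for i in range(64)]  # hypothetical states
            with Pool(processes=8) as farm:  # farm the meso-problems out to workers
                effective = farm.map(solve_rve, macro_points)
            print(effective[:4])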

  2. Why Map Issues? On Controversy Analysis as a Digital Method

    PubMed Central

    2015-01-01

    This article takes stock of recent efforts to implement controversy analysis as a digital method in the study of science, technology, and society (STS) and beyond and outlines a distinctive approach to address the problem of digital bias. Digital media technologies exert significant influence on the enactment of controversy in online settings, and this risks undermining the substantive focus of controversy analysis conducted by digital means. To address this problem, I propose a shift in thematic focus from controversy analysis to issue mapping. The article begins by distinguishing between three broad frameworks that currently guide the development of controversy analysis as a digital method, namely, demarcationist, discursive, and empiricist. Each has been adopted in STS, but only the last one offers a digital “move beyond impartiality.” I demonstrate this approach by analyzing issues of Internet governance with the aid of the social media platform Twitter. PMID:26336325

  3. 37Cl/35Cl isotope ratio analysis in perchlorate by ion chromatography/multicollector-ICPMS: Analytical performance and implication for biodegradation studies.

    PubMed

    Zakon, Yevgeni; Ronen, Zeev; Halicz, Ludwik; Gelman, Faina

    2017-10-01

    In the present study we propose a new analytical method for 37Cl/35Cl analysis in perchlorate by ion chromatography (IC) coupled to multicollector inductively coupled plasma mass spectrometry (MC-ICPMS). The accuracy of the analytical method was validated by analysis of the international perchlorate standard materials USGS-37 and USGS-38; analytical precision better than ±0.4‰ was achieved. 37Cl/35Cl isotope ratio analysis in perchlorate during a laboratory biodegradation experiment, with microbial cultures enriched from contaminated soil in Israel, resulted in an isotope enrichment factor ε37Cl = -13.3 ± 1‰, which falls in the range reported previously for perchlorate biodegradation by pure microbial cultures. The proposed analytical method may significantly simplify the procedure for isotope analysis of perchlorate currently applied in environmental studies. Copyright © 2017. Published by Elsevier Ltd.

  4. High performance computing enabling exhaustive analysis of higher order single nucleotide polymorphism interaction in Genome Wide Association Studies.

    PubMed

    Goudey, Benjamin; Abedini, Mani; Hopper, John L; Inouye, Michael; Makalic, Enes; Schmidt, Daniel F; Wagner, John; Zhou, Zeyu; Zobel, Justin; Reumann, Matthias

    2015-01-01

    Genome-wide association studies (GWAS) are a common approach for the systematic discovery of single nucleotide polymorphisms (SNPs) associated with a given disease. The univariate analysis approaches commonly employed may miss important SNP associations in complex diseases that only appear through multivariate analysis. However, multivariate SNP analysis is currently limited by its inherent computational complexity. In this work, we present a computational framework that harnesses supercomputers. Based on our results, we estimate that a three-way interaction analysis of 1.1 million-SNP GWAS data would require over 5.8 years on the full "Avoca" IBM Blue Gene/Q installation at the Victorian Life Sciences Computation Initiative. This is hundreds of times faster than estimates for other CPU-based methods and four times faster than runtimes estimated for GPU methods, indicating how the improvement in the level of hardware applied to interaction analysis may alter the types of analysis that can be performed. Furthermore, the same analysis would take under 3 months on the currently largest IBM Blue Gene/Q supercomputer, "Sequoia," at the Lawrence Livermore National Laboratory, assuming linear scaling is maintained, as our results suggest. Given that the implementation used in this study can be further optimised, this runtime means it is becoming feasible to carry out exhaustive analysis of higher-order interaction studies on large modern GWAS.

  5. Determination of plasma displacement based on eddy current diagnostics for the Keda Torus eXperiment

    NASA Astrophysics Data System (ADS)

    Tu, Cui; Li, Hong; Liu, Adi; Li, Zichao; Zhang, Yuan; You, Wei; Tan, Mingsheng; Luo, Bing; Adil, Yolbarsop; Hu, Jintong; Wu, Yanqi; Yan, Wentan; Xie, Jinlin; Lan, Tao; Mao, Wenzhe; Ding, Weixing; Xiao, Chijin; Zhuang, Ge; Liu, Wandong

    2017-10-01

    The measurement of plasma displacement is one of the most basic diagnostic tools in the study of plasma equilibrium and control in a toroidal magnetic confinement configuration. During pulse discharge, the eddy current induced in the vacuum vessel and shell produces an additional magnetic field at the plasma boundary, which has a significant impact on the measurement of plasma displacement using magnetic probes. In the newly built Keda Torus eXperiment (KTX) reversed field pinch device, the eddy current in the composite shell can be obtained at high spatial resolution. This offers a new way to determine the plasma displacement for KTX through the multipole moment expansion of the eddy current, which can be obtained by unique probe arrays installed on the inner and outer surfaces of the composite shell. In an ideal conductor shell approximation, the method of multipole moment expansion of the poloidal eddy current for measuring the plasma displacement in toroidal coordinates is more accurate than the previous method based on symmetrical magnetic probes, which yielded results in cylindrical coordinates. Through an analytical analysis of many current filaments and numerical simulations of the current distribution in toroidal coordinates, the scaling relation between the first moment of the eddy current and the center of gravity of the plasma current is obtained. In addition, the origin of the multipole moment expansion of the eddy current in KTX is retrieved simultaneously. Preliminary data on the plasma displacement have been collected using these two methods during short pulse discharges in the KTX device, and the results of the two methods are in reasonable agreement.

  6. Backscatter analysis of dihedral corner reflectors using physical optics and the physical theory of diffraction

    NASA Technical Reports Server (NTRS)

    Griesser, Timothy; Balanis, Constantine A.

    1987-01-01

    The backscatter cross sections of dihedral corner reflectors in the azimuthal plane are determined by both physical optics (PO) and the physical theory of diffraction (PTD), yielding results for the vertical and horizontal polarizations. In the first analysis method, geometrical optics is used in place of PO at the initial reflections in order to maintain the planar character of the reflected wave and reduce the complexity of the analysis. In the second method, PO is used at almost every reflection in order to maximize the accuracy of the PTD solution, at the expense of a rapid increase in complexity. Induced surface current densities and the resulting cross-section patterns are illustrated for the two methods.

  7. Acoustics based assessment of respiratory diseases using GMM classification.

    PubMed

    Mayorga, P; Druzgalski, C; Morelos, R L; Gonzalez, O H; Vidales, J

    2010-01-01

    The focus of this paper is to present a method utilizing lung sounds for a quantitative assessment of patient health as it relates to respiratory disorders. In order to accomplish this, applicable traditional techniques from the speech processing domain were utilized to evaluate lung sounds obtained with a digital stethoscope. Traditional methods utilized in the evaluation of asthma involve auscultation and spirometry, but the utilization of more sensitive electronic stethoscopes, which are currently available, and the application of quantitative signal analysis methods offer opportunities for improved diagnosis. In particular, we propose an acoustic evaluation methodology based on Gaussian Mixture Models (GMM), which should assist in the broader analysis, identification, and diagnosis of asthma based on the frequency-domain analysis of wheezing and crackles.
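
    As a rough illustration of the proposed GMM classification stage, the sketch below fits one mixture model per diagnostic class to frame-level spectral features (e.g., MFCCs extracted from digital-stethoscope recordings) and labels a new recording by the highest total log-likelihood; the feature extraction and class labels are assumptions, not details taken from the paper.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def train_class_gmms(features_by_class, n_components=4):
            """Fit one diagonal-covariance GMM per class; each value in
            features_by_class is an (n_frames, n_features) array."""
            return {label: GaussianMixture(n_components,
                                           covariance_type="diag",
                                           random_state=0).fit(X)
                    for label, X in features_by_class.items()}

        def classify(gmms, frames):
            """Label a recording by the class whose GMM assigns the highest
            average log-likelihood over its feature frames."""
            scores = {label: gmm.score(frames) for label, gmm in gmms.items()}
            return max(scores, key=scores.get)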

  8. Use of the Maximum Likelihood Method in the Analysis of Chamber Air Dives

    DTIC Science & Technology

    1988-01-01

    the total gas pressure in compartment i, P0 is the current ambient pressure, and A and B are constants (0.0026 min^-1 ATA^-1 and 8.31 ATA) ... computer model (4), the Kidd-Stubbs 1971 decompression tables (11), and the current Defence and Civil Institute of Environmental Medicine (DCIEM) ... it could be applied. Since the models are not suitable for this test, then within these no-deco current limits of statistical theory, the results can ...

  9. Evaluation of the Kinetic Property of Single-Molecule Junctions by Tunneling Current Measurements.

    PubMed

    Harashima, Takanori; Hasegawa, Yusuke; Kiguchi, Manabu; Nishino, Tomoaki

    2018-01-01

    We investigated the formation and breaking of single-molecule junctions of two kinds of dithiol molecules by time-resolved tunneling current measurements in a metal nanogap. The resulting current trajectories were statistically analyzed to determine the single-molecule conductance and, more importantly, to reveal the kinetic properties of the single-molecule junction. These results suggest that combining single-molecule conductance measurements with statistical analysis is a promising method to uncover the kinetic properties of single-molecule junctions.
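
    The abstract does not spell out the statistical analysis, but a common way to extract a kinetic constant from such current trajectories is to pool the measured junction lifetimes and, assuming a single-step (exponential) breaking process, take the maximum-likelihood rate as the reciprocal mean lifetime; the numbers below are illustrative only.

        import numpy as np

        def breaking_rate(lifetimes):
            """MLE rate constant for exponentially distributed junction
            lifetimes, with its asymptotic standard error."""
            tau = np.asarray(lifetimes, dtype=float)
            k = 1.0 / tau.mean()
            return k, k / np.sqrt(tau.size)

        # Junction lifetimes (s) read off a time-resolved current trace
        tau = [0.12, 0.05, 0.31, 0.08, 0.22, 0.15, 0.04, 0.27]
        k, se = breaking_rate(tau)
        print(f"breaking rate ~ {k:.1f} +/- {se:.1f} s^-1")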

  10. The effect of conductor permeability on electric current transducers

    NASA Astrophysics Data System (ADS)

    Mirzaei, M.; Ripka, P.; Chirtsov, A.; Kaspar, P.; Vyhnanek, J.

    2018-04-01

    In this paper, experimental work and theoretical analysis are presented to analyze the influence of conductor permeability on the precision of yokeless current sensors. The results of finite-element method (FEM) simulations fit the measured field values around the conductor well. Finally, we evaluate the difference in the magnetic field distribution around non-magnetic and magnetic conductors. The calculated values show that the permeability of a ferromagnetic conductor significantly affects the reading of electric current sensors, even at DC.

  11. Development and Single-Laboratory Validation of a Liquid Chromatography Tandem Mass Spectrometry Method for Quantitation of Tetrodotoxin in Mussels and Oysters.

    PubMed

    Turner, Andrew D; Boundy, Michael J; Rapkova, Monika Dhanji

    2017-09-01

    In recent years, evidence has grown for the presence of tetrodotoxin (TTX) in bivalve mollusks, leading to the potential for consumers of contaminated products to be affected by Tetrodotoxin Shellfish Poisoning (TSP). A single-laboratory validation was conducted for the hydrophilic interaction LC (HILIC) tandem MS (MS/MS) analysis of TTX in common mussels and Pacific oysters, the bivalve species that have been found to contain TTXs in the United Kingdom in recent years. The method consists of a single-step dispersive extraction in 1% acetic acid, followed by a carbon SPE cleanup step before dilution and instrumental analysis. The full method was developed as a rapid tool for the quantitation of TTX, as well as the associated analogs 4-epi-TTX; 5,6,11-trideoxy TTX; 11-nor TTX-6-ol; 5-deoxy TTX; and 4,9-anhydro TTX. The method can also be run to acquire TTX together with the paralytic shellfish toxins. Results demonstrated acceptable method performance characteristics for specificity, linearity, recovery, ruggedness, repeatability, matrix variability, and within-laboratory reproducibility for the analysis of TTX. The LOD and LOQ were fit for purpose in comparison with the current action limit for TTX enforced in The Netherlands. In addition, aspects of method performance (LOD, LOQ, and within-laboratory reproducibility) were found to be satisfactory for three other TTX analogs (11-nor TTX-6-ol, 5-deoxy TTX, and 4,9-anhydro TTX). The method was found to be practical and suitable for use in regulatory testing, providing rapid turnaround of sample analysis. Plans currently underway for a full collaborative study to validate a HILIC-MS/MS method for paralytic shellfish poisoning toxins will be extended to include TTX in order to generate international acceptance, ultimately for use as an alternative official control testing method should regulatory controls be adopted.

  12. Building Energy Monitoring and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Feng, Wei; Lu, Alison

    This project aimed to develop a standard methodology for building energy data definition, collection, presentation, and analysis; apply the developed methods to a standardized energy monitoring platform, including hardware and software, to collect and analyze building energy use data; and compile offline statistical data and online real-time data in both countries for fully understanding the current status of building energy use. This helps decode the driving forces behind the discrepancy of building energy use between the two countries; identify gaps and deficiencies of current building energy monitoring, data collection, and analysis; and create knowledge and tools to collect and analyze good building energy data to provide valuable and actionable information for key stakeholders.

  13. Modeling Canadian Quality Control Test Program for Steroid Hormone Receptors in Breast Cancer: Diagnostic Accuracy Study.

    PubMed

    Pérez, Teresa; Makrestsov, Nikita; Garatt, John; Torlakovic, Emina; Gilks, C Blake; Mallett, Susan

    The Canadian Immunohistochemistry Quality Control program monitors clinical laboratory performance for estrogen receptor and progesterone receptor tests used in breast cancer treatment management in Canada. Current methods assess sensitivity and specificity at each time point, compared with a reference standard. We investigate alternative performance analysis methods to enhance the quality assessment. We used 3 methods of analysis: meta-analysis of sensitivity and specificity of each laboratory across all time points; sensitivity and specificity at each time point for each laboratory; and fitting models for repeated measurements to examine differences between laboratories adjusted by test and time point. Results show 88 laboratories participated in quality control at up to 13 time points using typically 37 to 54 histology samples. In meta-analysis across all time points no laboratories have sensitivity or specificity below 80%. Current methods, presenting sensitivity and specificity separately for each run, result in wide 95% confidence intervals, typically spanning 15% to 30%. Models of a single diagnostic outcome demonstrated that 82% to 100% of laboratories had no difference to reference standard for estrogen receptor and 75% to 100% for progesterone receptor, with the exception of 1 progesterone receptor run. Laboratories with significant differences to reference standard identified with Generalized Estimating Equation modeling also have reduced performance by meta-analysis across all time points. The Canadian Immunohistochemistry Quality Control program has a good design, and with this modeling approach has sufficient precision to measure performance at each time point and allow laboratories with a significantly lower performance to be targeted for advice.

  14. A computer analysis of reflex eyelid motion in normal subjects and in facial neuropathy.

    PubMed

    Somia, N N; Rash, G S; Epstein, E E; Wachowiak, M; Sundine, M J; Stremel, R W; Barker, J H; Gossman, D

    2000-12-01

    To demonstrate how computerized eyelid motion analysis can quantify the human reflex blink, seventeen normal subjects and 10 patients with unilateral facial nerve paralysis were analyzed. Eyelid closure is currently evaluated by systems primarily designed to assess lower/midfacial movements; these methods are subjective, difficult to reproduce, and measure only volitional closure. Reflex closure is responsible for eye hydration, and its evaluation demands dynamic analysis. A 60 Hz video camera incorporated into a helmet was used to analyze blinking. Reflective markers on the forehead and eyelids allowed for dynamic measurement of the reflex blink, from which eyelid displacement, velocity, and acceleration were calculated. The degree of synchrony between bilateral blinks was also determined. This study demonstrates that video motion analysis can describe normal and altered eyelid motions in a quantifiable manner; to our knowledge, this is the first study to measure dynamic reflex blinks. Eyelid closure may now be evaluated in kinematic terms. This technique could increase understanding of eyelid motion and permit more accurate evaluation of eyelid function, with immediate applications in the treatment of facial palsy affecting the reflex blink. Relevance: No method has been developed that objectively quantifies dynamic eyelid closure. Methods currently in use evaluate only volitional eyelid closure and are based on direct and indirect observer assessments. These methods are subjective and incapable of analyzing dynamic eyelid movements, which are critical to the maintenance of corneal hydration and comfort. A system that quantifies eyelid kinematics can provide a functional analysis of blink disorders and an objective evaluation of their treatment(s).
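
    The kinematic quantities reported (displacement, velocity, acceleration) follow from finite differences of the tracked marker position at the 60 Hz frame rate; a minimal sketch, with the marker trajectory left as a placeholder array:

        import numpy as np

        FS = 60.0  # camera frame rate (Hz), as in the study

        def blink_kinematics(y):
            """Displacement (mm), velocity (mm/s), and acceleration (mm/s^2)
            of an eyelid marker from its frame-by-frame vertical position."""
            y = np.asarray(y, dtype=float)
            displacement = y - y[0]
            velocity = np.gradient(y) * FS
            acceleration = np.gradient(velocity) * FS
            return displacement, velocity, acceleration

    In practice the raw marker positions would be low-pass filtered before differentiation, since differencing amplifies tracking noise.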

  15. Estimation of Lightning Levels on a Launcher Using a BEM-Compressed Model

    NASA Astrophysics Data System (ADS)

    Silly, J.; Chaigne, B.; Aspas-Puertolas, J.; Herlem, Y.

    2016-05-01

    As development cycles in the space industry are being considerably reduced, it seems mandatory to deploy in parallel fast analysis methods for engineering purposes, but without sacrificing accuracy. In this paper we present the application of such methods to early Phase A-B [1] evaluation of lightning constraints on a launch vehicle. A complete 3D parametric model of a launcher has thus been developed and simulated with a Boundary Element Method (BEM) frequency-domain simulator (equipped with a low-frequency algorithm). The time-domain values of the observed currents and fields are obtained by post-treatment using an inverse discrete Fourier transform (IDFT). This model is used for lightning studies; in particular, the simulations are useful for analysing the influence of lightning injected currents on the resulting currents circulating on external cable raceways. The description of the model and some of those results are presented in this article.
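
    The post-treatment step amounts to multiplying the frequency-domain transfer function returned by the BEM solver by the spectrum of the injected lightning waveform and inverting the result; the sketch below assumes a generic double-exponential return-stroke current and a placeholder first-order transfer function H(f), neither of which is taken from the paper.

        import numpy as np

        fs, n = 100e6, 2**16                    # sample rate (Hz), record length
        t = np.arange(n) / fs
        freqs = np.fft.rfftfreq(n, 1 / fs)

        # Illustrative double-exponential lightning current, 30 kA class
        i_inj = 30e3 * (np.exp(-t / 50e-6) - np.exp(-t / 0.5e-6))

        # H(f): raceway current per unit injected current, as a BEM frequency
        # sweep might provide it (placeholder first-order response here)
        H = 0.05 / (1 + 1j * freqs / 2e5)

        i_raceway = np.fft.irfft(np.fft.rfft(i_inj) * H, n)
        print("peak raceway current ~", abs(i_raceway).max(), "A")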

  16. Online Detection of Broken Rotor Bar Fault in Induction Motors by Combining Estimation of Signal Parameters via Min-norm Algorithm and Least Square Method

    NASA Astrophysics Data System (ADS)

    Wang, Pan-Pan; Yu, Qiang; Hu, Yong-Jun; Miao, Chang-Xin

    2017-11-01

    Current research in broken rotor bar (BRB) fault detection in induction motors is primarily focused on high-frequency-resolution analysis of the stator current. Compared with the discrete Fourier transform, parametric spectrum estimation techniques have higher frequency accuracy and resolution. However, the existing detection methods based on parametric spectrum estimation cannot realize online detection, owing to their large computational cost. To improve the efficiency of BRB fault detection, a new detection method based on the min-norm algorithm and least-squares estimation is proposed in this paper. First, the stator current is filtered using a band-pass filter and divided into short overlapped data windows. The min-norm algorithm is then applied to determine the frequencies of the fundamental and fault characteristic components within each overlapped data window. Next, based on the frequency values obtained, a model of the fault current signal is constructed. Subsequently, a linear least-squares problem, solved through singular value decomposition, is designed to estimate the amplitudes and phases of the related components. Finally, the proposed method is applied to a simulated current and an actual motor; the results indicate that the method retains the frequency accuracy of parametric spectrum estimation at a computational cost low enough for online detection.
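
    The least-squares stage is conventional once the min-norm step has supplied frequency estimates: with the frequencies fixed, the amplitudes and phases enter linearly and can be recovered from an SVD-backed least-squares solve. A minimal sketch (the test frequencies and signal are synthetic, not motor data):

        import numpy as np

        def fit_amplitudes_phases(x, fs, freqs):
            """Least-squares amplitude/phase estimates of known-frequency
            components in a signal x sampled at fs (Hz)."""
            t = np.arange(len(x)) / fs
            # One cosine and one sine column per frequency
            A = np.column_stack([f(2 * np.pi * fq * t)
                                 for fq in freqs for f in (np.cos, np.sin)])
            coef, *_ = np.linalg.lstsq(A, x, rcond=None)   # SVD-based solve
            c, s = coef[0::2], coef[1::2]
            return np.hypot(c, s), np.arctan2(-s, c)       # amplitudes, phases

        fs = 1e3
        t = np.arange(2000) / fs
        x = 10 * np.cos(2*np.pi*50*t) + 0.2 * np.cos(2*np.pi*46*t + 1.0)
        print(fit_amplitudes_phases(x, fs, [50.0, 46.0]))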

  17. Biomass accessibility analysis using electron tomography

    DOE PAGES

    Hinkle, Jacob D.; Ciesielski, Peter N.; Gruchalla, Kenny; ...

    2015-12-25

    Substrate accessibility to catalysts has been a dominant theme in theories of biomass deconstruction; however, current methods of quantifying accessibility do not elucidate mechanisms for increased accessibility due to changes in microstructure following pretreatment.

  18. Screening Workers: An Examination and Analysis of Practice and Public Policy.

    ERIC Educational Resources Information Center

    Greenfield, Patricia A.; And Others

    1989-01-01

    Discusses methods of screening job applicants and issues raised by screening procedures, including legal ramifications, current practices in Britain and the United States, future directions, and the employment interview. (JOW)

  19. Safe Passage Data Analysis: Interim Report

    DOT National Transportation Integrated Search

    1993-04-01

    The purpose of this report is to describe quantitatively the costs and benefits of screener : proficiency evaluation and reporting systems (SPEARS) equipment, particularly computer-based : instruction (CBI) systems, compared to current methods of tra...

  20. Detection of S-Nitrosothiols

    PubMed Central

    Diers, Anne R.; Keszler, Agnes; Hogg, Neil

    2015-01-01

    BACKGROUND S-Nitrosothiols have been recognized as biologically-relevant products of nitric oxide that are involved in many of the diverse activities of this free radical. SCOPE OF REVIEW This review serves to discuss current methods for the detection and analysis of protein S-nitrosothiols. The major methods of S-nitrosothiol detection include chemiluminescence-based methods and switch-based methods, each of which comes in various flavors with advantages and caveats. MAJOR CONCLUSIONS The detection of S-nitrosothiols is challenging and prone to many artifacts. Accurate measurements require an understanding of the underlying chemistry of the methods involved and the use of appropriate controls. GENERAL SIGNIFICANCE Nothing is more important to a field of research than robust methodology that is generally trusted. The field of S-Nitrosation has developed such methods but, as S-nitrosothiols are easy to introduce as artifacts, it is vital that current users learn from the lessons of the past. PMID:23988402

  1. COMETS2: An advanced MATLAB toolbox for the numerical analysis of electric fields generated by transcranial direct current stimulation.

    PubMed

    Lee, Chany; Jung, Young-Jin; Lee, Sang Jun; Im, Chang-Hwan

    2017-02-01

    Since there is no way to measure the electric current generated by transcranial direct current stimulation (tDCS) inside the human head through in vivo experiments, numerical analysis based on the finite element method has been widely used to estimate the electric field inside the head. In 2013, we released a MATLAB toolbox named COMETS, which has been used by a number of groups and has helped researchers gain insight into the electric field distribution during stimulation. The aim of this study was to develop an advanced MATLAB toolbox, named COMETS2, for the numerical analysis of the electric field generated by tDCS. COMETS2 can generate rectangular pad electrodes of any size at any position on the scalp surface. To reduce the large computational burden of repeatedly testing multiple electrode locations and sizes, a new technique to decompose the global stiffness matrix was proposed. As examples of potential applications, we observed the effects of the sizes and displacements of electrodes on the results of the electric field analysis. The proposed mesh decomposition method significantly enhanced the overall computational efficiency. We implemented an automatic electrode modeler for the first time and proposed a new technique to enhance the computational efficiency. In this paper, an efficient toolbox for tDCS analysis is introduced (freely available at http://www.cometstool.com). It is expected that COMETS2 will be a useful toolbox for researchers who want to benefit from the numerical analysis of electric fields generated by tDCS. Copyright © 2016. Published by Elsevier B.V.
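
    The paper's stiffness-matrix decomposition is not reproduced here, but the motivation is easy to demonstrate: in an FE forward model the conductivity part of the system matrix is fixed, so factorizing it once and re-solving for each electrode montage (each montage only changes the right-hand side, in this simplified view) avoids repeating the expensive step. A sketch with a stand-in sparse SPD matrix:

        import numpy as np
        from scipy.sparse import identity, random as sprandom
        from scipy.sparse.linalg import splu

        n = 2000
        B = sprandom(n, n, density=5e-3, random_state=0)
        K = (B @ B.T + 10 * identity(n)).tocsc()   # stand-in "stiffness" matrix

        lu = splu(K)                               # factor once (expensive)

        rng = np.random.default_rng(1)
        for montage in range(5):                   # many electrode placements
            b = np.zeros(n)
            src, sink = rng.integers(0, n, 2)      # hypothetical electrode nodes
            b[src], b[sink] = 1.0, -1.0            # +/- unit current injection
            v = lu.solve(b)                        # nodal potentials, cheap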

  2. An online credit evaluation method based on AHP and SPA

    NASA Astrophysics Data System (ADS)

    Xu, Yingtao; Zhang, Ying

    2009-07-01

    Online credit evaluation is the foundation for the establishment of trust and for the management of risk between buyers and sellers in e-commerce. In this paper, a new credit evaluation method based on the analytic hierarchy process (AHP) and set pair analysis (SPA) is presented to determine the credibility of electronic commerce participants. It solves some of the drawbacks found in classical credit evaluation methods and broadens the scope of current approaches. Both qualitative and quantitative indicators are considered in the proposed method, and an overall credit score is then achieved from the optimal perspective. In the end, a case analysis of China Garment Network is provided for illustrative purposes.
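
    For the AHP part, indicator weights are conventionally taken as the normalized principal eigenvector of the reciprocal pairwise-comparison matrix, with a consistency index as a sanity check; the comparison values below are hypothetical, and the SPA stage is not shown.

        import numpy as np

        def ahp_weights(M):
            """Weights from the principal eigenvector of a reciprocal
            pairwise-comparison matrix M, plus the consistency index."""
            vals, vecs = np.linalg.eig(M)
            i = np.argmax(vals.real)
            w = np.abs(vecs[:, i].real)
            w /= w.sum()
            n = M.shape[0]
            ci = (vals[i].real - n) / (n - 1)   # ~0 for consistent judgments
            return w, ci

        # Hypothetical comparisons among three credit indicators
        M = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])
        print(ahp_weights(M))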

  3. Local Laplacian Coding From Theoretical Analysis of Local Coding Schemes for Locally Linear Classification.

    PubMed

    Pang, Junbiao; Qin, Lei; Zhang, Chunjie; Zhang, Weigang; Huang, Qingming; Yin, Baocai

    2015-12-01

    Local coordinate coding (LCC) is a framework for approximating a Lipschitz-smooth function by combining linear functions into a nonlinear one. For locally linear classification, LCC requires a coding scheme that heavily determines the nonlinear approximation ability, posing two main challenges: 1) locality, i.e., faraway anchors should have smaller influence on the current data point; and 2) flexibility, i.e., balancing the reconstruction of the current data point against locality. In this paper, we address the problem through a theoretical analysis of the simplest local coding schemes, i.e., local Gaussian coding and local student coding, and propose local Laplacian coding (LPC) to achieve both locality and flexibility. We apply LPC to locally linear classifiers to solve diverse classification tasks. Performance comparable to or exceeding that of state-of-the-art methods demonstrates the effectiveness of the proposed method.

  4. Numerical study of read scheme in one-selector one-resistor crossbar array

    NASA Astrophysics Data System (ADS)

    Kim, Sungho; Kim, Hee-Dong; Choi, Sung-Jin

    2015-12-01

    A comprehensive numerical circuit analysis of read schemes for a one-selector one-resistance-change-memory (1S1R) crossbar array is carried out. Three schemes, the ground, V/2, and V/3 schemes, are compared with each other in terms of sensing margin and power consumption. Without the aid of a complex analytical approach or SPICE-based simulation, a simple numerical iteration method is developed to simulate all current flows and node voltages within a crossbar array. Understanding such phenomena is essential for successfully evaluating the electrical specifications of selectors for suppressing the intrinsic drawbacks of crossbar arrays, such as sneak current paths and series line resistance problems. This method provides a quantitative tool for the accurate analysis of crossbar arrays and provides guidelines for developing an optimal read scheme, array configuration, and selector device specifications.
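
    Neglecting line resistance (which the paper's full iteration does not), the bias patterns of the three schemes can already be compared with a back-of-the-envelope leakage estimate: in an m x m array the ground scheme leaves m-1 cells at the full read voltage, V/2 leaves 2(m-1) half-selected cells at half of it, and V/3 leaves all (m-1)^2 + 2(m-1) unselected cells at a third of it. A sketch with a toy exponential selector model (all device parameters hypothetical):

        import numpy as np

        def sel_current(v, i0=1e-9, v0=0.12):
            """Toy polarity-symmetric nonlinear selector+cell I-V."""
            return np.sign(v) * i0 * np.expm1(np.abs(v) / v0)

        def sneak_current(scheme, m=128, vread=1.0):
            """Total current through non-selected cells, ideal drivers."""
            if scheme == "ground":
                bias, count = vread, m - 1
            elif scheme == "V/2":
                bias, count = vread / 2, 2 * (m - 1)
            elif scheme == "V/3":
                bias, count = vread / 3, (m - 1) ** 2 + 2 * (m - 1)
            return count * sel_current(bias)

        for s in ("ground", "V/2", "V/3"):
            print(s, sneak_current(s), "A")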

  5. A survey of MRI-based medical image analysis for brain tumor studies

    NASA Astrophysics Data System (ADS)

    Bauer, Stefan; Wiest, Roland; Nolte, Lutz-P.; Reyes, Mauricio

    2013-07-01

    MRI-based medical image analysis for brain tumor studies is gaining attention in recent times due to an increased need for efficient and objective evaluation of large amounts of data. While the pioneering approaches applying automated methods for the analysis of brain tumor images date back almost two decades, the current methods are becoming more mature and coming closer to routine clinical application. This review aims to provide a comprehensive overview by giving a brief introduction to brain tumors and imaging of brain tumors first. Then, we review the state of the art in segmentation, registration and modeling related to tumor-bearing brain images with a focus on gliomas. The objective in the segmentation is outlining the tumor including its sub-compartments and surrounding tissues, while the main challenge in registration and modeling is the handling of morphological changes caused by the tumor. The qualities of different approaches are discussed with a focus on methods that can be applied on standard clinical imaging protocols. Finally, a critical assessment of the current state is performed and future developments and trends are addressed, giving special attention to recent developments in radiological tumor assessment guidelines.

  6. Display format, highlight validity, and highlight method: Their effects on search performance

    NASA Technical Reports Server (NTRS)

    Donner, Kimberly A.; Mckay, Tim D.; Obrien, Kevin M.; Rudisill, Marianne

    1991-01-01

    Display format and highlight validity were shown to affect visual display search performance; however, past studies were conducted on small, artificial displays of alphanumeric stimuli. A study manipulating these variables was conducted using realistic, complex Space Shuttle information displays. A 2x2x3 within-subjects analysis of variance found that search times were faster for items in reformatted displays than in current displays. Responses to validly highlighted items were significantly faster than responses to non- or invalidly highlighted items. The significant format-by-highlight-validity interaction showed that there was little difference in response time between current and reformatted displays when highlighting was validly applied; under the non- or invalid-highlight conditions, however, search times were faster with reformatted displays. A separate within-subjects analysis of variance of display format, highlight validity, and several highlight methods did not reveal a main effect of highlight method. In addition, observed display search times were compared to search times predicted by Tullis' Display Analysis Program. The benefits of highlighting and reformatting displays to enhance search, and the necessity of considering highlight validity and format characteristics in tandem when predicting search performance, are discussed.

  7. Genome-scale stoichiometry analysis to elucidate the innate capability of the cyanobacterium Synechocystis for electricity generation.

    PubMed

    Mao, Longfei; Verwoerd, Wynand S

    2013-10-01

    Synechocystis sp. PCC 6803 has been considered as a promising biocatalyst for electricity generation in recent microbial fuel cell research. However, the innate maximum current production potential and underlying metabolic pathways supporting the high current output are still unknown. This is mainly due to the fact that the high-current production cell phenotype results from the interaction among hundreds of reactions in the metabolism and it is impossible for reductionist methods to characterize the pathway selection in such a metabolic state. In this study, we employed computational metabolic techniques, flux balance analysis, and flux variability analysis, to exploit the maximum current outputs of Synechocystis sp. PCC 6803, in five electron transfer cases, namely, ferredoxin- and plastoquinol-dependent electron transfers under photoautotrophic cultivation, and NADH-dependent mediated electron transfer under photoautotrophic, heterotrophic, and mixotrophic conditions. In these five modes, the maximum current outputs were computed as 0.198, 0.7918, 0.198, 0.4652, and 0.4424 A gDW⁻¹, respectively. Comparison of the five operational modes suggests that plastoquinol-/c-type cytochrome-targeted electricity generation had an advantage of liberating the highest current output achievable for Synechocystis sp. PCC 6803. On the other hand, the analysis indicates that the currency metabolite, NADH-, dependent electricity generation can rely on a number of reactions from different pathways, and is thus more robust against environmental perturbations.
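
    Flux balance analysis itself is a linear program: maximize a target flux subject to steady-state mass balance S v = 0 and flux bounds. The toy three-reaction network below (uptake -> conversion -> "anode" electron export) only illustrates the formulation; it is in no way the genome-scale Synechocystis model used in the paper.

        import numpy as np
        from scipy.optimize import linprog

        # Rows: metabolites A, B; columns: v_uptake, v_conversion, v_anode
        S = np.array([[1.0, -1.0,  0.0],
                      [0.0,  1.0, -1.0]])
        bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10

        # linprog minimizes, so maximize v_anode via a sign flip
        res = linprog(c=[0, 0, -1.0], A_eq=S, b_eq=np.zeros(2),
                      bounds=bounds, method="highs")
        print("maximum anode flux:", res.x[2])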

  8. Identification of Logic Relationships between Genes and Subtypes of Non-Small Cell Lung Cancer

    PubMed Central

    Su, Yansen; Pan, Linqiang

    2014-01-01

    Non-small cell lung cancer (NSCLC) has two major subtypes: adenocarcinoma (AC) and squamous cell carcinoma (SCC). The diagnosis and treatment of NSCLC are hindered by the limited knowledge of the pathogenesis mechanisms of the NSCLC subtypes, making it necessary to study the molecular mechanisms associated with AC and SCC. In this work, we improved the logic analysis algorithm to mine the sufficient and necessary conditions for the presence states (presence or absence) of phenotypes. We applied our method to AC and SCC specimens and identified lower- and higher-order logic relationships between genes and the two subtypes of NSCLC. The discovered relationships were independent of the specimens selected, and their significance was validated by statistical tests. Compared with two earlier methods (the non-negative matrix factorization method and the relevance analysis method), the current method outperformed them in recall rate and classification accuracy on NSCLC and normal specimens. Among the biomarkers we obtained, some genes have already been used to distinguish AC from SCC in practice, while six others were newly discovered biomarkers for distinguishing the subtypes. Furthermore, NKX2-1 has been considered a molecular target for the targeted therapy of AC, and the other genes may be novel molecular targets. By gene ontology analysis, we found that two biological processes ('epidermis development' and 'cell adhesion') were closely related to the tumorigenesis of the NSCLC subtypes. More generally, the current method could be extended to other complex diseases for distinguishing subtypes and detecting molecular targets for targeted therapy. PMID:24743794

  9. Battery Capacity Fading Estimation Using a Force-Based Incremental Capacity Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samad, Nassim A.; Kim, Youngki; Siegel, Jason B.

    Traditionally, health monitoring techniques in lithium-ion batteries rely on voltage and current measurements. A novel method of using a mechanical rather than an electrical signal in the incremental capacity analysis (ICA) method is introduced in this paper. This method derives the incremental capacity curves based on measured force (ICF) instead of voltage (ICV). The force is measured on the surface of a cell under compression in a fixture that replicates a battery pack assembly and preloading. The analysis is performed on data collected from cycling encased prismatic lithium-ion nickel-manganese-cobalt oxide (NMC) cells. For the NMC chemistry, the ICF method can complement or replace the ICV method for the following reasons. The identified ICV peaks are centered around 40% of state of charge (SOC), while the peaks of the ICF method are centered around 70% of SOC, indicating that the ICF can be used more often, because it is more likely that an electric vehicle (EV) or a plug-in hybrid electric vehicle (PHEV) will traverse the 70% SOC range than the 40% SOC range. In addition, the signal-to-noise ratio (SNR) of the force signal is four times larger than that of the voltage signal using laboratory-grade sensors. The proposed ICF method is shown to achieve 0.42% accuracy in capacity estimation during a low C-rate constant-current discharge. Future work will investigate the application of the capacity estimation technique under charging and operation at high C-rates by addressing the transient behavior of force, so that an online methodology for capacity estimation can be developed.

  10. Battery Capacity Fading Estimation Using a Force-Based Incremental Capacity Analysis

    DOE PAGES

    Samad, Nassim A.; Kim, Youngki; Siegel, Jason B.; ...

    2016-05-27

    Traditionally, health monitoring techniques in lithium-ion batteries rely on voltage and current measurements. A novel method of using a mechanical rather than an electrical signal in the incremental capacity analysis (ICA) method is introduced in this paper. This method derives the incremental capacity curves based on measured force (ICF) instead of voltage (ICV). The force is measured on the surface of a cell under compression in a fixture that replicates a battery pack assembly and preloading. The analysis is performed on data collected from cycling encased prismatic lithium-ion nickel-manganese-cobalt oxide (NMC) cells. For the NMC chemistry, the ICF method can complement or replace the ICV method for the following reasons. The identified ICV peaks are centered around 40% of state of charge (SOC), while the peaks of the ICF method are centered around 70% of SOC, indicating that the ICF can be used more often, because it is more likely that an electric vehicle (EV) or a plug-in hybrid electric vehicle (PHEV) will traverse the 70% SOC range than the 40% SOC range. In addition, the signal-to-noise ratio (SNR) of the force signal is four times larger than that of the voltage signal using laboratory-grade sensors. The proposed ICF method is shown to achieve 0.42% accuracy in capacity estimation during a low C-rate constant-current discharge. Future work will investigate the application of the capacity estimation technique under charging and operation at high C-rates by addressing the transient behavior of force, so that an online methodology for capacity estimation can be developed.
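
    In either variant, the incremental capacity curve is simply the derivative of capacity with respect to the chosen signal, voltage for ICV or measured force for ICF, typically after smoothing; a minimal sketch, with the logged discharge data left as placeholders:

        import numpy as np

        def incremental_capacity(q, signal, smooth=25):
            """dQ/d(signal) versus signal, where signal is cell voltage (ICV)
            or pack-face force (ICF) sampled during a constant-current
            discharge; a moving average suppresses differentiation noise."""
            k = np.ones(smooth) / smooth
            s = np.convolve(signal, k, mode="valid")
            qs = np.convolve(q, k, mode="valid")
            ds = np.diff(s)
            ds[ds == 0] = np.nan              # guard flat plateaus
            return s[1:], np.diff(qs) / ds

        # q, v, f = capacity (Ah), voltage (V), force (N) logged at 1 Hz:
        # v_axis, icv = incremental_capacity(q, v)
        # f_axis, icf = incremental_capacity(q, f)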

  11. Measuring molecular biomarkers in epidemiologic studies: laboratory techniques and biospecimen considerations.

    PubMed

    Erickson, Heidi S

    2012-09-28

    The future of personalized medicine depends on the ability to efficiently and rapidly elucidate a reliable set of disease-specific molecular biomarkers. High-throughput molecular biomarker analysis methods have been developed to identify disease risk, diagnostic, prognostic, and therapeutic targets in human clinical samples. Currently, high throughput screening allows us to analyze thousands of markers from one sample or one marker from thousands of samples and will eventually allow us to analyze thousands of markers from thousands of samples. Unfortunately, the inherent nature of current high throughput methodologies, clinical specimens, and cost of analysis is often prohibitive for extensive high throughput biomarker analysis. This review summarizes the current state of high throughput biomarker screening of clinical specimens applicable to genetic epidemiology and longitudinal population-based studies with a focus on considerations related to biospecimens, laboratory techniques, and sample pooling. Copyright © 2012 John Wiley & Sons, Ltd.

  12. Advancing Usability Evaluation through Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; David I. Gertman

    2005-07-01

    This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety-critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis for heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues.

  13. Probabilistic Structural Analysis Methods (PSAM) for Select Space Propulsion System Components

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Probabilistic Structural Analysis Methods (PSAM) are described for the probabilistic structural analysis of engine components for current and future space propulsion systems. Components for these systems are subjected to stochastic thermomechanical launch loads. Uncertainty or randomness also occurs in material properties, structural geometry, and boundary conditions. Material property stochasticity, such as in the modulus of elasticity or yield strength, exists in every structure and is a consequence of variations in material composition and manufacturing processes. Procedures are outlined for computing the probabilistic structural response or reliability of structural components. The response variables include static or dynamic deflections, strains, and stresses at one or several locations, natural frequencies, fatigue or creep life, etc. Sample cases illustrate how the PSAM methods and codes simulate input uncertainties and compute probabilistic response or reliability using a finite element model with probabilistic methods.
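
    The flavor of such a probabilistic structural computation can be shown with a closed-form stand-in for the finite element model: sample the random inputs, propagate each sample through the response function, and read off exceedance probabilities. Here a cantilever tip deflection replaces the engine-component FE response, and all distributions are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        E = rng.normal(70e9, 3.5e9, n)             # modulus of elasticity (Pa)
        P = rng.lognormal(np.log(5e3), 0.15, n)    # tip load (N)
        L, I = 1.2, 2e-6                           # length (m), area moment (m^4)

        delta = P * L**3 / (3 * E * I)             # cantilever tip deflection (m)

        print("P(deflection > 22 mm) ~", np.mean(delta > 0.022))
        print("95th-percentile deflection (m):", np.quantile(delta, 0.95))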

  14. On simulation of local fluxes in molecular junctions

    NASA Astrophysics Data System (ADS)

    Cabra, Gabriel; Jensen, Anders; Galperin, Michael

    2018-05-01

    We present a pedagogical review of the current density simulation in molecular junction models indicating its advantages and deficiencies in analysis of local junction transport characteristics. In particular, we argue that current density is a universal tool which provides more information than traditionally simulated bond currents, especially when discussing inelastic processes. However, current density simulations are sensitive to the choice of basis and electronic structure method. We note that while discussing the local current conservation in junctions, one has to account for the source term caused by the open character of the system and intra-molecular interactions. Our considerations are illustrated with numerical simulations of a benzenedithiol molecular junction.

  15. Development and verification of local/global analysis techniques for laminated composites

    NASA Technical Reports Server (NTRS)

    Griffin, O. Hayden, Jr.

    1989-01-01

    Analysis and design methods for laminated composite materials have been the subject of considerable research over the past 20 years and are currently well developed. In performing the detailed three-dimensional analyses that are often required in proximity to discontinuities, however, analysts often encounter difficulties due to large models. Even with the current availability of powerful computers, required models are often too large to run, from either a resource or a time standpoint. There are several approaches that can permit such analyses, including substructuring, the use of superelements or transition elements, and the global/local approach. This effort is based on the so-called zoom technique for global/local analysis, where a global analysis is run and the results of that analysis are applied to a smaller region as boundary conditions, in as many iterations as are required to attain an analysis of the desired region. Before beginning the global/local analyses, it was necessary to evaluate the accuracy of the three-dimensional elements currently implemented in the Computational Structural Mechanics (CSM) Testbed. It was also desired to install, using the Experimental Element Capability, a number of displacement-formulation elements which have well-known behavior when used for the analysis of laminated composites.

  16. Determination of Mercury in Aqueous and Geologic Materials by Continuous Flow-Cold Vapor-Atomic Fluorescence Spectrometry (CVAFS)

    USGS Publications Warehouse

    Hageman, Philip L.

    2007-01-01

    New methods for the determination of total mercury in geologic materials and dissolved mercury in aqueous samples have been developed that will replace the methods currently (2006) in use. The new methods eliminate the use of sodium dichromate (Na2Cr2O7 ?2H2O) as an oxidizer and preservative and significantly lower the detection limit for geologic and aqueous samples. The new methods also update instrumentation from the traditional use of cold vapor-atomic absorption spectrometry to cold vapor-atomic fluorescence spectrometry. At the same time, the new digestion procedures for geologic materials use the same size test tubes, and the same aluminum heating block and hot plate as required by the current methods. New procedures for collecting and processing of aqueous samples use the same procedures that are currently (2006) in use except that the samples are now preserved with concentrated hydrochloric acid/bromine monochloride instead of sodium dichromate/nitric acid. Both the 'old' and new methods have the same analyst productivity rates. These similarities should permit easy migration to the new methods. Analysis of geologic and aqueous reference standards using the new methods show that these procedures provide mercury recoveries that are as good as or better than the previously used methods.

  17. Investigation of the effects of external current systems on the MAGSAT data utilizing grid cell modeling techniques

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M. (Principal Investigator)

    1982-01-01

    The feasibility of modeling magnetic fields due to certain electrical currents flowing in the Earth's ionosphere and magnetosphere was investigated. A method was devised to carry out forward modeling of the magnetic perturbations that arise from space currents. The procedure utilizes a linear current element representation of the distributed electrical currents: the finite-thickness elements are combined into loops, which are in turn combined into cells having their base in the ionosphere. In addition to the extensive field modeling, additional software was developed for the reduction and analysis of the MAGSAT data in terms of the external current effects. Direct comparisons between the models and the MAGSAT data are possible.
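
    A linear-current-element forward model is numerically just Biot-Savart summation over short straight segments; the sketch below evaluates the ground-level perturbation of a single circular "ionospheric" loop, a geometry and current chosen purely for illustration and far simpler than the loop-and-cell structure described above.

        import numpy as np

        def biot_savart(points, path, current):
            """Field (T) at points from a current (A) along a polyline path
            (m), one straight element per segment (midpoint approximation)."""
            mu0 = 4e-7 * np.pi
            B = np.zeros_like(points, dtype=float)
            for a, b in zip(path[:-1], path[1:]):
                dl = b - a
                r = points - 0.5 * (a + b)
                r3 = np.linalg.norm(r, axis=1, keepdims=True) ** 3
                B += mu0 * current / (4 * np.pi) * np.cross(dl, r) / r3
            return B

        # 100 kA loop, 500 km radius, 110 km altitude; field at the ground
        th = np.linspace(0, 2 * np.pi, 401)
        loop = np.column_stack([5e5 * np.cos(th), 5e5 * np.sin(th),
                                np.full_like(th, 1.1e5)])
        print(biot_savart(np.array([[0.0, 0.0, 0.0]]), loop, 1e5))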

  18. Automating Frame Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanfilippo, Antonio P.; Franklin, Lyndsey; Tratz, Stephen C.

    2008-04-01

    Frame Analysis has come to play an increasingly strong role in the study of social movements in Sociology and Political Science. While significant steps have been made in providing a theory of frames and framing, a systematic characterization of the frame concept is still largely lacking, and there are no recognized criteria and methods that can be used to identify and marshal frame evidence reliably and in a time- and cost-effective manner. Consequently, current Frame Analysis work is still too reliant on manual annotation and subjective interpretation. The goal of this paper is to present an approach to the representation, acquisition and analysis of frame evidence which leverages Content Analysis, Information Extraction and Semantic Search methods to provide a systematic treatment of Frame Analysis and automate frame annotation.

  19. [Applications of meta-analysis in multi-omics].

    PubMed

    Han, Mingfei; Zhu, Yunping

    2014-07-01

    As a statistical method for integrating multiple features and multiple datasets, meta-analysis was introduced to the field of life science in the 1990s. With the rapid advances in high-throughput technologies, the life omics, whose core comprises genomics, transcriptomics and proteomics, are becoming the new hot spot of life science. Although the fast output of massive data has promoted the development of omics studies, it results in excessive data that are difficult to integrate systematically. In this case, meta-analysis is frequently applied to analyze different types of data and is continuously being improved. Here, we first summarize the representative meta-analysis methods systematically, then review the current applications of meta-analysis in various omics fields, and finally discuss the remaining problems and the future development of meta-analysis.

  20. A brief overview on radon measurements in drinking water.

    PubMed

    Jobbágy, Viktor; Altzitzoglou, Timotheos; Malo, Petya; Tanner, Vesa; Hult, Mikael

    2017-07-01

    The aim of this paper is to present information about currently used standard and routine methods for radon analysis in drinking water. An overview is given of the current situation and the performance of different measurement methods based on literature data. The following parameters are compared and discussed: initial sample volume and sample preparation, detection systems, minimum detectable activity, counting efficiency, interferences, measurement uncertainty, sample capacity and overall turnaround time. Moreover, the parametric levels for radon in drinking water from the different legislations and directives/guidelines on radon are presented. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Calibration Designs for Non-Monolithic Wind Tunnel Force Balances

    NASA Technical Reports Server (NTRS)

    Johnson, Thomas H.; Parker, Peter A.; Landman, Drew

    2010-01-01

    This research paper investigates current experimental designs and regression models for calibrating internal wind tunnel force balances of non-monolithic design. Such calibration methods are necessary for this class of balance because its electrical response depends on the sign of the applied forces and moments. This dependency gives rise to discontinuities in the response surfaces that are not easily modeled using traditional response surface methodologies. An analysis of currently recommended calibration models is shown to lead to correlated response model terms. Alternative modeling methods are explored which feature orthogonal or near-orthogonal terms.

  2. DLocalMotif: a discriminative approach for discovering local motifs in protein sequences.

    PubMed

    Mehdi, Ahmed M; Sehgal, Muhammad Shoaib B; Kobe, Bostjan; Bailey, Timothy L; Bodén, Mikael

    2013-01-01

    Local motifs are patterns of DNA or protein sequences that occur within a sequence interval relative to a biologically defined anchor or landmark. Current protein motif discovery methods do not adequately consider such constraints to identify biologically significant motifs that are only weakly over-represented but spatially confined. Using negatives, i.e., sequences known not to contain a local motif, can further increase the specificity of their discovery. This article introduces the method DLocalMotif, which makes use of positional information and negative data for local motif discovery in protein sequences. DLocalMotif combines three scoring functions, measuring degrees of motif over-representation, entropy and spatial confinement, specifically designed to discriminatively exploit the availability of negative data. The method is shown to outperform current methods that use only a subset of these motif characteristics. We apply the method to several biological datasets. The analysis of peroxisomal targeting signals uncovers several novel motifs that occur immediately upstream of the dominant peroxisomal targeting signal-1 signal. The analysis of proline-tyrosine nuclear localization signals uncovers multiple novel motifs that overlap with C2H2 zinc finger domains. We also evaluate the method on classical nuclear localization signals and endoplasmic reticulum retention signals and find that DLocalMotif successfully recovers biologically relevant sequence properties. DLocalMotif is available at http://bioinf.scmb.uq.edu.au/dlocalmotif/

  3. A two dimensional power spectral estimate for some nonstationary processes. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Smith, Gregory L.

    1989-01-01

    A two-dimensional estimate for the power spectral density of a nonstationary process is being developed. The estimate will be applied to helicopter noise data, which are clearly nonstationary. The acoustic pressure from the isolated main rotor and the isolated tail rotor is known to be periodically correlated (PC), and the combined noise from the main and tail rotors is assumed to be correlation autoregressive (CAR). The results of this nonstationary analysis will be compared with the current method of assuming that the data are stationary and analyzing them as such. Another method of analysis is to introduce a random phase shift into the data, as shown by Papoulis, to produce a time history which can then be accurately modeled as stationary. This method will also be investigated for the helicopter data. A method used to determine the period of a PC process when the period is not known is discussed, since the period of a PC process must be known in order to produce an accurate spectral representation of the process. The spectral estimate is developed, and its bias and variability are discussed. Finally, the current method for analyzing nonstationary data is compared to that of using a two-dimensional spectral representation. In addition, the method of phase shifting the data is examined.

  4. An Excel-based implementation of the spectral method of action potential alternans analysis.

    PubMed

    Pearman, Charles M

    2014-12-01

    Action potential (AP) alternans has been well established as a mechanism of arrhythmogenesis and sudden cardiac death. Proper interpretation of AP alternans requires a robust method of alternans quantification. Traditional methods of alternans analysis neglect higher order periodicities that may have greater pro-arrhythmic potential than classical 2:1 alternans. The spectral method of alternans analysis, already widely used in the related study of microvolt T-wave alternans, has also been used to study AP alternans. Software to meet the specific needs of AP alternans analysis is not currently available in the public domain. An AP analysis tool is implemented here, written in Visual Basic for Applications and using Microsoft Excel as a shell. This performs a sophisticated analysis of alternans behavior allowing reliable distinction of alternans from random fluctuations, quantification of alternans magnitude, and identification of which phases of the AP are most affected. In addition, the spectral method has been adapted to allow detection and quantification of higher order regular oscillations. Analysis of action potential morphology is also performed. A simple user interface enables easy import, analysis, and export of collated results. © 2014 The Author. Physiological Reports published by Wiley Periodicals, Inc. on behalf of the American Physiological Society and The Physiological Society.
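
    The spectral method at the heart of the tool works on a matrix of aligned beats: the beat-to-beat series at each sample point is Fourier transformed, the aggregate power at 0.5 cycles/beat is compared with a reference noise band, and the ratio is reported as a k-score. A compact numpy rendering of that idea (the noise band and synthetic data are illustrative; this is not the toolbox's VBA code):

        import numpy as np

        def alternans_kscore(beats):
            """Spectral alternans k-score from an (n_beats, n_samples)
            matrix of aligned action potentials."""
            spec = np.abs(np.fft.rfft(beats - beats.mean(0), axis=0)) ** 2
            spec = spec.mean(axis=1)             # aggregate over the AP
            f = np.fft.rfftfreq(beats.shape[0])  # cycles/beat
            noise = spec[(f > 0.33) & (f < 0.48)]
            return (spec[-1] - noise.mean()) / noise.std()

        rng = np.random.default_rng(0)
        base = np.tile(np.hanning(200), (64, 1))
        beats = base * (1 + 0.02 * (-1.0) ** np.arange(64)[:, None])
        print(alternans_kscore(beats + 0.01 * rng.standard_normal((64, 200))))
        # k-scores above ~3 are conventionally taken as significant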

  5. Compression After Impact Experiments and Analysis on Honeycomb Core Sandwich Panels with Thin Facesheets

    NASA Technical Reports Server (NTRS)

    McQuigg, Thomas D.

    2011-01-01

    A better understanding of the effect of impact damage on composite structures is necessary to give the engineer the ability to design safe, efficient structures. Current composite structures suffer severe strength reduction under compressive loading conditions due to even light damage, such as from low-velocity impact. A review is undertaken to assess the current state of development in the areas of experimental testing and analysis methods. A set of experiments on honeycomb core sandwich panels with thin woven fiberglass cloth facesheets is described, which includes detailed instrumentation and unique observation techniques.

  6. Note: Eddy current displacement sensors independent of target conductivity.

    PubMed

    Wang, Hongbo; Li, Wei; Feng, Zhihua

    2015-01-01

    Eddy current sensors (ECSs) are widely used for non-contact displacement measurement. In this note, the quantitative error of an ECS caused by target conductivity was analyzed using a complex image method. The response curves (L-x) of the ECS with different targets were similar and could be overlapped by shifting the curves along the x direction by √2δ/2, where δ is the skin depth in the target. Both finite element analysis and experiments match the theoretical analysis well, which indicates that the measurement error of high-precision ECSs caused by target conductivity can be completely eliminated, and that ECSs can measure different materials precisely without calibration.
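
    The shift is expressed in terms of the standard skin depth, so the correction is straightforward to evaluate; the material and frequency values assumed below are illustrative.

        import numpy as np

        def skin_depth(rho, f, mu_r=1.0):
            """Skin depth (m): sqrt(2*rho / (omega * mu0 * mu_r))."""
            mu0 = 4e-7 * np.pi
            return np.sqrt(2 * rho / (2 * np.pi * f * mu0 * mu_r))

        delta = skin_depth(rho=2.82e-8, f=1e6)     # aluminum target, 1 MHz
        print(f"delta = {delta * 1e6:.1f} um; "
              f"curve shift sqrt(2)*delta/2 = {np.sqrt(2)*delta/2*1e6:.1f} um")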

  7. A Two-Step Approach to Uncertainty Quantification of Core Simulators

    DOE PAGES

    Yankov, Artem; Collins, Benjamin; Klein, Markus; ...

    2012-01-01

    For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the "two-step" method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.

  8. iTemplate: A template-based eye movement data analysis approach.

    PubMed

    Xiao, Naiqi G; Lee, Kang

    2018-02-08

    Current eye movement data analysis methods rely on defining areas of interest (AOIs). Because AOIs are created and modified manually, variance in their size, shape, and location is unavoidable. These variances affect not only the consistency of the AOI definitions but also the validity of eye movement analyses based on the AOIs. To reduce the variance in AOI creation and modification and achieve a procedure that processes eye movement data with high precision and efficiency, we propose a template-based eye movement data analysis method. Using a linear transformation algorithm, this method registers the eye movement data from each individual stimulus to a template. Thus, users only need to create one set of AOIs for the template in order to analyze eye movement data, rather than creating a unique set of AOIs for each individual stimulus. This change greatly reduces the error caused by the variance of manually created AOIs and boosts the efficiency of the data analysis. Furthermore, this method can help researchers prepare eye movement data for advanced analysis approaches, such as iMap. We have developed software (iTemplate) with a graphical user interface to make this analysis method available to researchers.
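
    The registration idea, mapping each stimulus into template coordinates with a linear transformation fitted from corresponding landmarks, can be sketched as a least-squares affine fit; the landmark coordinates here are hypothetical, and the toolbox's actual algorithm may differ in detail.

        import numpy as np

        def fit_affine(src, dst):
            """Least-squares 2-D affine transform taking landmark points
            src -> dst; returns a function to apply to fixation points."""
            A = np.hstack([src, np.ones((len(src), 1))])   # [x, y, 1] rows
            T, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2) parameters
            return lambda p: np.hstack([p, np.ones((len(p), 1))]) @ T

        stim = np.array([[112., 140.], [208., 138.], [160., 220.]])  # stimulus
        tmpl = np.array([[100., 130.], [200., 130.], [150., 215.]])  # template
        to_template = fit_affine(stim, tmpl)

        fixations = np.array([[150., 160.], [130., 145.]])
        print(to_template(fixations))    # fixations in template coordinates

    With the fixations in template coordinates, a single set of template AOIs suffices for all stimuli.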

  9. Solar quiet day ionospheric source current in the West African region.

    PubMed

    Obiekezie, Theresa N; Okeke, Francisca N

    2013-05-01

    The Solar Quiet (Sq) day source currents were calculated using magnetic data obtained from a chain of 10 magnetotelluric stations installed in the African sector during the French participation in the International Equatorial Electrojet Year (IEEY) experiment in Africa. The components of the geomagnetic field recorded at the stations from January to December 1993 during the experiment were separated into the source (external) and induced (internal) components of Sq using the Spherical Harmonic Analysis (SHA) method. The range of the source current was calculated, and this enabled viewing of a full year's change in the Sq source current system.

  10. Validation of Skills, Knowledge and Experience in Lifelong Learning in Europe

    ERIC Educational Resources Information Center

    Ogunleye, James

    2012-01-01

    The paper examines systems for the validation of skills and experience as well as the main methods/tools currently used for validating skills and knowledge in lifelong learning. The paper uses mixed methods, a case study and content analysis of European Union policy documents and frameworks, as the basis for this research. The selection of the…

  11. The Effect of Creative Drama as a Method on Skills: A Meta-Analysis Study

    ERIC Educational Resources Information Center

    Ulubey, Özgür

    2018-01-01

    The aim of the current study was to synthesize the findings of experimental studies addressing the effect of the creative drama method on the skills of students. Research data were derived from ProQuest Citations, Web of Science, Google Academic, National Thesis Center, EBSCO, ERIC, Taylor & Francis Online, and ScienceDirect databases using…

  12. Comparison of Information Dissemination Methods in Inle Lake: A Lesson for Reconsidering Framework for Environmental Education Strategies

    ERIC Educational Resources Information Center

    Oo, Htun Naing; Sutheerawatthana, Pitch; Minato, Takayuki

    2010-01-01

    This article analyzes the practice of information dissemination regarding pesticide usage in floating gardening in a rural area. The analysis reveals reasons why the current information dissemination methods employed by relevant stakeholders do not work. It then puts forward a proposition that information sharing within organizations of and among…

  13. A two-phase method for timber supply analysis

    Treesearch

    Stephen Smith

    1978-01-01

    There is an increasing need to clarify the long-term wood supply implications of current harvesting rates. To assess the wood supply and to set timber production objectives, different linear programming techniques are applied to the short and long term. The transportation method is applied to the short term and the B. C. Forest Service computer-assisted resource...

  14. IMPINGER SOLUTIONS FOR THE EFFICIENT CAPTURE OF GASEOUS MERCURY SPECIES USING DIRECT INJECTION NEBULIZATION INDUCTIVELY COUPLED PLASMA MASS SPECTROMETRY (DIN-ICP/MS) ANALYSIS

    EPA Science Inventory

    Currently there are no EPA reference sampling methods that have been promulgated for measuring Hg from coal combustion sources. EPA Method 29 is most commonly applied. The ASTM Ontario Hydro Draft Method for measuring oxidized, elemental, particulate-bound and total Hg is now und...

  15. Grid related issues for static and dynamic geometry problems using systems of overset structured grids

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    1995-01-01

    Grid-related issues of the Chimera overset grid method are discussed in the context of a method of solution and analysis of unsteady three-dimensional viscous flows. The state of maturity of the various pieces of support software required to use the approach is considered. Current limitations of the approach are identified.

  16. Methods to Estimate the Between-Study Variance and Its Uncertainty in Meta-Analysis

    ERIC Educational Resources Information Center

    Veroniki, Areti Angeliki; Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian P. T.; Langan, Dean; Salanti, Georgia

    2016-01-01

    Meta-analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance,…
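    For orientation, the DerSimonian and Laird estimator this record refers to is a method-of-moments formula that fits in a few lines; the study effects yi and within-study variances vi below are hypothetical.

      # DerSimonian-Laird estimate of the between-study variance tau^2.
      import numpy as np

      yi = np.array([0.30, 0.12, 0.45, 0.21, 0.38])       # study effects
      vi = np.array([0.020, 0.015, 0.050, 0.010, 0.030])  # their variances

      w = 1.0 / vi                                  # fixed-effect weights
      ybar = np.sum(w * yi) / np.sum(w)             # pooled estimate
      Q = np.sum(w * (yi - ybar) ** 2)              # Cochran's Q
      k = len(yi)
      c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
      tau2 = max(0.0, (Q - (k - 1)) / c)            # truncated at zero
      print(f"tau^2 = {tau2:.4f}")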

  17. CALIPSO: an interactive image analysis software package for desktop PACS workstations

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Huang, H. K.

    1990-07-01

    The purpose of this project is to develop a low-cost workstation for quantitative analysis of multimodality images using a Macintosh II personal computer. In the current configuration the Macintosh operates as a stand-alone workstation where images are imported either from a central PACS server through a standard Ethernet network or recorded through a video digitizer board. The CALIPSO software developed contains a large variety of basic image display and manipulation tools. We focused our effort, however, on the design and implementation of quantitative analysis methods that can be applied to images from different imaging modalities. Analysis modules currently implemented include geometric and densitometric volume and ejection fraction calculation from radionuclide and cine-angiograms, Fourier analysis of cardiac wall motion, vascular stenosis measurement, color-coded parametric display of regional flow distribution from dynamic coronary angiograms, and automatic analysis of myocardial distribution of radiolabelled tracers from tomoscintigraphic images. Several of these analysis tools were selected because they use similar color-coded and parametric display methods to communicate quantitative data extracted from the images. The rationale for the project is that developments of Picture Archiving and Communication Systems (PACS) in the clinical environment allow physicians and radiologists to assess radiographic images directly through imaging workstations, but this convenient access to the images is often limited by the number of workstations available, due in part to their high cost, and there is an increasing need for quantitative analysis of the images. During the past decade…

  18. Comparison of pre-processing techniques for fluorescence microscopy images of cells labeled for actin.

    PubMed

    Muralidhar, Gautam S; Channappayya, Sumohana S; Slater, John H; Blinka, Ellen M; Bovik, Alan C; Frey, Wolfgang; Markey, Mia K

    2008-11-06

    Automated analysis of fluorescence microscopy images of endothelial cells labeled for actin is important for quantifying changes in the actin cytoskeleton. The current manual approach is laborious and inefficient. The goal of our work is to develop automated image analysis methods, thereby increasing cell analysis throughput. In this study, we present preliminary results on comparing different algorithms for cell segmentation and image denoising.

  19. Security Analysis and Improvements to the PsychoPass Method

    PubMed Central

    2013-01-01

    Background In a recent paper, Pietro Cipresso et al proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. Objective To perform a security analysis on the PsychoPass method and outline the limitations of and possible improvements to the method. Methods We used brute force analysis and dictionary attack analysis of the PsychoPass method to outline its weaknesses. Results The first issue with the PsychoPass method is that it requires the password to be reproduced on the same keyboard layout as was used to generate it. The second issue is a security weakness: although the produced password is 24 characters long, the password is still weak. We elaborate on the weakness and propose a solution that produces strong passwords. The proposed version first requires the use of the SHIFT and ALT-GR keys in combination with other keys, and second, the keys need to be one to two key distances apart. Conclusions The proposed improved PsychoPass method yields passwords that can be broken only in hundreds of years based on current computing power. The proposed PsychoPass method requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength. PMID:23942458
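    The keyboard-walk idea behind the improved method can be sketched in code. The following is a simplified interpretation, not the authors' algorithm: it chains 10 keys, each one to two grid positions from the previous key, applying SHIFT at random (ALT-GR handling is omitted, and the flat keyboard grid is an assumption).

      # Simplified PsychoPass-style walk on a keyboard grid.
      import random

      ROWS = ["1234567890", "qwertyuiop", "asdfghjkl;", "zxcvbnm,./"]

      def neighbors(r, c):
          """Grid cells one or two steps away (Chebyshev distance)."""
          out = []
          for dr in range(-2, 3):
              for dc in range(-2, 3):
                  if max(abs(dr), abs(dc)) in (1, 2):
                      nr, nc = r + dr, c + dc
                      if 0 <= nr < len(ROWS) and 0 <= nc < len(ROWS[nr]):
                          out.append((nr, nc))
          return out

      def keyboard_walk(length=10):
          r, c = random.randrange(len(ROWS)), random.randrange(len(ROWS[0]))
          keys = [ROWS[r][c]]
          for _ in range(length - 1):
              r, c = random.choice(neighbors(r, c))
              ch = ROWS[r][c]
              keys.append(ch.upper() if random.random() < 0.5 else ch)  # SHIFT
          return "".join(keys)

      print(keyboard_walk())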

  20. Substructure analysis techniques and automation. [to eliminate logistical data handling and generation chores

    NASA Technical Reports Server (NTRS)

    Hennrich, C. W.; Konrath, E. J., Jr.

    1973-01-01

    A basic automated substructure analysis capability for NASTRAN is presented which eliminates most of the logistical data handling and generation chores that are currently associated with the method. Rigid formats are proposed which will accomplish this using three new modules, all of which can be added to level 16 with a relatively small effort.

  1. Digital Game-Based Learning for K-12 Mathematics Education: A Meta-Analysis

    ERIC Educational Resources Information Center

    Byun, JaeHwan; Joung, Eunmi

    2018-01-01

    Digital games (e.g., video games or computer games) have been reported as an effective educational method that can improve students' motivation and performance in mathematics education. This meta-analysis study (a) investigates the current trend of digital game-based learning (DGBL) by reviewing the research studies on the use of DGBL for…

  2. The Nexus of Place and Finance in the Analysis of Educational Attainment: A Spatial Econometric Approach

    ERIC Educational Resources Information Center

    Sutton, Farah

    2012-01-01

    This study examines the spatial distribution of educational attainment and then builds upon current predictive frameworks for understanding patterns of educational attainment by applying a spatial econometric method of analysis. The research from this study enables a new approach to the policy discussion on how to improve educational attainment…

  3. Knowledge-Base Semantic Gap Analysis for the Vulnerability Detection

    NASA Astrophysics Data System (ADS)

    Wu, Raymond; Seki, Keisuke; Sakamoto, Ryusuke; Hisada, Masayuki

    Web security has become a pressing concern in Internet computing. To cope with ever-rising security complexity, semantic analysis is proposed to fill the gap that current approaches fail to address. Conventional methods limit their focus to the physical source code instead of the abstraction of its semantics, so they bypass new types of vulnerability and cause tremendous business loss.

  4. Perspectives on Using Video Recordings in Conversation Analytical Studies on Learning in Interaction

    ERIC Educational Resources Information Center

    Rusk, Fredrik; Pörn, Michaela; Sahlström, Fritjof; Slotte-Lüttge, Anna

    2015-01-01

    Video is currently used in many studies to document the interaction in conversation analytical (CA) studies on learning. The discussion on the method used in these studies has primarily focused on the analysis or the data construction, whereas the relation between data construction and analysis is rarely brought to attention. The aim of this…

  5. Analysis of Job Announcements and the Required Competencies for Instructional Technology Professionals.

    ERIC Educational Resources Information Center

    Moallem, Mahnaz

    A study was conducted to analyze current job announcements in the field of instructional design and technology and to produce descriptive information that portrays the required skills and areas of knowledge for instructional technology graduates. Content analysis, in its general terms, was used as the research method for this study. One hundred…

  6. Nanodevices for Single Molecule Studies

    NASA Astrophysics Data System (ADS)

    Craighead, H. G.; Stavis, S. M.; Samiee, K. T.

    During the last two decades, biotechnology research has resulted in progress in fields as diverse as the life sciences, agriculture and healthcare. While existing technology enables the analysis of a variety of biological systems, new tools are needed for increasing the efficiency of current methods, and for developing new ones altogether. Interest has grown in single molecule analysis for these reasons.

  7. A Critical Analysis of the CELF-4: The Responsible Clinician's Guide to the CELF-4

    ERIC Educational Resources Information Center

    Crowley, Catherine Jane

    2010-01-01

    Purpose: To provide an analysis of the accuracy and effectiveness of using the Clinical Evaluation of Language Fundamentals-Fourth Edition (CELF-4) to identify students as having language-based disabilities. Method: The CELF-4 is analyzed within the current standards set by the federal law on special education, the available research, preferred…

  8. The Evidence for Efficacy of HPV Vaccines: Investigations in Categorical Data Analysis

    ERIC Educational Resources Information Center

    Gibbs, Alison L.; Goossens, Emery T.

    2013-01-01

    Recent approval of HPV vaccines and their widespread provision to young women provide an interesting context to gain experience with the application of statistical methods in current research. We demonstrate how we have used data extracted from a meta-analysis examining the efficacy of HPV vaccines in clinical trials with students in applied…

  9. An Instructor's Diagnostic Aid for Feedback in Training.

    ERIC Educational Resources Information Center

    Andrews, Dee H.; Uliano, Kevin C.

    1988-01-01

    Instructor's Diagnostic Aid for Feedback in Training (IDAFT) is a computer-assisted method based on error analysis, domains of learning, and events of instruction. Its use with Navy team instructors is currently being explored. (JOW)

  10. MATHEMATICAL MODELING OF PESTICIDES IN THE ENVIRONMENT: CURRENT AND FUTURE DEVELOPMENTS

    EPA Science Inventory

    Transport models, total ecosystem models with aggregated linear approximations, evaluative models, hierarchical models, and influence analysis methods are mathematical techniques that are particularly applicable to the problems encountered when characterizing pesticide chemicals ...

  11. Bio-Contamination Control for Spacesuit Garments - A Preliminary Study

    NASA Technical Reports Server (NTRS)

    Rhodes, Richard; Korona, Adam; Orndoff, Evelyn; Ott, Mark; Poritz, Darwin

    2010-01-01

    This paper outlines a preliminary study to review, test, and improve upon the current state of spacesuit bio-contamination control. The study includes an evaluation of current and advanced suit materials, ground and on-orbit cleaning methods, and microbial test and analysis methods. The first aspect of this study was to identify potential anti-microbial textiles and cleaning agents, and to review current microbial test methods. The anti-microbial cleaning agent and textile market survey included a review of current commercial-off-the-shelf (COTS) products that could potentially be used as future space flight hardware. This review included replacements for any of the softgood layers that may become contaminated during an extravehicular activity (EVA), including the pressure bladder, liquid cooling garment, and ancillary comfort undergarment. After a series of COTS anti-microbial textiles and cleaning agents were identified, four tests were conducted: (1) a stacked configuration test to examine how bio-contamination propagates through the various suit layers, (2) an individual materials test that evaluated how well each softgood layer promoted or repressed growth, (3) a cleaning agent test that evaluated efficacy on each of the baseline bladders, and (4) an evaluation of various COTS anti-microbial textiles. All anti-microbial COTS materials tested appeared to control bacteria colony forming unit (CFU) growth better than the Thermal Comfort Undergarment (TCU) and ACES Liquid Cooling Garment (LCG)/EMU Liquid Cooling Ventilation Garment (LCVG) materials currently in use. However, fungal CFU growth in the COTS materials relative to current suit materials appeared to vary by material. All cleaning agents tested in this study appeared to inhibit bacteria and fungi growth to acceptable levels for short duration tests. While several trends can be drawn from the current analysis, a series of test improvements is described for future microbial testing.

  12. About increasing informativity of diagnostic system of asynchronous electric motor by extracting additional information from values of consumed current parameter

    NASA Astrophysics Data System (ADS)

    Zhukovskiy, Y.; Korolev, N.; Koteleva, N.

    2018-05-01

    This article is devoted to expanding the possibilities of assessing the technical state of asynchronous electric drives from their current consumption, as well as increasing the information capacity of diagnostic methods, under conditions of limited access to equipment and incomplete information. The method of spectral analysis of the electric drive current can be supplemented by an analysis of the components of the current of the Park's vector. The evolution of the hodograph at the moment of appearance and development of defects was investigated using the example of current asymmetry in the phases of an induction motor. The result of the study is a set of new diagnostic parameters for the asynchronous electric drive. It was shown that the proposed diagnostic parameters allow the type and level of a defect to be determined without stopping the equipment and taking it out of service for repair. Modern digital control and monitoring systems can use the proposed parameters, based on the stator current of an electrical machine, to improve the accuracy and reliability of obtaining diagnostic patterns and predicting their changes in order to improve equipment maintenance systems. This approach can also be used in systems and objects where there are significant parasitic vibrations and unsteady loads. The extraction of useful information can be carried out in electric drive systems whose structure includes a power electric converter.
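    The Park's vector construction mentioned in this record has a standard form: the three stator phase currents are projected onto two orthogonal components whose locus (the hodograph) is a circle for a healthy machine and flattens into an ellipse under phase asymmetry. A minimal sketch with synthetic currents:

      # Park's vector from three-phase stator currents (synthetic signals).
      import numpy as np

      t = np.linspace(0.0, 0.1, 5000)
      w = 2 * np.pi * 50                        # 50 Hz supply
      ia = 10.0 * np.cos(w * t)
      ib = 9.5 * np.cos(w * t - 2 * np.pi / 3)  # 5% amplitude drop: fault cue
      ic = 10.0 * np.cos(w * t + 2 * np.pi / 3)

      # Power-invariant Park (Clarke) projection onto the d-q plane
      i_d = np.sqrt(2/3) * ia - np.sqrt(1/6) * ib - np.sqrt(1/6) * ic
      i_q = np.sqrt(1/2) * ib - np.sqrt(1/2) * ic

      # Crude severity index: deviation of the hodograph from a circle
      severity = abs(i_d.max() - i_q.max()) / i_d.max()
      print(f"hodograph asymmetry index: {severity:.3f}")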

  13. Removal of Differential Capacitive Interferences in Fast-Scan Cyclic Voltammetry.

    PubMed

    Johnson, Justin A; Hobbs, Caddy N; Wightman, R Mark

    2017-06-06

    Due to its high spatiotemporal resolution, fast-scan cyclic voltammetry (FSCV) at carbon-fiber microelectrodes enables the localized in vivo monitoring of subsecond fluctuations in electroactive neurotransmitter concentrations. In practice, resolution of the analytical signal relies on digital background subtraction for removal of the large current due to charging of the electrical double layer as well as surface faradaic reactions. However, fluctuations in this background current often occur with changes in the electrode state or ionic environment, leading to nonspecific contributions to the FSCV data that confound data analysis. Here, we both explore the origin of such shifts seen with local changes in cations and develop a model to account for their shape. Further, we describe a convolution-based method for removal of the differential capacitive contributions to the FSCV current. The method relies on the use of a small-amplitude pulse made prior to the FSCV sweep that probes the impedance of the system. To predict the nonfaradaic current response to the voltammetric sweep, the step current response is differentiated to provide an estimate of the system's impulse response function and is used to convolute the applied waveform. The generated prediction is then subtracted from the observed current to the voltammetric sweep, removing artifacts associated with electrode impedance changes. The technique is demonstrated to remove select contributions from capacitive characteristics changes of the electrode both in vitro (i.e., in flow-injection analysis) and in vivo (i.e., during a spreading depression event in an anesthetized rat).
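    The prediction step described here can be sketched for a linear model of the electrode. Assuming a simple series-RC stand-in for the cell and synthetic signals throughout, the current response to the small probe step is differentiated to estimate the impulse response, which is then convolved with the applied sweep and subtracted:

      # Convolution-based background prediction (all signals synthetic).
      import numpy as np

      dt = 1e-5                                  # 100 kHz sampling (assumed)
      step_amp = 0.05                            # 50 mV probe pulse
      R, tau = 1e6, 5e-4                         # series-RC electrode model

      t = np.arange(0.0, 5e-3, dt)
      step_current = (step_amp / R) * np.exp(-t / tau)  # probe-step response

      # Impulse response: differentiate the unit-step current response
      s = step_current / step_amp
      h = np.diff(np.concatenate(([0.0], s))) / dt

      # Predict the nonfaradaic current for the triangular FSCV sweep
      sweep = np.concatenate([np.linspace(-0.4, 1.3, 850),
                              np.linspace(1.3, -0.4, 850)])  # volts
      i_pred = np.convolve(h, sweep)[:len(sweep)] * dt

      # Synthetic "measured" current: background plus a small faradaic peak
      peak = 1e-8 * np.exp(-((np.arange(len(sweep)) - 900) / 40.0) ** 2)
      i_meas = i_pred + peak
      i_faradaic = i_meas - i_pred               # background removed
      print(f"recovered peak current: {i_faradaic.max():.2e} A")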

  14. Functional Analyses and Treatment of Precursor Behavior

    PubMed Central

    Najdowski, Adel C; Wallace, Michele D; Ellsworth, Carrie L; MacAleese, Alicia N; Cleveland, Jackie M

    2008-01-01

    Functional analysis has been demonstrated to be an effective method to identify environmental variables that maintain problem behavior. However, there are cases when conducting functional analyses of severe problem behavior may be contraindicated. The current study applied functional analysis procedures to a class of behavior that preceded severe problem behavior (precursor behavior) and evaluated treatments based on the outcomes of the functional analyses of precursor behavior. Responding for all participants was differentiated during the functional analyses, and individualized treatments eliminated precursor behavior. These results suggest that functional analysis of precursor behavior may offer an alternative, indirect method to assess the operant function of severe problem behavior. PMID:18468282

  15. Antibody Epitope Analysis to Investigate Folded Structure, Allosteric Conformation, and Evolutionary Lineage of Proteins.

    PubMed

    Wong, Sienna; Jin, J-P

    2017-01-01

    Study of folded structure of proteins provides insights into their biological functions, conformational dynamics and molecular evolution. Current methods of elucidating folded structure of proteins are laborious, low-throughput, and constrained by various limitations. Arising from these methods is the need for a sensitive, quantitative, rapid and high-throughput method not only analysing the folded structure of proteins, but also to monitor dynamic changes under physiological or experimental conditions. In this focused review, we outline the foundation and limitations of current protein structure-determination methods prior to discussing the advantages of an emerging antibody epitope analysis for applications in structural, conformational and evolutionary studies of proteins. We discuss the application of this method using representative examples in monitoring allosteric conformation of regulatory proteins and the determination of the evolutionary lineage of related proteins and protein isoforms. The versatility of the method described herein is validated by the ability to modulate a variety of assay parameters to meet the needs of the user in order to monitor protein conformation. Furthermore, the assay has been used to clarify the lineage of troponin isoforms beyond what has been depicted by sequence homology alone, demonstrating the nonlinear evolutionary relationship between primary structure and tertiary structure of proteins. The antibody epitope analysis method is a highly adaptable technique of protein conformation elucidation, which can be easily applied without the need for specialized equipment or technical expertise. When applied in a systematic and strategic manner, this method has the potential to reveal novel and biomedically meaningful information for structure-function relationship and evolutionary lineage of proteins. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  16. Extracting functional components of neural dynamics with Independent Component Analysis and inverse Current Source Density.

    PubMed

    Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K

    2010-12-01

    Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids one can reconstruct efficiently the current sources (CSD) using the inverse Current Source Density method (iCSD). It is possible to decompose the resultant spatiotemporal information about the current dynamics into functional components using Independent Component Analysis (ICA). We show on test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points that meaningful results are obtained with spatial ICA decomposition of reconstructed CSD. The components obtained through decomposition of CSD are better defined and allow easier physiological interpretation than the results of similar analysis of corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources but it does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings up new, more detailed information on the time and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.
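    The spatial ICA step described here can be sketched with a standard library. Below, random numbers stand in for CSD reconstructed on the paper's 4 x 5 x 7 grid; treating grid points as samples makes the extracted sources independent spatial maps, with their time courses in the mixing matrix.

      # Spatial ICA of (stand-in) reconstructed CSD data.
      import numpy as np
      from sklearn.decomposition import FastICA

      n_grid, n_times = 4 * 5 * 7, 1000
      csd = np.random.randn(n_grid, n_times)   # placeholder for iCSD output

      ica = FastICA(n_components=5, random_state=0)
      spatial_maps = ica.fit_transform(csd)    # (n_grid, n_components)
      time_courses = ica.mixing_               # (n_times, n_components)
      print(spatial_maps.shape, time_courses.shape)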

  17. Parameter optimization of flux-aided backing-submerged arc welding by using Taguchi method

    NASA Astrophysics Data System (ADS)

    Pu, Juan; Yu, Shengfu; Li, Yuanyuan

    2017-07-01

    Flux-aided backing-submerged arc welding was conducted on D36 steel with a thickness of 20 mm. The effects of processing parameters such as welding current, voltage, welding speed and groove angle on welding quality were investigated by the Taguchi method. The optimal welding parameters were predicted and the individual importance of each parameter for welding quality was evaluated by examining the signal-to-noise ratio and analysis of variance (ANOVA) results. The order of importance of the welding parameters for weld bead quality was: welding current > welding speed > groove angle > welding voltage. Weld bead quality increased gradually with increasing welding current and welding speed and with decreasing groove angle. The optimum values of the welding current, welding speed, groove angle and welding voltage were found to be 1050 A, 27 cm/min, 40° and 34 V, respectively.
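    The signal-to-noise ratio at the core of the Taguchi method is compact enough to show directly. A minimal sketch with hypothetical runs and repeated weld-quality scores, using the larger-the-better criterion since higher quality is desired:

      # Larger-the-better Taguchi S/N ratio per experimental run.
      import numpy as np

      runs = {                                  # hypothetical quality scores
          "I=950A,  v=24cm/min": [78.0, 80.0, 79.0],
          "I=1050A, v=27cm/min": [88.0, 90.0, 89.0],
          "I=1150A, v=30cm/min": [84.0, 83.0, 85.0],
      }

      for label, y in runs.items():
          y = np.asarray(y)
          sn = -10.0 * np.log10(np.mean(1.0 / y**2))  # larger-the-better
          print(f"{label}: S/N = {sn:.2f} dB")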

  18. Unavoidable electric current caused by inhomogeneities and its influence on measured material parameters of thermoelectric materials

    NASA Astrophysics Data System (ADS)

    Song, K.; Song, H. P.; Gao, C. F.

    2018-03-01

    It is well known that the key factor determining the performance of thermoelectric materials is the figure of merit, which depends on the thermal conductivity (TC), electrical conductivity, and Seebeck coefficient (SC). The electric current must be zero when measuring the TC and SC to avoid the occurrence of measurement errors. In this study, the complex-variable method is used to analyze the thermoelectric field near an elliptic inhomogeneity in an open circuit, and the field distributions are obtained in closed form. Our analysis shows that an electric current inevitably exists in both the matrix and the inhomogeneity even though the circuit is open. This unexpected electric current seriously affects the accuracy with which the TC and SC are measured. These measurement errors, both overall and local, are analyzed in detail. In addition, an error correction method is proposed based on the analytical results.

  19. ACCOUNTING FOR CALIBRATION UNCERTAINTIES IN X-RAY ANALYSIS: EFFECTIVE AREAS IN SPECTRAL FITTING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hyunsook; Kashyap, Vinay L.; Drake, Jeremy J.

    2011-04-20

    While considerable advances have been made in accounting for statistical uncertainties in astronomical analyses, systematic instrumental uncertainties have generally been ignored. This can be crucial to a proper interpretation of analysis results because instrumental calibration uncertainty is a form of systematic uncertainty. Ignoring it can underestimate error bars and introduce bias into the fitted values of model parameters. Accounting for such uncertainties currently requires extensive case-specific simulations if using existing analysis packages. Here, we present general statistical methods that incorporate calibration uncertainties into spectral analysis of high-energy data. We first present a method based on multiple imputation that can be applied with any fitting method, but is necessarily approximate. We then describe a more exact Bayesian approach that works in conjunction with Markov chain Monte Carlo based fitting. We explore methods for improving computational efficiency, and in particular detail a method of summarizing calibration uncertainties with a principal component analysis of samples of plausible calibration files. This method is implemented using recently codified Chandra effective area uncertainties for low-resolution spectral analysis and is verified using both simulated and actual Chandra data. Our procedure for incorporating effective area uncertainty is easily generalized to other types of calibration uncertainties.
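    The principal-component summary described here amounts to an SVD over an ensemble of sampled calibration curves. In the sketch below the effective-area ensemble is synthetic; only the PCA bookkeeping is the point.

      # PCA summary of an ensemble of (synthetic) effective-area curves.
      import numpy as np

      n_samples, n_energy = 200, 1000
      energy = np.linspace(0.3, 8.0, n_energy)                 # keV grid
      base = 400.0 * np.exp(-0.5 * ((energy - 1.5) / 1.2) ** 2)
      modes = np.vstack([np.sin(energy), np.cos(energy), energy / 8.0])
      areas = base + 10.0 * np.random.randn(n_samples, 3) @ modes

      mean_area = areas.mean(axis=0)
      U, s, Vt = np.linalg.svd(areas - mean_area, full_matrices=False)
      var_explained = s**2 / np.sum(s**2)
      n_keep = int(np.searchsorted(np.cumsum(var_explained), 0.99)) + 1
      print(f"{n_keep} components capture 99% of calibration variance")

      # Draw a new plausible curve from the compact PCA summary
      coeff = np.random.randn(n_keep) * s[:n_keep] / np.sqrt(n_samples - 1)
      new_curve = mean_area + coeff @ Vt[:n_keep]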

  20. Forensic Discrimination of Latent Fingerprints Using Laser-Induced Breakdown Spectroscopy (LIBS) and Chemometric Approaches.

    PubMed

    Yang, Jun-Ho; Yoh, Jack J

    2018-01-01

    A novel technique is reported for separating overlapping latent fingerprints using chemometric approaches that combine laser-induced breakdown spectroscopy (LIBS) and multivariate analysis. The LIBS technique provides the capability of real-time analysis and high-frequency scanning, as well as data on the chemical composition of overlapping latent fingerprints. These spectra offer valuable information for the classification and reconstruction of overlapping latent fingerprints when appropriate statistical multivariate analysis is applied. The current study employs principal component analysis and partial least squares methods for the classification of latent fingerprints from the LIBS spectra. This technique was successfully demonstrated through a classification study of four distinct latent fingerprints using classification methods such as soft independent modeling of class analogy (SIMCA) and partial least squares discriminant analysis (PLS-DA). The novel method yielded an accuracy of more than 85% and was proven to be sufficiently robust. Furthermore, through laser scanning analysis at a spatial interval of 125 µm, the overlapping fingerprints were reconstructed as separate two-dimensional forms.
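    PLS-DA, as used in this record, amounts to regressing one-hot class labels on the spectra with PLS and assigning each spectrum to the class with the largest predicted score. A minimal sketch with random stand-in spectra:

      # PLS-DA on stand-in "spectra" for four fingerprint classes.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      n_per_class, n_channels, n_classes = 30, 500, 4
      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(loc=k, size=(n_per_class, n_channels))
                     for k in range(n_classes)])
      labels = np.repeat(np.arange(n_classes), n_per_class)
      Y = np.eye(n_classes)[labels]             # one-hot class encoding

      pls = PLSRegression(n_components=5)
      pls.fit(X, Y)
      pred = pls.predict(X).argmax(axis=1)      # class with largest score
      print(f"training accuracy: {(pred == labels).mean():.2%}")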

  1. Predicting meat yields and commercial meat cuts from carcasses of young bulls of Spanish breeds by the SEUROP method and an image analysis system.

    PubMed

    Oliver, A; Mendizabal, J A; Ripoll, G; Albertí, P; Purroy, A

    2010-04-01

    The SEUROP system is currently in use for carcass classification in Europe. Image analysis and other new technologies are being developed to enhance and supplement this classification system. After slaughtering, 91 carcasses of local Spanish beef breeds were weighed and classified according to the SEUROP system. Two digital photographs (a side and a dorsal view) were taken of the left carcass sides, and a total of 33 morphometric measurements (lengths, perimeters, areas) were made. Commercial butchering of these carcasses took place 24 h postmortem, and the different cuts were grouped according to four commercial meat cut quality categories: extra, first, second, and third. Multiple regression analysis of carcass weight and the SEUROP conformation score (x variables) on meat yield and the four commercial cut quality category yields (y variables) was performed as a measure of the accuracy of the SEUROP system. Stepwise regression analysis of carcass weight and the 33 morphometric image analysis measurements (x variables) and meat yield and yields of the four commercial cut quality categories (y variables) was carried out. Higher accuracy was achieved using image analysis than using only the current SEUROP conformation score. The regression coefficient values were between R² = 0.66 and R² = 0.93 (P < 0.001) for the SEUROP system and between R² = 0.81 and R² = 0.94 (P < 0.001) for the image analysis method. These results suggest that the image analysis method should be helpful as a means of supplementing and enhancing the SEUROP system for grading beef carcasses.

  2. Python package for model STructure ANalysis (pySTAN)

    NASA Astrophysics Data System (ADS)

    Van Hoey, Stijn; van der Kwast, Johannes; Nopens, Ingmar; Seuntjens, Piet

    2013-04-01

    The selection and identification of a suitable hydrological model structure involves more than fitting the parameters of a model structure to reproduce a measured hydrograph. The procedure is highly dependent on various criteria, i.e. the modelling objective, the characteristics and the scale of the system under investigation, as well as the available data. Rigorous analysis of the candidate model structures is needed to support and objectify the selection of the most appropriate structure for a specific case (or eventually justify the use of a proposed ensemble of structures). This holds both when choosing between a limited set of different structures and in the framework of flexible model structures with interchangeable components. Many different methods to evaluate and analyse model structures exist. This leads to a sprawl of available methods, all characterized by different assumptions, changing conditions of application and various code implementations. Methods typically focus on optimization, sensitivity analysis or uncertainty analysis, with backgrounds from optimization, machine learning or statistics, among others. These methods also need an evaluation metric (objective function) to compare the model outcome with observed data. However, for current methods described in the literature, implementations are not always transparent and reproducible (if available at all). No standard procedures exist to share code, and the popularity (and number of applications) of a method is sometimes more dependent on its availability than on its merits. Moreover, new implementations of existing methods are difficult to verify, and the different theoretical backgrounds make it difficult for environmental scientists to decide about the usefulness of a specific method. A common and open framework with a large set of methods can support users in deciding on the most appropriate method and enables different methods to be applied and compared simultaneously on a fair basis. We developed and present pySTAN (python framework for STructure Analysis), a python package containing a set of functions for model structure evaluation to support the analysis of (hydrological) model structures. A selected set of algorithms for optimization, uncertainty and sensitivity analysis is currently available, together with a set of evaluation (objective) functions and input distributions to sample from. The methods are implemented in a model-independent way, and the python language provides the wrapper functions to administer external model codes. Different objective functions can be considered simultaneously, including both statistical metrics and more hydrology-specific metrics. By using reStructuredText (the sphinx documentation generator) and Python documentation strings (docstrings), the generation of manual pages is semi-automated and a specific environment is available to enhance both the readability and transparency of the code. This enables a larger group of users to apply and compare these methods and to extend the functionalities.

  3. Security analysis and improvements to the PsychoPass method.

    PubMed

    Brumen, Bostjan; Heričko, Marjan; Rozman, Ivan; Hölbl, Marko

    2013-08-13

    In a recent paper, Pietro Cipresso et al proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. To perform a security analysis on the PsychoPass method and outline the limitations of and possible improvements to the method, we used brute force analysis and dictionary attack analysis of the PsychoPass method to outline its weaknesses. The first issue with the PsychoPass method is that it requires the password to be reproduced on the same keyboard layout as was used to generate it. The second issue is a security weakness: although the produced password is 24 characters long, the password is still weak. We elaborate on the weakness and propose a solution that produces strong passwords. The proposed version first requires the use of the SHIFT and ALT-GR keys in combination with other keys, and second, the keys need to be one to two key distances apart. The proposed improved PsychoPass method yields passwords that can be broken only in hundreds of years based on current computing power. The proposed PsychoPass method requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength.

  4. Development and applications of single particle orientation and rotational tracking in dynamic systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Kuangcai

    The goal of this study is to help with future data analysis and experiment design in rotational dynamics research using the DIC-based SPORT technique. Most current studies using DIC-based SPORT techniques are technical demonstrations. Understanding the mechanisms behind the observed rotational behaviors of the imaging probes should be the focus of future SPORT studies. More effort is still needed in the development of new imaging probes, particle tracking methods, instrumentation, and advanced data analysis methods to further extend the potential of the DIC-based SPORT technique.

  5. Ordinal preference elicitation methods in health economics and health services research: using discrete choice experiments and ranking methods.

    PubMed

    Ali, Shehzad; Ronaldson, Sarah

    2012-09-01

    The predominant method of economic evaluation is cost-utility analysis, which uses cardinal preference elicitation methods, including the standard gamble and time trade-off. However, such an approach is not suitable for understanding trade-offs between process attributes, non-health outcomes and health outcomes when evaluating current practices, developing new programmes and predicting demand for services and products. Ordinal preference elicitation methods, including discrete choice experiments and ranking methods, are therefore commonly used in health economics and health services research. Cardinal methods have been criticized on the grounds of cognitive complexity, difficulty of administration, contamination by risk and preference attitudes, and potential violation of underlying assumptions. Ordinal methods have gained popularity because of reduced cognitive burden, a lower degree of abstract reasoning, reduced measurement error, ease of administration and the ability to use both health and non-health outcomes. The underlying assumptions of ordinal methods may be violated when respondents use cognitive shortcuts, cannot comprehend the ordinal task or interpret attributes and levels, use 'irrational' choice behaviour or refuse to trade off certain attributes. Current use and growing areas: ordinal methods are commonly used to evaluate preferences for attributes of health services, products, practices, interventions and policies and, more recently, to estimate utility weights. Areas for ongoing research include developing optimal designs, evaluating the rationalization process, using qualitative tools for developing ordinal methods, evaluating consistency with utility theory, appropriate statistical methods for analysis, generalizability of results and comparing ordinal methods against each other and with cardinal measures.

  6. Analysis of current density and specific absorption rate in biological tissue surrounding an air-core type of transcutaneous transformer for an artificial heart.

    PubMed

    Shiba, Kenji; Nukaya, Masayuki; Tsuji, Toshio; Koshiji, Kohji

    2006-01-01

    This paper reports on the specific absorption rate (SAR) and current density analysis of biological tissue surrounding an air-core type of transcutaneous transformer for an artificial heart. The electromagnetic field in the biological tissue surrounding the transformer was analyzed by the transmission-line modeling method, and the SAR and current density as a function of frequency (200 kHz-1 MHz) for a transcutaneous transmission of 20 W were calculated. The model's biological tissue has three layers: skin, fat and muscle. As a result, the SAR in the vicinity of the transformer is sufficiently small, and the normalized SAR value, divided by the ICNIRP's basic restriction, is 7 × 10^-3 or less. By contrast, the current density slightly exceeds the ICNIRP's basic restrictions as the frequency falls and the output voltage rises. The normalized current density ranges from 0.2 to 1.2. In addition, the layer in which the current density is maximized depends on the frequency: the muscle at low frequency (<700 kHz) and the skin at high frequency (>700 kHz). The result shows that precision analysis taking the biological properties into account is very important for developing the transcutaneous transformer for a TAH.

  7. Performance analysis of FET microwave devices by use of extended spectral-element time-domain method

    NASA Astrophysics Data System (ADS)

    Sheng, Yijun; Xu, Kan; Wang, Daoxiang; Chen, Rushan

    2013-05-01

    The extended spectral-element time-domain (SETD) method is employed to analyse field-effect transistor (FET) microwave devices. In order to incorporate the contribution of the FET devices into the electromagnetic simulation, the SETD method is extended by introducing a lumped current term into the vector Helmholtz equation. The change of current in each lumped component can be expressed through the change of voltage via the corresponding equivalent-circuit model. The electric fields around a lumped component are influenced by the change of voltage on that component, and vice versa, so a global EM-circuit coupling can be built directly. The fully explicit solving scheme is maintained in this extended SETD method, and CPU time is saved accordingly. Three practical FET microwave devices are analysed in this article. The numerical results demonstrate the ability and accuracy of the method.

  8. [Effect of non-pharmacological methods for alleviation of pain in newborns].

    PubMed

    Chromá, Jana; Sikorová, Lucie

    2012-01-01

    The aim of the paper is to analyze the non-pharmacological methods currently most used for pain alleviation in newborns, for the best evidence-based practice. The sources of the required data for the period 2000-2011 were licensed and freely accessible electronic databases. Evaluation of the evidence found (30 studies) was carried out according to a levels-of-evidence table (Fineout-Overholt, Johnston 2005). The selection included evidence levels I, II and III. Nutritive sucking is currently considered the most effective method for alleviating pain in newborns. Analysis of the studies shows that non-pharmacological methods used to control pain in neonates are much more effective when used in combination with other non-pharmacological methods, such as music therapy, swaddling, facilitated tucking, multiple stimulation, kangaroo care and non-nutritive sucking. Non-pharmacological procedures are effective and lead to pain relief, especially in procedural interventions such as heel lancing and venipuncture for blood sampling.

  9. Critical current density measurement of striated multifilament-coated conductors using a scanning Hall probe microscope

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Fen; Kochat, Mehdi; Majkic, Goran; Selvamanickam, Venkat

    2016-08-01

    In this paper the authors succeeded in measuring the critical current density (Jc) of multifilament-coated conductors (CCs) with filaments as narrow as 0.25 mm using the scanning Hall probe microscope (SHPM) technique. A new iterative method of data analysis is developed to make the calculation of Jc for thin filaments possible, even without a very small scan distance. The authors also discuss in detail the advantages and limitations of the iterative method using both simulation and experimental results. The results of the new method correspond well with the traditional fast Fourier transform method where the latter is still applicable. However, the new method is applicable to filamentized CCs under much wider measurement conditions, such as thin filaments and a large scan distance, thus overcoming the barrier to applying the SHPM technique to Jc measurement of long filamentized CCs with narrow filaments.

  10. A thioacidolysis method tailored for higher‐throughput quantitative analysis of lignin monomers

    PubMed Central

    Foster, Cliff; Happs, Renee M.; Doeppke, Crissa; Meunier, Kristoffer; Gehan, Jackson; Yue, Fengxia; Lu, Fachuang; Davis, Mark F.

    2016-01-01

    Thioacidolysis is a method used to measure the relative content of lignin monomers bound by β‐O‐4 linkages. Current thioacidolysis methods are low‐throughput as they require tedious steps for reaction product concentration prior to analysis using standard GC methods. A quantitative thioacidolysis method that is accessible with general laboratory equipment, uses a non‐chlorinated organic solvent and is tailored for higher‐throughput analysis is reported. The method utilizes lignin arylglycerol monomer standards for calibration, requires 1–2 mg of biomass per assay and has been quantified using fast‐GC techniques including a Low Thermal Mass Modular Accelerated Column Heater (LTM MACH). Cumbersome steps, including standard purification, sample concentrating and drying, have been eliminated to help aid in consecutive day‐to‐day analyses needed to sustain a high sample throughput for large screening experiments without the loss of quantitation accuracy. The method reported in this manuscript has been quantitatively validated against a commonly used thioacidolysis method and across two different research sites with three common biomass varieties to represent hardwoods, softwoods, and grasses. PMID:27534715

  11. Comparative Validation of the Determination of Sofosbuvir in Pharmaceuticals by Several Inexpensive Ecofriendly Chromatographic, Electrophoretic, and Spectrophotometric Methods.

    PubMed

    El-Yazbi, Amira F

    2017-01-20

    Sofosbuvir (SOFO) was approved by the U.S. Food and Drug Administration in 2013 for the treatment of hepatitis C virus infection, with enhanced antiviral potency compared with earlier analogs. Notwithstanding, all current editions of the pharmacopeias still do not present any analytical methods for the quantification of SOFO. Thus, rapid, simple, and ecofriendly methods for the routine analysis of commercial formulations of SOFO are desirable. In this study, five accurate methods for the determination of SOFO in pharmaceutical tablets were developed and validated. These methods include HPLC, capillary zone electrophoresis, HPTLC, and UV spectrophotometric and derivative spectrometry methods. The proposed methods proved to be rapid, simple, sensitive, selective, and accurate analytical procedures suitable for the reliable determination of SOFO in pharmaceutical tablets. An analysis of variance test with P-value > 0.05 confirmed that there were no significant differences between the proposed assays. Thus, any of these methods can be used for the routine analysis of SOFO in commercial tablets.

  12. A thioacidolysis method tailored for higher-throughput quantitative analysis of lignin monomers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harman-Ware, Anne E.; Foster, Cliff; Happs, Renee M.

    Thioacidolysis is a method used to measure the relative content of lignin monomers bound by β-O-4 linkages. Current thioacidolysis methods are low-throughput as they require tedious steps for reaction product concentration prior to analysis using standard GC methods. A quantitative thioacidolysis method that is accessible with general laboratory equipment, uses a non-chlorinated organic solvent and is tailored for higher-throughput analysis is reported. The method utilizes lignin arylglycerol monomer standards for calibration, requires 1-2 mg of biomass per assay and has been quantified using fast-GC techniques including a Low Thermal Mass Modular Accelerated Column Heater (LTM MACH). Cumbersome steps, including standard purification, sample concentrating and drying, have been eliminated to help aid in consecutive day-to-day analyses needed to sustain a high sample throughput for large screening experiments without the loss of quantitation accuracy. As a result, the method reported in this manuscript has been quantitatively validated against a commonly used thioacidolysis method and across two different research sites with three common biomass varieties to represent hardwoods, softwoods, and grasses.

  13. A thioacidolysis method tailored for higher-throughput quantitative analysis of lignin monomers

    DOE PAGES

    Harman-Ware, Anne E.; Foster, Cliff; Happs, Renee M.; ...

    2016-09-14

    Thioacidolysis is a method used to measure the relative content of lignin monomers bound by β-O-4 linkages. Current thioacidolysis methods are low-throughput as they require tedious steps for reaction product concentration prior to analysis using standard GC methods. A quantitative thioacidolysis method that is accessible with general laboratory equipment, uses a non-chlorinated organic solvent and is tailored for higher-throughput analysis is reported. The method utilizes lignin arylglycerol monomer standards for calibration, requires 1-2 mg of biomass per assay and has been quantified using fast-GC techniques including a Low Thermal Mass Modular Accelerated Column Heater (LTM MACH). Cumbersome steps, including standard purification, sample concentrating and drying, have been eliminated to help aid in consecutive day-to-day analyses needed to sustain a high sample throughput for large screening experiments without the loss of quantitation accuracy. As a result, the method reported in this manuscript has been quantitatively validated against a commonly used thioacidolysis method and across two different research sites with three common biomass varieties to represent hardwoods, softwoods, and grasses.

  14. Adaptive methods for nonlinear structural dynamics and crashworthiness analysis

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted

    1993-01-01

    The objective is to describe three research thrusts in crashworthiness analysis: adaptivity; mixed time integration, or subcycling, in which different timesteps are used for different parts of the mesh in explicit methods; and methods for contact-impact which are highly vectorizable. The techniques are being developed to improve the accuracy of calculations, ease-of-use of crashworthiness programs, and the speed of calculations. The latter is still of importance because crashworthiness calculations are often made with models of 20,000 to 50,000 elements using explicit time integration and require on the order of 20 to 100 hours on current supercomputers. The methodologies are briefly reviewed and then some example calculations employing these methods are described. The methods are also of value to other nonlinear transient computations.

  15. Formal Methods for Life-Critical Software

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Johnson, Sally C.

    1993-01-01

    The use of computer software in life-critical applications, such as for civil air transports, demands the use of rigorous formal mathematical verification procedures. This paper demonstrates how to apply formal methods to the development and verification of software by leading the reader step-by-step through requirements analysis, design, implementation, and verification of an electronic phone book application. The current maturity and limitations of formal methods tools and techniques are then discussed, and a number of examples of the successful use of formal methods by industry are cited.

  16. Microstates in resting-state EEG: current status and future directions.

    PubMed

    Khanna, Arjun; Pascual-Leone, Alvaro; Michel, Christoph M; Farzan, Faranak

    2015-02-01

    Electroencephalography (EEG) is a powerful method of studying the electrophysiology of the brain with high temporal resolution. Several analytical approaches to extract information from the EEG signal have been proposed. One method, termed microstate analysis, considers the multichannel EEG recording as a series of quasi-stable "microstates" that are each characterized by a unique topography of electric potentials over the entire channel array. Because this technique simultaneously considers signals recorded from all areas of the cortex, it is capable of assessing the function of large-scale brain networks whose disruption is associated with several neuropsychiatric disorders. In this review, we first introduce the method of EEG microstate analysis. We then review studies that have discovered significant changes in the resting-state microstate series in a variety of neuropsychiatric disorders and behavioral states. We discuss the potential utility of this method in detecting neurophysiological impairments in disease and monitoring neurophysiological changes in response to an intervention. Finally, we discuss how the resting-state microstate series may reflect rapid switching among neural networks while the brain is at rest, which could represent activity of resting-state networks described by other neuroimaging modalities. We conclude by commenting on the current and future status of microstate analysis, and suggest that EEG microstates represent a promising neurophysiological tool for understanding and assessing brain network dynamics on a millisecond timescale in health and disease.
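    The microstate decomposition described here is commonly computed with a polarity-invariant clustering of topographies taken at global-field-power peaks. The sketch below is a simplified modified-k-means pass on random stand-in EEG, not any specific toolbox implementation.

      # Polarity-invariant microstate clustering (stand-in EEG data).
      import numpy as np

      n_channels, n_times, n_states = 32, 5000, 4
      eeg = np.random.randn(n_channels, n_times)

      gfp = eeg.std(axis=0)                     # global field power
      pk = np.flatnonzero((gfp[1:-1] > gfp[:-2]) & (gfp[1:-1] > gfp[2:])) + 1
      maps = eeg[:, pk].T                       # topographies at GFP peaks
      maps /= np.linalg.norm(maps, axis=1, keepdims=True)

      templates = maps[np.random.choice(len(maps), n_states, replace=False)]
      for _ in range(50):
          sim = np.abs(maps @ templates.T)      # |correlation|: sign-free
          assign = sim.argmax(axis=1)
          for k in range(n_states):
              cluster = maps[assign == k]
              if len(cluster):                  # first PC = polarity-free mean
                  _, _, vt = np.linalg.svd(cluster, full_matrices=False)
                  templates[k] = vt[0]
      print(np.bincount(assign, minlength=n_states))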

  17. Microstates in Resting-State EEG: Current Status and Future Directions

    PubMed Central

    Khanna, Arjun; Pascual-Leone, Alvaro; Michel, Christoph M.; Farzan, Faranak

    2015-01-01

    Electroencephalography (EEG) is a powerful method of studying the electrophysiology of the brain with high temporal resolution. Several analytical approaches to extract information from the EEG signal have been proposed. One method, termed microstate analysis, considers the multichannel EEG recording as a series of quasi-stable “microstates” that are each characterized by a unique topography of electric potentials over the entire channel array. Because this technique simultaneously considers signals recorded from all areas of the cortex, it is capable of assessing the function of large-scale brain networks whose disruption is associated with several neuropsychiatric disorders. In this review, we first introduce the method of EEG microstate analysis. We then review studies that have discovered significant changes in the resting-state microstate series in a variety of neuropsychiatric disorders and behavioral states. We discuss the potential utility of this method in detecting neurophysiological impairments in disease and monitoring neurophysiological changes in response to an intervention. Finally, we discuss how the resting-state microstate series may reflect rapid switching among neural networks while the brain is at rest, which could represent activity of resting-state networks described by other neuroimaging modalities. We conclude by commenting on the current and future status of microstate analysis, and suggest that EEG microstates represent a promising neurophysiological tool for understanding and assessing brain network dynamics on a millisecond timescale in health and disease. PMID:25526823

  18. Drosophila learn efficient paths to a food source.

    PubMed

    Navawongse, Rapeechai; Choudhury, Deepak; Raczkowska, Marlena; Stewart, James Charles; Lim, Terrence; Rahman, Mashiur; Toh, Alicia Guek Geok; Wang, Zhiping; Claridge-Chang, Adam

    2016-05-01

    Elucidating the genetic and neuronal bases of learned behavior is a central problem in neuroscience. A leading system for neurogenetic discovery is the vinegar fly Drosophila melanogaster; fly memory research has identified genes and circuits that mediate aversive and appetitive learning. However, methods to study adaptive food-seeking behavior in this animal have lagged decades behind rodent feeding analysis, largely due to the challenges presented by their small scale. There is currently no method to dynamically control flies' access to food. In rodents, protocols that use dynamic food delivery are a central element of experimental paradigms that date back to the influential work of Skinner. This method is still commonly used in the analysis of learning, memory, addiction, feeding, and many other subjects in experimental psychology. The difficulty of microscale food delivery means this is not a technique used in fly behavior research. In the present manuscript we describe a microfluidic chip integrated with machine vision and automation to dynamically control defined liquid food presentations and sensory stimuli. Strikingly, repeated presentations of food at a fixed location produced improvements in path efficiency during food approach, showing that improved path choice is a learned behavior. Active control of food availability using this microfluidic system is a valuable addition to the methods currently available for the analysis of learned feeding behavior in flies.

  19. CHRONOS: a time-varying method for microRNA-mediated subpathway enrichment analysis.

    PubMed

    Vrahatis, Aristidis G; Dimitrakopoulou, Konstantina; Balomenos, Panos; Tsakalidis, Athanasios K; Bezerianos, Anastasios

    2016-03-15

    In the era of network medicine and the rapid growth of paired time series mRNA/microRNA expression experiments, there is an urgent need for pathway enrichment analysis methods able to capture the time- and condition-specific 'active parts' of the biological circuitry as well as the microRNA impact. Current methods ignore the multiple dynamical 'themes'--in the form of enriched, biologically relevant microRNA-mediated subpathways--that determine the functionality of signaling networks across time. To address these challenges, we developed the time-vaRying enriCHment integrOmics Subpathway aNalysis tOol (CHRONOS) by integrating time series mRNA/microRNA expression data with KEGG pathway maps and microRNA-target interactions. Specifically, microRNA-mediated subpathway topologies are extracted and evaluated based on the temporal transition and the fold-change activity of the linked genes/microRNAs. Further, we provide measures that capture the structural and functional features of subpathways in relation to the complete organism pathway atlas. Our application to synthetic and real data shows that CHRONOS outperforms current subpathway-based methods in unraveling the inherent dynamic properties of pathways. CHRONOS is freely available at http://biosignal.med.upatras.gr/chronos/. Supplementary data are available at Bioinformatics online.

  20. Brightness-preserving fuzzy contrast enhancement scheme for the detection and classification of diabetic retinopathy disease.

    PubMed

    Datta, Niladri Sekhar; Dutta, Himadri Sekhar; Majumder, Koushik

    2016-01-01

    The contrast enhancement of retinal images plays a vital role in the detection of microaneurysms (MAs), which are an early sign of diabetic retinopathy. A retinal image contrast enhancement method is presented to improve MA detection. The success rate on low-contrast, noisy retinal images shows the importance of the proposed method. Overall, 587 retinal input images were tested for performance analysis. The average sensitivity and specificity obtained are 95.94% and 99.21%, respectively. The area under the curve is 0.932 in the receiver operating characteristic analysis. Classification of diabetic retinopathy disease is also performed. The experimental results show that the overall MA detection method performs better than current state-of-the-art MA detection algorithms.
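    The reported metrics follow from standard confusion-matrix definitions. A minimal sketch with hypothetical detector scores and ground-truth labels:

      # Sensitivity, specificity, and ROC area for a binary MA detector.
      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(1)
      labels = rng.integers(0, 2, size=587)              # 1 = MA present
      scores = labels * 0.7 + rng.normal(0.3, 0.2, 587)  # detector output
      pred = scores > 0.5

      tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
      tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))

      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)
      auc = roc_auc_score(labels, scores)
      print(f"Se={sensitivity:.2%}  Sp={specificity:.2%}  AUC={auc:.3f}")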
