Martínez-Mier, E. Angeles; Soto-Rojas, Armando E.; Buckley, Christine M.; Margineda, Jorge; Zero, Domenick T.
2010-01-01
Objective The aim of this study was to assess methods currently used for analyzing fluoridated salt, in order to identify the most useful method for this type of analysis. Basic research design Seventy-five fluoridated salt samples were obtained. Samples were analyzed for fluoride content, with and without pretreatment, using direct and diffusion methods. Element analysis was also conducted in selected samples. Fluoride was added to ultra-pure NaCl and non-fluoridated commercial salt samples, and Ca and Mg were added to fluoridated samples, in order to assess fluoride recoveries using modifications to the methods. Results Larger amounts of fluoride were found and recovered using diffusion than direct methods (96%–100% for diffusion vs. 67%–90% for direct). Statistically significant differences were obtained between direct and diffusion methods using different ionic strength adjusters. Pretreatment methods reduced the amount of recovered fluoride. Determination of fluoride content was influenced by both the presence of NaCl and other ions in the salt. Conclusion Both direct and diffusion techniques are suitable for the analysis of fluoridated salt. The choice of method should depend on the purpose of the analysis. PMID:20088217
NASA Astrophysics Data System (ADS)
Cheng, Jian; Yue, Huiqiang; Yu, Shengjiao; Liu, Tiegang
2018-06-01
In this paper, an adjoint-based high-order h-adaptive direct discontinuous Galerkin method is developed and analyzed for the two-dimensional steady-state compressible Navier-Stokes equations. Particular emphasis is devoted to the analysis of adjoint consistency for three different direct discontinuous Galerkin discretizations: the original direct discontinuous Galerkin method (DDG), the direct discontinuous Galerkin method with interface correction (DDG(IC)), and the symmetric direct discontinuous Galerkin method (SDDG). Theoretical analysis shows that the extra interface correction term adopted in the DDG(IC) and SDDG methods plays a key role in preserving adjoint consistency. To be specific, for the model problem considered in this work, we prove that the original DDG method is not adjoint consistent, while the DDG(IC) and SDDG methods can be adjoint consistent with appropriate treatment of boundary conditions and correct modifications of the underlying output functionals. The performance of these three DDG methods is carefully investigated and evaluated through typical test cases. Based on the theoretical analysis, an adjoint-based h-adaptive DDG(IC) method is further developed and evaluated; numerical experiments show its potential in adjoint-based adaptation for simulating compressible flows.
Comparing direct and iterative equation solvers in a large structural analysis software system
NASA Technical Reports Server (NTRS)
Poole, E. L.
1991-01-01
Two direct Choleski equation solvers and two iterative preconditioned conjugate gradient (PCG) equation solvers used in a large structural analysis software system are described. The two direct solvers are implementations of the Choleski method for variable-band matrix storage and sparse matrix storage. The two iterative PCG solvers include the Jacobi conjugate gradient method and an incomplete Choleski conjugate gradient method. The performance of the direct and iterative solvers is compared by solving several representative structural analysis problems. Some key factors affecting the performance of the iterative solvers relative to the direct solvers are identified.
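The Jacobi conjugate gradient approach mentioned above can be illustrated with a minimal sketch. This is not the structural analysis system's implementation; the test matrix, tolerance, and function name are illustrative assumptions:

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=1000):
    """Jacobi-preconditioned conjugate gradient for a symmetric positive definite A."""
    M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner: inverse of diag(A)
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    z = M_inv * r                     # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # update search direction
        rz = rz_new
    return x

# Small SPD system standing in for a structural stiffness matrix
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi_pcg(A, b)
```

The same loop becomes an incomplete Choleski conjugate gradient method by replacing the diagonal preconditioner with a triangular solve against an incomplete factor.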
Vinklárková, Bára; Chromý, Vratislav; Šprongl, Luděk; Bittová, Miroslava; Rikanová, Milena; Ohnútková, Ivana; Žaludová, Lenka
2015-01-01
To select a Kjeldahl procedure suitable for the determination of total protein in reference materials used in laboratory medicine, we reviewed, in our previous article, the Kjeldahl methods adopted in clinical chemistry and found two approaches: an indirect two-step analysis based on total Kjeldahl nitrogen corrected for its nonprotein nitrogen, and a direct analysis performed on isolated protein precipitates. In this article, we compare both procedures on various reference materials. The indirect Kjeldahl method gave falsely lower results than the direct analysis. Preliminary performance parameters qualify the direct Kjeldahl analysis as a suitable primary reference procedure for the certification of total protein in reference laboratories.
Rapid enumeration of viable bacteria by image analysis
NASA Technical Reports Server (NTRS)
Singh, A.; Pyle, B. H.; McFeters, G. A.
1989-01-01
A direct viable counting method for enumerating viable bacteria was modified and made compatible with image analysis. A comparison was made between viable cell counts determined by the spread plate method and direct viable counts obtained using epifluorescence microscopy either manually or by automatic image analysis. Cultures of Escherichia coli, Salmonella typhimurium, Vibrio cholerae, Yersinia enterocolitica and Pseudomonas aeruginosa were incubated at 35 °C in a dilute nutrient medium containing nalidixic acid. Filtered samples were stained for epifluorescence microscopy and analysed manually as well as by image analysis. Cells enlarged after incubation were considered viable. The viable cell counts determined using image analysis were higher than those obtained by either the direct manual count of viable cells or the spread plate method. The volume of sample filtered and the number of cells in the original sample did not influence the efficiency of the method. However, the optimal concentration of nalidixic acid (2.5–20 µg ml⁻¹) and length of incubation (4–8 h) varied with the culture tested. The results of this study showed that under optimal conditions, the modified direct viable count method in combination with image analysis microscopy provided an efficient and quantitative technique for counting viable bacteria in a short time.
The solution of linear systems of equations with a structural analysis code on the NAS CRAY-2
NASA Technical Reports Server (NTRS)
Poole, Eugene L.; Overman, Andrea L.
1988-01-01
Two methods for solving linear systems of equations on the NAS Cray-2 are described. One is a direct method; the other is an iterative method. Both methods exploit the architecture of the Cray-2, particularly the vectorization, and are aimed at structural analysis applications. To demonstrate and evaluate the methods, they were installed in a finite element structural analysis code denoted the Computational Structural Mechanics (CSM) Testbed. A description of the techniques used to integrate the two solvers into the Testbed is given. Storage schemes, memory requirements, operation counts, and reformatting procedures are discussed. Finally, results from the new methods are compared with results from the initial Testbed sparse Choleski equation solver for three structural analysis problems. The new direct solvers achieve the highest computational rates of the methods compared. The new iterative methods cannot achieve computation rates as high as the vectorized direct solvers, but are best for well-conditioned problems, which require fewer iterations to converge to the solution.
NASA Astrophysics Data System (ADS)
He, A.; Quan, C.
2018-04-01
The principal component analysis (PCA) and region-matching combined method is effective for fringe direction estimation. However, its mask construction algorithm for region matching fails in some circumstances, and its algorithm for converting orientation to direction in mask areas is computationally heavy and unoptimized. We propose an improved PCA-based region-matching method for fringe direction estimation, which includes an improved and robust mask construction scheme and a fast, optimized orientation-to-direction conversion algorithm for the mask areas. Along with the estimated fringe direction map, the fringe pattern filtered by automatic selective reconstruction modification and enhanced fast empirical mode decomposition (ASRm-EFEMD) is used for the Hilbert spiral transform (HST) to demodulate the phase. Subsequently, the windowed Fourier ridge (WFR) method is used to refine the phase. The robustness and effectiveness of the proposed method are demonstrated on both simulated and experimental fringe patterns.
Krimmel, R.M.
1999-01-01
Net mass balance has been measured since 1958 at South Cascade Glacier using the 'direct method,' i.e. area averages of snow gain and firn and ice loss measured at stakes. Analysis of cartographic vertical photography has allowed measurement of mass balance using the 'geodetic method' in 1970, 1975, 1977, 1979-80, and 1985-97. Water-equivalent change as measured by these nearly independent methods should give similar results. During 1970-97, the direct method shows a cumulative balance of about -15 m, and the geodetic method shows a cumulative balance of about -22 m. The deviation between the two methods is fairly consistent, suggesting no gross errors in either, but rather a cumulative systematic error. It is suspected that the cumulative error is in the direct method, because the geodetic method is based on a non-changing reference, the bedrock control, whereas the direct method is measured with reference only to the previous year's summer surface. Possible sources of mass loss that are missing from the direct method are basal melt, internal melt, and ablation on crevasse walls. Possible systematic measurement errors include under-estimation of the density of lost material, sinking stakes, or poorly represented areas.
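The reported discrepancy can be expressed as a simple annualized systematic error. A back-of-envelope sketch; treating 1970-97 as 27 balance years is an assumption about how the interval is counted:

```python
direct_cum = -15.0    # m water equivalent, direct (glaciological) method, 1970-97
geodetic_cum = -22.0  # m water equivalent, geodetic method, same interval

years = 1997 - 1970                        # 27 balance years (assumed counting)
discrepancy = geodetic_cum - direct_cum    # mass loss unaccounted for by the direct method
annual_bias = discrepancy / years          # roughly -0.26 m w.e. per year
print(discrepancy, round(annual_bias, 2))
```

A consistent per-year bias of this size is what one would expect from small, systematically missed losses (basal and internal melt, crevasse-wall ablation) rather than a gross error in either method.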
Aeroacoustic directivity via wave-packet analysis of mean or base flows
NASA Astrophysics Data System (ADS)
Edstrand, Adam; Schmid, Peter; Cattafesta, Louis
2017-11-01
Noise pollution is an ever-increasing problem in society, and knowledge of the directivity patterns of the sound radiation is required for prediction and control. Directivity is frequently determined through costly numerical simulations of the flow field combined with an acoustic analogy. We introduce a new computationally efficient method of finding directivity for a given mean or base flow field using wave-packet analysis (Trefethen, PRSA 2005). Wave-packet analysis approximates the eigenvalue spectrum with spectral accuracy by modeling the eigenfunctions as wave packets. With the wave packets determined, we then follow the method of Obrist (JFM, 2009), which uses Lighthill's acoustic analogy to determine the far-field sound radiation and directivity of wave-packet modes. We apply this method to a canonical jet flow (Gudmundsson and Colonius, JFM 2011) and determine the directivity of potentially unstable wave packets. Furthermore, we generalize the method to consider a three-dimensional flow field of a trailing vortex wake. In summary, we approximate the disturbances as wave packets and extract the directivity from the wave-packet approximation in a fraction of the time of standard aeroacoustic solvers. ONR Grant N00014-15-1-2403.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony
2018-04-20
Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.
West, Phillip B [Idaho Falls, ID; Novascone, Stephen R [Idaho Falls, ID; Wright, Jerry P [Idaho Falls, ID
2012-05-29
Earth analysis methods, subsurface feature detection methods, earth analysis devices, and articles of manufacture are described. According to one embodiment, an earth analysis method includes engaging a device with the earth, analyzing the earth in a single substantially lineal direction using the device during the engaging, and providing information regarding a subsurface feature of the earth using the analysis.
West, Phillip B [Idaho Falls, ID; Novascone, Stephen R [Idaho Falls, ID; Wright, Jerry P [Idaho Falls, ID
2011-09-27
Earth analysis methods, subsurface feature detection methods, earth analysis devices, and articles of manufacture are described. According to one embodiment, an earth analysis method includes engaging a device with the earth, analyzing the earth in a single substantially lineal direction using the device during the engaging, and providing information regarding a subsurface feature of the earth using the analysis.
ERIC Educational Resources Information Center
Friman, Margareta; Nyberg, Claes; Norlander, Torsten
2004-01-01
A descriptive qualitative analysis of in-depth interviews with seven provincial Soccer Association referees was carried out in order to find out how referees experience threats and aggression directed at them. The Empirical Phenomenological Psychological method (EPP-method) was used. The analysis resulted in thirty categories which…
Heading in the right direction: thermodynamics-based network analysis and pathway engineering.
Ataman, Meric; Hatzimanikatis, Vassily
2015-12-01
Thermodynamics-based network analysis through the introduction of thermodynamic constraints in metabolic models allows a deeper analysis of metabolism and guides pathway engineering. The number and the areas of application of thermodynamics-based network analysis methods have been increasing in the last ten years. We review recent applications of these methods, identify the areas where such analysis can contribute significantly, and outline the needs for future developments. We find that organisms with multiple compartments and extremophiles present challenges for modeling and thermodynamics-based flux analysis. The evolution of current and new methods must also address the issues of multiple alternative flux directionalities and of the uncertainties and partial information from analytical methods. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
A power analysis for multivariate tests of temporal trend in species composition.
Irvine, Kathryn M; Dinger, Eric C; Sarr, Daniel
2011-10-01
Long-term monitoring programs emphasize power analysis as a tool to determine the sampling effort necessary to effectively document ecologically significant changes in ecosystems. Programs that monitor entire multispecies assemblages require a method for determining the power of multivariate statistical models to detect trend. We provide a method to simulate presence-absence species assemblage data that are consistent with increasing or decreasing directional change in species composition within multiple sites. This step is the foundation for using Monte Carlo methods to approximate the power of any multivariate method for detecting temporal trends. We focus on comparing the power of the Mantel test, permutational multivariate analysis of variance, and constrained analysis of principal coordinates. We find that the power of the various methods we investigate is sensitive to the number of species in the community, univariate species patterns, and the number of sites sampled over time. For increasing directional change scenarios, constrained analysis of principal coordinates was as or more powerful than permutational multivariate analysis of variance, while the Mantel test was the least powerful. However, in our investigation of decreasing directional change, the Mantel test was typically as or more powerful than the other models.
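The general Monte Carlo recipe behind this kind of power analysis can be sketched as follows. This is a simplified stand-in, not the authors' procedure: it simulates presence-absence assemblages with a linear drift in occupancy probability and, for brevity, uses a univariate permutation test on species richness in place of the multivariate tests compared in the paper; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_assemblage(n_years=10, n_species=20, drift=0.0):
    """Presence-absence matrix (years x species); occupancy probability drifts linearly."""
    base = rng.uniform(0.2, 0.6, n_species)
    years = np.arange(n_years)
    probs = np.clip(base + drift * years[:, None], 0.0, 1.0)
    return rng.random((n_years, n_species)) < probs

def perm_trend_pvalue(matrix, n_perm=99):
    """Permutation p-value for a monotone trend in species richness over years."""
    years = np.arange(matrix.shape[0])
    richness = matrix.sum(axis=1)
    obs = abs(np.corrcoef(years, richness)[0, 1])
    null = [abs(np.corrcoef(years, rng.permutation(richness))[0, 1])
            for _ in range(n_perm)]
    return (1 + sum(v >= obs for v in null)) / (n_perm + 1)

def power(drift, n_sim=60, alpha=0.05):
    """Fraction of simulated data sets in which the trend test rejects at level alpha."""
    hits = sum(perm_trend_pvalue(simulate_assemblage(drift=drift)) < alpha
               for _ in range(n_sim))
    return hits / n_sim

p_null = power(0.0)    # should hover near alpha
p_drift = power(0.05)  # should be much larger: the test detects the drift
print(p_null, p_drift)
```

Substituting any multivariate statistic (e.g. a PERMANOVA pseudo-F with its own permutation scheme) into `perm_trend_pvalue` gives the corresponding multivariate power estimate with the same outer loop.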
The high-order decoupled direct method in three dimensions for particular matter (HDDM-3D/PM) has been implemented in the Community Multiscale Air Quality (CMAQ) model to enable advanced sensitivity analysis. The major effort of this work is to develop high-order DDM sensitivity...
Full-field local displacement analysis of two-sided paperboard
J.M. Considine; D.W. Vahey
2007-01-01
This report describes a method to examine full-field displacements of both sides of paperboard during tensile testing. Analysis showed out-of-plane shear behavior near the failure zones. The method was reliably used to examine out-of-plane shear in double-notch shear specimens. Differences in shear behavior of machine-direction and cross-machine-direction specimens…
Lee, Dae-Hee; Kim, Hyun-Jung; Ahn, Hyeong-Sik; Bin, Seong-Il
2016-12-01
Although three-dimensional computed tomography (3D-CT) has been used to compare femoral tunnel position following transtibial and anatomical anterior cruciate ligament (ACL) reconstruction, no consensus has been reached on which technique results in a more anatomical position, because methods of quantifying femoral tunnel position on 3D-CT have not been consistent. This meta-analysis was therefore performed to compare femoral tunnel location following transtibial and anatomical ACL reconstruction, in both the low-to-high and deep-to-shallow directions. The meta-analysis included all studies that used 3D-CT to compare femoral tunnel location, using quadrant or anatomical coordinate axis methods, following transtibial and anatomical (AM portal or OI) single-bundle ACL reconstruction. Six studies were included. When measured using the anatomical coordinate axis method, femoral tunnel location was 18% higher in the low-to-high direction with the transtibial technique than with the anatomical methods, with no significant difference in the deep-to-shallow direction. When measured using the quadrant method, however, femoral tunnel positions were significantly higher (21%) and shallower (6%) with the transtibial than with the anatomical methods of ACL reconstruction. The anatomical ACL reconstruction techniques thus led to a lower femoral tunnel aperture location than the transtibial technique, suggesting the superiority of anatomical techniques, in terms of aperture location in the low-to-high direction, for creating new femoral tunnels during revision ACL reconstruction. However, the mean difference in the deep-to-shallow direction differed by method of measurement. Meta-analysis, Level II.
Motif-Synchronization: A new method for analysis of dynamic brain networks with EEG
NASA Astrophysics Data System (ADS)
Rosário, R. S.; Cardoso, P. T.; Muñoz, M. A.; Montoya, P.; Miranda, J. G. V.
2015-12-01
The major aim of this work was to propose a new association method known as Motif-Synchronization. This method was developed to provide information about the degree and direction of synchronization between two nodes of a network by counting the number of occurrences of certain patterns between any two time series. The second objective of this work was to present a new methodology for the analysis of dynamic brain networks, combining the Time-Varying Graph (TVG) method with a directional association method. We further applied the new algorithms to a set of human electroencephalogram (EEG) signals to perform a dynamic analysis of the brain functional networks (BFN).
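The core idea, counting matching short ordinal patterns (motifs) between two series at different lags to obtain both a synchronization degree and a direction, can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' exact algorithm; the motif length, lag range, and index definitions are assumptions.

```python
import numpy as np

def motifs(x, m=3):
    """Ordinal motif sequence: the rank pattern of each length-m window of x."""
    windows = np.lib.stride_tricks.sliding_window_view(np.asarray(x), m)
    return [tuple(np.argsort(w)) for w in windows]

def motif_sync(x, y, max_lag=5, m=3):
    """Return (Q, q): degree of motif synchronization and its direction.

    Q in [0, 1] is the best fraction of matching motifs over the tested lags;
    q > 0 suggests x leads y, q < 0 that y leads x, q = 0 no clear direction.
    """
    mx, my = motifs(x, m), motifs(y, m)
    n = min(len(mx), len(my)) - max_lag
    def match(lag):
        # Fraction of equal motifs when y is delayed by `lag` samples
        if lag >= 0:
            a, b = mx[:n], my[lag:lag + n]
        else:
            a, b = mx[-lag:-lag + n], my[:n]
        return np.mean([u == v for u, v in zip(a, b)])
    c0 = match(0)
    c_xy = max(match(k) for k in range(1, max_lag + 1))   # x leads y
    c_yx = max(match(-k) for k in range(1, max_lag + 1))  # y leads x
    Q = max(c0, c_xy, c_yx)
    q = float(np.sign(c_xy - c_yx)) if max(c_xy, c_yx) > c0 else 0.0
    return Q, q

# A delayed copy should be perfectly synchronized, with x leading y
t = np.linspace(0.0, 4.0 * np.pi, 200)
x = np.sin(t)
y = np.roll(x, 3)          # y lags x by 3 samples
Q, q = motif_sync(x, y)
print(Q, q)
```

Applying such a pairwise index over sliding windows of multichannel EEG yields, for each window, a directed adjacency matrix — the sequence of which is the kind of time-varying graph the abstract combines with the TVG method.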
Sheela, Shekaraiah; Aithal, Venkataraja U; Rajashekhar, Bellur; Lewis, Melissa Glenda
2016-01-01
Tracheoesophageal (TE) prosthetic voice is one of the voice restoration options for individuals who have undergone a total laryngectomy. Aerodynamic analysis of the TE voice provides insight into the physiological changes that occur at the level of the neoglottis with voice prosthesis in situ. The present study is a systematic review and meta-analysis of sub-neoglottic pressure (SNP) measurement in TE speakers by direct and indirect methods. The screening of abstracts and titles was carried out for inclusion of articles using 10 electronic databases spanning the period from 1979 to 2016. Ten articles which met the inclusion criteria were considered for meta-analysis with a pooled age range of 40-83 years. The pooled mean SNP obtained from the direct measurement method was 53.80 cm H2O with a 95% confidence interval of 21.14-86.46 cm H2O, while for the indirect measurement method, the mean SNP was 23.55 cm H2O with a 95% confidence interval of 19.23-27.87 cm H2O. Based on the literature review, the various procedures followed for direct and indirect measurements of SNP contributed to a range of differences in outcome measures. The meta-analysis revealed that the "interpolation method" for indirect estimation of SNP was the most acceptable and valid method in TE speakers. © 2017 S. Karger AG, Basel.
Vo, Evanly; Zhuang, Ziqing; Birch, Eileen; Birch, Quinn
2016-01-01
The aim of this study was to apply a direct-reading aerosol instrument method and an elemental carbon (EC) analysis method to measure the mass-based penetration of single-walled carbon nanotubes (SWCNTs) and multi-walled carbon nanotubes (MWCNTs) through elastomeric half-mask respirators (EHRs) and filtering facepiece respirators (FFRs). For the direct-reading aerosol instrument method, two scanning mobility particle sizer/aerodynamic particle sizer systems were used to simultaneously determine the upstream (outside respirator) and downstream (inside respirator) test aerosols. For the EC analysis method, upstream and downstream CNTs were collected on filter cassettes and then analyzed using a thermal-optical technique. CNT mass penetrations were found in both methods to be within the associated efficiency requirements for each type and class of the respirator models that were tested. Generally, the penetrations of SWCNTs and MWCNTs had a similar trend with penetration being the highest for the N95 EHRs, followed by N95 FFRs, P100 EHRs, and P100 FFRs. This trend held true for both methods; however, the CNT penetration determined by the direct-reading aerosol instrument method (0.009-1.09% for SWCNTs and 0.005-0.21% for MWCNTs) was greater relative to the penetration values found through EC analysis method (0.007-0.69% for SWCNTs and 0.004-0.13% for MWCNTs). The results of this study illustrate considerations for how the methods can be used to evaluate penetration of morphologically complex materials through FFRs and EHRs.
Clemons, Kristina; Dake, Jeffrey; Sisco, Edward; Verbeck, Guido F
2013-09-10
Direct analysis in real time mass spectrometry (DART-MS) has proven to be a useful forensic tool for the trace analysis of energetic materials. While other techniques for detecting trace amounts of explosives involve extraction, derivatization, solvent exchange, or sample clean-up, DART-MS requires none of these. Typical DART-MS analyses directly from a solid sample or from a swab have been quite successful; however, these methods may not always be an optimal sampling technique in a forensic setting. For example, if the sample were only located in an area which included a latent fingerprint of interest, direct DART-MS analysis or the use of a swab would almost certainly destroy the print. To avoid ruining such potentially invaluable evidence, another method has been developed which will leave the fingerprint virtually untouched. Direct analyte-probed nanoextraction coupled to nanospray ionization-mass spectrometry (DAPNe-NSI-MS) has demonstrated excellent sensitivity and repeatability in forensic analyses of trace amounts of illicit drugs from various types of surfaces. This technique employs a nanomanipulator in conjunction with bright-field microscopy to extract single particles from a surface of interest and has provided a limit of detection of 300 attograms for caffeine. Combining DAPNe with DART-MS provides another level of flexibility in forensic analysis, and has proven to be a sufficient detection method for trinitrotoluene (TNT), RDX, and 1-methylaminoanthraquinone (MAAQ). Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
A wideband FMBEM for 2D acoustic design sensitivity analysis based on direct differentiation method
NASA Astrophysics Data System (ADS)
Chen, Leilei; Zheng, Changjun; Chen, Haibo
2013-09-01
This paper presents a wideband fast multipole boundary element method (FMBEM) for two dimensional acoustic design sensitivity analysis based on the direct differentiation method. The wideband fast multipole method (FMM) formed by combining the original FMM and the diagonal form FMM is used to accelerate the matrix-vector products in the boundary element analysis. The Burton-Miller formulation is used to overcome the fictitious frequency problem when using a single Helmholtz boundary integral equation for exterior boundary-value problems. The strongly singular and hypersingular integrals in the sensitivity equations can be evaluated explicitly and directly by using the piecewise constant discretization. The iterative solver GMRES is applied to accelerate the solution of the linear system of equations. A set of optimal parameters for the wideband FMBEM design sensitivity analysis are obtained by observing the performances of the wideband FMM algorithm in terms of computing time and memory usage. Numerical examples are presented to demonstrate the efficiency and validity of the proposed algorithm.
Jafari, Mohammad T; Riahi, Farhad
2014-05-23
The capability of corona discharge ionization ion mobility spectrometry (CD-IMS) for direct analysis of samples extracted by dispersive liquid-liquid microextraction (DLLME) was investigated and evaluated for the first time. To that end, an appropriate new injection port was designed and constructed, making it possible to inject a known sample volume directly, without tedious sample preparation steps (e.g. derivatization, solvent evaporation, and redissolving in another solvent…). Malathion, as a test compound, was extracted from different matrices by a rapid and convenient DLLME method. The positive ion mobility spectra of the extracted malathion were obtained after direct injection of carbon tetrachloride or methanol solutions. The analyte responses were compared, and the statistical results revealed the feasibility of direct analysis of the samples extracted in carbon tetrachloride, resulting in a convenient methodology. The coupled DLLME-CD-IMS method was exhaustively validated in terms of sensitivity, dynamic range, recovery, and enrichment factor. Finally, various real samples of apple, river and underground water were analyzed, all verifying the feasibility and success of the proposed method for easy extraction of the analyte by DLLME before direct analysis by CD-IMS. Copyright © 2014 Elsevier B.V. All rights reserved.
A laboratory study of nonlinear changes in the directionality of extreme seas
NASA Astrophysics Data System (ADS)
Latheef, M.; Swan, C.; Spinneken, J.
2017-03-01
This paper concerns the description of surface water waves, specifically nonlinear changes in the directionality. Supporting calculations are provided to establish the best method of directional wave generation, the preferred method of directional analysis and the inputs on which such a method should be based. These calculations show that a random directional method, in which the phasing, amplitude and direction of propagation of individual wave components are chosen randomly, has benefits in achieving the required ergodicity. In terms of analysis procedures, the extended maximum entropy principle, with inputs based upon vector quantities, produces the best description of directionality. With laboratory data describing the water surface elevation and the two horizontal velocity components at a single point, several steep sea states are considered. The results confirm that, as the steepness of a sea state increases, the overall directionality of the sea state reduces. More importantly, it is also shown that the largest waves become less spread or more unidirectional than the sea state as a whole. This provides an important link to earlier descriptions of deterministic wave groups produced by frequency focusing, helps to explain recent field observations and has important practical implications for the design of marine structures and vessels.
NASA Technical Reports Server (NTRS)
Calkins, D. S.
1998-01-01
When the dependent (or response) variable in an experiment has direction and magnitude, one approach that has been used for statistical analysis involves splitting magnitude and direction and applying univariate statistical techniques to the components. However, such treatment of quantities with direction and magnitude is not justifiable mathematically and can lead to incorrect conclusions about relationships among variables and, as a result, to flawed interpretations. This note discusses a problem with that practice and recommends mathematically correct procedures to be used with dependent variables that have direction and magnitude for 1) computation of mean values, 2) statistical contrasts of and confidence intervals for means, and 3) correlation methods.
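The pitfall with mean values, the first item above, is easy to demonstrate: averaging direction as an ordinary number fails near the 0°/360° wrap, while averaging the resolved vector components behaves correctly. A minimal sketch with made-up observations:

```python
import numpy as np

# Two wind-like observations: directions in degrees, magnitudes in arbitrary units
directions_deg = np.array([350.0, 10.0])
magnitudes = np.array([1.0, 1.0])

# Wrong: treating direction as an ordinary number gives 180 deg,
# the exact opposite of where both observations point
wrong_mean_dir = directions_deg.mean()

# Right: resolve each observation into Cartesian components, average those
theta = np.radians(directions_deg)
mean_x = (magnitudes * np.cos(theta)).mean()
mean_y = (magnitudes * np.sin(theta)).mean()
mean_dir = np.degrees(np.arctan2(mean_y, mean_x)) % 360.0  # ~0 deg, as expected
mean_mag = np.hypot(mean_x, mean_y)  # < 1: directional spread shortens the resultant
print(wrong_mean_dir, mean_dir, mean_mag)
```

The shortened resultant magnitude is not an error: it carries the spread information that separate averaging of magnitude and direction discards.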
Aging and Directed Forgetting in Episodic Memory: A Meta-Analysis
Titz, Cora; Verhaeghen, Paul
2009-01-01
This meta-analysis examines the effects of aging on directed forgetting. A cue to forget is more effective in younger (d = 1.17) than in older adults (d = 0.81). Directed-forgetting effects were larger: (a) with the item method rather than the list method; (b) with longer presentation times; (c) with longer postcue rehearsal times; (d) with single words rather than verbal action phrases as stimuli; (e) with shorter lists; and (f) when recall rather than recognition was tested. Age effects were reliably larger when the item method was used, suggesting that these effects are mainly due to encoding differences. PMID:20545424
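The effect sizes quoted above (d = 1.17 and d = 0.81) are standardized mean differences. As a reminder of the computation, a minimal sketch with made-up recall scores; the numbers are purely illustrative, not data from the meta-analysis:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d: mean difference in units of the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Hypothetical proportions recalled: remember-cued vs forget-cued items
d = cohens_d(0.70, 0.45, 0.20, 0.22, n1=30, n2=30)
print(round(d, 2))
```

A d near 1 means the forget cue shifted recall by about one pooled standard deviation, the scale on which the younger-adult and older-adult effects above are compared.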
NASA Astrophysics Data System (ADS)
Yang, Xiaojun; Lu, Dun; Liu, Hui; Zhao, Wanhua
2018-06-01
The complicated electromechanical coupling phenomena arising from different causes have significant influences on the dynamic precision of the direct-driven feed system in machine tools. In this paper, a novel integrated modeling and analysis method for the multiple electromechanical couplings of the direct-driven feed system in machine tools is presented. First, four different kinds of electromechanical coupling phenomena in the direct-driven feed system are analyzed systematically. Then a novel integrated modeling and analysis method for the electromechanical coupling, which is influenced by multiple factors, is put forward. In addition, the effects of the multiple electromechanical couplings on the dynamic precision of the feed system and their main influencing factors are compared and discussed. Finally, the results of the modeling and analysis are verified by experiments. It is found that multiple electromechanical coupling loops, which overlap and influence each other, are the main cause of the displacement fluctuations in the direct-driven feed system.
Bayen, Stéphane; Yi, Xinzhu; Segovia, Elvagris; Zhou, Zhi; Kelly, Barry C
2014-04-18
Emerging contaminants such as antibiotics have received recent attention, as they have been detected in natural waters and raise health concerns over potential antibiotic resistance. To investigate fast, high-throughput analysis, and eventually continuous on-line analysis, of emerging contaminants, this study presents results on the analysis of seven selected antibiotics (sulfadiazine, sulfamethazine, sulfamerazine, sulfamethoxazole, chloramphenicol, lincomycin, tylosin) in surface freshwater and seawater using direct injection of a small sample volume (20 μL) in liquid chromatography electrospray ionization tandem mass spectrometry (LC-ESI-MS/MS). Notably, direct injection of seawater into the LC-ESI-MS/MS was made possible by the post-column switch on the system, which allows salt-containing solutions flushed out of the column to be diverted to waste. Mean recoveries based on the isotope dilution method averaged 95 ± 14% and 96 ± 28% across the compounds for spiked freshwater and seawater, respectively. Linearity was assessed across six spiking levels, and the response was linear (r² > 0.99) for all compounds. For real samples, direct injection concentrations were compared with those obtained by the conventional SPE-based analysis, and both techniques concurred on the presence/absence and levels of the compounds. These results suggest direct injection is a reliable method to detect antibiotics in both freshwater and seawater. Method detection limits for the direct injection technique (37 pg/L to 226 ng/L in freshwater, and 16 pg/L to 26 ng/L in seawater) are sufficient for a number of environmental applications, for example the fast screening of water samples for ecological risk assessments. In the present study of real samples, this new method allowed, for example, the positive detection of some compounds (e.g. lincomycin) down to the sub-ng/L range.
The direct injection method appears to be relatively cheaper and faster, requires a smaller sample size, and is more robust to equipment cross-contamination as compared to the conventional SPE-based method. Copyright © 2014 Elsevier B.V. All rights reserved.
Crews, C; Chiodini, A; Granvogl, M; Hamlet, C; Hrnčiřík, K; Kuhlmann, J; Lampen, A; Scholz, G; Weisshaar, R; Wenzl, T; Jasti, P R; Seefelder, W
2013-01-01
Esters of 2- and 3-monochloropropane-1,2-diol (MCPD) and glycidol esters are important contaminants of processed edible oils used as foods or food ingredients. This review describes the occurrence and analysis of MCPD esters and glycidol esters in vegetable oils and some other foods. The focus is on analytical approaches based on both direct and indirect methods. Methods of analysis applied to oils and lipid extracts of foods have been based on transesterification to free MCPD and determination by gas chromatography-mass spectrometry (indirect methods), and on high-performance liquid chromatography-mass spectrometry (direct methods). The evolution and performance of the different methods are described and their advantages and disadvantages are discussed. The application of direct and indirect methods to the analysis of foods and to research studies is described. The metabolism and fate of MCPD esters and glycidol esters in biological systems, and the methods used to study these in body tissues, are also described. A clear understanding of the chemistry of the methods is important when choosing those suitable for the desired application, and will contribute to the mitigation of these contaminants.
Classical methods and modern analysis for studying fungal diversity
John Paul Schmit; D. J. Lodge
2005-01-01
In this chapter, we examine the use of classical methods to study fungal diversity. Classical methods rely on the direct observation of fungi, rather than sampling fungal DNA. We summarize a wide variety of classical methods, including direct sampling of fungal fruiting bodies, incubation of substrata in moist chambers, culturing of endophytes, and particle plating. We...
NASA Technical Reports Server (NTRS)
Abdul-Aziz, Ali; Kalluri, Sreeramesh
1991-01-01
The temperature-dependent engineering elastic constants of a directionally solidified nickel-base superalloy were estimated from the single-crystal elastic constants of nickel and MAR-M002 superalloy by using Wells' method. In this method, the directionally solidified (columnar-grained) nickel-base superalloy was modeled as a transversely isotropic material, and the five independent elastic constants of the transversely isotropic material were determined from the three independent elastic constants of a cubic single crystal. Solidification for both the single crystals and the directionally solidified superalloy was assumed to be along the (001) direction. Temperature-dependent Young's moduli in the longitudinal and transverse directions, shear moduli, and Poisson's ratios were tabulated for the directionally solidified nickel-base superalloy. These engineering elastic constants could be used as input for finite element structural analysis of directionally solidified turbine engine components.
Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai
2016-04-01
We present a novel, high-precision microscopic vision modeling method that can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. The method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model and a residual compensation model. First, a method of image distortion correction is proposed. The image data required for the correction come from stereo images of a calibration sample. The geometric features of the image distortions can be predicted through the shape deformation of lines constructed by grid points in the stereo images. Linear and polynomial fitting methods are applied to correct the image distortions. Second, shape deformation features of the disparity distribution are discussed, and a method of disparity distortion correction is proposed in which polynomial fitting is applied to correct the disparity distortion. Third, a microscopic vision model is derived, consisting of two parts: an initial vision model and a residual compensation model. We derive the initial vision model by analysis of the direct mapping relationship between object and image points, and the residual compensation model from the residual analysis of the initial vision model. The results show that with maximum reconstruction distances of 4.1 mm in the X direction, 2.9 mm in the Y direction and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison with the traditional pinhole camera model shows that the two models have similar reconstruction precision for X coordinates; however, the pinhole camera model has lower precision for Y and Z coordinates than our model. The method proposed in this paper is very helpful for micro-gripping systems based on SLM microscopic vision. Copyright © 2016 Elsevier Ltd. All rights reserved.
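As a rough illustration of the polynomial-fitting idea used for distortion correction, the sketch below fits a cubic that maps distorted coordinates back to true ones. The data and distortion model are synthetic, not the paper's calibration procedure:

```python
import numpy as np

# Hypothetical sketch: correct a 1D radial image distortion by polynomial
# fitting, in the spirit of the grid-point calibration described above.
# True and distorted radii of calibration grid points (synthetic data).
r_true = np.linspace(0.0, 1.0, 21)
r_distorted = r_true + 0.05 * r_true**3        # synthetic barrel-type distortion

# Fit a cubic polynomial mapping distorted radii back to true radii.
coeffs = np.polyfit(r_distorted, r_true, 3)
correct = np.poly1d(coeffs)

# Apply the correction and check the worst-case residual.
residual = np.max(np.abs(correct(r_distorted) - r_true))
print(f"max residual after correction: {residual:.2e}")
```

In the paper, an analogous fit is applied to the deformation of lines constructed from grid points in the stereo images.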
Skeletal Mechanism Generation of Surrogate Jet Fuels for Aeropropulsion Modeling
NASA Astrophysics Data System (ADS)
Sung, Chih-Jen; Niemeyer, Kyle E.
2010-05-01
A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented, with skeletal reductions of two important hydrocarbon components, n-heptane and n-decane, relevant to surrogate jet fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP): DRGEP is first applied to efficiently remove many unimportant species, and sensitivity analysis then removes further unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each previous method, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal.
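The graph-search core of DRGEP can be sketched as a max-product path search over direct interaction coefficients; species whose propagated coefficient falls below a cutoff are removed. The species names, edge weights, and cutoff below are illustrative, not taken from the n-heptane or n-decane mechanisms:

```python
import heapq

def drgep_coefficients(graph, target):
    """Max path-wise product of direct interaction coefficients from `target`.

    graph: dict mapping species -> {neighbor: direct interaction coeff in (0, 1]}
    """
    best = {target: 1.0}
    heap = [(-1.0, target)]            # max-heap via negated coefficients
    while heap:
        neg_c, s = heapq.heappop(heap)
        c = -neg_c
        if c < best.get(s, 0.0):
            continue                   # stale entry
        for nbr, w in graph.get(s, {}).items():
            cand = c * w               # error propagates multiplicatively
            if cand > best.get(nbr, 0.0):
                best[nbr] = cand
                heapq.heappush(heap, (-cand, nbr))
    return best

# Toy mechanism graph: the fuel couples strongly to A, and to B mainly via A.
graph = {"fuel": {"A": 0.9, "B": 0.05}, "A": {"B": 0.5, "C": 0.01}}
coeffs = drgep_coefficients(graph, "fuel")
keep = {s for s, c in coeffs.items() if c >= 0.1}
print(sorted(keep))
```

In the full DRGEPSA procedure, the species surviving this cutoff would then be screened further by sensitivity analysis.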
Duvivier, Wilco F; van Beek, Teris A; Pennings, Ed J M; Nielen, Michel W F
2014-04-15
Forensic hair analysis methods are laborious, time-consuming and provide only a rough retrospective estimate of the time of drug intake. Recently, hair imaging methods using matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) were reported, but these methods require the application of MALDI matrix and are performed under vacuum. Direct analysis of entire locks of hair without any sample pretreatment and with improved spatial resolution would thus address a need. Hair samples were attached to stainless steel mesh screens and scanned in the X-direction using direct analysis in real time (DART) ambient ionization orbitrap MS. The DART gas temperature and the accuracy of the probed hair zone were optimized using Δ-9-tetrahydrocannabinol (THC) as a model compound. Since external contamination is a major issue in forensic hair analysis, sub-samples were measured before and after dichloromethane decontamination. The relative intensity of the THC signal in spiked blank hair versus that of quinine as the internal standard showed good reproducibility (26% RSD) and linearity of the method (R(2) = 0.991). With the DART hair scan THC could be detected in hair samples from different chronic cannabis users. The presence of THC was confirmed by quantitative liquid chromatography/tandem mass spectrometry. Zones with different THC content could be clearly distinguished, indicating that the method might be used for retrospective timeline assessments. Detection of THC in decontaminated drug user hair showed that the DART hair scan not only probes THC on the surface of hair, but penetrates deeply enough to measure incorporated THC. A new approach in forensic hair analysis has been developed by probing complete locks of hair using DART-MS. Longitudinal scanning enables detection of incorporated compounds and can be used as pre-screening for THC without sample preparation. The method could also be adjusted for the analysis of other drugs of abuse. 
Copyright © 2014 John Wiley & Sons, Ltd.
Shah, Kumar A; Peoples, Michael C; Halquist, Matthew S; Rutan, Sarah C; Karnes, H Thomas
2011-01-25
The work described in this paper involves development of a high-throughput on-line microfluidic sample extraction method using capillary micro-columns packed with MIP beads coupled with tandem mass spectrometry for the analysis of urinary NNAL. The method was optimized and matrix effects were evaluated and resolved. The method enabled low sample volume (200 μL) and rapid analysis of urinary NNAL by direct injection onto the microfluidic column packed with molecularly imprinted beads engineered to NNAL. The method was validated according to the FDA bioanalytical method validation guidance. The dynamic range extended from 20.0 to 2500.0 pg/mL with a percent relative error of ±5.9% and a run time of 7.00 min. The lower limit of quantitation was 20.0 pg/mL. The method was used for the analysis of NNAL and NNAL-Gluc concentrations in smokers' urine. Copyright © 2010 Elsevier B.V. All rights reserved.
Sha, Zhichao; Liu, Zhengmeng; Huang, Zhitao; Zhou, Yiyu
2013-08-29
This paper addresses the problem of direction-of-arrival (DOA) estimation of multiple wideband coherent chirp signals, and a new method is proposed. The new method is based on signal component analysis of the array output covariance, instead of the complicated time-frequency analysis used in the previous literature; it is thus more compact and effectively avoids possible signal energy loss during the intermediate processing. Moreover, a priori knowledge of the number of signals is no longer a necessity for DOA estimation in the new method. Simulation results demonstrate the performance superiority of the new method over previous ones.
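For readers unfamiliar with covariance-based DOA estimation, the toy sketch below shows the generic idea: form the sample covariance of the array output and scan a beamforming spectrum over candidate angles. This is a conventional narrowband example on a uniform linear array, not the authors' wideband coherent-chirp method; all parameters are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
M, d = 8, 0.5                # sensors, spacing in wavelengths
theta_true = 20.0            # source direction, degrees

def steering(theta_deg):
    # Array response of a uniform linear array for a far-field plane wave.
    k = 2 * np.pi * d * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(M))

# Simulate N snapshots: one complex Gaussian source plus sensor noise.
N = 200
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X = np.outer(steering(theta_true), s)
X += 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

R = X @ X.conj().T / N       # sample covariance of the array output

# Bartlett (conventional beamforming) spectrum over a grid of angles.
grid = np.arange(-90.0, 90.5, 0.5)
spectrum = [np.real(steering(t).conj() @ R @ steering(t)) for t in grid]
theta_hat = grid[int(np.argmax(spectrum))]
print(f"estimated DOA: {theta_hat:.1f} deg")
```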
Coding and Commonality Analysis: Non-ANOVA Methods for Analyzing Data from Experiments.
ERIC Educational Resources Information Center
Thompson, Bruce
The advantages and disadvantages of three analytic methods used to analyze experimental data in educational research are discussed. The same hypothetical data set is used with all methods for a direct comparison. The Analysis of Variance (ANOVA) method and its several analogs are collectively labeled OVA methods and are evaluated. Regression…
Meta-analysis and The Cochrane Collaboration: 20 years of the Cochrane Statistical Methods Group
2013-01-01
The Statistical Methods Group has played a pivotal role in The Cochrane Collaboration over the past 20 years. The Statistical Methods Group has determined the direction of statistical methods used within Cochrane reviews, developed guidance for these methods, provided training, and continued to discuss and consider new and controversial issues in meta-analysis. The contribution of Statistical Methods Group members to the meta-analysis literature has been extensive and has helped to shape the wider meta-analysis landscape. In this paper, marking the 20th anniversary of The Cochrane Collaboration, we reflect on the history of the Statistical Methods Group, beginning in 1993 with the identification of aspects of statistical synthesis for which consensus was lacking about the best approach. We highlight some landmark methodological developments that Statistical Methods Group members have contributed to in the field of meta-analysis. We discuss how the Group implements and disseminates statistical methods within The Cochrane Collaboration. Finally, we consider the importance of robust statistical methodology for Cochrane systematic reviews, note research gaps, and reflect on the challenges that the Statistical Methods Group faces in its future direction. PMID:24280020
Improvements to direct quantitative analysis of multiple microRNAs facilitating faster analysis.
Ghasemi, Farhad; Wegman, David W; Kanoatov, Mirzo; Yang, Burton B; Liu, Stanley K; Yousef, George M; Krylov, Sergey N
2013-11-05
Studies suggest that patterns of deregulation in sets of microRNA (miRNA) can be used as cancer diagnostic and prognostic biomarkers. Establishing a "miRNA fingerprint"-based diagnostic technique requires a suitable miRNA quantitation method. The appropriate method must be direct, sensitive, capable of simultaneous analysis of multiple miRNAs, rapid, and robust. Direct quantitative analysis of multiple microRNAs (DQAMmiR) is a recently introduced capillary electrophoresis-based hybridization assay that satisfies most of these criteria. Previous implementations of the method suffered, however, from slow analysis time and required lengthy and stringent purification of hybridization probes. Here, we introduce a set of critical improvements to DQAMmiR that address these technical limitations. First, we have devised an efficient purification procedure that achieves the required purity of the hybridization probe in a fast and simple fashion. Second, we have optimized the concentrations of the DNA probe to decrease the hybridization time to 10 min. Lastly, we have demonstrated that the increased probe concentrations and decreased incubation time removed the need for masking DNA, further simplifying the method and increasing its robustness. The presented improvements bring DQAMmiR closer to use in a clinical setting.
Deng, Yuqiang; Yang, Weijian; Zhou, Chun; Wang, Xi; Tao, Jun; Kong, Weipeng; Zhang, Zhigang
2008-12-01
We propose and demonstrate an analysis method to directly extract the group delay rather than the phase from the white-light spectral interferogram. By the joint time-frequency analysis technique, group delay is directly read from the ridge of wavelet transform, and group-delay dispersion is easily obtained by additional differentiation. The technique shows reasonable potential for the characterization of ultra-broadband chirped mirrors.
Use of direct gradient analysis to uncover biological hypotheses in 16s survey data and beyond.
Erb-Downward, John R; Sadighi Akha, Amir A; Wang, Juan; Shen, Ning; He, Bei; Martinez, Fernando J; Gyetko, Margaret R; Curtis, Jeffrey L; Huffnagle, Gary B
2012-01-01
This study investigated the use of direct gradient analysis of bacterial 16S pyrosequencing surveys to identify relevant bacterial community signals in the midst of a "noisy" background, and to facilitate hypothesis-testing both within and beyond the realm of ecological surveys. The results, utilizing 3 different real world data sets, demonstrate the utility of adding direct gradient analysis to any analysis that draws conclusions from indirect methods such as Principal Component Analysis (PCA) and Principal Coordinates Analysis (PCoA). Direct gradient analysis produces testable models, and can identify significant patterns in the midst of noisy data. Additionally, we demonstrate that direct gradient analysis can be used with other kinds of multivariate data sets, such as flow cytometric data, to identify differentially expressed populations. The results of this study demonstrate the utility of direct gradient analysis in microbial ecology and in other areas of research where large multivariate data sets are involved.
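A minimal numpy sketch of what a direct gradient analysis (here, redundancy analysis) computes: regress the community matrix on the measured gradients, then take principal components of the fitted values. The abundance and gradient data below are synthetic, not the study's 16S surveys:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 30, 10, 2                 # samples, taxa, environmental variables
X = rng.standard_normal((n, q))     # environmental gradients
B = rng.standard_normal((q, p))
Y = X @ B + 0.3 * rng.standard_normal((n, p))   # abundances driven by X

Yc = Y - Y.mean(axis=0)
Xc = X - X.mean(axis=0)

# Fitted community values explained by the environment (least squares).
coef, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)
Y_fit = Xc @ coef

# Constrained ordination axes = principal components of the fitted values.
_, s, Vt = np.linalg.svd(Y_fit, full_matrices=False)
explained = s**2 / np.sum(Yc**2)
print(f"variance explained by first constrained axis: {explained[0]:.2f}")
```

Unlike PCA/PCoA, the axes here are constrained to the measured gradients, which is what makes the resulting model directly testable.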
Recent Advances in the Method of Forces: Integrated Force Method of Structural Analysis
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.
1998-01-01
Stress that can be induced in an elastic continuum can be determined directly through the simultaneous application of the equilibrium equations and the compatibility conditions. In the literature, this direct stress formulation is referred to as the integrated force method. This method, which uses forces as the primary unknowns, complements the popular equilibrium-based stiffness method, which considers displacements as the unknowns. The integrated force method produces accurate stress, displacement, and frequency results even for modest finite element models. This version of the force method should be developed as an alternative to the stiffness method because the latter method, which has been researched for the past several decades, may have entered its developmental plateau. Stress plays a primary role in the development of aerospace and other products, and its analysis is difficult. Therefore, it is advisable to use both methods to calculate stress and eliminate errors through comparison. This paper examines the role of the integrated force method in analysis, animation and design.
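The simultaneous use of equilibrium and compatibility with forces as the unknowns can be illustrated on a toy statically indeterminate system, here two springs in series between rigid walls with a load at the shared node (an example constructed for this note, not from the paper):

```python
import numpy as np

k1, k2, P = 3.0, 1.0, 4.0   # spring stiffnesses and applied load (arbitrary units)

# Member forces F1, F2 (tension positive) are the primary unknowns.
# Row 1: node equilibrium      F1 - F2 = P
# Row 2: compatibility         F1/k1 + F2/k2 = 0  (wall-to-wall elongation is zero)
A = np.array([[1.0, -1.0],
              [1.0 / k1, 1.0 / k2]])
F = np.linalg.solve(A, [P, 0.0])
print(F)   # the stiffer spring carries the larger share of the load
```

Equilibrium alone is underdetermined here (one equation, two forces); adding the compatibility condition closes the system without ever introducing displacements as unknowns, which is the essence of the force method.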
Simplified welding distortion analysis for fillet welding using composite shell elements
NASA Astrophysics Data System (ADS)
Kim, Mingyu; Kang, Minseok; Chung, Hyun
2015-09-01
This paper presents a simplified welding distortion analysis method to predict the welding deformation of both the plate and the stiffener in fillet welds. Methods based on equivalent thermal strain, such as Strain as Direct Boundary (SDB), have been widely used because they predict welding deformation effectively. For fillet welding, however, those methods cannot represent the deformation of both members at once, since the temperature degree of freedom is shared at the intersection nodes of both members. In this paper, we propose a new approach to simulate the deformation of both members. The method simulates fillet weld deformations by employing composite shell elements and using different thermal expansion coefficients along the thickness direction, with fixed temperature at the intersection nodes. For verification, we compare results from experiments, 3D thermo-elastic-plastic analysis, the SDB method and the proposed method. Compared with the experimental results, the proposed method effectively predicts welding deformation for fillet welds.
Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat
2008-01-01
Background Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. Methods In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Results Simulation studies showed that repeated 10-fold cross-validation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Conclusion Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided. PMID:19036144
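At each threshold probability, the decision-curve computation reduces to the net-benefit formula, net benefit = TP/n - (FP/n) x pt/(1 - pt). A minimal sketch with synthetic outcomes and predicted probabilities (not the paper's software):

```python
import numpy as np

def net_benefit(y, p, pt):
    """Net benefit of treating patients whose predicted probability >= pt."""
    y = np.asarray(y, dtype=bool)
    treat = np.asarray(p) >= pt            # model recommends treatment
    n = len(y)
    tp = np.sum(treat & y) / n             # true-positive rate (of all patients)
    fp = np.sum(treat & ~y) / n            # false-positive rate (of all patients)
    return tp - fp * pt / (1 - pt)         # FP weighted by the threshold odds

# Synthetic outcomes and predicted probabilities for eight patients.
y = [1, 1, 0, 0, 1, 0, 0, 0]
p = [0.9, 0.6, 0.4, 0.2, 0.7, 0.1, 0.3, 0.8]
nb = net_benefit(y, p, pt=0.5)
print(f"net benefit at pt=0.5: {nb:.3f}")
```

Sweeping pt over a range and plotting net benefit against it yields the decision curve; the extensions in the paper (cross-validation, censoring, competing risk) modify how tp and fp are estimated, not this core formula.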
Simple gas chromatographic method for furfural analysis.
Gaspar, Elvira M S M; Lopes, João F
2009-04-03
A new, simple, gas chromatographic method was developed for the direct analysis of 5-hydroxymethylfurfural (5-HMF), 2-furfural (2-F) and 5-methylfurfural (5-MF) in liquid and water soluble foods, using direct immersion SPME coupled to GC-FID and/or GC-TOF-MS. The fiber (DVB/CAR/PDMS) conditions were optimized: pH effect, temperature, adsorption and desorption times. The method is simple and accurate (RSD<8%), showed good recoveries (77-107%) and good limits of detection (GC-FID: 1.37 microgL(-1) for 2-F, 8.96 microgL(-1) for 5-MF, 6.52 microgL(-1) for 5-HMF; GC-TOF-MS: 0.3, 1.2 and 0.9 ngmL(-1) for 2-F, 5-MF and 5-HMF, respectively). It was applied to different commercial food matrices: honey, white, demerara, brown and yellow table sugars, and white and red balsamic vinegars. This one-step, sensitive and direct method for the analysis of furfurals will contribute to characterise and quantify their presence in the human diet.
Direct dating of human fossils.
Grün, Rainer
2006-01-01
The methods that can be used for the direct dating of human remains comprise radiocarbon, U-series, electron spin resonance (ESR), and amino acid racemization (AAR). This review gives an introduction to these methods in the context of dating human bones and teeth. Recent advances in ultrafiltration techniques have expanded the dating range of radiocarbon; it now seems feasible to reliably date bones up to 55,000 years old. New developments in laser ablation mass spectrometry permit the in situ analysis of U-series isotopes, thus providing a rapid and virtually non-destructive dating method back to about 300,000 years. This is of particular importance when used in conjunction with non-destructive ESR analysis. New approaches in AAR analysis may lead to a renaissance of this method. The potential and present limitations of these direct dating techniques are discussed for sites relevant to the reconstruction of modern human evolution, including Florisbad, Border Cave, Tabun, Skhul, Qafzeh, Vindija, Banyoles, and Lake Mungo. (c) 2006 Wiley-Liss, Inc.
Doll, Charles G.; Wright, Cherylyn W.; Morley, Shannon M.; ...
2017-02-01
In this paper, a modified version of the Direct LSC method to correct for quenching effect was investigated for the determination of bio-originated fuel content in fuel samples produced from multiple biological starting materials. The modified method was found to be accurate in determining the percent bio-originated fuel to within 5% of the actual value for samples with quenching effects ≤43%. Finally, analysis of highly quenched samples was possible when diluted with the exception of one sample with a 100% quenching effect.
ANALYSIS OF VOLATILES AND SEMIVOLATILES BY DIRECT AQUEOUS INJECTION
Direct aqueous injection (DAI) analysis with gas chromatographic separation and ion trap mass spectral detection was used to analyze aqueous samples for μg/L levels of 54 volatile and semivolatile compounds, and problematic non-purgeables and non-extractables. The method reduces ...
Experimental and simulation flow rate analysis of the 3/2 directional pneumatic valve
NASA Astrophysics Data System (ADS)
Blasiak, Slawomir; Takosoglu, Jakub E.; Laski, Pawel A.; Pietrala, Dawid S.; Zwierzchowski, Jaroslaw; Bracha, Gabriel; Nowakowski, Lukasz; Blasiak, Malgorzata
The work presents a comparative analysis of two test methods. The first, a numerical method, consists in determining the flow characteristics with the use of ANSYS CFX. For this purpose, a 3/2 poppet directional valve was modeled in the 3D CAD software SolidWorks. Based on the solid model that was developed, simulation studies of the air flow through the valve were conducted in the computational fluid dynamics software ANSYS CFX. The second, experimental method entailed conducting tests on a specially constructed test stand. Comparing the results obtained with the two methods made it possible to determine their cross-correlation. The high agreement of the results confirms the usefulness of the numerical procedures; thus, they might serve to determine the flow characteristics of directional valves as an alternative to costly and time-consuming test-stand measurements.
Structure analysis of polymerized phospholipid bilayer by TED and direct methods.
Stevens, M; Longo, M; Dorset, D L; Spence, J
2002-04-01
This paper describes the use of elastic energy filtered transmission electron diffraction combined with Direct Methods in order to study the structure of thin Langmuir-Blodgett films of a radiation sensitive diacetylene polymer (DC8.9PC). We obtain a potential map for one projection by direct phasing of zone axis patterns, and discuss experimental problems and possible solutions.
Nilsson, Björn; Håkansson, Petra; Johansson, Mikael; Nelander, Sven; Fioretos, Thoas
2007-01-01
Ontological analysis facilitates the interpretation of microarray data. Here we describe new ontological analysis methods which, unlike existing approaches, are threshold-free and statistically powerful. We perform extensive evaluations and introduce a new concept, detection spectra, to characterize methods. We show that different ontological analysis methods exhibit distinct detection spectra, and that it is critical to account for this diversity. Our results argue strongly against the continued use of existing methods, and provide directions towards an enhanced approach. PMID:17488501
Direct Analysis of Large Living Organism by Megavolt Electrostatic Ionization Mass Spectrometry
NASA Astrophysics Data System (ADS)
Ng, Kwan-Ming; Tang, Ho-Wai; Man, Sin-Heng; Mak, Pui-Yuk; Choi, Yi-Ching; Wong, Melody Yee-Man
2014-09-01
A new ambient ionization method allowing the direct chemical analysis of a living human body by mass spectrometry (MS) was developed. This MS method, namely Megavolt Electrostatic Ionization Mass Spectrometry, is based on electrostatic charging of a living individual to megavolt (MV) potential; illicit drugs and explosives on skin/gloves, flammable solvent on cloth/tissue paper, and volatile food substances in breath were readily ionized and detected by a mass spectrometer.
Direct injection GC method for measuring light hydrocarbon emissions from cooling-tower water.
Lee, Max M; Logan, Tim D; Sun, Kefu; Hurley, N Spencer; Swatloski, Robert A; Gluck, Steve J
2003-12-15
A Direct Injection GC method for quantifying low levels of light hydrocarbons (C6 and below) in cooling water has been developed. It is intended to overcome the limitations of currently available technology. The principle of the method is to use a stripper column in the GC to strip water from the hydrocarbons prior to entering the separation column. No sample preparation is required, since the water sample is introduced directly into the GC. Method validation indicates that the Direct Injection GC method offers an approximately 15 min analysis time with excellent precision and recovery. Calibration studies with ethylene and propylene show that both liquid and gas standards are suitable for routine calibration and calibration verification. The sampling method, using zero-headspace traditional VOA (Volatile Organic Analysis) vials and a sample chiller, has also been validated; it is sufficient to minimize potential losses of light hydrocarbons, and samples can be held at 4 degrees C for up to 7 days with more than 93% recovery. The Direct Injection GC method also offers <1 ppb (w/v) level method detection limits for ethylene, propylene, and benzene, and is superior to the existing El Paso stripper method. In addition to lower detection limits for ethylene and propylene, the Direct Injection GC method quantifies individual light hydrocarbons in cooling water, provides better recoveries, and requires less maintenance and lower setup costs. Since the instrumentation and supplies are readily available, this technique could easily be established as a standard or alternative method for routine emission monitoring and leak detection of light hydrocarbons in cooling-tower water.
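Method detection limits like those quoted are commonly derived from replicate low-level spikes via the EPA-style formula MDL = t(n-1, 0.99) x s. The sketch below uses illustrative replicate values, not data from this study:

```python
from statistics import stdev

# Seven replicate low-level spike measurements (illustrative values, ppb).
replicates = [0.82, 0.95, 0.88, 1.02, 0.91, 0.79, 0.97]

# One-sided 99% Student's t for n - 1 = 6 degrees of freedom (tabulated value).
t_99 = 3.143

# MDL = t * sample standard deviation of the replicates.
mdl = t_99 * stdev(replicates)
print(f"MDL = {mdl:.2f} ppb")
```

Reported MDLs therefore depend on both the spike level chosen and the replicate precision, which is worth keeping in mind when comparing limits across methods.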
Comparisons of Exploratory and Confirmatory Factor Analysis.
ERIC Educational Resources Information Center
Daniel, Larry G.
Historically, most researchers conducting factor analysis have used exploratory methods. However, more recently, confirmatory factor analytic methods have been developed that can directly test theory either during factor rotation using "best fit" rotation methods or during factor extraction, as with the LISREL computer programs developed…
2013-01-01
Background BRAF mutation is an important diagnostic and prognostic marker in patients with papillary thyroid carcinoma (PTC). To be applicable in clinical laboratories with limited equipment, diverse testing methods are required to detect BRAF mutation. Methods A shifted termination assay (STA) fragment analysis was used to detect common V600 BRAF mutations in 159 PTCs with DNAs extracted from formalin-fixed paraffin-embedded tumor tissue. The results of STA fragment analysis were compared to those of direct sequencing. Serial dilutions of BRAF mutant cell line (SNU-790) were used to calculate limit of detection (LOD). Results BRAF mutations were detected in 119 (74.8%) PTCs by STA fragment analysis. In direct sequencing, BRAF mutations were observed in 118 (74.2%) cases. The results of STA fragment analysis had high correlation with those of direct sequencing (p < 0.00001, κ = 0.98). The LOD of STA fragment analysis and direct sequencing was 6% and 12.5%, respectively. In PTCs with pT3/T4 stages, BRAF mutation was observed in 83.8% of cases. In pT1/T2 carcinomas, BRAF mutation was detected in 65.9% and this difference was statistically significant (p = 0.007). Moreover, BRAF mutation was more frequent in PTCs with extrathyroidal invasion than tumors without extrathyroidal invasion (84.7% versus 62.2%, p = 0.001). To prepare and run the reactions, direct sequencing required 450 minutes while STA fragment analysis needed 290 minutes. Conclusions STA fragment analysis is a simple and sensitive method to detect BRAF V600 mutations in formalin-fixed paraffin-embedded clinical samples. Virtual Slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5684057089135749 PMID:23883275
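The reported agreement (κ = 0.98) can be reproduced from a 2×2 concordance table. The sketch below assumes the single discordant case was STA-positive/sequencing-negative, which is consistent with the reported totals (119 vs. 118 positives of 159) but is an assumption about the discordance pattern:

```python
def cohens_kappa(n11, n10, n01, n00):
    """Cohen's kappa from a 2x2 table: n11 both positive, n00 both negative,
    n10/n01 discordant (method 1 / method 2 positive only)."""
    n = n11 + n10 + n01 + n00
    po = (n11 + n00) / n                           # observed agreement
    p_yes = ((n11 + n10) / n) * ((n11 + n01) / n)  # chance both call positive
    p_no = ((n01 + n00) / n) * ((n10 + n00) / n)   # chance both call negative
    pe = p_yes + p_no                              # chance agreement
    return (po - pe) / (1 - pe)

# Counts chosen to match the reported totals (119 STA+, 118 sequencing+, n=159).
kappa = cohens_kappa(n11=118, n10=1, n01=0, n00=40)
print(f"kappa = {kappa:.2f}")
```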
Higuchi Dimension of Digital Images
Ahammer, Helmut
2011-01-01
There exist several methods for calculating the fractal dimension of objects represented as 2D digital images. For example, box counting, Minkowski dilation or Fourier analysis can be employed. However, these methods have some limitations: it is not possible to calculate the fractal dimension of only an irregular region of interest in an image, or to perform the calculation in a particular direction along a line at an arbitrary angle through the image; the calculations must be made for the whole image. In this paper, a new method to overcome these limitations is proposed. 2D images are appropriately prepared in order to apply 1D signal analyses, originally developed to investigate nonlinear time series. The Higuchi dimension of these 1D signals is calculated using Higuchi's algorithm, and it is shown that both regions of interest and directional dependencies can be evaluated independently of the whole picture. A thorough validation of the proposed technique and a comparison of the new method to the Fourier dimension, a common two-dimensional method for digital images, are given. The main result is that Higuchi's algorithm allows a direction-dependent as well as a direction-independent analysis. The resulting values for the fractal dimension are reliable, and an effective treatment of regions of interest is possible. Moreover, the proposed approach is not restricted to Higuchi's algorithm, as any 1D method of analysis can be applied. PMID:21931854
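Higuchi's algorithm for a 1D signal is compact enough to sketch. The following is an illustrative implementation of the standard formulation (the choice of kmax and the test signals are ours, not the paper's parameters):

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Estimate the Higuchi fractal dimension of a 1D signal.

    For each delay k, the mean normalized curve length L(k) is computed
    over k offset subseries; the dimension is the slope of
    log L(k) versus log(1/k).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_k, log_l = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                      # the k offset subseries
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # curve length of the subseries, normalized to the full span
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)
            lengths.append(dist * norm / k)
        log_k.append(np.log(1.0 / k))
        log_l.append(np.log(np.mean(lengths)))
    slope, _ = np.polyfit(log_k, log_l, 1)
    return slope

# White noise has a Higuchi dimension near 2; a straight line near 1.
rng = np.random.default_rng(0)
print(round(higuchi_fd(rng.standard_normal(1000)), 2))  # close to 2
print(round(higuchi_fd(np.linspace(0, 1, 1000)), 2))    # close to 1
```

Applied row-wise or along rotated lines through an image region, this 1D estimator is the building block the paper uses for direction-dependent analysis.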
Abdalla, Amir A.; Smith, Robert E.
2013-01-01
Mercury has been determined in Ayurvedic dietary supplements (Trifala, Trifala Guggulu, Turmeric, Mahasudarshan, Yograj, Shatawari, Hingwastika, Shatavari, and Shilajit) by inductively coupled plasma-mass spectrometry (ICP-MS) and direct mercury analysis using the Hydra-C direct mercury analyzer (Teledyne Leeman Labs, Hudson, NH, USA). Similar results were obtained from the two methods, but the direct mercury analysis method was much faster and safer and required no microwave digestion (unlike ICP-MS). Levels of mercury ranged from 0.002 to 56 μg/g in samples of dietary supplements. Standard reference materials Ephedra 3240 and tomato leaves from the National Institute of Standards and Technology (NIST) and dogfish liver (DOLT-3) from the National Research Council of Canada were analyzed using the Hydra-C method. Average mercury recoveries were 102% (RSD 0.0018%), 100% (RSD 0.0009%), and 101% (RSD 0.0729%), respectively. The limit of quantitation of the Hydra-C method was 0.5 ng. PMID:23710181
Method for combined biometric and chemical analysis of human fingerprints.
Staymates, Jessica L; Orandi, Shahram; Staymates, Matthew E; Gillen, Greg
This paper describes a method for combining direct chemical analysis of latent fingerprints with subsequent biometric analysis within a single sample. The method described here uses ion mobility spectrometry (IMS) as a chemical detection method for explosives and narcotics trace contamination. A collection swab coated with a high-temperature adhesive has been developed to lift latent fingerprints from various surfaces. The swab is then directly inserted into an IMS instrument for a quick chemical analysis. After the IMS analysis, the lifted print remains intact for subsequent biometric scanning and analysis using matching algorithms. Several samples of explosive-laden fingerprints were successfully lifted and the explosives detected with IMS. Following explosive detection, the lifted fingerprints remained of sufficient quality for positive match scores using a prepared gallery consisting of 60 fingerprints. Based on our results (n = 1200), there was no significant decrease in the quality of the lifted print post-IMS analysis. In fact, for a small subset of lifted prints, the quality was improved after IMS analysis. The described method can be readily applied to domestic criminal investigations, transportation security, terrorist and bombing threats, and military in-theatre settings.
Himle, Michael B; Chang, Susanna; Woods, Douglas W; Pearlman, Amanda; Buzzella, Brian; Bunaciu, Liviu; Piacentini, John C
2006-01-01
Behavior analysis has been at the forefront in establishing effective treatments for children and adults with chronic tic disorders. As is customary in behavior analysis, the efficacy of these treatments has been established using direct-observation assessment methods. Although behavior-analytic treatments have enjoyed acceptance and integration into mainstream health care practices for tic disorders (e.g., psychiatry and neurology), the use of direct observation as a primary assessment tool has been neglected in favor of less objective methods. Hesitation to use direct observation appears to stem largely from concerns about the generalizability of clinic observations to other settings (e.g., home) and a lack of consensus regarding the most appropriate and feasible techniques for conducting and scoring direct observation. The purpose of the current study was to evaluate and establish a reliable, valid, and feasible direct-observation protocol capable of being transported to research and clinical settings. A total of 43 children with tic disorders, collected from two outpatient specialty clinics, were assessed using direct (videotape samples) and indirect (Yale Global Tic Severity Scale; YGTSS) methods. Videotaped observation samples were collected across 3 consecutive weeks and two different settings (clinic and home), were scored using both exact frequency counts and partial-interval coding, and were compared to data from a common indirect measure of tic severity (the YGTSS). In addition, various lengths of videotaped segments were scored to determine the optimal observation length. Results show that (a) clinic-based observations correspond well to home-based observations, (b) brief direct-observation segments scored with time-sampling methods reliably quantified tics, and (c) indirect methods did not consistently correspond with the direct methods.
Tsai, Chung-Yu
2017-07-01
A refractive laser beam shaper comprising two free-form profiles is presented. The profiles are designed using a free-form profile construction method such that each incident ray is directed in a certain user-specified direction or to a particular point on the target surface so as to achieve the required illumination distribution of the output beam. The validity of the proposed design method is demonstrated by means of ZEMAX simulations. The method is mathematically straightforward and easily implemented in computer code. It thus provides a convenient tool for the design and sensitivity analysis of laser beam shapers and similar optical components.
Correlation between polar values and vector analysis.
Naeser, K; Behrens, J K
1997-01-01
To evaluate the possible correlation between polar value and vector analysis assessment of surgically induced astigmatism. Department of Ophthalmology, Aalborg Sygehus Syd, Denmark. The correlation between polar values and vector analysis was evaluated by simple mathematical and optical methods using accepted principles of trigonometry and first-order optics. Vector analysis and polar values report different aspects of surgically induced astigmatism. Vector analysis describes the total astigmatic change, characterized by both astigmatic magnitude and direction, while the polar value method produces a single, reduced figure that reports flattening or steepening in preselected directions, usually the plane of the surgical meridian. There is a simple Pythagorean correlation between vector analysis and two polar values separated by an arc of 45 degrees. The polar value calculated in the surgical meridian indicates the power, or efficacy, of the surgical procedure. The polar value calculated in a plane inclined 45 degrees to the surgical meridian indicates the degree of cylinder rotation induced by surgery. These two polar values can be used to obtain other relevant data such as the magnitude, direction, and sphere of an induced cylinder. Consistent use of these methods will enable surgeons to control and in many cases reduce preoperative astigmatism.
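The Pythagorean relation between an induced cylinder and two polar values 45 degrees apart can be sketched numerically. Sign and axis conventions vary between authors, so treat this as an illustration of the relation (KP = M·cos 2d and its 45-degree companion) rather than the paper's exact formulas:

```python
import math

def polar_values(magnitude, axis_deg, meridian_deg=0.0):
    """Polar values of an induced cylinder relative to a surgical meridian.

    KP(phi) reports flattening/steepening along the meridian; KP(phi+45)
    reports torque (cylinder rotation) 45 degrees away.
    """
    d = math.radians(axis_deg - meridian_deg)
    kp0 = magnitude * math.cos(2 * d)    # power in the surgical meridian
    kp45 = magnitude * math.sin(2 * d)   # torque component 45 deg away
    return kp0, kp45

def cylinder_from_polar(kp0, kp45):
    """Recover cylinder magnitude and axis via the Pythagorean relation."""
    magnitude = math.hypot(kp0, kp45)
    axis_deg = math.degrees(0.5 * math.atan2(kp45, kp0)) % 180
    return magnitude, axis_deg

# A 1.5 D cylinder induced 30 degrees off the surgical meridian:
kp0, kp45 = polar_values(1.5, 30.0)
m, a = cylinder_from_polar(kp0, kp45)
print(round(kp0, 3), round(kp45, 3))  # 0.75 1.299
print(round(m, 2), round(a, 1))       # 1.5 30.0
```

The round trip shows why the two polar values together carry the full vector-analysis information (magnitude plus direction), while either one alone is the single reduced figure the abstract describes.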
NASA Astrophysics Data System (ADS)
Oleksik, Mihaela; Oleksik, Valentin
2013-05-01
This paper presents a fast method for determining the material characteristics of composite materials used in airbag manufacturing. The inverse analysis method was used to determine the material data needed for other complex numerical simulations at the macroscopic level. Tensile tests were carried out on the composite material extracted along two directions, the weft and the warp, and numerical simulations were then performed using the LS-DYNA software. A second stage consisted of numerical simulation by the finite element method and experimental testing of the bias test. The material characteristics of the composite fabric were then obtained by applying a multicriteria analysis using the LS-OPT software, minimizing the mismatch between the numerically and experimentally obtained force-displacement curves for both directions (weft and warp), as well as the mismatch between the strain-extension curves for two points in the bias test.
Direct application of Padé approximant for solving nonlinear differential equations.
Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Garcia-Gervacio, Jose Luis; Huerta-Chua, Jesus; Morales-Mendoza, Luis Javier; Gonzalez-Lee, Mario
2014-01-01
This work presents a direct procedure for applying the Padé method to find approximate solutions of nonlinear differential equations. Moreover, we present several case studies showing the strength of the method in generating highly accurate rational approximate solutions compared to other semi-analytical methods. The types of nonlinear equations tested are: a highly nonlinear boundary value problem, a differential-algebraic oscillator problem, and an asymptotic problem. The highly accurate, handy approximations obtained by the direct application of the Padé method show the high potential of the proposed scheme for approximating a wide variety of problems. What is more, direct application of the Padé approximant avoids the prior application of an approximative method, such as the Taylor series method, homotopy perturbation method, Adomian decomposition method, homotopy analysis method, or variational iteration method, among others, as a tool for obtaining a power series solution to post-treat with the Padé approximant. MSC 34L30.
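As an illustration of the underlying machinery (not the authors' code), an [L/M] Padé approximant can be computed from Taylor coefficients by solving a small linear system for the denominator and then convolving for the numerator:

```python
import numpy as np
from math import factorial

def pade(coeffs, L, M):
    """[L/M] Pade approximant from Taylor coefficients c0..c_{L+M}.

    Returns numerator a and denominator b in ascending powers, b[0] = 1.
    Assumes L >= M - 1 so the standard linear system is well defined.
    """
    c = np.asarray(coeffs, dtype=float)
    # Denominator: sum_{j=1..M} b_j c_{L+k-j} = -c_{L+k}, k = 1..M
    A = np.array([[c[L + k - j] if L + k - j >= 0 else 0.0
                   for j in range(1, M + 1)] for k in range(1, M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, -c[L + 1:L + M + 1])))
    # Numerator: a_i = sum_{j=0..min(i,M)} b_j c_{i-j}
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, M) + 1))
                  for i in range(L + 1)])
    return a, b

# [2/2] Pade approximant of exp(x) from its Taylor coefficients 1/k!
c = [1 / factorial(k) for k in range(5)]
a, b = pade(c, 2, 2)
print(np.round(a, 4))  # numerator  1 + x/2 + x^2/12
print(np.round(b, 4))  # denominator 1 - x/2 + x^2/12
# At x = 1 the [2/2] approximant gives 19/7 ~ 2.71429 vs e ~ 2.71828
print(round(np.polyval(a[::-1], 1.0) / np.polyval(b[::-1], 1.0), 5))
```

Applying this directly to the coefficients generated while solving a differential equation, rather than to a pre-computed series from a separate method, is the gist of the "direct" procedure the abstract describes.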
High-performance equation solvers and their impact on finite element analysis
NASA Technical Reports Server (NTRS)
Poole, Eugene L.; Knight, Norman F., Jr.; Davis, D. Dale, Jr.
1990-01-01
The role of equation solvers in modern structural analysis software is described. Direct and iterative equation solvers which exploit vectorization on modern high-performance computer systems are described and compared. The direct solvers are two Cholesky factorization methods. The first method utilizes a novel variable-band data storage format to achieve very high computation rates, and the second method uses a sparse data storage format designed to reduce the number of operations. The iterative solvers are preconditioned conjugate gradient methods. Two different preconditioners are included; the first uses a diagonal matrix storage scheme to achieve high computation rates, and the second requires a sparse data storage scheme and converges to the solution in fewer iterations than the first. The impact of using all of the equation solvers in a common structural analysis software system is demonstrated by solving several representative structural analysis problems.
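The two solver families can be contrasted in a dense toy setting; this sketch pairs a Cholesky direct solve with a diagonally (Jacobi) preconditioned conjugate gradient iteration. The variable-band and sparse storage schemes the abstract describes are omitted for brevity:

```python
import numpy as np

def cholesky_solve(A, b):
    """Direct solver: factor A = L L^T once, then substitute twice."""
    L = np.linalg.cholesky(A)
    y = np.linalg.solve(L, b)        # forward substitution (dense here)
    return np.linalg.solve(L.T, y)   # back substitution

def jacobi_pcg(A, b, tol=1e-10, maxiter=1000):
    """Iterative solver: conjugate gradients with a diagonal preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    Minv = 1.0 / np.diag(A)          # the diagonal (Jacobi) preconditioner
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# A small SPD stiffness-like system: both solvers agree.
n = 50
A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))
b = np.ones(n)
print(np.allclose(cholesky_solve(A, b), jacobi_pcg(A, b)))  # True
```

The trade-off mirrors the abstract: the direct factorization pays a one-time cost and is then exact, while the preconditioned iteration trades factorization for repeated matrix-vector products, with a stronger preconditioner buying fewer iterations.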
High-performance equation solvers and their impact on finite element analysis
NASA Technical Reports Server (NTRS)
Poole, Eugene L.; Knight, Norman F., Jr.; Davis, D. D., Jr.
1992-01-01
The role of equation solvers in modern structural analysis software is described. Direct and iterative equation solvers which exploit vectorization on modern high-performance computer systems are described and compared. The direct solvers are two Cholesky factorization methods. The first method utilizes a novel variable-band data storage format to achieve very high computation rates, and the second method uses a sparse data storage format designed to reduce the number of operations. The iterative solvers are preconditioned conjugate gradient methods. Two different preconditioners are included; the first uses a diagonal matrix storage scheme to achieve high computation rates, and the second requires a sparse data storage scheme and converges to the solution in fewer iterations than the first. The impact of using all of the equation solvers in a common structural analysis software system is demonstrated by solving several representative structural analysis problems.
NASA Technical Reports Server (NTRS)
An, S. H.; Yao, K.
1986-01-01
The lattice algorithm has been employed in numerous adaptive filtering applications, such as speech analysis/synthesis, noise canceling, spectral analysis, and channel equalization. In this paper its application to adaptive-array processing is discussed. Its advantages are a fast convergence rate and computational accuracy independent of the noise and interference conditions. The results produced by this technique are compared to those obtained by the direct matrix inverse method.
Li, Xianjiang; Wang, Xin; Ma, Wen; Ai, Wanpeng; Bai, Yu; Ding, Li; Liu, Huwei
2017-04-01
Glycosides are an important class of natural aromatic precursors in tobacco leaves. In this study, a novel HKUST-1-coated monolith dip-it sampler was designed for the fast and sensitive analysis of trace glycosides using direct analysis in real time mass spectrometry. This device was prepared in two steps: in situ polymerization of a monolith in a glass capillary of the dip-it, and layer-by-layer growth of HKUST-1 on the surface of the monolith. Sufficient extraction was realized by immersing the tip in solution, and in situ desorption was carried out by the direct analysis in real time plasma. Compared with traditional solid-phase microextraction protocols, sample desorption was no longer needed, and only the extraction conditions needed to be optimized in this method, including the gas temperature of direct analysis in real time, the extraction time, and the CH3COONH4 additive concentration. This method enabled the simultaneous detection of six kinds of glycosides, with limits of detection of 0.02-0.05 μg/mL, limits of quantitation of 0.05-0.1 μg/mL, and linear ranges covering two orders of magnitude. Moreover, the developed method was applied to the glycoside analysis of three tobacco samples, taking only about 2 s per sample. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat
2008-11-26
Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Simulation studies showed that repeated 10-fold crossvalidation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided.
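The core of decision curve analysis is the net benefit computed directly from a data set: NB(pt) = TP/n − (FP/n)·pt/(1 − pt), evaluated over a grid of threshold probabilities. A minimal sketch with made-up data (the outcome vector, predicted probabilities, and threshold grid below are illustrative, not from the paper):

```python
import numpy as np

def net_benefit(y_true, p_hat, thresholds):
    """Net benefit of treating when predicted risk exceeds each threshold pt.

    NB(pt) = TP/n - FP/n * pt/(1 - pt); the 'treat all' and 'treat none'
    strategies serve as reference curves.
    """
    y = np.asarray(y_true)
    p = np.asarray(p_hat)
    n = len(y)
    out = []
    for pt in thresholds:
        treat = p >= pt
        tp = np.sum(treat & (y == 1))
        fp = np.sum(treat & (y == 0))
        out.append(tp / n - fp / n * pt / (1 - pt))
    return np.array(out)

# Toy data: a model that assigns higher risk to actual events.
y = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1])
p = np.array([0.1, 0.2, 0.15, 0.3, 0.4, 0.55, 0.6, 0.7, 0.8, 0.9])
pts = [0.1, 0.3, 0.5]
nb_model = net_benefit(y, p, pts)
nb_all = net_benefit(y, np.ones_like(p), pts)  # treat-all reference
print(np.round(nb_model, 3))
print(np.round(nb_all, 3))
```

The extensions the paper develops (cross-validated correction for overfit, confidence intervals, censored and competing-risk versions) all operate on this same quantity, replacing the raw TP and FP counts with appropriately adjusted estimates.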
Siegert, F; Weijer, C J; Nomura, A; Miike, H
1994-01-01
We describe the application of a novel image processing method, which allows quantitative analysis of cell and tissue movement in a series of digitized video images. The result is a vector velocity field showing average direction and velocity of movement for every pixel in the frame. We apply this method to the analysis of cell movement during different stages of the Dictyostelium developmental cycle. We analysed time-lapse video recordings of cell movement in single cells, mounds and slugs. The program can correctly assess the speed and direction of movement of either unlabelled or labelled cells in a time series of video images depending on the illumination conditions. Our analysis of cell movement during multicellular development shows that the entire morphogenesis of Dictyostelium is characterized by rotational cell movement. The analysis of cell and tissue movement by the velocity field method should be applicable to the analysis of morphogenetic processes in other systems such as gastrulation and neurulation in vertebrate embryos.
Doll, Charles G; Wright, Cherylyn W; Morley, Shannon M; Wright, Bob W
2017-04-01
A modified version of the Direct LSC method to correct for quenching effect was investigated for the determination of bio-originated fuel content in fuel samples produced from multiple biological starting materials. The modified method was found to be accurate in determining the percent bio-originated fuel to within 5% of the actual value for samples with quenching effects ≤43%. Analysis of highly quenched samples was possible when diluted with the exception of one sample with a 100% quenching effect. Copyright © 2017. Published by Elsevier Ltd.
USDA-ARS?s Scientific Manuscript database
Ambient desorption ionization techniques, such as laser desorption with electrospray ionization assistance (ELDI), direct analysis in real time (DART) and desorption electrospray ionization (DESI) have been developed as alternatives to traditional mass spectrometric-based methods. Such techniques al...
18 CFR 301.7 - Average System Cost methodology functionalization.
Code of Federal Regulations, 2010 CFR
2010-04-01
... SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER... functionalization under its Direct Analysis assigns costs, revenues, debits or credits based upon the actual and/or...) Functionalization methods. (1) Direct analysis, if allowed or required by Table 1, assigns costs, revenues, debits...
Evaluating the Risks of Clinical Research: Direct Comparative Analysis
Abdoler, Emily; Roberson-Nay, Roxann; Pine, Daniel S.; Wendler, David
2014-01-01
Abstract Objectives: Many guidelines and regulations allow children and adolescents to be enrolled in research without the prospect of clinical benefit when it poses minimal risk. However, few systematic methods exist to determine when research risks are minimal. This situation has led to significant variation in minimal risk judgments, raising concern that some children are not being adequately protected. To address this concern, we describe a new method for implementing the widely endorsed “risks of daily life” standard for minimal risk. This standard defines research risks as minimal when they do not exceed the risks posed by daily life activities or routine examinations. Methods: This study employed a conceptual and normative analysis, and use of an illustrative example. Results: Different risks are composed of the same basic elements: Type, likelihood, and magnitude of harm. Hence, one can compare the risks of research and the risks of daily life by comparing the respective basic elements with each other. We use this insight to develop a systematic method, direct comparative analysis, for implementing the “risks of daily life” standard for minimal risk. The method offers a way of evaluating research procedures that pose the same types of risk as daily life activities, such as the risk of experiencing anxiety, stress, or other psychological harm. We thus illustrate how direct comparative analysis can be applied in practice by using it to evaluate whether the anxiety induced by a respiratory CO2 challenge poses minimal or greater than minimal risks in children and adolescents. Conclusions: Direct comparative analysis is a systematic method for applying the “risks of daily life” standard for minimal risk to research procedures that pose the same types of risk as daily life activities. 
It thereby offers a method to protect children and adolescents in research, while ensuring that important studies are not blocked because of unwarranted concerns about research risks. PMID:25210944
Deformation analysis of MEMS structures by modified digital moiré methods
NASA Astrophysics Data System (ADS)
Liu, Zhanwei; Lou, Xinhao; Gao, Jianxin
2010-11-01
Quantitative deformation analysis of micro-fabricated electromechanical systems is of importance for the design and functional control of microsystems. In this paper, two modified digital moiré processing methods, Gaussian blurring algorithm combined with digital phase shifting and geometrical phase analysis (GPA) technique based on digital moiré method, are developed to quantitatively analyse the deformation behaviour of micro-electro-mechanical system (MEMS) structures. Measuring principles and experimental procedures of the two methods are described in detail. A digital moiré fringe pattern is generated by superimposing a specimen grating etched directly on a microstructure surface with a digital reference grating (DRG). Most of the grating noise is removed from the digital moiré fringes, which enables the phase distribution of the moiré fringes to be obtained directly. Strain measurement result of a MEMS structure demonstrates the feasibility of the two methods.
How to Compare the Security Quality Requirements Engineering (SQUARE) Method with Other Methods
2007-08-01
Contents include: Attack Trees for Modeling and Analysis; Misuse and Abuse Cases; Formal Methods (Software Cost Reduction; Common …). … modern or efficient techniques. Requirements analysis typically is either not performed at all (identified requirements are directly specified without … any analysis or modeling) or analysis is restricted to functional requirements and ignores quality requirements and other nonfunctional requirements.
[Individual Identification of Cartilage by Direct Amplification in Mass Disasters].
Wang, C H; Xu, C; Li, X Q; Wu, Y; Du, Z
2017-06-01
To explore the effectiveness of direct amplification for the STR analysis of cartilage, and to accelerate disaster victim identification, 88 cartilage samples were directly amplified with the PowerPlex® 21 kit, and the genotyping results were compared with those obtained by the magnetic-beads method. Of the 88 cartilage samples, STR genotypes were successfully detected in 84 samples by both direct amplification and the magnetic-beads method, and the genotyping results of the two methods were consistent. Direct amplification with the PowerPlex® 21 kit can be used for STR genotyping of cartilage. The method is easy and quick to perform, and has potential application in individual identification in mass disasters. Copyright© by the Editorial Department of Journal of Forensic Medicine
Haines, Troy D.; Adlaf, Kevin J.; Pierceall, Robert M.; Lee, Inmok; Venkitasubramanian, Padmesh
2010-01-01
Analysis of MCPD esters and glycidyl esters in vegetable oils using the indirect method proposed by the DGF gave inconsistent results when salting-out conditions were varied. Subsequent investigation showed that the method was destroying and reforming MCPD during the analysis. An LC time-of-flight MS (LC–TOFMS) method was developed for direct analysis of both MCPD esters and glycidyl esters in vegetable oils. The results of the LC–TOFMS method were compared with the DGF method. The DGF method consistently gave results that were greater than those of the LC–TOFMS method. The levels of MCPD esters and glycidyl esters found in a variety of vegetable oils are reported. MCPD monoesters were not found in any oil samples. MCPD diesters were found only in samples containing palm oil, and were not present in all palm oil samples. Glycidyl esters were found in a wide variety of oils. Some processing conditions that influence the concentration of MCPD esters and glycidyl esters are discussed. PMID:21350591
NASA Astrophysics Data System (ADS)
Li, Xue; Hou, Guangyue; Xing, Junpeng; Song, Fengrui; Liu, Zhiqiang; Liu, Shuying
2014-12-01
In the present work, direct analysis of real time ionization combined with multi-stage tandem mass spectrometry (DART-MSn) was used to investigate the metabolic profile of aconite alkaloids in rat intestinal bacteria. A total of 36 metabolites from three aconite alkaloids were identified by using DART-MSn, and the feasibility of quantitative analysis of these analytes was examined. Key parameters of the DART ion source, such as helium gas temperature and pressure, the source-to-MS distance, and the speed of the autosampler, were optimized to achieve high sensitivity, enhance reproducibility, and reduce the occurrence of fragmentation. The instrument analysis time for one sample can be less than 10 s for this method. Compared with ESI-MS and UPLC-MS, the DART-MS is more efficient for directly detecting metabolic samples, and has the advantage of being a simple, high-speed, high-throughput method.
NASA Astrophysics Data System (ADS)
Vinh, T.
1980-08-01
There is a need for better and more effective lightning protection for transmission and switching substations. In the past, a number of empirical methods were utilized to design systems to protect substations and transmission lines from direct lightning strokes. The need exists for convenient analytical lightning models adequate for engineering usage. In this study, analytical lightning models were developed, along with a method for improved analysis of the physical properties of lightning through their use. This method of analysis is based upon the most recent statistical field data. The result is an improved method for predicting the occurrence of shielding failure and for designing more effective protection for high and extra-high-voltage substations from direct strokes.
Ye, Diru; Wu, Susu; Xu, Jianqiao; Jiang, Ruifen; Zhu, Fang; Ouyang, Gangfeng
2016-02-01
Direct immersion solid-phase microextraction (DI-SPME) coupled with gas chromatography-mass spectrometry (GC-MS) was developed for rapid analysis of clenbuterol in pork for the first time. In this work, a low-cost homemade 44 µm polydimethylsiloxane (PDMS) SPME fiber was employed to extract clenbuterol in pork. After extraction, derivatization was performed by suspending the fiber in the headspace of a 2 mL sample vial saturated with the vapor of 100 µL hexamethyldisilazane. Lastly, the fiber was directly introduced into the GC-MS for analysis. All parameters that influenced absorption (extraction time), derivatization (derivatization reagent, time and temperature) and desorption (desorption time) were optimized. Under optimized conditions, the method offered a wide linear range (10-1000 ng g(-1)) and a low detection limit (3.6 ng g(-1)). Finally, the method was successfully applied in the analysis of pork from the market, and recoveries of the method for spiked pork were 97.4-105.7%. Compared with the traditional solvent extraction method, the proposed method was much cheaper and faster. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Direct digestion of proteins in living cells into peptides for proteomic analysis.
Chen, Qi; Yan, Guoquan; Gao, Mingxia; Zhang, Xiangmin
2015-01-01
To analyze the proteome of an extremely low number of cells or even a single cell, we established a new method of digesting whole cells into mass-spectrometry-identifiable peptides in a single step within 2 h. Our sampling method greatly simplified the processes of cell lysis, protein extraction, protein purification, and overnight digestion, without compromising efficiency. We used our method to digest hundred-scale cells. As far as we know, there is no report of proteome analysis starting directly with as few as 100 cells. We identified an average of 109 proteins from 100 cells, and with three replicates, the number of proteins rose to 204. Good reproducibility was achieved, showing stability and reliability of the method. Gene Ontology analysis revealed that proteins in different cellular compartments were well represented.
Comparison of haemoglobin estimates using direct & indirect cyanmethaemoglobin methods.
Bansal, Priyanka Gupta; Toteja, Gurudayal Singh; Bhatia, Neena; Gupta, Sanjeev; Kaur, Manpreet; Adhikari, Tulsi; Garg, Ashok Kumar
2016-10-01
Estimation of haemoglobin is the most widely used method to assess anaemia. Although the direct cyanmethaemoglobin method is recommended for haemoglobin estimation, it may not be feasible under field conditions. Hence, the present study was undertaken to compare the indirect cyanmethaemoglobin method against the conventional direct method for haemoglobin estimation. Haemoglobin levels were estimated for 888 adolescent girls aged 11-18 yr residing in an urban slum in Delhi by both the direct and indirect cyanmethaemoglobin methods, and the results were compared. The mean haemoglobin levels for 888 whole blood samples estimated by the direct and indirect cyanmethaemoglobin methods were 116.1 ± 12.7 and 110.5 ± 12.5 g/l, respectively, with a mean difference of 5.67 g/l (95% confidence interval: 5.45 to 5.90, P<0.001), which is equivalent to 0.567 g%. The prevalence of anaemia was 59.6 and 78.2 per cent by the direct and indirect methods, respectively. The sensitivity and specificity of the indirect cyanmethaemoglobin method were 99.2 and 56.4 per cent, respectively. Using regression analysis, a prediction equation was developed for indirect haemoglobin values. The present findings revealed that the indirect cyanmethaemoglobin method overestimated the prevalence of anaemia as compared to the direct method. However, if a correction factor is applied, the indirect method could be successfully used for estimating the true haemoglobin level. More studies should be undertaken to establish the agreement and correction factor between the direct and indirect cyanmethaemoglobin methods.
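The prediction equation mentioned above is a simple least-squares fit of direct readings on indirect readings. The paired values below are fabricated for illustration; only the ~5.67 g/l mean difference is taken from the abstract:

```python
import numpy as np

# Hypothetical paired readings (g/l): the study's raw data are not given.
indirect = np.array([95.0, 102.0, 108.0, 115.0, 121.0, 130.0])
direct = indirect + 5.67 + np.array([-0.8, 0.5, -0.3, 0.7, -0.4, 0.3])

# Least-squares prediction equation: direct ~ slope * indirect + intercept
slope, intercept = np.polyfit(indirect, direct, 1)

def correct(hb_indirect):
    """Predict the direct-method haemoglobin from an indirect reading."""
    return slope * hb_indirect + intercept

# An indirect reading at the study's indirect mean (110.5 g/l) is mapped
# near the direct mean (~116.1 g/l) by the fitted equation.
print(round(correct(110.5), 1))
```

In practice the equation would be fitted on the full set of paired measurements and validated on independent samples before being used as a field correction.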
Spectrometer Sensitivity Investigations on the Spectrometric Oil Analysis Program.
1983-04-22
[Table-of-contents fragment; recoverable headings: H. Acid Dissolution Method (ADM); I. Analysis of Samples; J. Particle Transport Efficiency of the Rotating Disk; K. A/E35U-3 Acid Dissolution Method; L. Burn Time; Effect of Burn Time; Direct Sample Introduction.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolski, M., E-mail: marcin.wolski@curtin.edu.au; Podsiadlo, P.; Stachowiak, G. W.
Purpose: To develop directional fractal signature methods for the analysis of trabecular bone (TB) texture in hand radiographs. Problems associated with the small size of hand bones and the orientation of fingers were addressed. Methods: An augmented variance orientation transform (AVOT) method and a quadrant rotating grid (QRG) method were developed. The methods calculate fractal signatures (FSs) in different directions. Unlike other methods, they have the search region adjusted according to the size of the bone region of interest (ROI) to be analyzed, and they produce FSs defined with respect to any chosen reference direction, i.e., they work for arbitrary orientation of fingers. Five parameters at scales ranging from 2 to 14 pixels (depending on image size and method) were derived from rose plots of Hurst coefficients: FSs in the dominating roughness (FS_Sta), vertical (FS_V) and horizontal (FS_H) directions, aspect ratio (StrS), and direction (StdS) signatures. The accuracy in measuring surface roughness and isotropy/anisotropy was evaluated using 3600 isotropic and 800 anisotropic fractal surface images of sizes between 20 × 20 and 64 × 64 pixels. The isotropic surfaces had fractal dimensions (FDs) ranging from 2.1 to 2.9 in steps of 0.1, and the anisotropic surfaces had two dominating directions of 30° and 120°. The methods were used to find differences in hand TB textures between 20 matched pairs of subjects with (cases: approximate Kellgren-Lawrence (KL) grade ≥2) and without (controls: approximate KL grade <2) radiographic hand osteoarthritis (OA). The OA Initiative public database was used and 20 × 20 pixel bone ROIs were selected on 5th distal and middle phalanges. The performance of the AVOT and QRG methods was compared against a variance orientation transform (VOT) method developed earlier [M. Wolski, P. Podsiadlo, and G. W. Stachowiak, “Directional fractal signature analysis of trabecular bone: evaluation of different methods to detect early osteoarthritis in knee radiographs,” Proc. Inst. Mech. Eng., Part H 223, 211–236 (2009)]. Results: The AVOT method correctly quantified the isotropic and anisotropic surfaces for all image sizes and scales. Values of FS_Sta were significantly different (P < 0.05) between the isotropic surfaces. Using the VOT and QRG methods, no differences were found at large scales for the isotropic surfaces smaller than 64 × 64 and 48 × 48 pixels, respectively, and at some scales for the anisotropic surfaces of size 48 × 48 pixels. Using the AVOT and QRG methods, the authors found that OA TB textures were less rough (P < 0.05) in the dominating and horizontal directions (i.e., lower FS_Sta and FS_H), rougher in the vertical direction (i.e., higher FS_V) and less anisotropic (i.e., higher StrS) than controls. No differences were found using the VOT method. Conclusions: The AVOT method is well suited for the analysis of bone texture in hand radiographs and could potentially be useful for early detection and prediction of hand OA.
Field, Christopher R.; Lubrano, Adam; Woytowitz, Morgan; Giordano, Braden C.; Rose-Pehrsson, Susan L.
2014-01-01
The direct liquid deposition of solution standards onto sorbent-filled thermal desorption tubes is used for the quantitative analysis of trace explosive vapor samples. The direct liquid deposition method yields a higher fidelity between the analysis of vapor samples and the analysis of solution standards than using separate injection methods for vapors and solutions, i.e., samples collected on vapor collection tubes and standards prepared in solution vials. Additionally, the method can account for instrumentation losses, which makes it ideal for minimizing variability and quantitative trace chemical detection. Gas chromatography with an electron capture detector is an instrumentation configuration sensitive to nitro-energetics, such as TNT and RDX, due to their relatively high electron affinity. However, vapor quantitation of these compounds is difficult without viable vapor standards. Thus, we eliminate the requirement for vapor standards by combining the sensitivity of the instrumentation with a direct liquid deposition protocol to analyze trace explosive vapor samples. PMID:25145416
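Computationally, quantitation against directly deposited solution standards reduces to an external calibration-curve fit and its inversion; the response values below are hypothetical, not the authors' measurements:

```python
import numpy as np

# Hypothetical calibration data: nanograms of analyte deposited on desorption
# tubes vs. integrated ECD response (arbitrary units).
mass_ng = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
response = np.array([1.1e3, 2.0e3, 4.2e3, 10.4e3, 20.9e3, 41.5e3])

# Linear least-squares calibration: response = slope * mass + intercept.
slope, intercept = np.polyfit(mass_ng, response, 1)

def quantify(sample_response):
    """Invert the calibration curve to estimate mass in an unknown vapor sample."""
    return (sample_response - intercept) / slope

est = quantify(8.3e3)
print(f"estimated mass: {est:.2f} ng")
```

Because standards and samples pass through the same tube and desorption path, instrument losses cancel in the fit, which is the fidelity advantage the abstract describes.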
A compilation and analysis of helicopter handling qualities data. Volume 2: Data analysis
NASA Technical Reports Server (NTRS)
Heffley, R. K.
1979-01-01
A compilation and an analysis of helicopter handling qualities data are presented. Multiloop manual control methods are used to analyze the descriptive data, stability derivatives, and transfer functions for a six-degrees-of-freedom, quasi-static model. A compensatory loop structure is applied to the coupled longitudinal, lateral, and directional equations in such a way that key handling qualities features are examined directly.
Janet L. Ohmann; Matthew J. Gregory
2002-01-01
Spatially explicit information on the species composition and structure of forest vegetation is needed at broad spatial scales for natural resource policy analysis and ecological research. We present a method for predictive vegetation mapping that applies direct gradient analysis and nearest-neighbor imputation to ascribe detailed ground attributes of vegetation to...
NASA Astrophysics Data System (ADS)
Manicke, Nicholas E.; Belford, Michael
2015-05-01
One limitation in the growing field of ambient or direct analysis methods is reduced selectivity caused by the elimination of chromatographic separations prior to mass spectrometric analysis. We explored the use of high-field asymmetric waveform ion mobility spectrometry (FAIMS), an ambient pressure ion mobility technique, to separate the closely related opiate isomers of morphine, hydromorphone, and norcodeine. These isomers cannot be distinguished by tandem mass spectrometry. Separation prior to MS analysis is, therefore, required to distinguish these compounds, which are important in clinical chemistry and toxicology. FAIMS was coupled to a triple quadrupole mass spectrometer, and ionization was performed using either a pneumatically assisted heated electrospray ionization source (H-ESI) or paper spray, a direct analysis method that has been applied to the direct analysis of dried blood spots and other complex samples. We found that FAIMS was capable of separating the three opiate structural isomers using both H-ESI and paper spray as the ionization source.
Interpreting findings from Mendelian randomization using the MR-Egger method.
Burgess, Stephen; Thompson, Simon G
2017-05-01
Mendelian randomization-Egger (MR-Egger) is an analysis method for Mendelian randomization using summarized genetic data. MR-Egger consists of three parts: (1) a test for directional pleiotropy, (2) a test for a causal effect, and (3) an estimate of the causal effect. While conventional analysis methods for Mendelian randomization assume that all genetic variants satisfy the instrumental variable assumptions, the MR-Egger method is able to assess whether genetic variants have pleiotropic effects on the outcome that differ on average from zero (directional pleiotropy), as well as to provide a consistent estimate of the causal effect, under a weaker assumption-the InSIDE (INstrument Strength Independent of Direct Effect) assumption. In this paper, we provide a critical assessment of the MR-Egger method with regard to its implementation and interpretation. While the MR-Egger method is a worthwhile sensitivity analysis for detecting violations of the instrumental variable assumptions, there are several reasons why causal estimates from the MR-Egger method may be biased and have inflated Type 1 error rates in practice, including violations of the InSIDE assumption and the influence of outlying variants. The issues raised in this paper have potentially serious consequences for causal inferences from the MR-Egger approach. We give examples of scenarios in which the estimates from conventional Mendelian randomization methods and MR-Egger differ, and discuss how to interpret findings in such cases.
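The three parts of MR-Egger amount to a single weighted regression of variant-outcome associations on variant-exposure associations with an unconstrained intercept. A minimal sketch on simulated summary data (illustrative parameter values, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
J = 30                                   # number of genetic variants
bx = rng.uniform(0.05, 0.20, J)          # variant-exposure associations
theta = 0.4                              # true causal effect
alpha = 0.03                             # average directional pleiotropy
by = alpha + theta * bx + rng.normal(0, 0.01, J)   # variant-outcome associations
se_by = np.full(J, 0.01)                 # standard errors of by

# MR-Egger: weighted regression of by on bx with an UNCONSTRAINED intercept.
# A nonzero intercept indicates directional pleiotropy; the slope is the
# causal estimate under the InSIDE assumption.
w = 1.0 / se_by**2
X = np.column_stack([np.ones(J), bx])
intercept, slope = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * by))
print(f"pleiotropy intercept: {intercept:.3f}, causal estimate: {slope:.3f}")
```

The sketch also shows why MR-Egger is vulnerable in practice: the slope is identified mainly by the spread of `bx`, so weak instruments or a single outlying variant move the estimate substantially.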
Direct Care Workers' Experiences of Grief and Needs for Support
ERIC Educational Resources Information Center
Gray, Jennifer A.; Kim, Jinsook
2017-01-01
Background: A paucity of information is available on direct care workers' (DCWs') experiences with loss when their clients (people with intellectual and developmental disabilities [I/DD]) die. This study explored DCWs' grief experiences, their coping methods and their needs for support. Methods: A thematic analysis approach was used to examine…
Lísa, Miroslav; Cífková, Eva; Khalikova, Maria; Ovčačíková, Magdaléna; Holčapek, Michal
2017-11-24
Lipidomic analysis of biological samples in clinical research represents a challenging task for analytical methods, given the large number of samples and their extreme complexity. In this work, we compare direct infusion (DI) and chromatography-mass spectrometry (MS) lipidomic approaches, represented by three analytical methods, in terms of comprehensiveness, sample throughput, and validation results for the lipidomic analysis of biological samples represented by tumor tissue, surrounding normal tissue, plasma, and erythrocytes of kidney cancer patients. The methods are compared in one laboratory using an identical analytical protocol to ensure comparable conditions. An ultrahigh-performance liquid chromatography/MS (UHPLC/MS) method in hydrophilic interaction liquid chromatography mode and a DI-MS method are used for this comparison, as the most widely used methods for lipidomic analysis, together with an ultrahigh-performance supercritical fluid chromatography/MS (UHPSFC/MS) method showing promising results in metabolomic analyses. The nontargeted analysis of pooled samples is performed using all tested methods, and 610 lipid species within 23 lipid classes are identified. The DI method provides the most comprehensive results owing to the identification of some polar lipid classes that are not identified by the UHPLC and UHPSFC methods. On the other hand, the UHPSFC method provides excellent sensitivity for less polar lipid classes and the highest sample throughput, with a 10 min method time. The sample consumption of the DI method is 125 times higher than for the other methods, while only 40 μL of organic solvent is used for one sample analysis, compared with 3.5 mL and 4.9 mL in the case of the UHPLC and UHPSFC methods, respectively. The methods are validated for the quantitative lipidomic analysis of plasma samples with one internal standard for each lipid class. The results show the applicability of all tested methods for the lipidomic analysis of biological samples, depending on the analysis requirements.
Copyright © 2017 Elsevier B.V. All rights reserved.
ADVANCEMENTS IN TIME-SPECTRA ANALYSIS METHODS FOR LEAD SLOWING-DOWN SPECTROSCOPY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Leon E.; Anderson, Kevin K.; Gesh, Christopher J.
2010-08-11
Direct measurement of Pu in spent nuclear fuel remains a key challenge for safeguarding nuclear fuel cycles of today and tomorrow. Lead slowing-down spectroscopy (LSDS) is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic mass with an uncertainty lower than the approximately 10 percent typical of today’s confirmatory assay methods. Pacific Northwest National Laboratory’s (PNNL) previous work to assess the viability of LSDS for the assay of pressurized water reactor (PWR) assemblies indicated that the method could provide direct assay of Pu-239 and U-235 (and possibly Pu-240 and Pu-241) with uncertainties less than a few percent, assuming suitably efficient instrumentation, an intense pulsed neutron source, and improvements in the time-spectra analysis methods used to extract isotopic information from a complex LSDS signal. This previous simulation-based evaluation used relatively simple PWR fuel assembly definitions (e.g. constant burnup across the assembly) and a constant initial enrichment and cooling time. The time-spectra analysis method was founded on a preliminary analytical model of self-shielding intended to correct for assay-signal nonlinearities introduced by attenuation of the interrogating neutron flux within the assembly.
School Connectedness and Chinese Adolescents' Sleep Problems: A Cross-Lagged Panel Analysis
ERIC Educational Resources Information Center
Bao, Zhenzhou; Chen, Chuansheng; Zhang, Wei; Jiang, Yanping; Zhu, Jianjun; Lai, Xuefen
2018-01-01
Background: Although previous research indicates an association between school connectedness and adolescents' sleep quality, its causal direction has not been determined. This study used a 2-wave cross-lagged panel analysis to explore the likely causal direction between these 2 constructs. Methods: Participants were 888 Chinese adolescents (43.80%…
An evaluation of a reagentless method for the determination of total mercury in aquatic life
Haynes, Sekeenia; Gragg, Richard D.; Johnson, Elijah; Robinson, Larry; Orazio, Carl E.
2006-01-01
Multiple treatment (i.e., drying, chemical digestion, and oxidation) steps are often required during preparation of biological matrices for quantitative analysis of mercury; these multiple steps could potentially lead to systematic errors and poor recovery of the analyte. In this study, the Direct Mercury Analyzer (Milestone Inc., Monroe, CT) was utilized to measure total mercury in fish tissue by integrating the steps of drying, sample combustion and gold sequestration with subsequent identification using atomic absorption spectrometry. We also evaluated the differences between the mercury concentrations found in samples that were homogenized and samples with no preparation. These results were confirmed with cold vapor atomic absorbance and fluorescence spectrometric methods of analysis. Finally, total mercury in wild-captured largemouth bass (n = 20) was assessed using the Direct Mercury Analyzer to examine internal variability between mercury concentrations in muscle, liver and brain organs. Direct analysis of total mercury measured in muscle tissue was strongly correlated with muscle tissue that was homogenized before analysis (r = 0.81, p < 0.0001). Additionally, results using this integrated method compared favorably (p < 0.05) with conventional cold vapor spectrometry with atomic absorbance and fluorescence detection methods. Mercury concentrations in brain were significantly lower than concentrations in muscle (p < 0.001) and liver (p < 0.05) tissues. This integrated method can measure a wide range of mercury amounts (0-500 µg) using small sample sizes. Total mercury measurements in this study are comparable to the methods (cold vapor) commonly used for total mercury analysis and avoid laborious sample preparation and expensive hazardous waste. © Springer 2006.
Quantitative Assessment of Knee Progression Angle During Gait in Children With Cerebral Palsy.
Davids, Jon R; Cung, Nina Q; Pomeroy, Robin; Schultz, Brooke; Torburn, Leslie; Kulkarni, Vedant A; Brown, Sean; Bagley, Anita M
2018-04-01
Abnormal hip rotation is a common deviation in children with cerebral palsy (CP). Clinicians typically assess hip rotation during gait by observing the direction that the patella points relative to the path of walking, which is referred to as the knee progression angle (KPA). Two kinematic methods for calculating the KPA are compared with each other. Video-based qualitative assessment of KPA is compared with the quantitative methods to determine reliability and validity. The KPA was calculated by both direct and indirect methods for 32 typically developing (TD) children and a convenience cohort of 43 children with hemiplegic type CP. An additional convenience cohort of 26 children with hemiplegic type CP was selected for qualitative assessment of KPA, performed by 3 experienced clinicians, using 3 categories (internal, >10 degrees; neutral, -10 to 10 degrees; and external, <-10 degrees). Root mean square (RMS) analysis comparing the direct and indirect KPAs was 1.14±0.43 degrees for TD children, and 1.75±1.54 degrees for the affected side of children with CP. The difference in RMS between the 2 groups was statistically, but not clinically, significant (P=0.019). Intraclass correlation coefficient revealed excellent agreement between the direct and indirect methods of KPA for TD and CP children (0.996 and 0.992, respectively; P<0.001). For the qualitative assessment of KPA there was complete agreement among all examiners for 17 of 26 cases (65%). Direct KPA matched for 49 of 78 observations (63%) and indirect KPA matched for 52 of 78 observations (67%). The RMS analysis of direct and indirect methods for KPA was statistically but not clinically significant, which supports the use of either method based upon availability. Video-based qualitative assessment of KPA showed moderate reliability and validity.
The differences between observed and calculated KPA indicate the need for caution when relying on visual assessments for clinical interpretation, and demonstrate the value of adding KPA calculation to standard kinematic analysis. Level II-diagnostic test.
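The RMS comparison of the two KPA calculations, and the three-category clinical rating, can be sketched as follows; the angles are simulated with an error magnitude similar to the reported TD figure, not the study's motion-capture data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated KPA (degrees) from two calculation methods for 32 subjects,
# differing by a small method error (~1 degree RMS, as in the TD group).
direct_kpa = rng.normal(0, 8, 32)
indirect_kpa = direct_kpa + rng.normal(0, 1.1, 32)

# Root-mean-square of the per-subject angular differences.
rms = np.sqrt(np.mean((direct_kpa - indirect_kpa) ** 2))

def categorize(kpa):
    """Three-category clinical rating: internal (>10), neutral, external (<-10)."""
    return np.where(kpa > 10, "internal", np.where(kpa < -10, "external", "neutral"))

agreement = float(np.mean(categorize(direct_kpa) == categorize(indirect_kpa)))
print(f"RMS difference: {rms:.2f} deg, categorical agreement: {agreement:.2f}")
```

Disagreements cluster near the ±10 degree cut-offs, which illustrates why the categorical visual rating shows only moderate agreement even when the two quantitative methods track each other closely.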
2014-01-01
Background Analysis of variance (ANOVA), change-score analysis (CSA) and analysis of covariance (ANCOVA) respond differently to baseline imbalance in randomized controlled trials. However, no empirical studies appear to have quantified the differential bias and precision of estimates derived from these methods of analysis, and their relative statistical power, in relation to combinations of levels of key trial characteristics. This simulation study therefore examined the relative bias, precision and statistical power of these three analyses using simulated trial data. Methods 126 hypothetical trial scenarios were evaluated (126 000 datasets), each with continuous data simulated using a combination of levels of: treatment effect; pretest-posttest correlation; and direction and magnitude of baseline imbalance. The bias, precision and power of each method of analysis were calculated for each scenario. Results Compared to the unbiased estimates produced by ANCOVA, both ANOVA and CSA are subject to bias, in relation to pretest-posttest correlation and the direction of baseline imbalance. Additionally, ANOVA and CSA are less precise than ANCOVA, especially when the pretest-posttest correlation is ≥ 0.3. When groups are balanced at baseline, ANCOVA is at least as powerful as the other analyses. The apparently greater power of ANOVA and CSA at certain imbalances is achieved at the expense of a biased treatment effect. Conclusions Across a range of correlations between pre- and post-treatment scores and at varying levels and directions of baseline imbalance, ANCOVA remains the optimum statistical method for the analysis of continuous outcomes in RCTs, in terms of bias, precision and statistical power. PMID:24712304
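The bias pattern described (ANOVA and CSA biased in opposite directions under baseline imbalance, ANCOVA unbiased) can be reproduced in a small simulation; the parameter values here are illustrative, not the study's 126 scenarios:

```python
import numpy as np

rng = np.random.default_rng(2)
n, rho, effect, imbalance = 200, 0.6, 0.5, 0.3
reps = 500
est = {"anova": [], "csa": [], "ancova": []}

for _ in range(reps):
    grp = np.repeat([0, 1], n)
    pre = rng.normal(0, 1, 2 * n) + imbalance * grp            # baseline imbalance
    post = rho * pre + effect * grp + rng.normal(0, np.sqrt(1 - rho**2), 2 * n)
    # ANOVA: compare posttest means only.
    est["anova"].append(post[grp == 1].mean() - post[grp == 0].mean())
    # CSA: compare mean change scores.
    est["csa"].append((post - pre)[grp == 1].mean() - (post - pre)[grp == 0].mean())
    # ANCOVA: regress posttest on group, adjusting for baseline.
    X = np.column_stack([np.ones(2 * n), grp, pre])
    est["ancova"].append(np.linalg.lstsq(X, post, rcond=None)[0][1])

for k, v in est.items():
    print(f"{k}: mean estimate {np.mean(v):.3f} (true effect {effect})")
```

With a positive imbalance and pretest-posttest correlation rho, ANOVA is biased upward by roughly rho × imbalance and CSA downward by (1 − rho) × imbalance, while the ANCOVA coefficient stays centered on the true effect.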
Valeri, Linda; Lin, Xihong; VanderWeele, Tyler J.
2014-01-01
Mediation analysis is a popular approach to examine the extent to which the effect of an exposure on an outcome is through an intermediate variable (mediator) and the extent to which the effect is direct. When the mediator is mis-measured the validity of mediation analysis can be severely undermined. In this paper we first study the bias of classical, non-differential measurement error on a continuous mediator in the estimation of direct and indirect causal effects in generalized linear models when the outcome is either continuous or discrete and exposure-mediator interaction may be present. Our theoretical results as well as a numerical study demonstrate that in the presence of non-linearities the bias of naive estimators for direct and indirect effects that ignore measurement error can take unintuitive directions. We then develop methods to correct for measurement error. Three correction approaches using method of moments, regression calibration and SIMEX are compared. We apply the proposed method to the Massachusetts General Hospital lung cancer study to evaluate the effect of genetic variants mediated through smoking on lung cancer risk. PMID:25220625
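Of the three corrections compared, regression calibration is the simplest to sketch: replace the error-prone mediator with its conditional expectation given the exposure and the observed value before fitting the outcome model. A minimal simulation assuming a known error variance (not the paper's data or code):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
a_path, b_path, direct = 0.8, 0.5, 0.3        # true paths: A->M, M->Y, A->Y

A = rng.normal(0, 1, n)                        # exposure
M = a_path * A + rng.normal(0, 1, n)           # true (unobserved) mediator
Y = direct * A + b_path * M + rng.normal(0, 1, n)
M_obs = M + rng.normal(0, 1, n)                # classical, non-differential error

def ols(cols, y):
    """Least-squares fit with intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(y))] + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive analysis: the mis-measured mediator attenuates the M->Y path.
naive_b = ols([A, M_obs], Y)[2]

# Regression calibration (error variance assumed known, here 1.0):
# substitute E[M | A, M_obs] for M_obs in the outcome model.
g0, g1 = ols([A], M_obs)                       # first stage: M_obs on A
fitted = g0 + g1 * A
resid_var = np.var(M_obs - fitted)             # var(M | A) + error variance
lam = (resid_var - 1.0) / resid_var            # reliability ratio given A
M_hat = fitted + lam * (M_obs - fitted)
rc_b = ols([A, M_hat], Y)[2]
print(f"true={b_path}, naive={naive_b:.3f}, calibrated={rc_b:.3f}")
```

In this linear, no-interaction setting the naive mediator coefficient is attenuated toward zero (inflating the apparent direct effect), and the calibrated fit recovers the true path; the paper's point is that with non-linearities or interactions the bias need not be a simple attenuation.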
Evaluating the risks of clinical research: direct comparative analysis.
Rid, Annette; Abdoler, Emily; Roberson-Nay, Roxann; Pine, Daniel S; Wendler, David
2014-09-01
Many guidelines and regulations allow children and adolescents to be enrolled in research without the prospect of clinical benefit when it poses minimal risk. However, few systematic methods exist to determine when research risks are minimal. This situation has led to significant variation in minimal risk judgments, raising concern that some children are not being adequately protected. To address this concern, we describe a new method for implementing the widely endorsed "risks of daily life" standard for minimal risk. This standard defines research risks as minimal when they do not exceed the risks posed by daily life activities or routine examinations. This study employed conceptual and normative analysis together with an illustrative example. Different risks are composed of the same basic elements: type, likelihood, and magnitude of harm. Hence, one can compare the risks of research and the risks of daily life by comparing the respective basic elements with each other. We use this insight to develop a systematic method, direct comparative analysis, for implementing the "risks of daily life" standard for minimal risk. The method offers a way of evaluating research procedures that pose the same types of risk as daily life activities, such as the risk of experiencing anxiety, stress, or other psychological harm. We illustrate how direct comparative analysis can be applied in practice by using it to evaluate whether the anxiety induced by a respiratory CO2 challenge poses minimal or greater than minimal risks in children and adolescents. Direct comparative analysis is a systematic method for applying the "risks of daily life" standard for minimal risk to research procedures that pose the same types of risk as daily life activities. It thereby offers a method to protect children and adolescents in research, while ensuring that important studies are not blocked because of unwarranted concerns about research risks.
Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly
2016-01-01
This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which should be calculated as in the previous method. Generally, a small number of arithmetic processes, which result in a shorter simulation time, are desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.
Price, Jeffery R.; Bingham, Philip R.
2005-11-08
Systems and methods are described for rapid acquisition of fused off-axis illumination direct-to-digital holography. A method of recording a plurality of off-axis object illuminated spatially heterodyne holograms, each of the off-axis object illuminated spatially heterodyne holograms including spatially heterodyne fringes for Fourier analysis, includes digitally recording, with a first illumination source of an interferometer, a first off-axis object illuminated spatially heterodyne hologram including spatially heterodyne fringes for Fourier analysis; and digitally recording, with a second illumination source of the interferometer, a second off-axis object illuminated spatially heterodyne hologram including spatially heterodyne fringes for Fourier analysis.
A direct-inverse method for transonic and separated flows about airfoils
NASA Technical Reports Server (NTRS)
Carlson, K. D.
1985-01-01
A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flowfield about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.
A direct-inverse method for transonic and separated flows about airfoils
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1990-01-01
A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flow field about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.
FINCAP Analysis: A Method for Financial Capability Analysis of Air Force Contractors
1979-03-01
[Fragment; recoverable content: financial data may be obtained directly from the companies or through the SEC Publications contractor (in hard copy or microfiche): Disclosure, Inc., 4827 Rugby Avenue... Factors considered include labor union wage agreements, material prices, escalation clauses in contracts, contractor accounting methods, and level of capacity utilization.]
NASA Astrophysics Data System (ADS)
Ji, Yi; Sun, Shanlin; Xie, Hong-Bo
2017-06-01
Discrete wavelet transform (WT) followed by principal component analysis (PCA) has been a powerful approach for the analysis of biomedical signals. Wavelet coefficients at various scales and channels were usually transformed into a one-dimensional array, causing issues such as the curse of dimensionality dilemma and small sample size problem. In addition, lack of time-shift invariance of WT coefficients can be modeled as noise and degrades the classifier performance. In this study, we present a stationary wavelet-based two-directional two-dimensional principal component analysis (SW2D2PCA) method for the efficient and effective extraction of essential feature information from signals. Time-invariant multi-scale matrices are constructed in the first step. The two-directional two-dimensional principal component analysis then operates on the multi-scale matrices to reduce the dimension, rather than vectors in conventional PCA. Results are presented from an experiment to classify eight hand motions using 4-channel electromyographic (EMG) signals recorded in healthy subjects and amputees, which illustrates the efficiency and effectiveness of the proposed method for biomedical signal analysis.
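The core of the two-directional two-dimensional PCA step is that projection matrices are learned separately for the row and column directions of each feature matrix, avoiding the vectorization that causes the dimensionality problems described above. A minimal sketch on random matrices (illustrative shapes, not the EMG pipeline):

```python
import numpy as np

rng = np.random.default_rng(5)
samples = [rng.normal(size=(16, 12)) for _ in range(40)]   # e.g. scales x channels
mean = np.mean(samples, axis=0)

# Column-direction scatter (acts on the right of each matrix) and
# row-direction scatter (acts on the left), averaged over samples.
G_col = np.mean([(m - mean).T @ (m - mean) for m in samples], axis=0)
G_row = np.mean([(m - mean) @ (m - mean).T for m in samples], axis=0)

def top_eigvecs(G, k):
    """Top-k eigenvectors of a symmetric scatter matrix."""
    vals, vecs = np.linalg.eigh(G)      # eigh returns ascending eigenvalues
    return vecs[:, ::-1][:, :k]

X = top_eigvecs(G_col, 4)               # right projection: 12 -> 4 columns
Z = top_eigvecs(G_row, 5)               # left projection: 16 -> 5 rows

features = [Z.T @ m @ X for m in samples]   # each sample: 16x12 -> 5x4
print(features[0].shape)
```

Each sample is reduced matrix-to-matrix (here 16×12 to 5×4) rather than flattened to a 192-dimensional vector, which keeps the scatter matrices small relative to the sample count.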
Applied Cognitive Task Analysis (ACTA) Methodology
1997-11-01
experience-based cognitive skills. The primary goal of this project was to develop streamlined methods of Cognitive Task Analysis that would fill this need...We have made important progress in this direction. We have developed streamlined methods of Cognitive Task Analysis. Our evaluation study indicates...developed a CD-based stand-alone instructional package, which will make the Applied Cognitive Task Analysis (ACTA) tools widely accessible. A survey of the
Milton, Martin J T; Wang, Jian
2003-01-01
A new isotope dilution mass spectrometry (IDMS) method for high-accuracy quantitative analysis of gases has been developed and validated by the analysis of standard mixtures of carbon dioxide in nitrogen. The method does not require certified isotopic reference materials and does not require direct measurements of the highly enriched spike. The relative uncertainty of the method is shown to be 0.2%. Reproduced with the permission of Her Majesty's Stationery Office. Crown copyright 2003.
Direct transesterification of fresh microalgal cells.
Liu, Jiao; Liu, Yanan; Wang, Haitao; Xue, Song
2015-01-01
Transesterification of lipids is a vital step in both biodiesel production and fatty acid analysis. By comparing the yields and fatty acid profiles obtained from microalgal oil and dry microalgal cells, the reliability of the method for the transesterification of micro-scale samples was tested. The minimum amount of microalgal cells needed for accurate analysis was found to be approximately 300 μg of dry cells. This direct transesterification method for fresh cells was applied to eight microalgal species, and the results indicate that the efficiency of the developed method is identical to that of the conventional method, except for Spirulina, whose lipid content is very low, in which case the total lipid content should be considered. Copyright © 2014 Elsevier Ltd. All rights reserved.
The Development of Composition Skills via Directed Writing.
ERIC Educational Resources Information Center
Rahilly, Leonard J.
To alleviate problems associated with free composition as a method of foreign language writing instruction, the directed writing method was adapted for use in a college French composition course. High-quality French texts, often of only a page or two and written by native speakers, are used as a basis for grammatical analysis and discussion and a…
Performance Analysis and Experimental Validation of the Direct Strain Imaging Method
Athanasios Iliopoulos; John G. Michopoulos; John C. Hermanson
2013-01-01
Direct Strain Imaging accomplishes full field measurement of the strain tensor on the surface of a deforming body, by utilizing arbitrarily oriented engineering strain measurements originating from digital imaging. In this paper an evaluation of the method's performance with respect to its operating parameter space is presented along with a preliminary...
NASA Astrophysics Data System (ADS)
Potter, Jennifer L.
2011-12-01
Reducing noise and vibration has long been a goal in major industries: automotive, aerospace, and marine, to name a few. Products must be tested and pass certain levels of federally regulated standards before entering the market. Vibration measurements are commonly acquired using accelerometers; however, limitations of this method create a need for alternative solutions. Two methods for non-contact vibration measurement are compared: Laser Vibrometry, which directly measures the surface velocity of the aluminum plate, and Nearfield Acoustic Holography (NAH), which measures sound pressure in the nearfield and, using Green's functions, reconstructs the surface velocity at the plate. The surface velocity from each method is then used in modal analysis to determine the comparability of frequency, damping and mode shapes. Frequency and mode shapes are also compared to an FEA model. Laser Vibrometry is a proven, direct method for determining surface velocity and subsequently calculating modal analysis results. NAH is an effective method for locating noise sources, especially those that are not well separated spatially. Little work has been done on incorporating NAH into modal analysis.
Quantitation of Mycotoxins Using Direct Analysis in Real Time Mass Spectrometry (DART-MS).
Busman, Mark
2018-05-01
Ambient ionization represents a new generation of MS ion sources and is used for the rapid ionization of small molecules under ambient conditions. The combination of ambient ionization and MS allows the analysis of multiple food samples with simple or no sample treatment, or in conjunction with prevailing sample preparation methods. Two ambient ionization methods, desorption electrospray ionization (DESI) and direct analysis in real time (DART), have been adapted for food safety applications. Both ionization techniques provide unique advantages and capabilities. DART has been used for a variety of qualitative and quantitative applications. In particular, mycotoxin contamination of food and feed materials has been addressed by DART-MS. Applications to mycotoxin analysis by ambient ionization MS, and particularly DART-MS, are summarized.
ERIC Educational Resources Information Center
Himle, Michael B.; Chang, Susanna; Woods, Douglas W.; Pearlman, Amanda; Buzzella, Brian; Bunaciu, Liviu; Piacentini, John C.
2006-01-01
Behavior analysis has been at the forefront in establishing effective treatments for children and adults with chronic tic disorders. As is customary in behavior analysis, the efficacy of these treatments has been established using direct-observation assessment methods. Although behavior-analytic treatments have enjoyed acceptance and integration…
Classical Item Analysis Using Latent Variable Modeling: A Note on a Direct Evaluation Procedure
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2011-01-01
A directly applicable latent variable modeling procedure for classical item analysis is outlined. The method allows one to point and interval estimate item difficulty, item correlations, and item-total correlations for composites consisting of categorical items. The approach is readily employed in empirical research and as a by-product permits…
Militello, L G; Hutton, R J
1998-11-01
Cognitive task analysis (CTA) is a set of methods for identifying cognitive skills, or mental demands, needed to perform a task proficiently. The product of the task analysis can be used to inform the design of interfaces and training systems. However, CTA is resource intensive and has previously been of limited use to design practitioners. A streamlined method of CTA, Applied Cognitive Task Analysis (ACTA), is presented in this paper. ACTA consists of three interview methods that help the practitioner to extract information about the cognitive demands and skills required for a task. ACTA also allows the practitioner to represent this information in a format that will translate more directly into applied products, such as improved training scenarios or interface recommendations. This paper will describe the three methods, an evaluation study conducted to assess the usability and usefulness of the methods, and some directions for future research for making cognitive task analysis accessible to practitioners. ACTA techniques were found to be easy to use, flexible, and to provide clear output. The information and training materials developed based on ACTA interviews were found to be accurate and important for training purposes.
Yang, Li; Wang, Guobao; Qi, Jinyi
2016-04-01
Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
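The pixel-wise Patlak step of the indirect route described above can be sketched as follows. The function, frame times, and parameter values are illustrative assumptions, not taken from the paper; the data are noiseless synthetic TACs that obey the Patlak model exactly.

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y over t, starting at zero."""
    return np.concatenate([[0.0], np.cumsum((y[1:] + y[:-1]) / 2.0 * np.diff(t))])

def patlak_fit(ct, cp, t, t_star=3):
    """Pixel-wise Patlak graphical analysis: after frame t_star, the plot of
    C_T/C_p against (integral of C_p)/C_p is linear; the slope is the influx
    rate Ki and the intercept an apparent distribution volume V0."""
    x = cumtrapz(cp, t) / cp
    y = ct / cp
    A = np.vstack([x[t_star:], np.ones_like(x[t_star:])]).T
    ki, v0 = np.linalg.lstsq(A, y[t_star:], rcond=None)[0]
    return ki, v0

# synthetic time-activity curves with Ki = 0.05 and V0 = 0.3
t = np.linspace(1.0, 60.0, 30)              # hypothetical frame mid-times (min)
cp = np.exp(-0.1 * t) + 0.2                 # hypothetical plasma input function
ct = 0.05 * cumtrapz(cp, t) + 0.3 * cp      # tissue time-activity curve
ki, v0 = patlak_fit(ct, cp, t)
```

The direct route described in the abstract instead folds this linear Patlak model into the reconstruction itself, estimating Ki and V0 from the sinogram rather than from reconstructed TACs.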
Zhang, Shangjian; Zou, Xinhai; Wang, Heng; Zhang, Yali; Lu, Rongguo; Liu, Yong
2015-10-15
A calibration-free electrical method is proposed for measuring the absolute frequency response of directly modulated semiconductor lasers based on additional modulation. The method achieves electrical-domain measurement of the modulation index of directly modulated lasers without the need to correct for the responsivity fluctuation in the photodetection. Moreover, it doubles the measuring frequency range by setting a specific frequency relationship between the direct and additional modulation. Both the absolute and relative frequency responses of semiconductor lasers are experimentally measured from the electrical spectrum of the twice-modulated optical signal, and the measured results are compared to those obtained with conventional methods to check consistency. The proposed method provides calibration-free and accurate measurement for high-speed semiconductor lasers with high-resolution electrical spectrum analysis.
A combined direct/inverse three-dimensional transonic wing design method for vector computers
NASA Technical Reports Server (NTRS)
Weed, R. A.; Carlson, L. A.; Anderson, W. K.
1984-01-01
A three-dimensional transonic-wing design algorithm for vector computers is developed, and the results of sample computations are presented graphically. The method incorporates the direct/inverse scheme of Carlson (1975), a Cartesian grid system with boundary conditions applied at a mean plane, and a potential-flow solver based on the conservative form of the full potential equation and using the ZEBRA II vectorizable solution algorithm of South et al. (1980). The accuracy and consistency of the method with regard to direct and inverse analysis and trailing-edge closure are verified in the test computations.
Structural Equation Models in a Redundancy Analysis Framework With Covariates.
Lovaglio, Pietro Giorgio; Vittadini, Giorgio
2014-01-01
A recent method to specify and fit structural equation modeling in the Redundancy Analysis framework based on so-called Extended Redundancy Analysis (ERA) has been proposed in the literature. In this approach, the relationships between the observed exogenous variables and the observed endogenous variables are moderated by the presence of unobservable composites, estimated as linear combinations of exogenous variables. However, in the presence of direct effects linking exogenous and endogenous variables, or concomitant indicators, the composite scores are estimated by ignoring the presence of the specified direct effects. To fit structural equation models, we propose a new specification and estimation method, called Generalized Redundancy Analysis (GRA), allowing us to specify and fit a variety of relationships among composites, endogenous variables, and external covariates. The proposed methodology extends the ERA method, using a more suitable specification and estimation algorithm, by allowing for covariates that affect endogenous indicators indirectly through the composites and/or directly. To illustrate the advantages of GRA over ERA, we propose a simulation study of small samples. Moreover, we propose an application aimed at estimating the impact of formal human capital on the initial earnings of graduates of an Italian university, utilizing a structural model consistent with well-established economic theory.
Watanabe, T; Tokunaga, R; Iwahana, T; Tati, M; Ikeda, M
1978-01-01
The direct chelation-extraction method, originally developed by Hessel (1968) for blood lead analysis, has been successfully applied to urinalysis for manganese. The analyses of 35 urine samples containing up to 100 micrograms/l of manganese from manganese-exposed workers showed that the data obtained by this method agree well with those obtained by wet digestion-flame atomic absorption spectrophotometry and also by flameless atomic absorption spectrophotometry. PMID:629893
Manfredi, Marcello; Robotti, Elisa; Bearman, Greg; France, Fenella; Barberis, Elettra; Shor, Pnina; Marengo, Emilio
2016-01-01
Today the long-term conservation of cultural heritage is a big challenge: often the artworks were subjected to unknown interventions, which eventually were found to be harmful. The noninvasive investigation of the conservation treatments to which they were subjected is a crucial step in order to undertake the best conservation strategies. We describe here the preliminary results on a quick and direct method for the nondestructive identification of the various interventions on parchment by means of direct analysis in real time (DART) ionization and high-resolution time-of-flight mass spectrometry and chemometrics. The method has been developed for the noninvasive analysis of the Dead Sea Scrolls, one of the most important archaeological discoveries of the 20th century. In this study castor oil and glycerol parchment treatments, prepared on new parchment specimens, were investigated in order to evaluate two different types of operations. The method was able to identify both treatments. In order to investigate the effect of the ion source temperature on the mass spectra, the DART-MS analysis was also carried out at several temperatures. Due to the high sensitivity, simplicity, and no sample preparation requirement, the proposed analytical methodology could help conservators in the challenging analysis of unknown treatments in cultural heritage.
NASA Astrophysics Data System (ADS)
Campbell, Ian S.; Ton, Alain T.; Mulligan, Christopher C.
2011-07-01
An ambient mass spectrometric method based on desorption electrospray ionization (DESI) has been developed to allow rapid, direct analysis of contaminated water samples, and the technique was evaluated through analysis of a wide array of pharmaceutical and personal care product (PPCP) contaminants. Incorporating direct infusion of aqueous sample and thermal assistance into the source design has allowed low ppt detection limits for the target analytes in drinking water matrices. With this methodology, mass spectral information can be collected in less than 1 min, consuming ~100 μL of total sample. Quantitative ability was also demonstrated without the use of an internal standard, yielding decent linearity and reproducibility. Initial results suggest that this source configuration is resistant to carryover effects and robust towards multi-component samples. The rapid, continuous analysis afforded by this method offers advantages in terms of sample analysis time and throughput over traditional hyphenated mass spectrometric techniques.
Accuracy of AFM force distance curves via direct solution of the Euler-Bernoulli equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eppell, Steven J., E-mail: steven.eppell@case.edu; Liu, Yehe; Zypman, Fredy R.
2016-03-15
In an effort to improve the accuracy of force-separation curves obtained from atomic force microscope data, we compare force-separation curves computed using two methods to solve the Euler-Bernoulli equation. A recently introduced method using a direct sequential forward solution, Causal Time-Domain Analysis, is compared against a previously introduced Tikhonov Regularization method. Using the direct solution as a benchmark, it is found that the regularization technique is unable to reproduce accurate curve shapes. Using L-curve analysis and adjusting the regularization parameter, λ, to match either the depth or the full width at half maximum of the force curves, the two techniques are contrasted. Matched depths result in full width at half maxima that are off by an average of 27% and matched full width at half maxima produce depths that are off by an average of 109%.
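As a generic illustration of the Tikhonov technique being benchmarked (not the authors' AFM implementation), the sketch below shows how the regularization parameter λ damps the recovered solution on an ill-conditioned toy problem; the matrix and noise level are arbitrary assumptions.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Tikhonov-regularized least squares, min ||Ax-b||^2 + lam^2 ||x||^2,
    solved via the normal equations (A^T A + lam^2 I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# ill-conditioned toy problem: singular values decay roughly like 1/j^2
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10)) @ np.diag(1.0 / np.arange(1, 11) ** 2)
x_true = rng.standard_normal(10)
b = A @ x_true + 1e-3 * rng.standard_normal(20)

# a larger lam damps the solution norm; choosing lam (e.g. via the L-curve,
# as in the abstract) trades data fidelity against this damping
x_small_lam = tikhonov_solve(A, b, 1e-4)
x_large_lam = tikhonov_solve(A, b, 1e-1)
```

The solution norm decreases monotonically as λ grows, which is why matching one feature of the curve (depth or width) with a single λ can leave the other feature badly off, as the abstract reports.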
Sociometric Indicators of Leadership: An Exploratory Analysis
2018-01-01
streamline existing observational protocols and assessment methods. This research provides an initial test of sociometric badges in the context of the U.S...understand, the requirements of the mission. Traditional research and assessment methods focusing on leader and follower interactions require direct...based methods of social network analysis. Novel Measures of Leadership Building on these findings and earlier research, it is apparent that
Takase, Kazuma; Watanabe, Ikuya; Kurogi, Tadafumi; Murata, Hiroshi
2015-01-01
This study assessed methods for evaluating the glass transition temperature (Tg) of autopolymerized hard direct denture reline resins using dynamic mechanical analysis and differential scanning calorimetry, in addition to the dynamic mechanical properties. The Tg values of 3 different reline resins were determined using a dynamic viscoelastometer and a differential scanning calorimeter, and rheological parameters were also determined. Although all materials exhibited higher storage modulus and loss modulus values, and a lower loss tangent, at 37˚C with a higher frequency, the frequency dependence was not large. Tg values obtained by dynamic mechanical analysis were higher than those obtained by differential scanning calorimetry, and a higher frequency led to a higher Tg; dynamic mechanical analysis also yielded more stable Tg values. These results suggest that dynamic mechanical analysis is more advantageous for characterization of autopolymerized hard direct denture reline resins than differential scanning calorimetry.
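One common convention for reading Tg from dynamic mechanical data, the peak of the loss tangent tan δ = E″/E′, can be sketched as follows. All numbers are synthetic and the cited study's exact criterion may differ (peak of E″ is another common choice).

```python
import numpy as np

def tg_from_dma(temps, storage, loss):
    """Estimate a glass transition temperature from a DMA temperature sweep
    as the temperature of the loss-tangent (tan d = E''/E') peak."""
    tan_delta = np.asarray(loss, dtype=float) / np.asarray(storage, dtype=float)
    return temps[int(np.argmax(tan_delta))]

# hypothetical sweep: storage modulus falls through the transition while
# the loss modulus peaks near 85 C
temps = np.arange(40, 131, 5)                               # deg C
storage = 3000.0 - 25.0 * (temps - 40)                      # hypothetical E' (MPa)
loss = 100.0 + 400.0 * np.exp(-((temps - 85) / 10.0) ** 2)  # hypothetical E'' (MPa)
tg = tg_from_dma(temps, storage, loss)
```

Because the peak position shifts with oscillation frequency, a DMA-derived Tg rises with test frequency, consistent with the frequency dependence the abstract reports.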
Reliability Validation and Improvement Framework
2012-11-01
systems. Steps in that direction include the use of the Architecture Tradeoff Analysis Method® (ATAM®) developed at the Carnegie Mellon...embedded software • cyber-physical systems (CPSs) to indicate that the embedded software interacts with, manages, and controls a physical system [Lee...the use of formal static analysis methods to increase our confidence in system operation beyond testing. However, analysis results
Transonic airfoil analysis and design in nonuniform flow
NASA Technical Reports Server (NTRS)
Chang, J. F.; Lan, C. E.
1986-01-01
A nonuniform transonic airfoil code is developed for applications in analysis, inverse design and direct optimization involving an airfoil immersed in a propfan slipstream. Problems concerning numerical stability, convergence, divergence and solution oscillations are discussed. The code is validated by comparison with known results in incompressible flow. A parametric investigation indicates that the airfoil lift-drag ratio can be increased by decreasing the thickness ratio. Better performance can be achieved if the airfoil is located below the slipstream center. Airfoil characteristics designed by the inverse method and by direct optimization are compared. The airfoil designed with the method of direct optimization exhibits better characteristics and achieves a gain of 22 percent in lift-drag ratio with a reduction of 4 percent in thickness.
NASA Technical Reports Server (NTRS)
Darras, R.
1979-01-01
The various types of nuclear chemical analysis methods are discussed. The possibilities of analysis through activation and through direct observation of nuclear reactions are described. Such methods make it possible to analyze trace elements and impurities with selectivity, accuracy, and a high degree of sensitivity. They are also used in measuring major elements present in materials that are available for analysis only in small quantities. These methods are well suited to surface analyses and to the determination of concentration gradients, provided the nature and energy of the incident particles are chosen judiciously. Typical examples involving steels, pure iron and refractory metals are illustrated.
Chwialkowska, Karolina; Korotko, Urszula; Kosinska, Joanna; Szarejko, Iwona; Kwasniewski, Miroslaw
2017-01-01
Epigenetic mechanisms, including histone modifications and DNA methylation, mutually regulate chromatin structure, maintain genome integrity, and affect gene expression and transposon mobility. Variations in DNA methylation within plant populations, as well as methylation in response to internal and external factors, are of increasing interest, especially in the crop research field. Methylation Sensitive Amplification Polymorphism (MSAP) is one of the most commonly used methods for assessing DNA methylation changes in plants. This method involves gel-based visualization of PCR fragments from selectively amplified DNA that are cleaved using methylation-sensitive restriction enzymes. In this study, we developed and validated a new method based on the conventional MSAP approach called Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq). We improved the MSAP-based approach by replacing the conventional separation of amplicons on polyacrylamide gels with direct, high-throughput sequencing using Next Generation Sequencing (NGS) and automated data analysis. MSAP-Seq allows for global sequence-based identification of changes in DNA methylation. This technique was validated in Hordeum vulgare . However, MSAP-Seq can be straightforwardly implemented in different plant species, including crops with large, complex and highly repetitive genomes. The incorporation of high-throughput sequencing into MSAP-Seq enables parallel and direct analysis of DNA methylation in hundreds of thousands of sites across the genome. MSAP-Seq provides direct genomic localization of changes and enables quantitative evaluation. We have shown that the MSAP-Seq method specifically targets gene-containing regions and that a single analysis can cover three-quarters of all genes in large genomes. Moreover, MSAP-Seq's simplicity, cost effectiveness, and high-multiplexing capability make this method highly affordable. 
Therefore, MSAP-Seq can be used for DNA methylation analysis in crop plants with large and complex genomes. PMID:29250096
NASA Astrophysics Data System (ADS)
Jiang, Cheng-Yong; Sun, Shi-Hao; Zhang, Qi-Dong; Liu, Jun-Hui; Zhang, Jian-Xun; Zong, Yong-Li; Xie, Jian-Ping
2013-03-01
A method with atmospheric pressure chemical ionization tandem mass spectrometry (APCI-MS/MS) was developed and applied to direct analysis of Environmental Tobacco Smoke (ETS), using 3-ethenylpyridine (3-EP) as a vapour-phase marker. In this study, the ion source of the APCI-MS/MS was modified and direct analysis of gas samples was achieved by the modified instrument. ETS samples were directly introduced, via an atmospheric pressure inlet, into the APCI source. Ionization was carried out in positive-ion APCI mode and 3-EP was identified by both full scan mode and daughter scan mode. Quantification of 3-EP was performed by multiple reaction monitoring (MRM) mode. The calibration curve was obtained in the range of 1-250 ng L-1 with a satisfactory regression coefficient of 0.999. The limit of detection (LOD) and the limit of quantification (LOQ) were 0.5 ng L-1 and 1.6 ng L-1, respectively. The precision of the method, calculated as relative standard deviation (RSD), was characterized by repeatability (RSD 3.92%) and reproducibility (RSD 4.81%). In the analysis of real-world ETS samples, the direct APCI-MS/MS method showed better reliability and practicability than the conventional GC-MS method in the determination of 3-EP at trace levels. The developed method is simple, fast, sensitive and repeatable; furthermore, it could provide an alternative way for the determination of other volatile pollutants in ambient air at low levels.
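The calibration figures quoted above (regression coefficient, LOD, LOQ) follow from a standard least-squares calibration line. The sketch below uses the common IUPAC-style estimates LOD = 3.3 s/slope and LOQ = 10 s/slope with hypothetical signal data; it is a textbook recipe, not necessarily how the cited study derived its 0.5 and 1.6 ng L-1 figures.

```python
import numpy as np

def calibration_stats(conc, signal):
    """Ordinary least-squares calibration line with IUPAC-style limit
    estimates: LOD = 3.3*s/slope, LOQ = 10*s/slope, where s is the
    residual standard deviation of the fit."""
    A = np.vstack([conc, np.ones_like(conc)]).T
    (slope, intercept), *_ = np.linalg.lstsq(A, signal, rcond=None)
    resid = signal - (slope * conc + intercept)
    s = np.sqrt(np.sum(resid**2) / (len(conc) - 2))
    r = np.corrcoef(conc, signal)[0, 1]
    return slope, intercept, r, 3.3 * s / slope, 10.0 * s / slope

# hypothetical standards spanning the reported 1-250 ng L-1 range
conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 250.0])       # ng L-1
rng = np.random.default_rng(1)
signal = 40.0 * conc + 12.0 + rng.normal(0.0, 5.0, conc.size)  # simulated MRM response
slope, intercept, r, lod, loq = calibration_stats(conc, signal)
```

With a tight calibration line the correlation coefficient approaches 1 and the LOD sits below the LOQ, mirroring the pattern of figures reported in the abstract.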
Arnold, David; Girling, Alan; Stevens, Andrew; Lilford, Richard
2009-07-22
Utilities (values representing preferences) for healthcare priority setting are typically obtained indirectly by asking patients to fill in a quality of life questionnaire and then converting the results to a utility using population values. We compared such utilities with those obtained directly from patients or the public. Review of studies providing both a direct and an indirect utility estimate. Papers reporting comparisons of utilities obtained directly (standard gamble or time tradeoff) or indirectly (European quality of life 5D [EQ-5D], short form 6D [SF-6D], or health utilities index [HUI]) from the same patient. PubMed and Tufts database of utilities. Sign test for paired comparisons between direct and indirect utilities; least squares regression to describe average relations between the different methods. Mean utility scores (or median if means unavailable) for each method, and differences in mean (median) scores between direct and indirect methods. We found 32 studies yielding 83 instances where direct and indirect methods could be compared for health states experienced by adults. The direct methods used were standard gamble in 57 cases and time tradeoff in 60 (34 used both); the indirect methods were EQ-5D (67 cases), SF-6D (13), HUI-2 (5), and HUI-3 (37). Mean utility values were 0.81 (standard gamble) and 0.77 (time tradeoff) for the direct methods; for the indirect methods they were 0.59 (EQ-5D), 0.63 (SF-6D), 0.75 (HUI-2), and 0.68 (HUI-3). Direct methods of estimating utilities tend to result in higher health ratings than the more widely used indirect methods, and the difference can be substantial. Use of indirect methods could have important implications for decisions about resource allocation: for example, non-lifesaving treatments are relatively more favoured in comparison with lifesaving interventions than when using direct methods.
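The paired comparison named above (a sign test between direct and indirect utilities) can be sketched as an exact two-sided binomial test on the signs of the direct-minus-indirect differences. The differences below are hypothetical, chosen only to show direct ratings running higher.

```python
from math import comb

def sign_test_p(diffs):
    """Exact two-sided sign test for paired differences (ties dropped):
    under H0 the signs are Binomial(n, 0.5), so
    p = 2 * P(X <= min(#pos, #neg)), capped at 1."""
    pos = sum(d > 0 for d in diffs)
    neg = sum(d < 0 for d in diffs)
    n, k = pos + neg, min(pos, neg)
    p = sum(comb(n, i) for i in range(k + 1)) / 2 ** (n - 1)
    return min(p, 1.0)

# hypothetical paired utilities: direct (standard gamble) minus indirect (EQ-5D)
diffs = [0.22, 0.14, 0.18, 0.05, -0.02, 0.11, 0.09, 0.20, 0.13, 0.07]
p = sign_test_p(diffs)
```

With 9 of 10 differences positive, the exact two-sided p-value is 11/512, about 0.021, so the excess of direct over indirect ratings would be judged significant at the usual 5% level.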
Tewfik, Ihab
2008-01-01
2-Alkylcyclobutanones (cyclobutanones) are accepted as chemical markers for irradiated foods containing lipid. However, current extraction procedures (Soxhlet-florisil chromatography) for the isolation of these markers involve a long and tedious clean-up regime prior to gas chromatography-mass spectrometry identification. This paper outlines an alternative isolation and clean-up method for the extraction of cyclobutanones in irradiated Camembert cheese. The newly developed direct solvent extraction method enables the efficient screening of large numbers of food samples and is not as resource intensive as the BS EN 1785:1997 method. Direct solvent extraction appears to be a simple, robust method and has the added advantage of a considerably shorter extraction time for the analysis of foods containing lipid.
Solving complex photocycle kinetics. Theory and direct method.
Nagle, J F
1991-01-01
A direct nonlinear least squares method is described that obtains the true kinetic rate constants and the temperature-independent spectra of n intermediates from spectroscopic data taken in the visible at three or more temperatures. A theoretical analysis, which is independent of implementation of the direct method, proves that well determined local solutions are not possible for fewer than three temperatures. This analysis also proves that measurements at more than n wavelengths are redundant, although the direct method indicates that convergence is faster if n + m wavelengths are measured, where m is of order one. This suggests that measurements should concentrate on high precision for a few measuring wavelengths, rather than lower precision for many wavelengths. Globally, false solutions occur, and the ability to reject these depends upon the precision of the data, as shown by explicit example. An optimized way to analyze vibrational spectroscopic data is also presented. Such data yield unique results, which are comparably accurate to those obtained from data taken in the visible with comparable noise. It is discussed how use of both kinds of data is advantageous if the data taken in the visible are significantly less noisy. PMID:2009362
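A toy version of such a direct fit, for a hypothetical two-intermediate sequential photocycle measured at two wavelengths, can be sketched with separable (variable-projection style) least squares over a rate-constant grid. The kinetics scheme, spectra, and rate constants are all illustrative assumptions, not the paper's model.

```python
import numpy as np

def concentrations(t, k1, k2):
    """Sequential photocycle A -(k1)-> B -(k2)-> C: analytic concentrations
    of the two spectrally active intermediates A and B."""
    a = np.exp(-k1 * t)
    b = k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
    return np.vstack([a, b])

def direct_fit(t, data, k_grid):
    """Direct fit: for each candidate pair of rate constants the
    intermediate spectra enter linearly and are solved by ordinary least
    squares; the pair with the smallest residual wins."""
    best = (np.inf, None, None)
    for k1 in k_grid:
        for k2 in k_grid:
            if k1 == k2:
                continue
            C = concentrations(t, k1, k2)                    # 2 x n_times
            S = np.linalg.lstsq(C.T, data.T, rcond=None)[0]  # 2 x n_wavelengths
            r = np.linalg.norm(data - S.T @ C)
            if r < best[0]:
                best = (r, (k1, k2), S.T)
    return best[1], best[2]

t = np.linspace(0.01, 5.0, 60)
true_spectra = np.array([[1.0, 0.4], [0.2, 1.0]])   # hypothetical spectra, 2 wavelengths
data = true_spectra @ concentrations(t, 2.0, 0.5)   # noiseless synthetic measurements
(k1, k2), spectra = direct_fit(t, data, np.round(np.arange(0.1, 3.05, 0.1), 2))
```

With noiseless two-exponential data the two orderings of the rate constants fit equally well, so only the unordered pair {0.5, 2.0} is identifiable from a single data set; this is a small concrete instance of the false solutions the abstract warns about, which higher-precision data (or, as proposed, multiple temperatures) help reject.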
System and method for chromatography and electrophoresis using circular optical scanning
Balch, Joseph W.; Brewer, Laurence R.; Davidson, James C.; Kimbrough, Joseph R.
2001-01-01
A system and method are disclosed for chromatography and electrophoresis using circular optical scanning. One or more rectangular microchannel plates or radial microchannel plates have a set of analysis channels for insertion of molecular samples. One or more scanning devices repeatedly pass over the analysis channels in one direction at a predetermined rotational velocity and with a predetermined rotational radius. The rotational radius may be dynamically varied so as to monitor the molecular sample at various positions along an analysis channel. Sample loading robots may also be used to input molecular samples into the analysis channels. Radial microchannel plates are built from a substrate whose analysis channels are disposed at a non-parallel angle with respect to each other. A first step in the method accesses either a rectangular or radial microchannel plate, having a set of analysis channels, and a second step passes a scanning device repeatedly in one direction over the analysis channels. As a third step, the scanning device is passed over the analysis channels at dynamically varying distances from a centerpoint of the scanning device. As a fourth step, molecular samples are loaded into the analysis channels with a robot.
NASA Astrophysics Data System (ADS)
Jin, Dakai; Lu, Jia; Zhang, Xiaoliu; Chen, Cheng; Bai, ErWei; Saha, Punam K.
2017-03-01
Osteoporosis is associated with increased fracture risk. Recent advancement in the area of in vivo imaging allows segmentation of trabecular bone (TB) microstructures, which is a known key determinant of bone strength and fracture risk. An accurate biomechanical modelling of TB micro-architecture provides a comprehensive summary measure of bone strength and fracture risk. In this paper, a new direct TB biomechanical modelling method using nonlinear manifold-based volumetric reconstruction of trabecular network is presented. It is accomplished in two sequential modules. The first module reconstructs a nonlinear manifold-based volumetric representation of TB networks from three-dimensional digital images. Specifically, it starts with the fuzzy digital segmentation of a TB network, and computes its surface and curve skeletons. An individual trabecula is identified as a topological segment in the curve skeleton. Using geometric analysis, smoothing and optimization techniques, the algorithm generates smooth, curved, and continuous representations of individual trabeculae glued at their junctions. Also, the method generates a geometrically consistent TB volume at junctions. In the second module, a direct computational biomechanical stress-strain analysis is applied on the reconstructed TB volume to predict mechanical measures. The accuracy of the method was examined using micro-CT imaging of cadaveric distal tibia specimens (N = 12). A high linear correlation (r = 0.95) between TB volume computed using the new manifold-modelling algorithm and that directly derived from the voxel-based micro-CT images was observed. Young's modulus (YM) was computed using direct mechanical analysis on the TB manifold-model over a cubical volume of interest (VOI), and its correlation with the YM, computed using micro-CT based conventional finite-element analysis over the same VOI, was examined. A moderate linear correlation (r = 0.77) was observed between the two YM measures. 
These preliminary results show the accuracy of the new nonlinear manifold modelling algorithm for TB and demonstrate the feasibility of a new direct mechanical stress-strain analysis on a nonlinear manifold model of a highly complex biological structure.
Psychometric Properties of a Korean Measure of Person-Directed Care in Nursing Homes
ERIC Educational Resources Information Center
Choi, Jae-Sung; Lee, Minhong
2014-01-01
Objective: This study examined the validity and reliability of a person-directed care (PDC) measure for nursing homes in Korea. Method: Managerial personnel from 223 nursing homes in 2010 and 239 in 2012 were surveyed. Results: Item analysis and exploratory factor analysis for the first sample generated a 33-item PDC measure with eight factors.…
Almukhtar, Anas; Khambay, Balvinder; Ayoub, Ashraf; Ju, Xiangyang; Al-Hiyali, Ali; Macdonald, James; Jabar, Norhayati; Goto, Tazuko
2015-01-01
The limitations of current methods for quantifying the surgical movements of facial bones inspired this study. The aim of this study was to assess the accuracy and reproducibility of direct landmarking of 3D DICOM (Digital Imaging and Communications in Medicine) images to quantify changes in the jaw bones following surgery. The study was carried out on a plastic skull to simulate the surgical movements of the jaw bones. Cone beam CT scans were taken at 3 mm, 6 mm and 9 mm of maxillary advancement, together with 2 mm, 4 mm, 6 mm and 8 mm "down grafts", generating in total 12 different positions of the maxilla for the analysis. The movements of the maxilla were calculated using two methods: the standard approach, in which distances between surface landmarks on the jaw bones were measured, and the novel approach, in which measurements were taken directly from the internal structures of the corresponding 3D DICOM slices. A one-sample t-test showed no statistically significant difference between the two methods of measurement in the y and z directions; however, the x direction showed a significant difference. The mean differences between the two absolute measurements were 0.34±0.20 mm, 0.22±0.16 mm and 0.18±0.13 mm in the y, z and x directions, respectively. In conclusion, direct landmarking of 3D DICOM image slices is a reliable, reproducible and informative method for assessment of 3D skeletal changes. The method has a clear clinical application in the analysis of jaw movements during orthognathic surgery for the correction of facial deformities.
Selective Enrichment and Direct Analysis of Protein S-Palmitoylation Sites.
Thinon, Emmanuelle; Fernandez, Joseph P; Molina, Henrik; Hang, Howard C
2018-05-04
S-Fatty-acylation is the covalent attachment of long chain fatty acids, predominately palmitate (C16:0, S-palmitoylation), to cysteine (Cys) residues via a thioester linkage on proteins. This post-translational and reversible lipid modification regulates protein function and localization in eukaryotes and is important in mammalian physiology and human diseases. While chemical labeling methods have improved the detection and enrichment of S-fatty-acylated proteins, mapping sites of modification and characterizing the endogenously attached fatty acids are still challenging. Here, we describe the integration and optimization of fatty acid chemical reporter labeling with hydroxylamine-mediated enrichment of S-fatty-acylated proteins and direct tagging of modified Cys residues to selectively map lipid modification sites. This afforded improved enrichment and direct identification of many protein S-fatty-acylation sites compared to previously described methods. Notably, we directly identified the S-fatty-acylation sites of IFITM3, an important interferon-stimulated inhibitor of virus entry, and we further demonstrated that the highly conserved Cys residues are primarily modified by palmitic acid. The methods described here should facilitate the direct analysis of protein S-fatty-acylation sites and their endogenously attached fatty acids in diverse cell types and activation states important for mammalian physiology and diseases.
How memory of direct animal interactions can lead to territorial pattern formation.
Potts, Jonathan R; Lewis, Mark A
2016-05-01
Mechanistic home range analysis (MHRA) is a highly effective tool for understanding spacing patterns of animal populations. It has hitherto focused on populations where animals defend their territories by communicating indirectly, e.g. via scent marks. However, many animal populations defend their territories using direct interactions, such as ritualized aggression. To enable application of MHRA to such populations, we construct a model of direct territorial interactions, using linear stability analysis and energy methods to understand when territorial patterns may form. We show that spatial memory of past interactions is vital for pattern formation, as is memory of 'safe' places, where the animal has visited but not suffered recent territorial encounters. Additionally, the spatial range over which animals make decisions to move is key to understanding the size and shape of their resulting territories. Analysis using energy methods, on a simplified version of our system, shows that stability in the nonlinear system corresponds well to predictions of linear analysis. We also uncover a hysteresis in the process of territory formation, so that formation may depend crucially on initial space-use. Our analysis, in one dimension and two dimensions, provides mathematical groundwork required for extending MHRA to situations where territories are defended by direct encounters. © 2016 The Author(s).
Grimes, D.J.; Marranzino, A.P.
1968-01-01
Two spectrographic methods are used in mobile field laboratories of the U. S. Geological Survey. In the direct-current arc method, the ground sample is mixed with graphite powder, packed into an electrode crater, and burned to completion. Thirty elements are determined. In the spark method, the sample, ground to pass a 150-mesh screen, is digested in hydrofluoric acid followed by evaporation to dryness and dissolution in aqua regia. The solution is fed into the spark gap by means of a rotating-disk electrode arrangement and is excited with an alternating-current spark discharge. Fourteen elements are determined. In both techniques, light is recorded on Spectrum Analysis No. 1, 35-millimeter film, and the spectra are compared visually with those of standard films.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Jordana R.; Gill, Gary A.; Kuo, Li-Jung
2016-04-20
Trace element determinations in seawater by inductively coupled plasma mass spectrometry are analytically challenging due to the typically very low concentrations of the trace elements and the potential interference of the salt matrix. In this study, we compared uranium analysis by inductively coupled plasma mass spectrometry (ICP-MS) of Sequim Bay seawater samples and three seawater certified reference materials (SLEW-3, CASS-5 and NASS-6) using seven different analytical approaches. The methods evaluated include: direct analysis, Fe/Pd reductive precipitation, standard addition calibration, online automated dilution using an external calibration with and without matrix matching, and online automated pre-concentration. The method which produced the most accurate results was standard addition calibration, recovering uranium from a Sequim Bay seawater sample at 101 ± 1.2%. The on-line preconcentration method and the automated dilution with matrix-matched calibration method also performed well. The two least effective methods were direct analysis and Fe/Pd reductive precipitation using sodium borohydride.
Lizier, Joseph T; Heinzle, Jakob; Horstmann, Annette; Haynes, John-Dylan; Prokopenko, Mikhail
2011-02-01
The human brain undertakes highly sophisticated information processing facilitated by the interaction between its sub-regions. We present a novel method for interregional connectivity analysis, using multivariate extensions to the mutual information and transfer entropy. The method allows us to identify the underlying directed information structure between brain regions, and how that structure changes according to behavioral conditions. This method is distinguished in using asymmetric, multivariate, information-theoretical analysis, which captures not only directional and non-linear relationships, but also collective interactions. Importantly, the method is able to estimate multivariate information measures with relatively little data. We demonstrate the method by analyzing functional magnetic resonance imaging time series to establish the directed information structure between brain regions involved in a visuo-motor tracking task. Notably, this results in a tiered structure, with known movement planning regions driving visual and motor control regions. We also examine the changes in this structure as the difficulty of the tracking task is increased. We find that task difficulty modulates the coupling strength between regions of a cortical network involved in movement planning, and between motor cortex and the cerebellum, which is involved in the fine-tuning of motor control. It is likely these methods will find utility in identifying interregional structure (and experimentally induced changes in this structure) in other cognitive tasks and data modalities.
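The directed measure underlying this kind of analysis, transfer entropy, can be illustrated with a minimal sketch. This is not the authors' multivariate fMRI estimator; it is a plug-in estimate for two binary time series with history length 1, and the lagged-copy example data are an assumption chosen so the direction of information flow is known:

```python
import numpy as np
from collections import Counter

def transfer_entropy(src, dst):
    """Plug-in transfer entropy (bits) from src to dst, history length 1:
    TE = sum p(d_next, d, s) * log2[ p(d_next | d, s) / p(d_next | d) ]."""
    triples = Counter(zip(dst[1:], dst[:-1], src[:-1]))
    pairs = Counter(zip(dst[1:], dst[:-1]))
    hist = Counter(zip(dst[:-1], src[:-1]))
    marg = Counter(dst[:-1])
    n = len(dst) - 1
    te = 0.0
    for (dn, dp, sp), c in triples.items():
        p_joint = c / n
        p_cond_full = c / hist[(dp, sp)]          # p(d_next | d, s)
        p_cond_hist = pairs[(dn, dp)] / marg[dp]  # p(d_next | d)
        te += p_joint * np.log2(p_cond_full / p_cond_hist)
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1)                 # y copies x with a one-step lag
te_xy = transfer_entropy(x, y)    # close to 1 bit: x fully predicts y
te_yx = transfer_entropy(y, x)    # close to 0: x is i.i.d.
```

The asymmetry te_xy ≫ te_yx is what makes the measure directional, in contrast to the symmetric mutual information.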
Chen, Yi; Pouillot, Régis; S Burall, Laurel; Strain, Errol A; Van Doren, Jane M; De Jesus, Antonio J; Laasri, Anna; Wang, Hua; Ali, Laila; Tatavarthy, Aparna; Zhang, Guodong; Hu, Lijun; Day, James; Sheth, Ishani; Kang, Jihun; Sahu, Surasri; Srinivasan, Devayani; Brown, Eric W; Parish, Mickey; Zink, Donald L; Datta, Atin R; Hammack, Thomas S; Macarisin, Dumitru
2017-01-16
A precise and accurate method for enumeration of low levels of Listeria monocytogenes in foods is critical to a variety of studies. In this study, a paired comparison of most probable number (MPN) and direct plating enumeration of L. monocytogenes was conducted on a total of 1730 outbreak-associated ice cream samples that were naturally contaminated with low levels of L. monocytogenes. MPN was performed on all 1730 samples. Direct plating was performed on all samples using the RAPID'L.mono (RLM) agar (1600 samples) and agar Listeria Ottaviani and Agosti (ALOA; 130 samples). A probabilistic analysis with a Bayesian inference model was used to compare paired direct plating and MPN estimates of L. monocytogenes in ice cream samples, because assumptions implicit in ordinary least squares (OLS) linear regression were not met for such a comparison. The probabilistic analysis revealed good agreement between the MPN and direct plating estimates, showing that the MPN schemes and the direct plating schemes using ALOA or RLM evaluated in the present study are suitable for enumerating low levels of L. monocytogenes in these ice cream samples. The statistical analysis further revealed that OLS linear regression analyses of direct plating and MPN data did introduce bias that incorrectly characterized systematic differences between estimates from the two methods. Published by Elsevier B.V.
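The MPN estimate referred to above is, in essence, a maximum-likelihood calculation over a serial-dilution series. As a hedged sketch (not the study's protocol; the dilution volumes, tube counts and positives below are hypothetical), the point estimate can be found by a grid search over the binomial likelihood:

```python
import math

def mpn_estimate(volumes_g, tubes, positives):
    """Most probable number per gram: maximize the likelihood where
    P(tube positive | concentration c, inoculum volume v) = 1 - exp(-c*v)."""
    def log_lik(c):
        ll = 0.0
        for v, n, p in zip(volumes_g, tubes, positives):
            q = 1.0 - math.exp(-c * v)     # P(at least one organism in tube)
            ll += p * math.log(q) if p else 0.0
            ll += (n - p) * (-c * v)       # log P(negative) = -c*v
        return ll
    # logarithmic grid from 0.001 to 1000 organisms per gram
    grid = [10 ** (k / 200.0) for k in range(-600, 601)]
    return max(grid, key=log_lik)

# hypothetical 3-dilution series: 10 g, 1 g, 0.1 g portions, 5 tubes each,
# with 5, 3 and 0 positive tubes respectively
est = mpn_estimate([10.0, 1.0, 0.1], [5, 5, 5], [5, 3, 0])
```

For this classic 5-3-0 pattern the estimate lands near 0.79 organisms per gram, consistent with standard MPN tables.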
Virtual directions in paleomagnetism: A global and rapid approach to evaluate the NRM components.
NASA Astrophysics Data System (ADS)
Ramón, Maria J.; Pueyo, Emilio L.; Oliva-Urcia, Belén; Larrasoaña, Juan C.
2017-02-01
We introduce a method and software to process demagnetization data for a rapid and integrative estimation of characteristic remanent magnetization (ChRM) components. The virtual directions (VIDI) of a paleomagnetic site are "all" possible directions that can be calculated from a given demagnetization routine of "n" steps (with m the number of specimens at the site). If the ChRM can be defined for a site, it will be represented in the VIDI set. Directions can be calculated for successive steps using principal component analysis, both anchored to the origin (resultant virtual directions, RVD; m * (n²+n)/2) and not anchored (difference virtual directions, DVD; m * (n²-n)/2). The number of directions per specimen (of order n²) is very large and will enhance all ChRM components, with noisy regions where two components were fitted together (mixing their unblocking intervals). In the same way, resultant and difference virtual circles (RVC, DVC) are calculated. Virtual directions and circles are a global and objective approach to unravel the different natural remanent magnetization (NRM) components of a paleomagnetic site without any assumption. To better constrain the stable components, some filters can be applied, such as establishing an upper boundary on the MAD, removing samples with anomalous intensities, stating a minimum number of demagnetization steps (objective filters), or selecting a given unblocking interval (subjective, but based on expertise). In addition, the VPD program also allows the application of standard approaches (classic PCA fitting of directions and circles) and other ancillary methods (stacking routine, linearity spectrum analysis), giving an objective, global and robust idea of the demagnetization structure with minimal assumptions.
Application of the VIDI method to natural cases (outcrops in the Pyrenees and u-channel data from a Roman dam infill in northern Spain) and their comparison to other approaches (classic end-point, demagnetization circle analysis, stacking routine and linearity spectrum analysis) allows validation of this technique. The VIDI is a global approach and it is especially useful for large data sets and rapid estimation of the NRM components.
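A minimal sketch of the virtual-directions idea, under the assumption that each direction is the first principal axis of an SVD-based PCA fit over a step interval (synthetic demagnetization data, not the VPD program itself):

```python
import numpy as np

def virtual_directions(demag):
    """demag: (n, 3) array of remanence vectors over n demagnetization steps.
    Returns anchored (RVD) and free (DVD) principal directions for every
    step interval, giving (n²+n)/2 and (n²-n)/2 directions per specimen."""
    n = len(demag)
    rvd, dvd = [], []
    for i in range(n):
        for j in range(i, n):
            seg = demag[i:j + 1]
            # anchored fit: include the origin, do not remove the mean
            _, _, vt = np.linalg.svd(np.vstack([seg, np.zeros(3)]))
            rvd.append(vt[0])
            if j > i:  # free (difference) fit needs at least two steps
                _, _, vt = np.linalg.svd(seg - seg.mean(axis=0))
                dvd.append(vt[0])
    return np.array(rvd), np.array(dvd)

# synthetic single-component decay along a fixed direction, plus noise
rng = np.random.default_rng(1)
n = 8
decay = np.linspace(1.0, 0.0, n)[:, None] * np.array([1.0, 0.5, 0.2])
demag = decay + 0.01 * rng.standard_normal((n, 3))
rvd, dvd = virtual_directions(demag)
```

With a single stable component, nearly every virtual direction clusters around the true ChRM, which is the density enhancement the method exploits.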
Coarse graining for synchronization in directed networks
NASA Astrophysics Data System (ADS)
Zeng, An; Lü, Linyuan
2011-05-01
Coarse graining is a promising way to analyze and visualize large-scale networks. Coarse-grained networks are required to preserve the statistical properties as well as the dynamic behaviors of the initial networks. Several methods have been proposed and found effective for undirected networks, while coarse graining of directed networks has received little attention. In this paper we propose a path-based coarse-graining (PCG) method to coarse grain directed networks. Performing linear stability analysis of synchronization and numerical simulation of the Kuramoto model on four kinds of directed networks, including tree networks and variants of Barabási-Albert networks, Watts-Strogatz networks, and Erdös-Rényi networks, we find that our method effectively preserves the network synchronizability.
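The synchronization check mentioned above can be sketched as follows. This is an illustrative Euler integration of the Kuramoto model on a directed network, not the PCG implementation; the all-to-all example network and identical natural frequencies are assumptions chosen so that synchronization is guaranteed:

```python
import numpy as np

def kuramoto_order(adj, omega, coupling, dt=0.01, steps=20000, seed=2):
    """Euler-integrate dtheta_i/dt = omega_i + K * sum_j A[j,i] sin(theta_j - theta_i)
    on a directed network and return the final order parameter r = |mean(exp(i*theta))|."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, len(omega))
    for _ in range(steps):
        # diff[j, i] = sin(theta_j - theta_i); node i is driven by in-neighbours j
        diff = np.sin(theta[:, None] - theta[None, :])
        theta = theta + dt * (omega + coupling * (adj * diff).sum(axis=0))
    return abs(np.exp(1j * theta).mean())

N = 10
adj = np.ones((N, N)) - np.eye(N)   # complete directed graph (all-to-all)
r = kuramoto_order(adj, np.zeros(N), coupling=1.0 / N)
```

With identical frequencies and global coupling the phases lock, so r approaches 1; comparing r (or the linear stability spectrum) before and after coarse graining is the kind of test the paper performs.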
Generalized fourier analyses of the advection-diffusion equation - Part II: two-dimensional domains
NASA Astrophysics Data System (ADS)
Voth, Thomas E.; Martinez, Mario J.; Christon, Mark A.
2004-07-01
Part I of this work presents a detailed multi-methods comparison of the spatial errors associated with the one-dimensional finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. In Part II we extend the analysis to two-dimensional domains and also consider the effects of wave propagation direction and grid aspect ratio on the phase speed, and the discrete and artificial diffusivities. The observed dependence of dispersive and diffusive behaviour on propagation direction makes comparison of methods more difficult relative to the one-dimensional results. For this reason, integrated (over propagation direction and wave number) error and anisotropy metrics are introduced to facilitate comparison among the various methods. With respect to these metrics, the consistent mass Galerkin and consistent mass control-volume finite element methods, and their streamline upwind derivatives, exhibit comparable accuracy, and generally out-perform their lumped mass counterparts and finite-difference based schemes. While this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common mathematical framework. Published in 2004 by John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Koenig, Herbert A.; Chan, Kwai S.; Cassenti, Brice N.; Weber, Richard
1988-01-01
A unified numerical method for the integration of stiff time dependent constitutive equations is presented. The solution process is directly applied to a constitutive model proposed by Bodner. The theory confronts time dependent inelastic behavior coupled with both isotropic hardening and directional hardening behaviors. Predicted stress-strain responses from this model are compared to experimental data from cyclic tests on uniaxial specimens. An algorithm is developed for the efficient integration of the Bodner flow equation. A comparison is made with the Euler integration method. An analysis of computational time is presented for the three algorithms.
Geometrical characterization of perlite-metal syntactic foam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borovinšek, Matej, E-mail: matej.borovinsek@um.si
This paper introduces an improved method for the detailed geometrical characterization of perlite-metal syntactic foam. This novel metallic foam is created by infiltrating a packed bed of expanded perlite particles with liquid aluminium alloy. The geometry of the solidified metal is thus defined by the perlite particle shape, size and morphology. The method is based on segmented micro-computed tomography data and allows for automated determination of the distributions of pore size, sphericity, orientation and location. The pore (i.e. particle) size distribution and pore orientation is determined by a multi-criteria k-nearest neighbour algorithm for pore identification. The results indicate a weak density gradient parallel to the casting direction and a slight preference of particle orientation perpendicular to the casting direction. - Highlights: •A new method for identification of pores in porous materials was developed. •It was applied on perlite-metal syntactic foam samples. •A porosity decrease in the axial direction of the samples was determined. •Pore shape analysis showed a high percentage of spherical pores. •Orientation analysis showed that more pores are oriented in the radial direction.
Duyvejonck, Hans; Cools, Piet; Decruyenaere, Johan; Roelens, Kristien; Noens, Lucien; Vermeulen, Stefan; Claeys, Geert; Decat, Ellen; Van Mechelen, Els; Vaneechoutte, Mario
2015-01-01
Candida species are known as opportunistic pathogens, and a possible cause of invasive infections. Because of their species-specific antimycotic resistance patterns, reliable techniques for their detection, quantification and identification are needed. We validated a DNA amplification method for direct detection of Candida spp. from clinical samples, namely ITS2-High Resolution Melting Analysis (direct method), by comparing it with a culture and MALDI-TOF mass spectrometry based method (indirect method) to establish the presence of Candida species in three different types of clinical samples. A total of 347 clinical samples, i.e. throat swabs, rectal swabs and vaginal swabs, were collected from the gynaecology/obstetrics, intensive care and haematology wards at the Ghent University Hospital, Belgium. For the direct method, ITS2-HRM was preceded by NucliSENS easyMAG DNA extraction, directly on the clinical samples. For the indirect method, clinical samples were cultured on Candida ID and individual colonies were identified by MALDI-TOF. For 83.9% of the samples there was complete concordance between both techniques, i.e. the same Candida species were detected in 31.1% of the samples or no Candida species were detected in 52.8% of the samples. In 16.1% of the clinical samples, discrepant results were obtained, of which only 6.01% were considered major discrepancies. Discrepancies occurred mostly when overall numbers of Candida cells in the samples were low and/or when multiple species were present in the sample. Most of the discrepancies could be resolved in favour of the direct method. This is due to samples in which no yeast could be cultured whereas low amounts could be detected by the direct method, and to samples in which high quantities of Candida robusta according to ITS2-HRM were missed by culture on Candida ID agar. It remains to be decided whether the diagnostic advantages of the direct method compensate for its disadvantages.
Good practices for quantitative bias analysis.
Lash, Timothy L; Fox, Matthew P; MacLehose, Richard F; Maldonado, George; McCandless, Lawrence C; Greenland, Sander
2014-12-01
Quantitative bias analysis serves several objectives in epidemiological research. First, it provides a quantitative estimate of the direction, magnitude and uncertainty arising from systematic errors. Second, the acts of identifying sources of systematic error, writing down models to quantify them, assigning values to the bias parameters and interpreting the results combat the human tendency towards overconfidence in research results, syntheses and critiques and the inferences that rest upon them. Finally, by suggesting aspects that dominate uncertainty in a particular research result or topic area, bias analysis can guide efficient allocation of sparse research resources. The fundamental methods of bias analyses have been known for decades, and there have been calls for more widespread use for nearly as long. There was a time when some believed that bias analyses were rarely undertaken because the methods were not widely known and because automated computing tools were not readily available to implement the methods. These shortcomings have been largely resolved. We must, therefore, contemplate other barriers to implementation. One possibility is that practitioners avoid the analyses because they lack confidence in the practice of bias analysis. The purpose of this paper is therefore to describe what we view as good practices for applying quantitative bias analysis to epidemiological data, directed towards those familiar with the methods. We focus on answering questions often posed to those of us who advocate incorporation of bias analysis methods into teaching and research. These include the following. When is bias analysis practical and productive? How does one select the biases that ought to be addressed? How does one select a method to model biases? How does one assign values to the parameters of a bias model? How does one present and interpret a bias analysis?
We hope that our guide to good practices for conducting and presenting bias analyses will encourage more widespread use of bias analysis to estimate the potential magnitude and direction of biases, as well as the uncertainty in estimates potentially influenced by the biases. © The Author 2014; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
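As a concrete, minimal example of the kind of analysis discussed (a sketch with hypothetical counts, not drawn from any study cited here), a simple quantitative bias analysis for nondifferential exposure misclassification corrects the observed 2×2 table using assumed sensitivity and specificity of exposure classification:

```python
def misclassification_bias_adjust(a, b, n_cases, n_controls, se, sp):
    """Correct observed exposed counts (a cases, b controls) for
    nondifferential exposure misclassification with sensitivity `se`
    and specificity `sp`; return (crude OR, bias-adjusted OR)."""
    crude = (a / (n_cases - a)) / (b / (n_controls - b))
    # standard back-calculation: A = (a - (1-sp)*N) / (se - (1-sp))
    A = (a - (1 - sp) * n_cases) / (se - (1 - sp))       # true exposed cases
    B = (b - (1 - sp) * n_controls) / (se - (1 - sp))    # true exposed controls
    adjusted = (A / (n_cases - A)) / (B / (n_controls - B))
    return crude, adjusted

# hypothetical case-control data: 200/1000 cases and 100/1000 controls
# classified exposed, with assumed se = 0.90 and sp = 0.95
crude_or, adj_or = misclassification_bias_adjust(
    a=200, b=100, n_cases=1000, n_controls=1000, se=0.90, sp=0.95)
```

Here the bias-adjusted odds ratio (about 3.43) is farther from the null than the crude one (2.25), illustrating the familiar attenuation caused by nondifferential misclassification.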
3. Evaluation of unstable lands for interagency watershed analysis
Leslie M. Reid; Robert R. Ziemer; Mark E. Smith; Colin Close
1994-01-01
Abstract - Although methods for evaluating landslide rates and distributions are well developed, much less attention has been paid to evaluating the biological and physical role of landsliding. New directions in land management on Federal lands of the Pacific Northwest now require such evaluations for designing Riparian Reserves. Traditional analysis methods are no...
USDA-ARS?s Scientific Manuscript database
Segmentation is the first step in image analysis to subdivide an image into meaningful regions. The segmentation result directly affects the subsequent image analysis. The objective of the research was to develop an automatic adjustable algorithm for segmentation of color images, using linear suppor...
NASA Astrophysics Data System (ADS)
Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Zakharov, Valery P.; Khramov, Alexander G.
2016-04-01
Optical coherence tomography (OCT) is usually employed for the measurement of tumor topology, which reflects structural changes of a tissue. We investigated the potential of OCT for detecting such changes using computer texture analysis based on Haralick texture features, fractal dimension, and the complex directional field method, applied to different tissues. These features were used to identify spatial characteristics that differentiate healthy tissue from various skin cancers in cross-sectional OCT images (B-scans). Speckle reduction is an important pre-processing stage for OCT image processing; here, an interval type-II fuzzy anisotropic diffusion algorithm was used for speckle noise reduction in OCT images. The Haralick texture feature set includes contrast, correlation, energy, and homogeneity, evaluated in different directions. A box-counting method is applied to compute the fractal dimension of the investigated tissues. Additionally, we used the complex directional field, calculated by the local gradient methodology, to increase the quality of the diagnostic method. The complex directional field (as well as the "classical" directional field) can help describe an image as a set of directions. Given that malignant tissue grows anisotropically, principal grooves may be observed on dermoscopic images, implying the possible existence of principal directions in OCT images. Our results suggest that the described texture features may provide useful information to differentiate pathological from healthy patients. The problem of distinguishing melanoma from nevi is addressed in this work thanks to the large quantity of experimental data (143 OCT images, including tumors such as basal cell carcinoma (BCC) and malignant melanoma (MM), as well as nevi). We achieved a sensitivity of about 90% and a specificity of about 85%. Further research is warranted to determine how this approach may be used to select regions of interest automatically.
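The box-counting step mentioned above can be sketched in a few lines. This is an illustrative implementation, not the authors' code; the filled-square sanity check is an assumption chosen because its dimension is known to be exactly 2:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary image by box counting:
    the slope of log N(s) versus log(1/s), where N(s) is the number of
    boxes of side s containing at least one foreground pixel."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        boxed = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(boxed.any(axis=(1, 3)).sum())
    logs = np.log(1.0 / np.array(sizes))
    slope, _ = np.polyfit(logs, np.log(counts), 1)
    return slope

# sanity check: a filled square is two-dimensional
filled = np.ones((256, 256), dtype=bool)
dim = box_counting_dimension(filled)
```

For real B-scan masks the estimated dimension falls between 1 and 2, and deviations from the values seen in healthy tissue are what serve as a texture feature.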
Fatigue analysis and testing of wind turbine blades
NASA Astrophysics Data System (ADS)
Greaves, Peter Robert
This thesis focuses on fatigue analysis and testing of large, multi-MW wind turbine blades. The blades are one of the most expensive components of a wind turbine, and their mass has cost implications for the hub, nacelle, tower and foundations of the turbine, so it is important that they are not unnecessarily strong. Fatigue is often an important design driver, but fatigue of composites is poorly understood, so large safety factors are often applied to the loads. This has implications for the weight of the blade. Full-scale fatigue testing of blades is required by the design standards, and provides manufacturers with confidence that the blade will be able to survive its service life. This testing is usually performed by resonating the blade in the flapwise and edgewise directions separately, but in service these two loads occur at the same time. A fatigue testing method developed at Narec (the National Renewable Energy Centre) in the UK, in which the flapwise and edgewise directions are excited simultaneously, has been evaluated by comparing the Palmgren-Miner damage sum around the blade cross-section after testing with the damage distribution caused by the service life. A method to obtain the resonant test configuration that results in the optimum mode shapes for the flapwise and edgewise directions was then developed, and simulation software was designed to allow the blade test to be simulated, so that realistic comparisons between the damage distributions after different test types could be obtained. During the course of this work the shortcomings of conventional fatigue analysis methods became apparent, and a novel method of fatigue analysis based on multi-continuum theory and the kinetic theory of fracture was developed. This method was benchmarked using physical test data from the OPTIDAT database and was applied to the analysis of a complete blade. A full-scale fatigue test method based on this new analysis approach is also discussed.
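The Palmgren-Miner damage sum used above to compare test and service damage distributions is straightforward to sketch. The S-N curve constants below (A, m) and the load spectrum are hypothetical placeholders, not values from the thesis:

```python
def miner_damage(stress_amplitudes, cycle_counts, A=1.0e12, m=3.0):
    """Palmgren-Miner linear damage sum D = sum(n_i / N_i), assuming a
    power-law S-N curve N(S) = A * S**(-m); failure is predicted at D >= 1."""
    return sum(n / (A * s ** (-m))
               for s, n in zip(stress_amplitudes, cycle_counts))

# hypothetical load spectrum: amplitudes (MPa) with applied cycle counts
damage = miner_damage([100.0, 200.0], [1.0e5, 1.0e4])
```

With these placeholder constants, N(100) = 1e6 and N(200) = 1.25e5 allowable cycles, so D = 0.1 + 0.08 = 0.18; evaluating D around the cross-section gives the angular damage distribution compared between test types.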
Array magnetics modal analysis for the DIII-D tokamak based on localized time-series modelling
Olofsson, K. Erik J.; Hanson, Jeremy M.; Shiraki, Daisuke; ...
2014-07-14
Here, time-series analysis of magnetics data in tokamaks is typically done using block-based fast Fourier transform methods. This work presents the development and deployment of a new set of algorithms for magnetic probe array analysis. The method is based on an estimation technique known as stochastic subspace identification (SSI). Compared with the standard coherence approach or the direct singular value decomposition approach, the new technique exhibits several beneficial properties. For example, the SSI method does not require that frequencies are orthogonal with respect to the timeframe used in the analysis. Frequencies are obtained directly as parameters of localized time-series models. The parameters are extracted by solving small-scale eigenvalue problems. Applications include maximum-likelihood regularized eigenmode pattern estimation, detection of neoclassical tearing modes, including locked mode precursors, automatic clustering of modes, and magnetics-pattern characterization of sawtooth pre- and postcursors, edge harmonic oscillations and fishbones.
Methods of quantitative and qualitative analysis of bird migration with a tracking radar
NASA Technical Reports Server (NTRS)
Bruderer, B.; Steidinger, P.
1972-01-01
Methods of analyzing bird migration by using tracking radar are discussed. The procedure for assessing the rate of bird passage is described. Three topics are presented concerning the grouping of nocturnal migrants, the velocity of migratory flight, and identification of species by radar echoes. The height and volume of migration under different weather conditions are examined. The methods for studying the directions of migration and the correlation between winds and the height and direction of migrating birds are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kihara, Toshiki
2007-09-01
A phase unwrapping method that employs scattered-light photoelasticity with unpolarized light was proposed for automated three-dimensional stress analysis [Appl. Opt. 45, 8848 (2006)]. I now demonstrate the validity of this method by performing nondestructive measurements at three different wavelengths of the secondary principal stress direction ψj and the total relative phase retardation ρjtot in the plane that contains the rotated principal stress directions in a spherical frozen stress model, and compare the results with those obtained from mechanically sliced models. The parameters ψj and ρjtot were measured nondestructively over the entire field of view for the first time, to the best of my knowledge.
Google matrix analysis of directed networks
NASA Astrophysics Data System (ADS)
Ermann, Leonardo; Frahm, Klaus M.; Shepelyansky, Dima L.
2015-10-01
In the past decade modern societies have developed enormous communication and social networks. Their classification and information retrieval processing has become a formidable task for the society. Because of the rapid growth of the World Wide Web, and social and communication networks, new mathematical methods have been invented to characterize the properties of these networks in a more detailed and precise way. Various search engines extensively use such methods. It is highly important to develop new tools to classify and rank a massive amount of network information in a way that is adapted to internal network structures and characteristics. This review describes the Google matrix analysis of directed complex networks demonstrating its efficiency using various examples including the World Wide Web, Wikipedia, software architectures, world trade, social and citation networks, brain neural networks, DNA sequences, and Ulam networks. The analytical and numerical matrix methods used in this analysis originate from the fields of Markov chains, quantum chaos, and random matrix theory.
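The Google matrix construction described in this review can be sketched as follows; the toy four-node directed network is an assumption for illustration:

```python
import numpy as np

def google_matrix(adj, alpha=0.85):
    """Build G = alpha*S + (1-alpha)/N, where S is the column-stochastic
    link matrix and columns of dangling nodes are replaced by 1/N."""
    N = len(adj)
    S = adj.T.astype(float)       # S[i, j] = 1 if node j links to node i
    col = S.sum(axis=0)
    S[:, col == 0] = 1.0          # dangling nodes link to everyone
    S = S / S.sum(axis=0)
    return alpha * S + (1 - alpha) / N

def pagerank(G, iters=100):
    """Power iteration: the PageRank vector is the leading eigenvector of G."""
    p = np.full(len(G), 1.0 / len(G))
    for _ in range(iters):
        p = G @ p
    return p

# toy directed network: nodes 0, 2, 3 all point at node 1; node 1 points at 2
adj = np.zeros((4, 4))
for i, j in [(0, 1), (2, 1), (3, 1), (1, 2)]:
    adj[i, j] = 1
p = pagerank(google_matrix(adj))
```

Since G is column-stochastic, the rank vector remains a probability distribution, and the node with the most in-links (node 1) receives the highest PageRank.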
Inutan, Ellen D.; Trimpin, Sarah
2013-01-01
The introduction of electrospray ionization (ESI) and matrix-assisted laser desorption/ionization (MALDI) for the mass spectrometric analysis of peptides and proteins had a dramatic impact on biological science. We now report that a wide variety of compounds, including peptides, proteins, and protein complexes, are transported directly from a solid-state small molecule matrix to gas-phase ions when placed into the vacuum of a mass spectrometer without the use of high voltage, a laser, or added heat. This ionization process produces ions having charge states similar to ESI, making the method applicable for high performance mass spectrometers designed for atmospheric pressure ionization. We demonstrate highly sensitive ionization using intermediate pressure MALDI and modified ESI sources. This matrix and vacuum assisted soft ionization method is suitable for the direct surface analysis of biological materials, including tissue, via mass spectrometry. PMID:23242551
Egbewale, Bolaji E; Lewis, Martyn; Sim, Julius
2014-04-09
Analysis of variance (ANOVA), change-score analysis (CSA) and analysis of covariance (ANCOVA) respond differently to baseline imbalance in randomized controlled trials. However, no empirical studies appear to have quantified the differential bias and precision of estimates derived from these methods of analysis, and their relative statistical power, in relation to combinations of levels of key trial characteristics. This simulation study therefore examined the relative bias, precision and statistical power of these three analyses using simulated trial data. 126 hypothetical trial scenarios were evaluated (126,000 datasets), each with continuous data simulated by using a combination of levels of: treatment effect; pretest-posttest correlation; and direction and magnitude of baseline imbalance. The bias, precision and power of each method of analysis were calculated for each scenario. Compared to the unbiased estimates produced by ANCOVA, both ANOVA and CSA are subject to bias, in relation to pretest-posttest correlation and the direction of baseline imbalance. Additionally, ANOVA and CSA are less precise than ANCOVA, especially when the pretest-posttest correlation is ≥ 0.3. When groups are balanced at baseline, ANCOVA is at least as powerful as the other analyses; the apparently greater power of ANOVA and CSA under certain imbalances is achieved at the cost of a biased treatment-effect estimate. Across a range of correlations between pre- and post-treatment scores and at varying levels and directions of baseline imbalance, ANCOVA remains the optimum statistical method for the analysis of continuous outcomes in RCTs, in terms of bias, precision and statistical power.
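The simulation design described above can be sketched as follows. This is a minimal illustration (the sample size, number of replications and parameter values are made up, not the 126 scenarios of the study): it simulates two-arm trials with a known treatment effect, a chosen pretest-posttest correlation and a deliberate baseline imbalance, and compares the three estimators.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(n=200, effect=0.5, rho=0.7, imbalance=0.3):
    """One two-arm trial: pretest-posttest correlation rho, with the
    treatment group shifted by `imbalance` at baseline (pretest)."""
    group = np.repeat([0, 1], n // 2)
    pre = rng.normal(0, 1, n) + imbalance * group
    post = rho * pre + np.sqrt(1 - rho**2) * rng.normal(0, 1, n) + effect * group
    return group, pre, post

def estimates(group, pre, post):
    anova = post[group == 1].mean() - post[group == 0].mean()
    change = post - pre
    csa = change[group == 1].mean() - change[group == 0].mean()
    # ANCOVA: post ~ 1 + group + pre; read off the group coefficient
    X = np.column_stack([np.ones_like(pre), group, pre])
    beta, *_ = np.linalg.lstsq(X, post, rcond=None)
    return anova, csa, beta[1]

# Average over many simulated trials: ANOVA is biased upward by about
# rho*imbalance, CSA downward by about (1 - rho)*imbalance, ANCOVA is unbiased.
res = np.array([estimates(*simulate_trial()) for _ in range(2000)])
mean_anova, mean_csa, mean_ancova = res.mean(axis=0)
```

With rho = 0.7 and an imbalance of 0.3, ANOVA overestimates the true effect of 0.5 by roughly 0.21 and CSA underestimates it by roughly 0.09, while ANCOVA stays centered on 0.5, mirroring the pattern reported in the abstract.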
Polarimetric Decomposition Analysis of the Deepwater Horizon Oil Slick Using L-Band UAVSAR Data
NASA Technical Reports Server (NTRS)
Jones, Cathleen; Minchew, Brent; Holt, Benjamin
2011-01-01
We report here an analysis of the polarization dependence of L-band radar backscatter from the main slick of the Deepwater Horizon oil spill, with specific attention to the utility of polarimetric decomposition analysis for discrimination of oil from clean water and identification of variations in the oil characteristics. For this study we used data collected with the UAVSAR instrument from opposing look directions directly over the main oil slick. We find that both the Cloude-Pottier and Shannon entropy polarimetric decomposition methods offer promise for oil discrimination, with the Shannon entropy method yielding the same information as contained in the Cloude-Pottier entropy and averaged intensity parameters, but with significantly less computational complexity.
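The Cloude-Pottier entropy referred to above reduces to an eigenvalue computation on the measured coherency matrix. The sketch below implements the standard formula (it is not code from this study, and the example matrices are synthetic, not UAVSAR data):

```python
import numpy as np

def cloude_pottier_entropy(T):
    """Polarimetric entropy H from a 3x3 Hermitian coherency matrix T:
    H = -sum_i p_i * log3(p_i), where p_i are the eigenvalues of T
    normalized to sum to 1. H = 0 for a pure (rank-1) scattering
    mechanism and H = 1 for fully depolarized scattering."""
    lam = np.clip(np.linalg.eigvalsh(T).real, 0.0, None)
    p = lam / lam.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(3))

# Fully depolarized (identity) coherency matrix -> H = 1
h_depolarized = cloude_pottier_entropy(np.eye(3))
# Rank-1 coherency matrix (single scattering mechanism) -> H = 0
v = np.array([1.0, 0.5, 0.1])
h_pure = cloude_pottier_entropy(np.outer(v, v))
```

Higher entropy over the slick relative to clean water is the kind of contrast such a decomposition exploits for discrimination.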
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkins, T.F.; Thorne, P.G.; Myers, K.F.
Salting-out solvent extraction (SOE) was compared with cartridge and membrane solid-phase extraction (SPE) for preconcentration of nitroaromatics, nitramines, and aminonitroaromatics prior to determination by reversed-phase high-performance liquid chromatography. The solid phases used were manufacturer-cleaned materials: Porapak RDX for the cartridge method and Empore SDB-RPS for the membrane method. Thirty-three groundwater samples from the Naval Surface Warfare Center, Crane, Indiana, were analyzed using the direct analysis protocol specified in SW-846 Method 8330, and the results were compared with analyses conducted after preconcentration using SOE with acetonitrile, cartridge-based SPE, and membrane-based SPE. For high-concentration samples, analytical results from the three preconcentration techniques were compared with results from the direct analysis protocol; good recovery of all target analytes was achieved by all three preconcentration methods. For low-concentration samples, results from the two SPE methods were correlated with results from the SOE method; very similar data were obtained by the SOE and SPE methods, even at concentrations well below 1 microgram/L.
PCR Testing of IVC Filter Tops as a Method for Detecting Murine Pinworms and Fur Mites.
Gerwin, Philip M; Ricart Arbona, Rodolfo J; Riedel, Elyn R; Henderson, Kenneth S; Lipman, Neil S
2017-11-01
We evaluated PCR testing of filter tops from cages maintained on an IVC system through which exhaust air is filtered at the cage level as a method for detecting parasite-infected and -infested cages. Cages containing 4 naïve Swiss Webster mice received 360 mL of uncontaminated aspen chip or α-cellulose bedding (n = 18 cages each) and 60 mL of the same type of bedding weekly from each of the following 4 groups of cages housing mice infected or infested with Syphacia obvelata (SO), Aspiculuris tetraptera (AT), Myocoptes musculinus (MC), or Myobia musculi (MB) and Radfordia affinis (RA; 240 mL bedding total). Detection rates were compared at 30, 60, and 90 d after initiating bedding exposure, by using PCR analysis of filter tops (media extract and swabs) and testing of mouse samples (fur swab [direct] PCR testing, fecal flotation, anal tape test, direct examination of intestinal contents, and skin scrape). PCR testing of filter media extract detected 100% of all parasites at 30 d (both bedding types) except for AT (α-cellulose bedding, 67% detection rate); identified more cages with fur mites (MB and MC) than direct PCR when cellulose bedding was used; and was better at detecting parasites than all nonmolecular methods evaluated. PCR analysis of filter media extract was superior to swab and direct PCR for all parasites cumulatively for each bedding type. Direct PCR more effectively detected MC and all parasites combined for aspen chip compared with cellulose bedding. PCR analysis of filter media extract for IVC systems in which exhaust air is filtered at the cage level was shown to be a highly effective environmental testing method.
Lara-Ortega, Felipe J; Beneito-Cambra, Miriam; Robles-Molina, José; García-Reyes, Juan F; Gilbert-López, Bienvenida; Molina-Díaz, Antonio
2018-04-01
Analytical methods based on ambient ionization mass spectrometry (AIMS) combine the classic outstanding performance of mass spectrometry in terms of sensitivity and selectivity with convenient features related to the lack of sample workup required. In this work, the performance of different mass spectrometry-based methods has been assessed for the direct analysis of virgin olive oil for quality purposes. Two sets of experiments were set up: (1) direct analysis of untreated olive oil using AIMS methods such as Low-Temperature Plasma Mass Spectrometry (LTP-MS) or paper spray mass spectrometry (PS-MS); or alternatively (2) the use of atmospheric pressure ionization (API) mass spectrometry by direct infusion of a diluted sample through either atmospheric pressure chemical ionization (APCI) or electrospray (ESI) ionization sources. The second strategy involved a minimum sample workup consisting of a simple olive oil dilution (from 1:10 to 1:1000) with appropriate solvents, which caused critical carry-over effects in ESI, making its routine use unreliable; thus, ESI required the use of a liquid-liquid extraction to shift the measurement towards a specific part of the composition of the edible oil (i.e. the polyphenol-rich fraction or the lipid/fatty acid profile). On the other hand, LTP-MS enabled direct undiluted mass analysis of olive oil. The use of PS-MS provided additional advantages such as an extended ionization coverage/molecular weight range (compared to LTP-MS) and the possibility to increase the ionization efficiency towards nonpolar compounds such as squalene through the formation of Ag+ adducts with carbon-carbon double bonds, an attractive feature to discriminate between oils with different degrees of unsaturation. Copyright © 2017 Elsevier B.V. All rights reserved.
Danhelova, Hana; Hradecky, Jaromir; Prinosilova, Sarka; Cajka, Tomas; Riddellova, Katerina; Vaclavik, Lukas; Hajslova, Jana
2012-07-01
The development and use of a fast method employing a direct analysis in real time (DART) ion source coupled to high-resolution time-of-flight mass spectrometry (TOFMS) for the quantitative analysis of caffeine in various coffee samples has been demonstrated in this study. A simple sample extraction procedure employing hot water was followed by direct, high-throughput (<1 min per run) examination of the extracts spread on a glass rod under optimized conditions of ambient mass spectrometry, without any prior chromatographic separation. For quantification of caffeine using DART-TOFMS, an external calibration was used. Isotopically labeled caffeine was used to compensate for variations in the ion intensities of the caffeine signal. Recoveries of the DART-TOFMS method were 97% for instant coffee at spiking levels of 20 and 60 mg/g, while for roasted ground coffee the obtained values were 106% and 107% at spiking levels of 10 and 30 mg/g, respectively. The repeatability of the whole analytical procedure (expressed as relative standard deviation, RSD, %) was <5% for all tested spiking levels and matrices. Since the linearity range of the method was relatively narrow (two orders of magnitude), optimization of sample dilution prior to the DART-TOFMS measurement was needed to avoid saturation of the detector.
SEISMIC SOURCE SCALING AND DISCRIMINATION IN DIVERSE TECTONIC ENVIRONMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, R E; Mayeda, K; Walter, W R
2007-07-10
The objectives of this study are to improve low-magnitude regional seismic discrimination by performing a thorough investigation of earthquake source scaling using diverse, high-quality datasets from varied tectonic regions. Local-to-regional high-frequency discrimination requires an estimate of how earthquakes scale with size. Walter and Taylor (2002) developed the MDAC (Magnitude and Distance Amplitude Corrections) method to empirically account for these effects through regional calibration. The accuracy of these corrections has a direct impact on our ability to identify clandestine explosions in the broad regional areas characterized by low seismicity. Unfortunately, our knowledge of source scaling at small magnitudes (i.e., mb < ~4.0) is poorly resolved. It is not clear whether different studies obtain contradictory results because they analyze different earthquakes, or because they use different methods. Even in regions that are well studied, such as test sites or areas of high seismicity, we still rely on empirical scaling relations derived from studies of inter-plate regions half-way around the world. We investigate earthquake sources and scaling in different tectonic settings, comparing direct and coda wave analysis methods. We begin by developing and improving the two different methods, and in future years we will apply both to each set of earthquakes. Analysis of locally recorded, direct waves from events is intuitively the simplest way of obtaining accurate source parameters, as these waves have been least affected by travel through the earth. But only a limited number of earthquakes are recorded locally by enough stations to give good azimuthal coverage and also have very closely located smaller earthquakes that can be used as an empirical Green's function (EGF) to remove path effects.
In contrast, coda waves average radiation from all directions, so single-station records should be adequate, and previous work suggests that the requirements for the EGF event are much less stringent. We can study more earthquakes using the coda-wave methods, while using direct-wave methods for the best-recorded subset of events so as to investigate any differences between the results of the two approaches. Finding 'perfect' EGF events for direct wave analysis is difficult, as is ascertaining the quality of a particular EGF event. We develop a multi-taper method to obtain time-domain source-time functions by frequency division. If an earthquake and EGF event pair produce a clear, time-domain source pulse, we accept the EGF event. We then model the spectral (amplitude) ratio to determine source parameters from both direct P and S waves. We use the well-recorded sequence of aftershocks of the M5 Au Sable Forks, NY, earthquake to test the method and also to obtain some of the first accurate source parameters for small earthquakes in eastern North America. We find that the stress drops are high, confirming previous work suggesting that intraplate continental earthquakes have higher stress drops than events at plate boundaries. We simplify and improve the coda wave analysis method by calculating spectral ratios between different-sized earthquakes. We first compare spectral-ratio performance between local and near-regional S and coda waves in the San Francisco Bay region for moderate-sized events. The average spectral-ratio standard deviations using coda are ~0.05 to 0.12, roughly a factor of 3 smaller than for direct S-waves for 0.2 < f < 15.0 Hz. Also, direct wave analysis requires collocated pairs of earthquakes, whereas the event pairs (Green's function and target events) can be separated by ~25 km for coda amplitudes without any appreciable degradation.
We then apply the coda spectral-ratio method to the 1999 Hector Mine mainshock (Mw 7.0, Mojave Desert) and its larger aftershocks. We observe a clear departure from self-similarity, consistent with previous studies using similar regional datasets.
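The spectral-ratio idea underlying both the direct-wave and coda approaches can be sketched with the omega-squared (Brune) source model: for a collocated event pair, path and site terms cancel in the ratio, leaving only the two source spectra. The moments and corner frequencies below are illustrative values, not estimates from the study:

```python
import numpy as np

def brune_spectrum(f, moment, fc):
    """Brune (omega-squared) displacement source spectrum."""
    return moment / (1.0 + (f / fc) ** 2)

def egf_spectral_ratio(f, m_target, fc_target, m_egf, fc_egf):
    """Ratio of target-event to EGF spectra: shared path and site
    terms divide out, leaving the ratio of the two source spectra."""
    return brune_spectrum(f, m_target, fc_target) / brune_spectrum(f, m_egf, fc_egf)

f = np.logspace(-1, 2, 200)   # 0.1 - 100 Hz
ratio = egf_spectral_ratio(f, m_target=1e17, fc_target=1.0,
                           m_egf=1e14, fc_egf=10.0)
# Low-frequency plateau -> moment ratio (1000 here); high-frequency
# plateau -> moment ratio * (fc_target / fc_egf)**2 (10 here). Fitting
# the two plateaus and the roll-off gives relative moment and corner
# frequencies, from which stress drop can be estimated.
```

Departures of the fitted corner frequencies from the scaling fc ∝ M0^(-1/3) are what signal a breakdown of self-similarity.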
Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace.
Zhang, Cheng; Lai, Chun-Liang; Pettitt, B Montgomery
The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool for producing free energy differences with minimal error. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace (DIIS). We give examples from a lattice model, a simple liquid and an aqueous protein solution.
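DIIS acceleration can be sketched generically for any fixed-point iteration; applying it to the actual WHAM self-consistency equations only changes the update function g. The sketch below is a minimal illustration on a toy linear contraction, not the authors' implementation:

```python
import numpy as np

def diis_solve(g, x0, m=5, tol=1e-10, max_iter=200):
    """Accelerate the fixed-point iteration x <- g(x) with DIIS.

    Keeps the last m iterates and residuals r_i = g(x_i) - x_i, then
    extrapolates x = sum_i c_i g(x_i), where the coefficients minimize
    |sum_i c_i r_i| subject to sum_i c_i = 1 (a small bordered
    least-squares system)."""
    xs, rs = [], []
    x = np.asarray(x0, dtype=float)
    for it in range(max_iter):
        gx = g(x)
        r = gx - x
        if np.linalg.norm(r) < tol:
            return x, it
        xs.append(gx)
        rs.append(r)
        if len(xs) > m:
            xs.pop(0); rs.pop(0)
        k = len(xs)
        # Lagrange system for the constrained least-squares problem
        B = np.empty((k + 1, k + 1))
        B[:k, :k] = np.array([[ri @ rj for rj in rs] for ri in rs])
        B[k, :k] = B[:k, k] = -1.0
        B[k, k] = 0.0
        rhs = np.zeros(k + 1); rhs[k] = -1.0
        c = np.linalg.solve(B, rhs)[:k]
        x = sum(ci * xi for ci, xi in zip(c, xs))
    return x, max_iter

# Toy contraction: fixed point of x = A x + b, i.e. the solution of (I - A) x = b
A = np.array([[0.5, 0.2], [0.1, 0.4]])
b = np.array([1.0, 2.0])
x, n_iter = diis_solve(lambda x: A @ x + b, np.zeros(2))
```

For WHAM, x would be the vector of window free energies and g one sweep of the self-consistency update; the extrapolation over the stored subspace is what cuts the iteration count.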
High temperature flow-through device for rapid solubilization and analysis
West, Jason A. A. [Castro Valley, CA; Hukari, Kyle W [San Ramon, CA; Patel, Kamlesh D [Dublin, CA; Peterson, Kenneth A [Albuquerque, NM; Renzi, Ronald F [Tracy, CA
2009-09-22
Devices and methods for thermally lysing of biological material, for example vegetative bacterial cells and bacterial spores, are provided. Hot solution methods for solubilizing bacterial spores are described. Systems for direct analysis are disclosed including thermal lysers coupled to sample preparation stations. Integrated systems capable of performing sample lysis, labeling and protein fingerprint analysis of biological material, for example, vegetative bacterial cells, bacterial spores and viruses are provided.
Efficient sensitivity analysis method for chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Liao, Haitao
2016-05-01
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. Applying the least squares shadowing formulation to the augmented equations yields an explicit expression for the sensitivity coefficient that depends on the final state of the Lagrange multipliers. Using LU factorization to calculate the Lagrange multipliers improves both convergence behavior and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when using the direct differentiation sensitivity analysis method.
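The recast-as-ODE idea is easy to illustrate on a non-chaotic system, where plain direct differentiation is valid and none of the shadowing machinery is needed. The scalar dynamics and parameter value below are made up for illustration and are not a problem from the paper:

```python
import numpy as np

def augmented_rhs(z, theta):
    """State z = [x, s, q, qs] for the toy dynamics x' = -x + theta:
    s = dx/dtheta obeys s' = -s + 1 (direct differentiation of the
    dynamics), while q accumulates x and qs accumulates s, so the time
    average J = q/T and its sensitivity dJ/dtheta = qs/T come out of
    one augmented ODE -- the recast of the time-averaged integral as a
    differential equation."""
    x, s, q, qs = z
    return np.array([-x + theta, -s + 1.0, x, s])

def integrate(theta, T=10.0, dt=1e-3):
    """Classical RK4 on the augmented system from z(0) = 0."""
    z = np.zeros(4)
    for _ in range(int(T / dt)):
        k1 = augmented_rhs(z, theta)
        k2 = augmented_rhs(z + 0.5 * dt * k1, theta)
        k3 = augmented_rhs(z + 0.5 * dt * k2, theta)
        k4 = augmented_rhs(z + dt * k3, theta)
        z = z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return z[2] / T, z[3] / T   # J and dJ/dtheta

J, dJ = integrate(theta=2.0)
# Analytic check: x(t) = theta*(1 - exp(-t)), so dJ/dtheta = 1 - (1 - exp(-T))/T
```

For chaotic systems this naive sensitivity diverges with T, which is exactly why the paper couples the augmented equations to a least squares shadowing formulation.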
Analysis of Big Data in Gait Biomechanics: Current Trends and Future Directions.
Phinyomark, Angkoon; Petri, Giovanni; Ibáñez-Marcelo, Esther; Osis, Sean T; Ferber, Reed
2018-01-01
The increasing amount of data in biomechanics research has greatly increased the importance of developing advanced multivariate analysis and machine learning techniques, which are better able to handle "big data". Consequently, advances in data science methods will expand the knowledge for testing new hypotheses about biomechanical risk factors associated with walking and running gait-related musculoskeletal injury. This paper begins with a brief introduction to an automated three-dimensional (3D) biomechanical gait data collection system: 3D GAIT, followed by how the studies in the field of gait biomechanics fit the quantities in the 5 V's definition of big data: volume, velocity, variety, veracity, and value. Next, we provide a review of recent research and development in multivariate and machine learning methods-based gait analysis that can be applied to big data analytics. These modern biomechanical gait analysis methods include several main modules such as initial input features, dimensionality reduction (feature selection and extraction), and learning algorithms (classification and clustering). Finally, a promising big data exploration tool called "topological data analysis" and directions for future research are outlined and discussed.
Parallel-vector solution of large-scale structural analysis problems on supercomputers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.
1989-01-01
A direct linear equation solution method based on the Choleski factorization procedure is presented which exploits both parallel and vector features of supercomputers. The new equation solver is described, and its performance is evaluated by solving structural analysis problems on three high-performance computers. The method has been implemented using Force, a generic parallel FORTRAN language.
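The serial kernel of such a direct solver is the Cholesky factorization followed by two triangular solves; the paper's contribution lies in parallelizing and vectorizing these steps on supercomputers. A minimal sketch of the kernel (the 1-D bar stiffness matrix is an illustrative example, not from the paper):

```python
import numpy as np

def cholesky_solve(K, f):
    """Direct solution of K u = f for a symmetric positive definite
    stiffness matrix K via the Cholesky factorization K = L L^T,
    followed by forward and back substitution."""
    L = np.linalg.cholesky(K)
    y = np.linalg.solve(L, f)        # forward substitution:  L y = f
    return np.linalg.solve(L.T, y)   # back substitution:     L^T u = y

# Small SPD "stiffness" matrix (1-D bar with 3 free nodes)
K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
f = np.array([1., 0., 0.])
u = cholesky_solve(K, f)
```

In the paper's setting the factorization loops are reorganized so that independent columns are factored in parallel while the inner updates run as vector operations.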
USDA-ARS?s Scientific Manuscript database
We developed a rapid method with ultra-performance liquid chromatography – tandem mass spectrometry (UPLC-MS/MS) for the qualitative and quantitative analysis of plant proanthocyanidins (PAs) directly from crude plant extracts. The method utilizes a range of cone voltages to achieve the depolymerization…
ERIC Educational Resources Information Center
Hong, Guanglei; Deutsch, Jonah; Hill, Heather D.
2015-01-01
Conventional methods for mediation analysis generate biased results when the mediator-outcome relationship depends on the treatment condition. This article shows how the ratio-of-mediator-probability weighting (RMPW) method can be used to decompose total effects into natural direct and indirect effects in the presence of treatment-by-mediator…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pokorny, M.; Rebicek, J.; Klemes, J.
2015-10-15
This paper presents a rapid non-destructive method that provides information on the anisotropic internal structure of nanofibrous layers. A laser beam of a wavelength of 632.8 nm is directed at and passes through a nanofibrous layer prepared by electrostatic spinning. Information about the structural arrangement of nanofibers in the layer is directly visible in the form of a diffraction image formed on a projection screen, or is obtained from measured intensities of the laser beam passing through the sample, which are determined by the dependency of the angle of the main direction of polarization of the laser beam on the axis of alignment of nanofibers in the sample. Both optical methods were verified on polyvinyl alcohol (PVA) nanofibrous layers (fiber diameter of 470 nm) with random, single-axis-aligned and crossed structures. The obtained results match the results of commonly used methods which apply the analysis of electron microscope images. The presented simple method not only allows samples to be analysed much more rapidly and without damaging them, but also makes possible the analysis of much larger areas, up to several square millimetres, at the same time.
Podshivalov, L; Fischer, A; Bar-Yoseph, P Z
2011-04-01
This paper describes a new alternative for individualized mechanical analysis of bone trabecular structure. This new method closes the gap between the classic homogenization approach that is applied to macro-scale models and the modern micro-finite element method that is applied directly to micro-scale high-resolution models. The method is based on multiresolution geometrical modeling that generates intermediate structural levels. A new method for estimating multiscale material properties has also been developed to facilitate reliable and efficient mechanical analysis. What makes this method unique is that it enables direct and interactive analysis of the model at every intermediate level. Such flexibility is of principal importance in the analysis of trabecular porous structure. The method enables physicians to zoom in dynamically and focus on the volume of interest (VOI), thus paving the way for a large class of investigations into the mechanical behavior of bone structure. This is one of the very few methods in the field of computational biomechanics that applies mechanical analysis adaptively to large-scale, high-resolution models. The proposed computational multiscale FE method can serve as an infrastructure for a future comprehensive computerized system for diagnosis of bone structures. The aim of such a system is to assist physicians in diagnosis, prognosis, drug treatment simulation and monitoring. Such a system can provide a better understanding of the disease, and hence benefit patients by providing better and more individualized treatment and high-quality healthcare. In this paper, we demonstrate the feasibility of our method on a high-resolution model of vertebra L3. Copyright © 2010 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bakhshi Khaniki, Hossein; Rajasekaran, Sundaramoorthy
2018-05-01
This study develops a comprehensive investigation of the mechanical behavior of non-uniform bi-directional functionally graded beam sensors in the framework of modified couple stress theory. Material variation is modelled through both the length and thickness directions using power-law, sigmoid and exponential functions. Moreover, the beam is assumed to have linear, exponential or parabolic cross-section variation through the length, described by power-law and sigmoid varying functions. Using these assumptions, a general model for microbeams is presented and formulated by employing Hamilton's principle. Governing equations are solved using a mixed finite element method with the Lagrangian interpolation technique, the Gaussian quadrature method and Wilson's Lagrangian multiplier method. It is shown that by using bi-directional functionally graded materials in non-uniform microbeams, the mechanical behavior of such structures can be affected noticeably, and the scale parameter has a significant effect in changing the rigidity of non-uniform bi-directional functionally graded beams.
Using direct mail to promote organ donor registration: Two campaigns and a meta-analysis.
Feeley, Thomas H; Quick, Brian L; Lee, Seyoung
2016-12-01
Two direct mail campaigns were undertaken in Rochester and Buffalo, New York, with the goal of enrolling adults aged 50-64 years in the state organ and tissue donation electronic registry. Meta-analytic methods were used to summarize the body of research on the effects of direct mail marketing to promote organ donation registration. In the first study, 40 000 mailers were sent to targeted adults in Rochester, New York, and varied by brochure-only, letter-only, and letter-plus-brochure mailing conditions. A follow-up mailer using letter-only was sent to 20 000 individuals in the Buffalo, New York area. In a second study, campaign results were combined with previously published direct mail campaigns in a random-effects meta-analysis. The overall registration rates were 1.6% and 4.6% for the Rochester and Buffalo campaigns, respectively, and the letter-only condition outperformed the brochure-only and letter-plus-brochure conditions in the Rochester campaign. Meta-analysis indicated a 3.3% registration rate across 15 campaigns and 329 137 targeted individuals. Registration rates were higher when targeting 18-year-olds and when direct mail letters were authored by officials affiliated with state departments. Use of direct mail to promote organ donor registration is an inexpensive method of increasing enrollment in state registries. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
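A random-effects pooling of registration rates of the kind used in the second study can be sketched with the DerSimonian-Laird estimator. The campaign rates and mailing sizes below are made-up illustrations, not the 15 campaigns analyzed in the paper:

```python
import numpy as np

def random_effects_meta(p, n):
    """DerSimonian-Laird random-effects pooling of proportions.

    p, n: per-campaign registration rates and mailing sizes.
    Returns the pooled rate and the between-study variance tau^2."""
    p = np.asarray(p, float)
    n = np.asarray(n, float)
    v = p * (1 - p) / n                   # within-study variances
    w = 1 / v                             # fixed-effect weights
    p_fixed = np.sum(w * p) / np.sum(w)
    q = np.sum(w * (p - p_fixed) ** 2)    # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(p) - 1)) / c)
    w_re = 1 / (v + tau2)                 # random-effects weights
    return np.sum(w_re * p) / np.sum(w_re), tau2

# Hypothetical campaign data (rates and sizes are illustrative only)
rates = [0.016, 0.046, 0.033, 0.028, 0.041]
sizes = [40000, 20000, 30000, 25000, 15000]
pooled, tau2 = random_effects_meta(rates, sizes)
```

Because the campaign samples are large, within-study variances are tiny and a nonzero tau^2 absorbs the real between-campaign spread, which is why a random-effects model is the appropriate choice here.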
A Molecular Dynamic Modeling of Hemoglobin-Hemoglobin Interactions
NASA Astrophysics Data System (ADS)
Wu, Tao; Yang, Ye; Sheldon Wang, X.; Cohen, Barry; Ge, Hongya
2010-05-01
In this paper, we present a study of hemoglobin-hemoglobin interaction using model reduction methods. We begin with a simple spring-mass system with given parameters (mass and stiffness). With this known system, we compare the mode superposition method with Singular Value Decomposition (SVD)-based Principal Component Analysis (PCA). Through PCA we are able to recover the principal direction of this system, namely the modal direction, which is matched with the eigenvector derived from the mode superposition analysis. The same technique is then implemented in a much more complicated hemoglobin-hemoglobin molecule interaction model, in which thousands of atoms in hemoglobin molecules are coupled with tens of thousands of T3 water molecule models. In this model, complex inter-atomic and inter-molecular potentials are replaced by nonlinear springs. We employ the same method to obtain the most significant modes and their frequencies of this complex dynamical system. More complex physical phenomena can then be further studied with these coarse-grained models.
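The spring-mass comparison described above can be sketched directly: with unit masses, the mode shapes are eigenvectors of the stiffness matrix, and SVD-based PCA of simulated snapshots recovers the dominant one. This is an illustrative reconstruction of the idea under those assumptions, not the authors' model:

```python
import numpy as np

# 2-DOF spring-mass chain with unit masses: the eigenvectors of K are
# the mode shapes, with natural frequencies w_i = sqrt(eigenvalues)
K = np.array([[ 2., -1.],
              [-1.,  2.]])
evals, modes = np.linalg.eigh(K)

# Synthetic trajectory dominated by the lowest mode, plus a small
# contribution from the higher mode
t = np.linspace(0.0, 20.0, 2000)
traj = (1.0 * np.outer(np.cos(np.sqrt(evals[0]) * t), modes[:, 0])
        + 0.1 * np.outer(np.cos(np.sqrt(evals[1]) * t), modes[:, 1]))

# PCA via SVD of the mean-centered snapshot matrix
X = traj - traj.mean(axis=0)
_, s, Vt = np.linalg.svd(X, full_matrices=False)
principal = Vt[0]               # leading principal direction

# Up to sign, the principal direction matches the dominant mode shape
overlap = abs(principal @ modes[:, 0])
```

For the full hemoglobin model the same SVD is applied to molecular-dynamics snapshots, where the leading principal components play the role of the coarse-grained modes.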
Palm vein recognition based on directional empirical mode decomposition
NASA Astrophysics Data System (ADS)
Lee, Jen-Chun; Chang, Chien-Ping; Chen, Wei-Kuei
2014-04-01
Directional empirical mode decomposition (DEMD) has recently been proposed to make empirical mode decomposition suitable for texture analysis. Using DEMD, samples are decomposed into a series of images, referred to as two-dimensional intrinsic mode functions (2-D IMFs), from finer to larger scale. A DEMD-based two-directional linear discriminant analysis (2LDA) approach for palm vein recognition is proposed. The proposed method progresses through three steps: (i) a set of 2-D IMF features of various scales and orientations are extracted using DEMD, (ii) the 2LDA method is then applied to reduce the dimensionality of the feature space in both the row and column directions, and (iii) the nearest neighbor classifier is used for classification. We also propose two strategies for using the set of 2-D IMF features: ensemble DEMD vein representation (EDVR) and multichannel DEMD vein representation (MDVR). In experiments using palm vein databases, the proposed MDVR-based 2LDA method achieved a recognition accuracy of 99.73%, thereby demonstrating its feasibility for palm vein recognition.
Two-dimensional fracture analysis of piezoelectric material based on the scaled boundary node method
NASA Astrophysics Data System (ADS)
Shen-Shen, Chen; Juan, Wang; Qing-Hua, Li
2016-04-01
A scaled boundary node method (SBNM) is developed for two-dimensional fracture analysis of piezoelectric material, which allows the stress and electric displacement intensity factors to be calculated directly and accurately. As a boundary-type meshless method, the SBNM employs the moving Kriging (MK) interpolation technique to approximate the unknown field in the circumferential direction, and therefore only a set of scattered nodes is required to discretize the boundary. As the shape functions satisfy the Kronecker delta property, no special techniques are required to impose the essential boundary conditions. In the radial direction, the SBNM seeks analytical solutions by making use of analytical techniques available for solving ordinary differential equations. Numerical examples are investigated and satisfactory solutions are obtained, which validates the accuracy and simplicity of the proposed approach. Project supported by the National Natural Science Foundation of China (Grant Nos. 11462006 and 21466012), the Foundation of Jiangxi Provincial Educational Committee, China (Grant No. KJLD14041), and the Foundation of East China Jiaotong University, China (Grant No. 09130020).
Database for LDV Signal Processor Performance Analysis
NASA Technical Reports Server (NTRS)
Baker, Glenn D.; Murphy, R. Jay; Meyers, James F.
1989-01-01
A comparative and quantitative analysis of various laser velocimeter signal processors is difficult because standards for characterizing signal bursts have not been established. This leaves the researcher to select a signal processor based only on manufacturers' claims without the benefit of direct comparison. The present paper proposes the use of a database of digitized signal bursts obtained from a laser velocimeter under various configurations as a method for directly comparing signal processors.
Global Qualitative Flow-Path Modeling for Local State Determination in Simulation and Analysis
NASA Technical Reports Server (NTRS)
Malin, Jane T. (Inventor); Fleming, Land D. (Inventor)
1998-01-01
For qualitative modeling and analysis, a general qualitative abstraction of power transmission variables (flow and effort) for elements of flow paths is discussed; it includes information on resistance, net flow, permissible directions of flow, and qualitative potential. Each type of component model has flow-related variables and an associated internal flow map, connected into an overall flow network of the system. For storage devices, the implicit power transfer to the environment is represented by "virtual" circuits that include an environmental junction. A heterogeneous aggregation method simplifies the path structure. A method determines global flow-path changes during dynamic simulation and analysis, and identifies corresponding local flow state changes that are effects of global configuration changes. Flow-path determination is triggered by any change in a flow-related device variable in a simulation or analysis. Components (path elements) that may be affected are identified, and flow-related attributes favoring flow in the two possible directions are collected for each of them. Next, flow-related attributes are determined for each affected path element, based on possibly conflicting indications of flow direction. Spurious qualitative ambiguities are minimized by using relative magnitudes and permissible directions of flow, and by favoring flow sources over effort sources when comparing flow tendencies. The results are output to local flow states of affected components.
Direct Detection of Biotinylated Proteins by Mass Spectrometry
2015-01-01
Mass spectrometric strategies to identify protein subpopulations involved in specific biological functions rely on covalently tagging proteins with biotin using various chemical modification methods. The biotin tag is primarily used for enrichment of the targeted subpopulation for subsequent mass spectrometry (MS) analysis. A limitation of these strategies is that MS analysis does not easily discriminate unlabeled contaminants from the labeled protein subpopulation under study. To solve this problem, we developed a flexible method, called “Direct Detection of Biotin-containing Tags” (DiDBiT), that relies only on direct MS detection of biotin-tagged proteins. Compared with conventional targeted proteomic strategies, DiDBiT improves direct detection of biotinylated proteins ∼200-fold. We show that DiDBiT is applicable to several protein labeling protocols in cell culture and in vivo using cell-permeable NHS-biotin and incorporation of the noncanonical amino acid azidohomoalanine (AHA) into newly synthesized proteins, followed by click chemistry tagging with biotin. We demonstrate that DiDBiT improves the direct detection of biotin-tagged newly synthesized peptides more than 20-fold compared to conventional methods. With the increased sensitivity afforded by DiDBiT, we demonstrate the MS detection of newly synthesized proteins labeled in vivo in the rodent nervous system with unprecedented temporal resolution as short as 3 h. PMID:25117199
2018-01-01
Signaling pathways represent parts of the global biological molecular network, which connects them into a seamless whole through complex direct and indirect (hidden) crosstalk whose structure can change during development or in pathological conditions. We suggest a novel methodology, called Googlomics, for the structural analysis of directed biological networks using spectral analysis of their Google matrices, drawing parallels with quantum scattering theory developed for nuclear and mesoscopic physics and quantum chaos. We introduce an analytical “reduced Google matrix” method for the analysis of biological network structure. The method allows inferring hidden causal relations between the members of a signaling pathway or a functionally related group of genes. We investigate how the structure of hidden causal relations can be reprogrammed as a result of changes in the transcriptional network layer during carcinogenesis. The suggested Googlomics approach rigorously characterizes complex systemic changes in the wiring of large causal biological networks in a computationally efficient way. PMID:29370181
Chen, Hai-Hua; Yang, Ji-Long; Lu, Hui-Fang; Zhou, Wei-Jun; Yao, Fei; Deng, Lan
2014-02-01
This study aimed to investigate the feasibility of high resolution melting (HRM) analysis for the detection of the JAK2V617F mutation in patients with myeloproliferative neoplasms (MPN). Twenty-nine marrow samples, randomly selected from patients with clinically diagnosed MPN between January 2008 and January 2011, were tested by the HRM method. The results of HRM analysis were compared with those obtained by allele-specific polymerase chain reaction (AS-PCR) and direct DNA sequencing. The JAK2V617F mutation was detected in 11 (37.9%, 11/29) cases by HRM, in full (100%) concordance with the direct sequencing results, whereas the consistency of AS-PCR with direct sequencing was only moderate (Kappa = 0.179, P = 0.316). It is concluded that HRM analysis may be an optimal method for clinical screening of the JAK2V617F mutation due to its simplicity and promptness combined with high specificity.
Peng, Xin; Yu, Ke-Qiang; Deng, Guan-Hua; Jiang, Yun-Xia; Wang, Yu; Zhang, Guo-Xia; Zhou, Hong-Wei
2013-12-01
Low cost and high throughput capacity are major advantages of using next generation sequencing (NGS) techniques to determine metagenomic 16S rRNA tag sequences. These methods have significantly changed our view of microorganisms in the fields of human health and environmental science. However, DNA extraction using commercial kits has the shortcomings of high cost and long processing time. In the present study, we evaluated the determination of fecal microbiomes using a direct boiling method compared with 5 different commercial extraction methods, e.g., Qiagen and MO BIO kits. Principal coordinate analysis (PCoA) using UniFrac distances and clustering showed that direct boiling of a wide range of feces concentrations gave a pattern of bacterial communities similar to those obtained with most of the commercial kits, with the exception of the MO BIO method. The fecal concentration used in the boiling method affected the estimation of α-diversity indices; otherwise, results were generally comparable between the boiling and commercial methods. The operational taxonomic units (OTUs) determined through direct boiling showed highly consistent frequencies with those determined through most of the commercial methods, and even those for the MO BIO kit were obtained by the direct boiling method with high confidence. The present study suggests that direct boiling can be used to determine the fecal microbiome, and that using this method would significantly reduce the cost and improve the efficiency of sample preparation for studying gut microbiome diversity. © 2013 Elsevier B.V. All rights reserved.
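The PCoA step used above to compare extraction methods is classical multidimensional scaling of a distance matrix. A minimal sketch follows, with an illustrative 4-sample distance matrix (not real UniFrac data): double-center the squared distances, eigendecompose, and read sample coordinates off the leading eigenvectors.

```python
import numpy as np

# Illustrative symmetric distance matrix: samples 0,1 and 2,3 form two clusters.
D = np.array([[0.0, 0.1, 0.7, 0.8],
              [0.1, 0.0, 0.6, 0.7],
              [0.7, 0.6, 0.0, 0.2],
              [0.8, 0.7, 0.2, 0.0]])

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix

vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]            # largest eigenvalues first
vals, vecs = vals[order], vecs[:, order]

# Sample coordinates along the principal coordinate axes.
coords = vecs[:, :2] * np.sqrt(np.maximum(vals[:2], 0.0))

# The first axis separates the two clusters.
print(coords[:, 0].round(2))
```

Clustering of the resulting coordinates is then what reveals whether two preparation methods yield similar community patterns.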
Forcing scheme analysis for the axisymmetric lattice Boltzmann method under incompressible limit.
Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Chen, Jie; Yin, Linmao; Chew, Jia Wei
2017-04-01
Because the standard lattice Boltzmann (LB) method is formulated for the Cartesian Navier-Stokes (NS) equations, additional source terms are necessary in the axisymmetric LB method to represent the axisymmetric effects. Therefore, the accuracy and applicability of axisymmetric LB models depend on the forcing schemes adopted for discretization of the source terms. In this study, three forcing schemes, namely, the trapezium-rule-based scheme, the direct forcing scheme, and the semi-implicit centered scheme, are analyzed theoretically by investigating their derived macroscopic equations in the diffusive scaling. In particular, the finite-difference interpretation of the standard LB method is extended to LB equations with source terms, and the accuracy of the different forcing schemes is then evaluated for the axisymmetric LB method. Theoretical analysis indicates that the discrete lattice effects arising from the direct forcing scheme are part of the truncation error terms and thus do not affect the overall accuracy of the standard LB method with a general force term (i.e., when only the source terms in the momentum equation are considered), but lead to incorrect macroscopic equations for the axisymmetric LB models. On the other hand, the trapezium-rule-based scheme and the semi-implicit centered scheme both have the advantage of avoiding the discrete lattice effects and recovering the correct macroscopic equations. Numerical tests applied to validate the theoretical analysis show that both the numerical stability and the accuracy of axisymmetric LB simulations are affected by the direct forcing scheme, which indicates that forcing schemes free of discrete lattice effects are necessary for the axisymmetric LB method.
NASA Astrophysics Data System (ADS)
Schmelzbach, C.; Sollberger, D.; Greenhalgh, S.; Van Renterghem, C.; Robertsson, J. O. A.
2017-12-01
Polarization analysis of standard three-component (3C) seismic data is an established tool to determine the propagation directions of seismic waves recorded by a single station. A major limitation of seismic direction finding methods using 3C recordings, however, is that a correct propagation-direction determination is only possible if the wave mode is known. Furthermore, 3C polarization analysis techniques break down in the presence of coherent noise (i.e., when more than one event is present in the analysis time window). Recent advances in sensor technology (e.g., fibre-optical, magnetohydrodynamic angular rate sensors, and ring laser gyroscopes) have made it possible to accurately measure all three components of rotational ground motion exhibited by seismic waves, in addition to the conventionally recorded three components of translational motion. Here, we present an extension of the theory of single-station 3C polarization analysis to six-component (6C) recordings of collocated translational and rotational ground motions. We demonstrate that the information contained in rotation measurements can help to overcome some of the main limitations of standard 3C seismic direction finding, such as handling multiple arrivals simultaneously. We show that the 6C polarization of elastic waves measured at the Earth's free surface depends not only on the seismic wave type and propagation direction but also on the local P- and S-wave velocities just beneath the recording station. Using an adaptation of the multiple signal classification algorithm (MUSIC), we demonstrate how seismic events can be unambiguously identified and characterized in terms of their wave type. Furthermore, we show how the local velocities can be inferred from single-station 6C data, in addition to the direction angles (inclination and azimuth) of seismic arrivals.
A major benefit of our proposed 6C method is that it also allows the accurate recovery of the wave type, propagation directions, and phase velocities of multiple, interfering arrivals in one time window. We demonstrate how this property can be exploited to separate the wavefield into its elastic wave-modes and to isolate or suppress waves arriving from specific directions (directional filtering), both in a fully automated fashion.
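The core of the MUSIC step used above can be illustrated in its classical form. The sketch below is a simplified one-dimensional analogue (a uniform linear array with one source, not the paper's 6C polarization formulation): eigendecompose the sample covariance, keep the noise subspace, and scan steering vectors; the pseudospectrum peaks at the true arrival direction.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 400                 # sensors, snapshots
d = 0.5                       # element spacing in wavelengths
theta_true = 20.0             # arrival angle in degrees

def steering(theta_deg):
    """Steering vector of a uniform linear array for a plane-wave arrival."""
    k = 2 * np.pi * d * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(M))

s = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # source signal
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(steering(theta_true), s) + noise

R = X @ X.conj().T / N                     # sample covariance
vals, vecs = np.linalg.eigh(R)
En = vecs[:, :-1]                          # noise subspace (one source assumed)

angles = np.arange(-90.0, 90.0, 0.1)
P = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(a)) ** 2
              for a in angles])
est = angles[np.argmax(P)]                 # MUSIC direction estimate
print(round(est, 1))
```

In the 6C setting, the steering vector is replaced by a wave-type-dependent polarization vector, which is what additionally lets the method classify the wave mode and infer local velocities.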
Macrosegregation in aluminum alloy ingot cast by the semicontinuous direct chill method
NASA Technical Reports Server (NTRS)
Yu, H.; Granger, D. A.
1984-01-01
A theoretical model of the semicontinuous DC casting method is developed to predict the positive segregation observed at the subsurface and the negative segregation commonly found at the center of large commercial-size aluminum alloy ingot. Qualitative analysis of commercial-size aluminum alloy semicontinuous cast direct chill (DC) ingot is carried out. In the analysis, both positive segregation in the ingot subsurface and negative segregation at the center of the ingot are examined. Ingot subsurface macrosegregation is investigated by considering steady state casting of a circular cross-section binary alloy ingot. Nonequilibrium solidification is assumed with no solid diffusion, constant equilibrium partition ratio, and constant solid density.
1984-09-17
hole at an angle to the radial direction. No stress intensity factors were developed for a non-radial crack. To circumvent non-radial growth, for which... [garbled table/figure residue on structural-lug fatigue tests omitted] ...test conditions were controlled and systematically varied. In the fifth column of the table it is shown whether or not the pin is lubricated during testing. Loading directions
NASA Astrophysics Data System (ADS)
D'Souza, Adora M.; Abidin, Anas Zainul; Nagarajan, Mahesh B.; Wismüller, Axel
2016-03-01
We investigate the applicability of a computational framework, called mutual connectivity analysis (MCA), for directed functional connectivity analysis in both synthetic and resting-state functional MRI data. This framework consists of first evaluating non-linear cross-predictability between every pair of time series and then recovering the underlying network structure using community detection algorithms. We obtain the non-linear cross-prediction score between time series using Generalized Radial Basis Function (GRBF) neural networks. These cross-prediction scores characterize the underlying functionally connected networks within the resting brain, which can be extracted using non-metric clustering approaches, such as the Louvain method. We first test our approach on synthetic models with known directional influence and network structure. Our method is able to capture the directional relationships between time series (with an area under the ROC curve of 0.92 +/- 0.037) as well as the underlying network structure (Rand index = 0.87 +/- 0.063) with high accuracy. Furthermore, we test this method for network recovery on resting-state fMRI data, where results are compared to the motor cortex network recovered from a motor stimulation sequence, resulting in strong agreement between the two (Dice coefficient = 0.45). We conclude that our MCA approach is effective in analyzing non-linear directed functional connectivity and in revealing underlying functional network structure in complex systems.
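The pairwise cross-prediction step at the heart of MCA can be sketched with a simple stand-in predictor. The example below uses an ordinary least-squares linear predictor on lagged samples instead of the GRBF networks used in the paper, so it illustrates only the scoring idea, on synthetic series with a known directed influence.

```python
import numpy as np

rng = np.random.default_rng(1)
n, lag = 500, 2

x = rng.standard_normal(n)
y = np.roll(x, 1) * 0.9 + 0.1 * rng.standard_normal(n)   # y is driven by the past of x
z = rng.standard_normal(n)                               # independent series

def cross_pred_score(src, dst):
    """In-sample R^2 of predicting dst[t] from src[t-1], ..., src[t-lag]."""
    A = np.column_stack([src[lag - 1 - k:n - 1 - k] for k in range(lag)])
    b = dst[lag:]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    resid = b - A @ coef
    return 1.0 - resid.var() / b.var()

s_xy = cross_pred_score(x, y)   # strong directed influence x -> y
s_zy = cross_pred_score(z, y)   # no influence z -> y
print(round(s_xy, 2), round(s_zy, 2))
```

In the full framework, the matrix of such scores over all ordered pairs is then fed to a community detection algorithm (e.g., the Louvain method) to extract the network structure.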
Yonetani, Shota; Ohnishi, Hiroaki; Ohkusu, Kiyofumi; Matsumoto, Tetsuya; Watanabe, Takashi
2016-11-01
Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) is a fast and reliable method for the identification of bacteria. A MALDI Sepsityper kit is generally used to prepare samples obtained directly from culture bottles. However, the relatively high cost of this kit is a major obstacle to introducing this method into routine clinical use. In this study, the accuracies of three different preparation methods for rapid direct identification of bacteria from positive blood culture bottles by MALDI-TOF MS analysis were compared. In total, 195 positive bottles were included in this study. Overall, 78.5%, 68.7%, and 76.4% of bacteria were correctly identified to the genus level (score ≥1.7) directly from positive blood cultures using the Sepsityper, centrifugation, and saponin methods, respectively. The identification rates using the Sepsityper and saponin methods were significantly higher than that using the centrifugation method (Sepsityper vs. centrifugation, p<0.001; saponin vs. centrifugation, p=0.003). These results suggest that the saponin method is superior to the centrifugation method and comparable to the Sepsityper method in the accuracy of rapid bacterial identification directly from blood culture bottles, and could be a less expensive alternative to the Sepsityper method. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
Directional dual-tree rational-dilation complex wavelet transform.
Serbes, Gorkem; Gulcur, Halil Ozcan; Aydin, Nizamettin
2014-01-01
Dyadic discrete wavelet transform (DWT) has been used successfully in processing signals having non-oscillatory transient behaviour. However, due to the low Q-factor of their wavelet atoms, dyadic DWTs are less effective in processing oscillatory signals such as embolic signals (ESs). ESs are extracted from quadrature Doppler signals, which are the output of Doppler ultrasound systems. In order to process ESs, a pre-processing operation known as phase filtering must first be employed to obtain directional signals from the quadrature Doppler signals. Only then can wavelet-based methods be applied to these directional signals for further analysis. In this study, a directional dual-tree rational-dilation complex wavelet transform, which can be applied directly to quadrature signals and has the ability to extract directional information during analysis, is introduced.
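The phase-filtering pre-processing step mentioned above can be sketched as follows. A quadrature Doppler signal s = I + jQ encodes flow in one direction in its positive frequencies and flow in the other direction in its negative frequencies, so the two directional signals can be separated in the frequency domain. The tones and parameters below are illustrative, not from the paper.

```python
import numpy as np

fs, n = 1000.0, 1000
t = np.arange(n) / fs
forward = np.exp(2j * np.pi * 100 * t)     # +100 Hz: flow in one direction
reverse = np.exp(-2j * np.pi * 60 * t)     # -60 Hz: flow in the other direction
s = forward + reverse                      # quadrature signal I + jQ

S = np.fft.fft(s)
freqs = np.fft.fftfreq(n, 1 / fs)
fwd = np.fft.ifft(np.where(freqs > 0, S, 0))   # keep positive frequencies
rev = np.fft.ifft(np.where(freqs < 0, S, 0))   # keep negative frequencies

# Each directional signal recovers one component (exact here because the
# tones fall on FFT bins; real signals would use a windowed filter bank).
err = np.max(np.abs(fwd - forward))
print(err < 1e-9)
```

The directional transform introduced in the study removes the need for this separate step by operating on the quadrature signal directly.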
Teh, Sue-Siang; Morlock, Gertrud E
2015-11-15
Cold-pressed hemp, flax and canola seed oils are healthy oils for human consumption, as they are rich in polyunsaturated fatty acids and bioactive phytochemicals. However, bioactivity information on the food intake side is mainly obtained through target analysis. For more comprehensive information with regard to effects, single bioactive compounds present in the seed oil extracts were detected by effect-directed assays, such as bioassays or an enzymatic assay, directly linked with chromatography, and further characterized by mass spectrometry. This effect-directed analysis is a streamlined method for the analysis of bioactive compounds in seed oil extracts. All compounds effective in the five assays or bioassays applied were detected in the samples, including bioactive breakdown products formed during oil processing, residues and contaminants, alongside the naturally present bioactive phytochemicals. The investigated cold-pressed oils contained compounds that exert antioxidative, antimicrobial, acetylcholinesterase-inhibitory and estrogenic activities. This effect-directed analysis can be recommended for bioactivity profiling of food to obtain profound effect-directed information on the food intake side. Copyright © 2015 Elsevier Ltd. All rights reserved.
Woo, Nain; Kim, Su-Kang; Sun, Yucheng; Kang, Seong Ho
2018-01-01
Human apolipoprotein E (ApoE) is associated with high cholesterol levels, coronary artery disease, and especially Alzheimer's disease. In this study, we developed an ApoE genotyping and one-step multiplex polymerase chain reaction (PCR) based capillary electrophoresis (CE) method for the enhanced diagnosis of Alzheimer's disease. The primer mixture for the ApoE genes enabled direct one-step multiplex PCR from whole blood without DNA purification. The combination of direct ApoE genotyping and one-step multiplex PCR minimized the risk of DNA loss or contamination during DNA purification. All amplified PCR products with different DNA lengths (112-, 253-, 308-, 444-, and 514-bp DNA) of the ApoE genes were analyzed within 2 min by extended voltage programming (VP)-based CE under optimal conditions. The extended VP-based CE method was at least 120-180 times faster than conventional slab gel electrophoresis methods. In particular, all amplified DNA fragments were detected in fewer than 10 PCR cycles using a laser-induced fluorescence detector. The detection limits for the ApoE genes were 6.4-62.0 pM, approximately 100-100,000 times more sensitive than previous Alzheimer's diagnosis methods. In addition, the combined one-step multiplex PCR and extended VP-based CE method was also successfully applied to the analysis of ApoE genotypes in samples from Alzheimer's patients and normal controls and confirmed the distribution probability of allele frequencies. This combination of direct one-step multiplex PCR and an extended VP-based CE method should increase the diagnostic reliability for Alzheimer's disease with high sensitivity and short analysis time, even with direct use of whole blood. Copyright © 2017 Elsevier B.V. All rights reserved.
A direct method for nonlinear ill-posed problems
NASA Astrophysics Data System (ADS)
Lakhal, A.
2018-02-01
We propose a direct method for solving nonlinear ill-posed problems in Banach spaces. The method is based on a stable inversion formula that we compute explicitly by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.
Chiba, Yasutaka
2014-01-01
Questions of mediation are often of interest in reasoning about mechanisms, and methods have been developed to address these questions. However, these methods make strong assumptions about the absence of confounding. Even if exposure is randomized, there may be mediator-outcome confounding variables. Inference about direct and indirect effects is particularly challenging if these mediator-outcome confounders are affected by the exposure because in this case these effects are not identified irrespective of whether data is available on these exposure-induced mediator-outcome confounders. In this paper, we provide a sensitivity analysis technique for natural direct and indirect effects that is applicable even if there are mediator-outcome confounders affected by the exposure. We give techniques for both the difference and risk ratio scales and compare the technique to other possible approaches. PMID:25580387
NEAT: an efficient network enrichment analysis test.
Signorelli, Mirko; Vinciotti, Veronica; Wit, Ernst C
2016-09-05
Network enrichment analysis is a powerful method that integrates gene enrichment analysis with the information on relationships between genes provided by gene networks. Existing tests for network enrichment analysis deal only with undirected networks; they can be computationally slow and are based on normality assumptions. We propose NEAT, a test for network enrichment analysis. The test is based on the hypergeometric distribution, which naturally arises as the null distribution in this context. NEAT can be applied not only to undirected but also to directed and partially directed networks. Our simulations indicate that NEAT is considerably faster than alternative resampling-based methods, and that its capacity to detect enrichments is at least as good as that of alternative tests. We discuss applications of NEAT to network analyses in yeast by testing for enrichment of the Environmental Stress Response target gene set with GO Slim and KEGG functional gene sets, and also by inspecting associations between the functional sets themselves. NEAT is a flexible and efficient test for network enrichment analysis that aims to overcome some limitations of existing resampling-based tests. The method is implemented in the R package neat, which can be freely downloaded from CRAN ( https://cran.r-project.org/package=neat ).
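The hypergeometric null described above can be sketched on a toy directed network. In this simplified reading of the idea (gene sets and edges below are hypothetical), the number of links from set A to set B is compared against a hypergeometric draw of A's out-links among all link endpoints:

```python
from math import comb

def hypergeom_sf(obs, N, K, n):
    """P(X >= obs) for X ~ Hypergeometric(population N, successes K, draws n)."""
    return sum(comb(K, k) * comb(N - K, n - k) / comb(N, n)
               for k in range(obs, min(K, n) + 1))

# Toy directed network as an edge list (hypothetical genes 0..5).
edges = [(0, 4), (0, 5), (1, 4), (1, 5), (2, 3), (3, 2)]
A, B = {0, 1}, {4, 5}

out_A = sum(1 for u, v in edges if u in A)            # out-degree of set A
in_B = sum(1 for u, v in edges if v in B)             # in-degree of set B
n_AB = sum(1 for u, v in edges if u in A and v in B)  # observed A -> B links
D = len(edges)                                        # total number of links

p = hypergeom_sf(n_AB, D, in_B, out_A)                # enrichment p-value
print(round(p, 4))
```

Because the null distribution is available in closed form, no resampling is needed, which is the source of the speed advantage the abstract reports.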
NASA Astrophysics Data System (ADS)
Vasilevsky, A. M.; Konoplev, G. A.; Stepanova, O. S.; Toropov, D. K.; Zagorsky, A. L.
2016-04-01
A novel direct spectrophotometric method for the quantitative determination of the Oxiphore® drug substance (a synthetic polyhydroquinone complex) in food supplements is developed. Absorption spectra of Oxiphore® water solutions in the ultraviolet region are presented. Sample preparation procedures and mathematical methods for post-analytical processing of the spectra are discussed. Basic characteristics of the automatic CCD-based UV spectrophotometer and the special software implementing the developed method are described. The results of trials of the developed method and software are analyzed: the error of determination of the Oxiphore® concentration in water solutions of the isolated substance and in single-component food supplements did not exceed 15% (the average error was 7-10%).
Bushon, R.N.; Brady, A.M.; Likirdopulos, C.A.; Cireddu, J.V.
2009-01-01
Aims: The aim of this study was to examine a rapid method for detecting Escherichia coli and enterococci in recreational water. Methods and Results: Water samples were assayed for E. coli and enterococci by traditional and immunomagnetic separation/adenosine triphosphate (IMS/ATP) methods. Three sample treatments were evaluated for the IMS/ATP method: double filtration, single filtration, and direct analysis. Pearson's correlation analysis showed strong, significant, linear relations between IMS/ATP and traditional methods for all sample treatments; strongest linear correlations were with the direct analysis (r = 0.62 and 0.77 for E. coli and enterococci, respectively). Additionally, simple linear regression was used to estimate bacteria concentrations as a function of IMS/ATP results. The correct classification of water-quality criteria was 67% for E. coli and 80% for enterococci. Conclusions: The IMS/ATP method is a viable alternative to traditional methods for faecal-indicator bacteria. Significance and Impact of the Study: The IMS/ATP method addresses critical public health needs for the rapid detection of faecal-indicator contamination and has potential for satisfying US legislative mandates requiring methods to detect bathing water contamination in 2 h or less. Moreover, IMS/ATP equipment is considerably less costly and more portable than that for molecular methods, making the method suitable for field applications. © 2009 The Authors.
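The statistical step described above, relating rapid IMS/ATP readings to traditional culture counts via Pearson's correlation and simple linear regression, can be sketched as follows. The data are synthetic, not from the study:

```python
import numpy as np

rng = np.random.default_rng(42)
atp = rng.uniform(1, 5, 40)                                # log10 IMS/ATP signal
culture = 0.8 * atp + 0.5 + 0.2 * rng.standard_normal(40)  # log10 culture count

r = np.corrcoef(atp, culture)[0, 1]                # Pearson correlation
slope, intercept = np.polyfit(atp, culture, 1)     # least-squares regression line

# Predict a bacteria concentration from a new rapid-method reading.
predicted = slope * 3.0 + intercept
print(round(r, 2), round(slope, 2))
```

The fitted line is what allows an IMS/ATP reading taken in the field to be converted into an estimated indicator-bacteria concentration for comparison against water-quality criteria.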
Qian, Zhi-Ming; Wang, Shuo Hong; Cheng, Xi En; Chen, Yan Qiu
2016-06-23
Fish tracking is an important step for video based analysis of fish behavior. Due to severe body deformation and mutual occlusion of multiple swimming fish, accurate and robust fish tracking from video image sequence is a highly challenging problem. The current tracking methods based on motion information are not accurate and robust enough to track the waving body and handle occlusion. In order to better overcome these problems, we propose a multiple fish tracking method based on fish head detection. The shape and gray scale characteristics of the fish image are employed to locate the fish head position. For each detected fish head, we utilize the gray distribution of the head region to estimate the fish head direction. Both the position and direction information from fish detection are then combined to build a cost function of fish swimming. Based on the cost function, global optimization method can be applied to associate the target between consecutive frames. Results show that our method can accurately detect the position and direction information of fish head, and has a good tracking performance for dozens of fish. The proposed method can successfully obtain the motion trajectories for dozens of fish so as to provide more precise data to accommodate systematic analysis of fish behavior.
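The data-association step described above can be sketched directly: combine head-position distance and heading change into a cost, then choose the globally cheapest matching between detections in consecutive frames. The weights and detections below are illustrative assumptions; with only a handful of fish, brute force over permutations suffices in place of a dedicated assignment solver.

```python
import itertools
import math

# Each detection: ((x, y) head position, heading angle in radians).
prev = [((0.0, 0.0), 0.0), ((10.0, 0.0), math.pi / 2)]
curr = [((9.5, 1.0), math.pi / 2.1), ((1.0, 0.5), 0.1)]

def cost(a, b, w_pos=1.0, w_dir=2.0):
    """Swimming cost combining position distance and wrapped heading change."""
    (xa, ya), ha = a
    (xb, yb), hb = b
    d_pos = math.hypot(xb - xa, yb - ya)
    d_dir = abs((hb - ha + math.pi) % (2 * math.pi) - math.pi)
    return w_pos * d_pos + w_dir * d_dir

# Globally optimal assignment of previous tracks to current detections.
best = min(itertools.permutations(range(len(curr))),
           key=lambda perm: sum(cost(prev[i], curr[j])
                                for i, j in enumerate(perm)))
print(best)
```

Here fish 0 is matched to detection 1 and fish 1 to detection 0, since swapping them would require implausibly large jumps in both position and heading.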
Functional Extended Redundancy Analysis
ERIC Educational Resources Information Center
Hwang, Heungsun; Suk, Hye Won; Lee, Jang-Han; Moskowitz, D. S.; Lim, Jooseop
2012-01-01
We propose a functional version of extended redundancy analysis that examines directional relationships among several sets of multivariate variables. As in extended redundancy analysis, the proposed method posits that a weighted composite of each set of exogenous variables influences a set of endogenous variables. It further considers endogenous…
Efficient multiscale magnetic-domain analysis of iron-core material under mechanical stress
NASA Astrophysics Data System (ADS)
Nishikubo, Atsushi; Ito, Shumpei; Mifune, Takeshi; Matsuo, Tetsuji; Kaido, Chikara; Takahashi, Yasuhito; Fujiwara, Koji
2018-05-01
For an efficient analysis of magnetization, a partial-implicit solution method is improved using an assembled domain structure model with six-domain mesoscopic particles exhibiting pinning-type hysteresis. The quantitative analysis of non-oriented silicon steel succeeds in predicting the stress dependence of hysteresis loss with computation times greatly reduced by using the improved partial-implicit method. The effect of cell division along the thickness direction is also evaluated.
Dynamic analysis for shuttle design verification
NASA Technical Reports Server (NTRS)
Fralich, R. W.; Green, C. E.; Rheinfurth, M. H.
1972-01-01
Two approaches that are used for determining the modes and frequencies of space shuttle structures are discussed. The first method, direct numerical analysis, involves finite element mathematical modeling of the space shuttle structure in order to use computer programs for dynamic structural analysis. The second method utilizes modal-coupling techniques of experimental verification made by vibrating only spacecraft components and by deducing modes and frequencies of the complete vehicle from results obtained in the component tests.
Glavatskiĭ, A Ia; Guzhovskaia, N V; Lysenko, S N; Kulik, A V
2005-12-01
The authors propose a possible preoperative diagnosis of the degree of supratentorial brain glioma anaplasia using statistical analysis methods. It relies on a complex examination of 934 patients with degree I-IV anaplasia treated at the Institute of Neurosurgery from 1990 to 2004. The use of statistical analysis methods for differential diagnosis of the degree of brain glioma anaplasia may optimize the diagnostic algorithm, increase the reliability of the obtained data and, in some cases, avoid unwarranted surgical interventions. Clinically important signs for the use of statistical analysis methods directed at preoperative diagnosis of brain glioma anaplasia have been defined.
Langley, Robin S; Cotoni, Vincent
2010-04-01
Large sections of many types of engineering construction can be considered to constitute a two-dimensional periodic structure, with examples ranging from an orthogonally stiffened shell to a honeycomb sandwich panel. In this paper, a method is presented for computing the boundary (or edge) impedance of a semi-infinite two-dimensional periodic structure, a quantity which is referred to as the direct field boundary impedance matrix. This terminology arises from the fact that none of the waves generated at the boundary (the direct field) are reflected back to the boundary in a semi-infinite system. The direct field impedance matrix can be used to calculate elastic wave transmission coefficients, and also to calculate the coupling loss factors (CLFs), which are required by the statistical energy analysis (SEA) approach to predicting high frequency vibration levels in built-up systems. The calculation of the relevant CLFs enables a two-dimensional periodic region of a structure to be modeled very efficiently as a single subsystem within SEA, and also within related methods, such as a recently developed hybrid approach, which couples the finite element method with SEA. The analysis is illustrated by various numerical examples involving stiffened plate structures.
Zhang, Baile; Gao, Lihong; Xie, Yingshuang; Zhou, Wei; Chen, Xiaofeng; Lei, Chunni; Zhang, Huan
2017-07-08
A direct analysis in real time tandem mass spectrometry (DART-MS/MS) method was established for quickly screening five poppy shell alkaloids illegally added to hot pot condiments, beef noodle soup and seasonings. The samples were extracted and purified with acetonitrile, and then introduced via DART in positive ion mode under the following conditions: ionization temperature of 300 °C, grid electrode voltage of 150 V and sampling rate of 0.8 mm/s. Determination was conducted by tandem mass spectrometry in multiple reaction monitoring (MRM) mode. The method is simple and rapid, and can meet the requirements of rapid screening and analysis of large quantities of samples.
The Extraction of Terrace in the Loess Plateau Based on radial method
NASA Astrophysics Data System (ADS)
Liu, W.; Li, F.
2016-12-01
Terraces on the Loess Plateau are a typical artificial landform and an important soil and water conservation measure; locating and automatically extracting them would simplify land use investigation. Existing terrace extraction methods fall into visual interpretation and automatic extraction. The manual method is used in land use investigation, but it is time-consuming and laborious. Several automatic extraction methods have been proposed. The Fourier transform method can recognize terraces and locate them accurately in the frequency-domain image, but it is strongly affected by linear objects oriented in the same direction as the terraces. Texture analysis is simple and widely applicable in image processing, but it cannot recognize terrace edges. Object-oriented classification is a newer image classification method, but when applied to terrace extraction it produces fractured polygons whose geomorphological meaning is difficult to interpret. To locate the terraces, we use high-resolution remote sensing imagery and extract and analyze the gray values of the pixels that each radial passes through. The recognition process has three steps: first, the positions of peak points are roughly determined from DEM data analysis or by manual selection; second, radials are cast in all directions from each peak point; finally, the gray values of the pixels along each radial are extracted and their variation characteristics are analyzed to determine whether a terrace is present. To obtain accurate terrace positions, the algorithms were designed with full consideration of terrace discontinuity, extension direction, ridge width, the image processing algorithm, remote sensing image illumination and other influencing factors.
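The core of the radial analysis described above can be sketched as follows: sample gray values along a radial cast from a peak point and count bright/dark alternations, which a terraced slope exhibits and a smooth slope does not. The nearest-neighbour sampling scheme and the threshold are illustrative assumptions:

```python
import math

def sample_radial(image, center, angle_deg, length):
    """Gray values of pixels along a radial from `center` (row, col),
    sampled nearest-neighbour at unit steps."""
    r0, c0 = center
    th = math.radians(angle_deg)
    profile = []
    for step in range(length):
        r = int(round(r0 + step * math.sin(th)))
        c = int(round(c0 + step * math.cos(th)))
        if 0 <= r < len(image) and 0 <= c < len(image[0]):
            profile.append(image[r][c])
    return profile

def count_alternations(profile, threshold):
    """Number of bright/dark transitions along the profile; a terraced
    slope shows quasi-periodic alternation, a smooth slope does not."""
    signs = [v >= threshold for v in profile]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# Synthetic 8x8 image: vertical stripes mimic terrace ridges east of a peak.
img = [[255 if (c // 2) % 2 else 0 for c in range(8)] for r in range(8)]
prof = sample_radial(img, (4, 0), 0.0, 8)
print(count_alternations(prof, 128))  # 3
```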
Alternative industrial carbon emissions benchmark based on input-output analysis
NASA Astrophysics Data System (ADS)
Han, Mengyao; Ji, Xi
2016-12-01
Some problems exist in current carbon emissions benchmark setting systems. Industrial carbon emissions standards are based primarily on direct carbon emissions (power-related emissions), and only a portion of indirect emissions is considered in current carbon emissions accounting processes. This practice is insufficient and may cause double counting to some extent due to mixed emission sources. To better integrate and quantify direct and indirect carbon emissions, an embodied industrial carbon emissions benchmark setting method based on input-output analysis is proposed to guide the establishment of carbon emissions benchmarks. This method links direct carbon emissions with inter-industrial economic exchanges and systematically quantifies the carbon emissions embodied in total product delivery chains. The purpose of this study is to design a practical new set of embodied intensity-based benchmarks for both direct and indirect carbon emissions. Beijing, at the first level of carbon emissions trading pilot schemes in China, plays a significant role in the establishment of these schemes and is chosen as an example in this study. The newly proposed method relates emissions directly to each responsible party through the measurement of complex production and supply chains, so as to reduce carbon emissions at their original sources. The method can be developed further under uncertain internal and external contexts and is expected to be generalized to guide the establishment of industrial benchmarks for carbon emissions trading schemes in China and other countries.
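The embodied-intensity idea rests on standard input-output algebra: with a direct-intensity vector f and a direct-requirements matrix A, embodied intensities are e = f (I - A)^(-1), where (I - A)^(-1) is the Leontief inverse. A minimal sketch with a hypothetical two-sector economy (all numbers invented for illustration):

```python
def leontief_inverse_2x2(A):
    """(I - A)^-1 for a 2x2 direct-requirements matrix, closed form."""
    a, b = 1.0 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1.0 - A[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Hypothetical 2-sector economy: A[i][j] = input from sector i needed per
# unit output of sector j; f[i] = direct emission intensity of sector i.
A = [[0.1, 0.2],
     [0.3, 0.1]]
f = [2.0, 1.0]

L = leontief_inverse_2x2(A)
# Embodied intensity of sector j: e_j = sum_i f_i * L[i][j], i.e. direct
# emissions accumulated along the whole supply chain of one unit of j.
e = [sum(f[i] * L[i][j] for i in range(2)) for j in range(2)]
print([round(x, 3) for x in e])  # [2.8, 1.733]
```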
Interior-Point Methods for Linear Programming: A Challenge to the Simplex Method
1988-07-01
subsequently found that the method was first proposed by Dikin in 1967 [6]. Search directions are generated by the same system (5). Any hint of quadratic...1982). Inexact Newton methods, SIAM Journal on Numerical Analysis 19, 400-408. [6] I. I. Dikin (1967). Iterative solution of problems of linear and
This is a sampling and analysis method for the determination of asbestos in air. Samples are analyzed by transmission electron microscopy (TEM). Although a small subset of samples are to be prepared using a direct procedure, the majority of samples analyzed using this method wil...
Laser-induced breakdown spectroscopy for specimen analysis
Kumar, Akshaya; Yu-Yueh, Fang; Burgess, Shane C.; Singh, Jagdish P.
2006-08-15
The present invention is directed to an apparatus, a system and a method for detecting the presence or absence of trace elements in a biological sample using Laser-Induced Breakdown Spectroscopy. The trace elements are used to develop a signature profile which is analyzed directly or compared with the known profile of a standard. In one aspect of the invention, the apparatus, system and method are used to detect malignant cancer cells in vivo.
2001-08-01
Benefit the Direct and Indirect Methods 16 Figure 1: Military Exchanges’ Food Sales for Fiscal Year 1999 6 Abbreviations AAFES Army and Air Force...military installation. As the franchisee, the exchange service builds and operates the restaurants and directly employs and trains the personnel. In...they do not routinely develop a business case analysis, which would include weighing financial benefits with other factors, when determining which
Wavelet bases on the L-shaped domain
NASA Astrophysics Data System (ADS)
Jouini, Abdellatif; Lemarié-Rieusset, Pierre Gilles
2013-07-01
We present in this paper two elementary constructions of multiresolution analyses on the L-shaped domain D. In the first, we describe a direct method to define an orthonormal multiresolution analysis. In the second, we use the decomposition method to construct a biorthogonal multiresolution analysis. These analyses are adapted to the study of the Sobolev spaces H^s(D) (s ∈ N).
Methods for magnetic resonance analysis using magic angle technique
Hu, Jian Zhi [Richland, WA; Wind, Robert A [Kennewick, WA; Minard, Kevin R [Kennewick, WA; Majors, Paul D [Kennewick, WA
2011-11-22
Methods of performing a magnetic resonance analysis of a biological object are disclosed that include placing the object in a main magnetic field (that has a static field direction) and in a radio frequency field; rotating the object at a frequency of less than about 100 Hz around an axis positioned at an angle of about 54°44' relative to the main magnetic static field direction; pulsing the radio frequency to provide a sequence that includes a phase-corrected magic angle turning pulse segment; and collecting data generated by the pulsed radio frequency. In particular embodiments the method includes pulsing the radio frequency to provide at least two of a spatially selective read pulse, a spatially selective phase pulse, and a spatially selective storage pulse. Further disclosed methods provide pulse sequences that provide extended imaging capabilities, such as chemical shift imaging or multiple-voxel data acquisition.
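The magic angle named in the claims is not arbitrary: it is the root of 3·cos²θ − 1 = 0, i.e. θ = arccos(1/√3), which evaluates to the quoted 54°44':

```python
import math

# The magic angle satisfies 3*cos(theta)**2 - 1 = 0.
theta = math.degrees(math.acos(1.0 / math.sqrt(3.0)))
deg = int(theta)
minutes = round((theta - deg) * 60)
print(f"{theta:.4f} deg = {deg} deg {minutes}'")  # 54.7356 deg = 54 deg 44'
```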
NASA Astrophysics Data System (ADS)
Le Foll, S.; André, F.; Delmas, A.; Bouilly, J. M.; Aspa, Y.
2012-06-01
A backward Monte Carlo method for modelling the spectral directional emittance of fibrous media has been developed. It uses Mie theory to calculate the radiative properties of single fibres, modelled as infinite cylinders, with the complex refractive index computed from a Drude-Lorentz model for the dielectric function. The absorption and scattering coefficients are homogenised over several fibres, but the scattering phase function of a single fibre is used to determine the scattering direction of energy inside the medium. Sensitivity analysis based on several Monte Carlo results has been performed to estimate coefficients for a Multi-Linear Model (MLM) developed specifically for inverse analysis of experimental data. This model agrees with the Monte Carlo method and is highly computationally efficient. In contrast, the surface emissivity model, which assumes an opaque medium, shows poor agreement with the reference Monte Carlo calculations.
Direct magnetic field estimation based on echo planar raw data.
Testud, Frederik; Splitthoff, Daniel Nicolas; Speck, Oliver; Hennig, Jürgen; Zaitsev, Maxim
2010-07-01
Gradient recalled echo echo planar imaging is widely used in functional magnetic resonance imaging. The fast data acquisition is, however, very sensitive to field inhomogeneities which manifest themselves as artifacts in the images. Typically used correction methods have the common deficit that the data for the correction are acquired only once at the beginning of the experiment, assuming the field inhomogeneity distribution B(0) does not change over the course of the experiment. In this paper, methods to extract the magnetic field distribution from the acquired k-space data or from the reconstructed phase image of a gradient echo planar sequence are compared and extended. A common derivation for the presented approaches provides a solid theoretical basis, enables a fair comparison and demonstrates the equivalence of the k-space and the image phase based approaches. The image phase analysis is extended here to calculate the local gradient in the readout direction and improvements are introduced to the echo shift analysis, referred to here as "k-space filtering analysis." The described methods are compared to experimentally acquired B(0) maps in phantoms and in vivo. The k-space filtering analysis presented in this work demonstrated to be the most sensitive method to detect field inhomogeneities.
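For context, the experimentally acquired B(0) maps used as ground truth above are conventionally computed from the phase difference of two gradient echoes, Δφ = γ·B0·ΔTE. A minimal sketch of that dual-echo reference method (one simulated voxel; the echo times and field offset are invented for illustration):

```python
import math

GAMMA = 2 * math.pi * 42.577e6  # proton gyromagnetic ratio, rad/s/T

def b0_from_dual_echo(phi1, phi2, te1, te2):
    """Per-voxel field offset (T) from the phase difference of two
    gradient echoes: delta_phi = GAMMA * B0 * delta_TE."""
    dphi = [((p2 - p1 + math.pi) % (2 * math.pi)) - math.pi  # wrap to [-pi, pi)
            for p1, p2 in zip(phi1, phi2)]
    dte = te2 - te1
    return [d / (GAMMA * dte) for d in dphi]

# Simulate one voxel with a 1e-7 T offset, echoes at TE = 5 ms and 7 ms.
b0_true = 1e-7
te1, te2 = 5e-3, 7e-3
phi1 = [GAMMA * b0_true * te1]
phi2 = [GAMMA * b0_true * te2]
est = b0_from_dual_echo(phi1, phi2, te1, te2)
print(est)
```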
Direct and ultrasonic measurements of macroscopic piezoelectricity in sintered hydroxyapatite
NASA Astrophysics Data System (ADS)
Tofail, S. A. M.; Haverty, D.; Cox, F.; Erhart, J.; Hána, P.; Ryzhenko, V.
2009-03-01
Macroscopic piezoelectricity in hydroxyapatite (HA) ceramic was measured by a direct quasistatic method and an ultrasonic interference technique. The effective symmetry of the polycrystalline aggregate was established and a detailed theoretical analysis was carried out to determine the shear piezoelectric coefficient, d14, of HA by these two methods. The piezoelectric nature of HA was demonstrated qualitatively, although a specific quantitative value for the d14 coefficient could not be established. The ultrasound method was also employed to determine anisotropic elastic constants, which agreed well with those calculated from first principles.
Anwar, A R; Muthalib, M; Perrey, S; Galka, A; Granert, O; Wolff, S; Deuschl, G; Raethjen, J; Heute, U; Muthuraman, M
2012-01-01
Directionality analysis of signals originating from different parts of the brain during motor tasks has attracted considerable interest. Since brain activity can be recorded over time, methods of time series analysis can be applied to medical time series as well. Granger causality is a method for finding causal relationships between time series. Such causality can be regarded as a directional connection and is not necessarily bidirectional. The aim of this study is to differentiate between different motor tasks on the basis of activation maps and to understand the nature of the connections present between different parts of the brain. In this paper, three motor tasks (finger tapping, simple finger sequencing, and complex finger sequencing) are analyzed. Time series for each task were extracted from functional magnetic resonance imaging (fMRI) data, which offer very good spatial resolution and allow examination of the sub-cortical regions of the brain. Activation maps based on fMRI images show that, for complex finger sequencing, most parts of the brain are active, unlike finger tapping, during which only limited regions show activity. Directionality analysis of time series extracted from the contralateral motor cortex (CMC), supplementary motor area (SMA), and cerebellum (CER) shows bidirectional connections between these parts of the brain. For simple and complex finger sequencing, the strongest connections originate from the SMA and CMC, while connections originating from the CER in either direction are the weakest in magnitude during all paradigms.
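The core Granger causality computation compares the residual variance of an autoregressive model of one series with and without lagged terms of the other. A minimal pairwise sketch with one lag, run on a simulated pair where x drives y (the lag order and simulation are illustrative, not the paper's fMRI pipeline):

```python
import math
import random

def ar_residual_var(y, regressors):
    """Least-squares fit of y on the given regressor series (lists of the
    same length as y); returns the residual variance."""
    n, k = len(y), len(regressors)
    xtx = [[sum(regressors[i][t] * regressors[j][t] for t in range(n))
            for j in range(k)] for i in range(k)]
    xty = [sum(regressors[i][t] * y[t] for t in range(n)) for i in range(k)]
    for col in range(k):                      # Gauss-Jordan solve of the
        piv = xtx[col][col]                   # normal equations
        for j in range(col, k):
            xtx[col][j] /= piv
        xty[col] /= piv
        for row in range(k):
            if row != col:
                f = xtx[row][col]
                for j in range(col, k):
                    xtx[row][j] -= f * xtx[col][j]
                xty[row] -= f * xty[col]
    resid = [y[t] - sum(xty[i] * regressors[i][t] for i in range(k))
             for t in range(n)]
    return sum(r * r for r in resid) / n

def granger(x, y):
    """Granger index for x -> y with one lag: log of the ratio of residual
    variances without and with the lagged x term (>= 0; larger = stronger)."""
    yt, ylag, xlag = y[1:], y[:-1], x[:-1]
    return math.log(ar_residual_var(yt, [ylag]) /
                    ar_residual_var(yt, [ylag, xlag]))

random.seed(0)
x, y = [0.0], [0.0]
for _ in range(2000):                         # x drives y with one lag
    x.append(0.5 * x[-1] + random.gauss(0, 1))
    y.append(0.5 * y[-1] + 0.8 * x[-2] + random.gauss(0, 1))
print(granger(x, y) > granger(y, x))  # True
```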
Development of indirect EFBEM for radiating noise analysis including underwater problems
NASA Astrophysics Data System (ADS)
Kwon, Hyun-Wung; Hong, Suk-Yoon; Song, Jee-Hun
2013-09-01
For the analysis of radiating noise problems in medium-to-high frequency ranges, the Energy Flow Boundary Element Method (EFBEM) was developed. EFBEM is an analysis technique that applies the Boundary Element Method (BEM) to Energy Flow Analysis (EFA). Fundamental solutions representing the spherical wave property for radiating noise problems in the open field, and accounting for the free-surface effect underwater, are developed. A directivity factor is also developed to express the wave's directivity patterns in medium-to-high frequency ranges. Indirect EFBEM using the fundamental solutions and fictitious sources was applied successfully to open-field and underwater noise problems. Through numerical applications, the acoustic energy density distributions due to vibration of a simple plate model and a sphere model were compared with those of a commercial code, and the comparison showed good agreement in the level and pattern of the energy density distributions.
Vessel Enhancement and Segmentation of 4D CT Lung Image Using Stick Tensor Voting
NASA Astrophysics Data System (ADS)
Cong, Tan; Hao, Yang; Jingli, Shi; Xuan, Yang
2016-12-01
Vessel enhancement and segmentation play a significant role in medical image analysis. This paper proposes a novel vessel enhancement and segmentation method for 4D CT lung images using a stick tensor voting algorithm, which addresses the vessel distortion issue of the vessel enhancement diffusion (VED) method. The enhanced results are then easily segmented using level-set segmentation. In our method, vessels are first filtered using Frangi's filter to reduce intrapulmonary noise and extract rough blood vessels. Second, the stick tensor voting algorithm is employed to estimate the correct direction along each vessel. This estimated direction is then used as the anisotropic diffusion direction in the VED algorithm, which makes the intensity diffusion of points located on the vessel wall consistent with the vessel directions and enhances the tubular features of vessels. Finally, vessels are extracted from the enhanced image by applying a level-set segmentation method. Experimental results show that our method outperforms the traditional VED method in vessel enhancement and yields satisfactory vessel segmentations.
Transient analysis using conical shell elements
NASA Technical Reports Server (NTRS)
Yang, J. C. S.; Goeller, J. E.; Messick, W. T.
1973-01-01
The use of the NASTRAN conical shell element in static, eigenvalue, and direct transient analyses is demonstrated. The results of a NASTRAN static solution of an externally pressurized ring-stiffened cylinder agree well with a theoretical discontinuity analysis. Good agreement is also obtained between the NASTRAN direct transient response of a uniform cylinder to a dynamic end load and one-dimensional solutions obtained using a method of characteristics stress wave code and a standing wave solution. Finally, a NASTRAN eigenvalue analysis is performed on a hydroballistic model idealized with conical shell elements.
Zhao, Ming; Huang, Run; Peng, Leilei
2012-11-19
Förster resonant energy transfer (FRET) is extensively used to probe macromolecular interactions and conformation changes. The established FRET lifetime analysis method measures the FRET process through its effect on the donor lifetime. In this paper we present a method that directly probes the time-resolved FRET signal with frequency domain Fourier lifetime excitation-emission matrix (FLEEM) measurements. FLEEM separates fluorescent signals by their different photon energy pathways from excitation to emission. The FRET process generates a unique signal channel that is initiated by donor excitation but ends with acceptor emission. Time-resolved analysis of the FRET EEM channel allows direct measurements on the FRET process, unaffected by free fluorophores that might be present in the sample. Together with time-resolved analysis on non-FRET channels, i.e. donor and acceptor EEM channels, time resolved EEM analysis allows precise quantification of FRET in the presence of free fluorophores. The method is extended to three-color FRET processes, where quantification with traditional methods remains challenging because of the significantly increased complexity in the three-way FRET interactions. We demonstrate the time-resolved EEM analysis method with quantification of three-color FRET in incompletely hybridized triple-labeled DNA oligonucleotides. Quantitative measurements of the three-color FRET process in triple-labeled dsDNA are obtained in the presence of free single-labeled ssDNA and double-labeled dsDNA. The results establish a quantification method for studying multi-color FRET between multiple macromolecules in biochemical equilibrium.
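For reference, the established donor-lifetime analysis that the paper contrasts with estimates FRET efficiency from the shortening of the donor lifetime, E = 1 − τ_DA/τ_D, from which a donor-acceptor distance follows via the Förster radius R0. A minimal sketch with illustrative lifetimes and radius:

```python
def fret_efficiency(tau_da, tau_d):
    """Donor-lifetime estimate of FRET efficiency: E = 1 - tau_DA / tau_D."""
    return 1.0 - tau_da / tau_d

def fret_distance(efficiency, r0):
    """Donor-acceptor distance from E and the Foerster radius R0:
    r = R0 * (1/E - 1)**(1/6)."""
    return r0 * (1.0 / efficiency - 1.0) ** (1.0 / 6.0)

# Illustrative numbers: donor lifetime 4.0 ns alone, 1.0 ns with the
# acceptor present, Foerster radius 5.0 nm.
E = fret_efficiency(1.0, 4.0)
print(E, round(fret_distance(E, 5.0), 3))  # 0.75 4.163
```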
Calculation methods study on hot spot stress of new girder structure detail
NASA Astrophysics Data System (ADS)
Liao, Ping; Zhao, Renda; Jia, Yi; Wei, Xing
2017-10-01
To study modeling and calculation methods for the hot spot stress of a new girder structure detail, several finite element models of this welded detail were established in the finite element software ANSYS, based on the surface extrapolation variant of the hot spot stress method. The influence of element type, mesh density, local modeling method at the weld toe and extrapolation method on the calculated hot spot stress at the weld toe was analyzed. The results show that the difference in normal stress between the thickness direction and the surface direction among the models grows as the distance from the weld toe decreases. When the distance from the toe exceeds 0.5t, the normal stresses of solid models, shell models with welds and shell models without welds converge along the surface direction; it is therefore recommended that the extrapolation points for this new girder welded detail be selected beyond 0.5t. The analysis also shows that shell models have good mesh stability and that the extrapolated hot spot stress of solid models is smaller than that of shell models, so formula 2 together with the solid45 element is suggested for the hot spot stress extrapolation of this welded detail. For each finite element model, regardless of shell modeling method, the results calculated by formula 2 are smaller than those of the other two formulas, and shell models with welds give the largest results. Under the same local mesh density, the extrapolated hot spot stress decreases gradually as the number of element layers through the thickness of the main plate increases, with a variation range within 7.5%.
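Surface extrapolation itself is a two-point linear fit toward the weld toe. The sketch below uses the common 0.4t/1.0t rule for illustration; the abstract's "formula 2" is not specified here, so these coefficients are an assumption:

```python
def hot_spot_stress(sigma_04t, sigma_10t):
    """Two-point linear surface extrapolation to the weld toe from stresses
    read at 0.4t and 1.0t in front of the toe (a common IIW-type rule):
    sigma_hs = (5/3)*sigma(0.4t) - (2/3)*sigma(1.0t)."""
    return (5.0 / 3.0) * sigma_04t - (2.0 / 3.0) * sigma_10t

# For a stress field linear in the distance x from the toe, sigma(x) = a + b*x
# (x in multiples of plate thickness t), the extrapolation recovers sigma(0) = a.
a, b = 120.0, -30.0              # MPa at the toe, slope per thickness t
s04, s10 = a + 0.4 * b, a + 1.0 * b
print(round(hot_spot_stress(s04, s10), 6))  # 120.0
```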
A comparison of analysis methods to estimate contingency strength.
Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T
2018-05-09
To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.
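One simple contingency index of the kind compared above is delta-p: the probability of reinforcement given a response minus the probability given no response, computed over scored intervals. The sketch below is illustrative and is not any of the four methods verbatim:

```python
def delta_p(intervals):
    """Interval-based contingency strength: P(reinforcer | response) minus
    P(reinforcer | no response). Each interval is a (response, reinforcer)
    pair of booleans."""
    with_r = [sr for r, sr in intervals if r]
    without_r = [sr for r, sr in intervals if not r]
    p1 = sum(with_r) / len(with_r) if with_r else 0.0
    p0 = sum(without_r) / len(without_r) if without_r else 0.0
    return p1 - p0

# Perfect response-dependent schedule: reinforcer occurs iff a response did.
dependent = [(True, True), (False, False)] * 10
# Response-independent schedule: reinforcer in half the intervals regardless.
independent = [(True, True), (True, False), (False, True), (False, False)] * 5
print(delta_p(dependent), delta_p(independent))  # 1.0 0.0
```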
40 CFR 412.2 - General definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... also cites the approved methods of analysis. (d) Process wastewater means water directly or indirectly... facilities; direct contact swimming, washing, or spray cooling of animals; or dust control. Process... process wastewater from the production area is or may be applied. (f) New source is defined at 40 CFR 122...
40 CFR 412.2 - General definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... also cites the approved methods of analysis. (d) Process wastewater means water directly or indirectly... facilities; direct contact swimming, washing, or spray cooling of animals; or dust control. Process... process wastewater from the production area is or may be applied. (f) New source is defined at 40 CFR 122...
40 CFR 412.2 - General definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... also cites the approved methods of analysis. (d) Process wastewater means water directly or indirectly... facilities; direct contact swimming, washing, or spray cooling of animals; or dust control. Process... process wastewater from the production area is or may be applied. (f) New source is defined at 40 CFR 122...
Direct Allocation Costing: Informed Management Decisions in a Changing Environment.
ERIC Educational Resources Information Center
Mancini, Cesidio G.; Goeres, Ernest R.
1995-01-01
It is argued that colleges and universities can use direct allocation costing to provide quantitative information needed for decision making. This method of analysis requires institutions to modify traditional ideas of costing, looking to the private sector for examples of accurate costing techniques. (MSE)
Computerized method to compensate for breathing body motion in dynamic chest radiographs
NASA Astrophysics Data System (ADS)
Matsuda, H.; Tanaka, R.; Sanada, S.
2017-03-01
Dynamic chest radiography combined with computer analysis allows quantitative analyses of pulmonary function and rib motion. The accuracy of kinematic analysis is directly linked to diagnostic accuracy, and thus body motion compensation is a major concern. Our purpose in this study was to develop a computerized method to reduce breathing body motion in dynamic chest radiographs. Dynamic chest radiographs of 56 patients were obtained using a dynamic flat-panel detector. The images were divided into 1 cm squares, and the squares on the body contour were used to detect body motion. Velocity vectors were measured on the body contour using a cross-correlation method, and the body motion was then determined from the summation of the motion vectors. The body motion was then compensated by shifting the images according to the measured vector. Using our method, body motion was accurately detected to within a few pixels in clinical cases (mean 82.5% in the right and left directions). In addition, our method detected slight body motion that could not be identified by human observers. We confirmed that our method works effectively in kinematic analysis of rib motion. The present method should be useful for reducing breathing body motion in dynamic chest radiography.
Quantitative estimation of time-variable earthquake hazard by using fuzzy set theory
NASA Astrophysics Data System (ADS)
Deyi, Feng; Ichikawa, M.
1989-11-01
In this paper, the various methods of fuzzy set theory, called fuzzy mathematics, have been applied to the quantitative estimation of the time-variable earthquake hazard. The results obtained consist of the following. (1) Quantitative estimation of the earthquake hazard on the basis of seismicity data. By using some methods of fuzzy mathematics, seismicity patterns before large earthquakes can be studied more clearly and more quantitatively, highly active periods in a given region and quiet periods of seismic activity before large earthquakes can be recognized, similarities in temporal variation of seismic activity and seismic gaps can be examined and, on the other hand, the time-variable earthquake hazard can be assessed directly on the basis of a series of statistical indices of seismicity. Two methods of fuzzy clustering analysis, the method of fuzzy similarity, and the direct method of fuzzy pattern recognition have been studied in particular. One method of fuzzy clustering analysis is based on fuzzy netting, and another is based on the fuzzy equivalent relation. (2) Quantitative estimation of the earthquake hazard on the basis of observational data for different precursors. The direct method of fuzzy pattern recognition has been applied to research on earthquake precursors of different kinds. On the basis of the temporal and spatial characteristics of recognized precursors, earthquake hazards in different terms can be estimated. This paper mainly deals with medium-short-term precursors observed in Japan and China.
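One elementary ingredient of such fuzzy similarity analysis is the similarity degree of two fuzzy sets, commonly taken as the ratio of summed element-wise minima to summed element-wise maxima of their membership vectors. A small sketch (the categories and membership grades are invented for the example):

```python
def fuzzy_similarity(a, b):
    """Similarity degree of two fuzzy sets given by membership vectors:
    sum of element-wise minima over sum of element-wise maxima."""
    num = sum(min(x, y) for x, y in zip(a, b))
    den = sum(max(x, y) for x, y in zip(a, b))
    return num / den if den else 1.0

# Membership grades of three periods in illustrative categories such as
# "high activity", "quiescence", "clustering".
p1 = [0.9, 0.1, 0.6]
p2 = [0.8, 0.2, 0.5]
p3 = [0.1, 0.9, 0.2]
# p1 and p2 are similar seismicity patterns; p1 and p3 are not.
print(round(fuzzy_similarity(p1, p2), 3), round(fuzzy_similarity(p1, p3), 3))
```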
[Death rate by malnutrition in children under the age of five, Colombia].
Quiroga, Edwin Fernando
2012-01-01
Much higher mortalities occur in children under five in developing countries with high poverty rates compared with developed countries. Causes of death are related to perinatal conditions, measles, HIV/AIDS, diarrhea, respiratory diseases and others. Throughout the world, malnutrition has been identified as the underlying cause of approximately half of these deaths. The death rate due to malnutrition was described using an adjusted method that takes into account the difficulties of identifying malnutrition as a direct cause of death. A descriptive study included analysis of the International Classification of Diseases (ICD-10) vital statistics from 2003-2007. Death rates were estimated, a method of analysis of multiple causes was applied for infectious diseases, and death probabilities were calculated. Malnutrition was associated with infectious diseases. The frequency of infectious disease as a direct cause of death was almost seven times higher in cases with an antecedent of malnutrition. When adjusted death rate values were used, the initial value increased nearly five times. The probability of death after the adjustment for inadequate classification increased approximately four times. The Analysis of Multiple Causes Method was established as an effective method for analyzing malnutrition and infectious disease mortality in Colombia. Malnutrition may be a direct underlying cause of death in one of eight deaths in children <1 year old and one of three deaths in 1-4-year-olds.
Biniarz, Piotr; Łukaszewicz, Marcin
2017-06-01
The rapid and accurate quantification of biosurfactants in biological samples is challenging. In contrast to the orcinol method for rhamnolipids, no simple biochemical method is available for the rapid quantification of lipopeptides. Various liquid chromatography (LC) methods are promising tools for relatively fast and exact quantification of lipopeptides. Here, we report strategies for the quantification of the lipopeptides pseudofactin and surfactin in bacterial cultures using different high- (HPLC) and ultra-performance liquid chromatography (UPLC) systems. We tested three strategies for sample pretreatment prior to LC analysis. In direct analysis (DA), bacterial cultures were injected directly and analyzed via LC. As a modification, we diluted the samples with methanol and detected an increase in lipopeptide recovery in the presence of methanol. Therefore, we suggest this simple modification as a tool for increasing the accuracy of LC methods. We also tested freeze-drying followed by solvent extraction (FDSE) as an alternative for the analysis of "heavy" samples. In FDSE, the bacterial cultures were freeze-dried, and the resulting powder was extracted with different solvents. Then, the organic extracts were analyzed via LC. Here, we determined the influence of the extracting solvent on lipopeptide recovery. HPLC methods allowed us to quantify pseudofactin and surfactin with run times of 15 and 20 min per sample, respectively, whereas UPLC quantification was as fast as 4 and 5.5 min per sample, respectively. Our methods provide highly accurate measurements and high recovery levels for lipopeptides. At the same time, UPLC-MS provides the possibility to identify lipopeptides and their structural isoforms.
Masud, Mohammad Shahed; Borisyuk, Roman; Stuart, Liz
2017-07-15
This study analyses multiple spike train (MST) data, defines its functional connectivity and subsequently visualises an accurate diagram of connections. This is a challenging problem. For example, it is difficult to distinguish the common input and the direct functional connection of two spike trains. The new method presented in this paper is based on the traditional pairwise cross-correlation function (CCF) and a new combination of statistical techniques. First, the CCF is used to create the Advanced Correlation Grid (ACG), in which both the significant peak of the CCF and the corresponding time delay are used for detailed analysis of connectivity. Second, these two features of functional connectivity are used to classify connections. Finally, a visualization technique is used to represent the topology of functional connections. Examples are presented in the paper to demonstrate the new Advanced Correlation Grid method and to show how it enables discrimination between (i) influence from one spike train to another through an intermediate spike train and (ii) influence from one common spike train to another pair of analysed spike trains. The ACG method enables scientists to automatically distinguish direct connections from spurious connections, such as common-source and indirect connections, whereas existing methods require in-depth analysis to identify such connections. The ACG is a new and effective method for studying the functional connectivity of multiple spike trains. This method can identify all the direct connections accurately and can distinguish common-source and indirect connections automatically. Copyright © 2017 Elsevier B.V. All rights reserved.
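The pairwise CCF at the heart of the ACG method can be sketched as follows. This is a minimal illustration on synthetic binned spike trains, not the authors' implementation; all names, bin counts and the toy delay are hypothetical.

```python
import numpy as np

def cross_correlation_function(x, y, max_lag):
    """Normalised cross-correlation of two binned spike trains
    for lags -max_lag..+max_lag (in bins)."""
    x = (x - x.mean()) / (x.std() * len(x))
    y = (y - y.mean()) / y.std()
    lags = np.arange(-max_lag, max_lag + 1)
    # for lag k, correlate x[t] with y[t + k] over the overlapping range
    ccf = np.array([np.sum(x[max(0, -k):len(x) - max(0, k)] *
                           y[max(0, k):len(y) - max(0, -k)])
                    for k in lags])
    return lags, ccf

# Toy example: train y is a copy of train x delayed by 3 bins.
rng = np.random.default_rng(0)
x = (rng.random(2000) < 0.1).astype(float)   # binned spikes, ~10% occupancy
y = np.roll(x, 3)
lags, ccf = cross_correlation_function(x, y, max_lag=10)
peak_lag = lags[np.argmax(ccf)]              # the ACG would store peak + lag
```

The significant peak location (here 3 bins) and its height are exactly the two features the ACG combines for classifying a connection.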
Probabilistic boundary element method
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Raveendra, S. T.
1989-01-01
The purpose of the Probabilistic Structural Analysis Method (PSAM) project is to develop structural analysis capabilities for the design analysis of advanced space propulsion system hardware. The boundary element method (BEM) is used as the basis of the Probabilistic Advanced Analysis Methods (PADAM), which is discussed. The probabilistic BEM code (PBEM) is used to obtain the structural response and sensitivity results with respect to a set of random variables. As such, PBEM performs analogously to other structural analysis codes, such as finite element codes, in the PSAM system. For linear problems, unlike the finite element method (FEM), the BEM governing equations are written at the boundary of the body only; the method thus eliminates the need to model the volume of the body. However, for general body force problems, a direct condensation of the governing equations to the boundary of the body is not possible, and therefore volume modeling is generally required.
Causal inference in nonlinear systems: Granger causality versus time-delayed mutual information
NASA Astrophysics Data System (ADS)
Li, Songting; Xiao, Yanyang; Zhou, Douglas; Cai, David
2018-05-01
The Granger causality (GC) analysis has been extensively applied to infer causal interactions in dynamical systems arising from economy and finance, physics, bioinformatics, neuroscience, social science, and many other fields. In the presence of potential nonlinearity in these systems, the validity of the GC analysis in general is questionable. To illustrate this, here we first construct minimal nonlinear systems and show that the GC analysis fails to infer causal relations in these systems—it gives rise to all types of incorrect causal directions. In contrast, we show that the time-delayed mutual information (TDMI) analysis is able to successfully identify the direction of interactions underlying these nonlinear systems. We then apply both methods to neuroscience data collected from experiments and demonstrate that the TDMI analysis but not the GC analysis can identify the direction of interactions among neuronal signals. Our work exemplifies inference hazards in the GC analysis in nonlinear systems and suggests that the TDMI analysis can be an appropriate tool in such a case.
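The TDMI analysis can be illustrated on a toy nonlinear system in which linear GC-style measures struggle (y depends on the square of a delayed x). The histogram-based MI estimator below is a common simple choice, not necessarily the estimator used in the paper, and the system, delay and noise level are invented.

```python
import numpy as np

def time_delayed_mi(x, y, delay, bins=16):
    """Mutual information I(x(t); y(t+delay)) in nats,
    estimated from a 2-D histogram of the lagged pairs."""
    if delay > 0:
        x, y = x[:-delay], y[delay:]
    elif delay < 0:
        x, y = x[-delay:], y[:delay]
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy nonlinear system: y is driven by a squared copy of x, 5 steps later.
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
y = np.empty_like(x)
y[:5] = rng.standard_normal(5)
y[5:] = x[:-5] ** 2 + 0.1 * rng.standard_normal(4995)

mi_fwd = time_delayed_mi(x, y, delay=5)    # x -> y direction: large
mi_bwd = time_delayed_mi(x, y, delay=-5)   # y -> x direction: near zero
```

The asymmetry of TDMI across positive and negative delays is what identifies the direction of the interaction; a linear correlation of x with x² would be near zero in both directions.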
Direct PCR amplification of forensic touch and other challenging DNA samples: A review.
Cavanaugh, Sarah E; Bathrick, Abigail S
2018-01-01
DNA evidence sample processing typically involves DNA extraction, quantification, and STR amplification; however, DNA loss can occur at both the DNA extraction and quantification steps, which is not ideal for forensic evidence containing low levels of DNA. Direct PCR amplification of forensic unknown samples has been suggested as a means to circumvent extraction and quantification, thereby retaining the DNA typically lost during those procedures. Direct PCR amplification is a method in which a sample is added directly to an amplification reaction without being subjected to prior DNA extraction, purification, or quantification. It allows for maximum quantities of DNA to be targeted, minimizes opportunities for error and contamination, and reduces the time and monetary resources required to process samples, although data analysis may take longer as the increased DNA detection sensitivity of direct PCR may lead to more instances of complex mixtures. ISO 17025 accredited laboratories have successfully implemented direct PCR for limited purposes (e.g., high-throughput databanking analysis), and recent studies indicate that direct PCR can be an effective method for processing low-yield evidence samples. Despite its benefits, direct PCR has yet to be widely implemented across laboratories for the processing of evidentiary items. While forensic DNA laboratories are always interested in new methods that will maximize the quantity and quality of genetic information obtained from evidentiary items, there is often a lag between the advent of useful methodologies and their integration into laboratories. Delayed implementation of direct PCR of evidentiary items can be attributed to a variety of factors, including regulatory guidelines that prevent laboratories from omitting the quantification step when processing forensic unknown samples, as is the case in the United States, and, more broadly, a reluctance to validate a technique that is not widely used for evidence samples. 
The advantages of direct PCR of forensic evidentiary samples justify a re-examination of the factors that have delayed widespread implementation of this method and of the evidence supporting its use. In this review, the current and potential future uses of direct PCR in forensic DNA laboratories are summarized. Copyright © 2017 Elsevier B.V. All rights reserved.
McIlhenny, Ethan H; Pipkin, Kelly E; Standish, Leanna J; Wechkin, Hope A; Strassman, Rick; Barker, Steven A
2009-12-18
A direct injection/liquid chromatography-electrospray ionization-tandem mass spectrometry procedure has been developed for the simultaneous quantitation of 11 compounds potentially found in the increasingly popular Amazonian botanical medicine and religious sacrament ayahuasca. The method utilizes a deuterated internal standard for quantitation and affords rapid detection of the alkaloids by a simple dilution assay, requiring no extraction procedures. Further, the method demonstrates a high degree of specificity for the compounds in question, as well as low limits of detection and quantitation despite using samples for analysis that had been diluted up to 200:1. This approach also appears to eliminate potential matrix effects. Method bias for each compound, examined over a range of concentrations, was also determined as was inter- and intra-assay variation. Its application to the analysis of three different ayahuasca preparations is also described. This method should prove useful in the study of ayahuasca in clinical and ethnobotanical research as well as in forensic examinations of ayahuasca preparations.
Wang, Jinfeng; Zhao, Meng; Zhang, Min; Liu, Yang; Li, Hong
2014-01-01
We discuss and analyze an H¹-Galerkin mixed finite element (H¹-GMFE) method to obtain the numerical solution of the time-fractional telegraph equation. We introduce an auxiliary variable to reduce the original equation to a lower-order coupled system and then formulate an H¹-GMFE scheme with two important variables. We discretize the Caputo time-fractional derivatives using finite difference methods and approximate the spatial direction by applying the H¹-GMFE method. Based on the theoretical error analysis in the L²-norm for the scalar unknown and its gradient in the one-dimensional case, we obtain the optimal order of convergence in the space-time direction. Further, we also derive optimal error results for the scalar unknown in the H¹-norm. Moreover, we derive and analyze the stability of the H¹-GMFE scheme and give a priori error estimates in the two- and three-dimensional cases. To verify our theoretical analysis, we present numerical results computed in MATLAB. PMID:25184148
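Finite-difference discretization of a Caputo derivative is commonly realized with the L1 formula; the sketch below checks that formula against a known closed-form derivative for an order α ∈ (0, 1). This is an illustrative assumption, not necessarily the discretization used in the paper (the telegraph equation involves higher fractional orders), and all names are hypothetical.

```python
import math
import numpy as np

def caputo_l1(u, alpha, dt):
    """L1 approximation of the Caputo derivative of order alpha in (0,1)
    at the final time level, given samples u[0..n] on a uniform grid."""
    n = len(u) - 1
    # L1 weights b_k = (k+1)^(1-alpha) - k^(1-alpha)
    b = (np.arange(1, n + 1) ** (1 - alpha) -
         np.arange(0, n) ** (1 - alpha))
    du = np.diff(u)                              # u[k+1] - u[k]
    coef = dt ** (-alpha) / math.gamma(2 - alpha)
    return coef * np.sum(b[::-1] * du)           # sum_k b_{n-1-k} du[k]

# Check against the exact Caputo derivative of u(t) = t^2:
# D^alpha t^2 = 2 t^(2-alpha) / Gamma(3-alpha).
alpha, T, n = 0.5, 1.0, 2000
t = np.linspace(0.0, T, n + 1)
approx = caputo_l1(t ** 2, alpha, T / n)
exact = 2 * T ** (2 - alpha) / math.gamma(3 - alpha)
```

The L1 scheme converges at order 2 − α in the step size, so the agreement tightens quickly as the grid is refined.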
Geometry aware Stationary Subspace Analysis
2016-11-22
approach to handling non-stationarity is to remove or minimize it before attempting to analyze the data. In the context of brain computer interface (BCI) data analysis, two such noteworthy methods are stationary subspace analysis (SSA) (von Bünau et al., 2009a) … BCI systems, is sCSP. Its goal is to project the data onto a subspace in which the various data classes are more separable. The sCSP method directs …
Song, Yuqiao; Liao, Jie; Dong, Junxing; Chen, Li
2015-09-01
The seeds of grapevine (Vitis vinifera) are a byproduct of wine production. To examine the potential value of grape seeds, grape seeds from seven sources were subjected to fingerprinting using direct analysis in real time coupled with time-of-flight mass spectrometry combined with chemometrics. Firstly, we listed all reported components (56 components) from grape seeds and calculated the precise m/z values of the deprotonated ions [M-H](-) . Secondly, the experimental conditions were systematically optimized based on the peak areas of total ion chromatograms of the samples. Thirdly, the seven grape seed samples were examined using the optimized method. Information about 20 grape seed components was utilized to represent characteristic fingerprints. Finally, hierarchical clustering analysis and principal component analysis were performed to analyze the data. Grape seeds from seven different sources were classified into two clusters; hierarchical clustering analysis and principal component analysis yielded similar results. The results of this study lay the foundation for appropriate utilization and exploitation of grape seed samples. Due to the absence of complicated sample preparation methods and chromatographic separation, the method developed in this study represents one of the simplest and least time-consuming methods for grape seed fingerprinting. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
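The chemometric steps described (hierarchical clustering analysis and PCA of component fingerprints) can be sketched on synthetic data. The peak-area matrix below is invented for illustration and does not reproduce the paper's measurements; two well-separated sample groups stand in for the two clusters of grape seed sources.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical peak-area fingerprints: rows = samples, columns = components.
rng = np.random.default_rng(2)
group_a = rng.normal(1.0, 0.1, size=(4, 20))   # four samples of one type
group_b = rng.normal(3.0, 0.1, size=(3, 20))   # three samples of another
X = np.vstack([group_a, group_b])

# Hierarchical clustering analysis (Ward linkage), cut into two clusters.
labels = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")

# PCA via SVD of the mean-centred data; scores on the first two components.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * s[:2]
```

On real fingerprints, plotting `scores` and inspecting `labels` is the step at which HCA and PCA can be checked for the kind of agreement the abstract reports.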
NASA Astrophysics Data System (ADS)
Uchidate, M.
2018-09-01
In this study, with the aim of establishing systematic knowledge of the impact of summit extraction methods and stochastic model selection in rough contact analysis, the contact area ratio (Ar/Aa) obtained by statistical contact models with different summit extraction methods was compared with a direct simulation using the boundary element method (BEM). Fifty areal topography datasets with different autocorrelation functions, in terms of the power index and correlation length, were used for the investigation. The non-causal 2D auto-regressive model, which can generate datasets with specified parameters, was employed in this research. Three summit extraction methods, Nayak's theory, 8-point analysis and watershed segmentation, were examined. With regard to the stochastic model, Bhushan's model and the BGT (Bush-Gibson-Thomas) model were applied. The values of Ar/Aa from the stochastic models tended to be smaller than those from BEM. The discrepancy between Bhushan's model with the 8-point analysis and BEM was slightly smaller than with Nayak's theory. The results with watershed segmentation were similar to those with the 8-point analysis. The impact of Wolf pruning on the discrepancy between the stochastic analysis and BEM was not very clear. In the case of the BGT model, which employs surface gradients, good quantitative agreement with BEM was obtained when Nayak's bandwidth parameter was large.
Vosough, Maryam; Rashvand, Masoumeh; Esfahani, Hadi M; Kargosha, Kazem; Salemi, Amir
2015-04-01
In this work, a rapid HPLC-DAD method has been developed for the analysis of six antibiotics (amoxicillin, metronidazole, sulfamethoxazole, ofloxacine, sulfadiazine and sulfamerazine) in sewage treatment plant influent and effluent samples. Decreasing the chromatographic run time to less than 4 min, as well as lowering the cost per analysis, was achieved through direct injection of the samples into the HPLC system followed by chemometric analysis. The problem of incomplete separation of the analytes from each other and/or from the matrix ingredients was resolved a posteriori. The performance of MCR/ALS and U-PLS/RBL, as second-order algorithms, was studied, and comparable results were obtained from the application of these modeling methods. It was demonstrated that the proposed methods could serve as green analytical strategies for the detection and quantification of the targeted pollutants in wastewater samples while avoiding more complicated, high-cost instrumentation. Copyright © 2014 Elsevier B.V. All rights reserved.
EPA’s Selected Analytical Methods for Environmental Remediation and Recovery (SAM) lists this method for preparation and analysis of drinking water samples to detect and measure acephate, diisopropyl methylphosphonate (DIMP), methamidophos and thiofanox.
Parameter Accuracy in Meta-Analyses of Factor Structures
ERIC Educational Resources Information Center
Gnambs, Timo; Staufenbiel, Thomas
2016-01-01
Two new methods for the meta-analysis of factor loadings are introduced and evaluated by Monte Carlo simulations. The direct method pools each factor loading individually, whereas the indirect method synthesizes correlation matrices reproduced from factor loadings. The results of the two simulations demonstrated that the accuracy of…
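A minimal sketch of what the "direct method" pooling step could look like, assuming fixed-effect inverse-variance weighting of a single loading across studies. The numbers are hypothetical, and this is not necessarily the estimator the authors evaluate.

```python
import numpy as np

# Hypothetical: the same factor loading reported by four studies,
# pooled with fixed-effect inverse-variance weights.
loadings = np.array([0.62, 0.58, 0.71, 0.65])   # study-level estimates
se = np.array([0.05, 0.08, 0.06, 0.04])         # their standard errors

w = 1.0 / se ** 2                               # inverse-variance weights
pooled = np.sum(w * loadings) / np.sum(w)       # pooled loading
pooled_se = np.sqrt(1.0 / np.sum(w))            # its standard error
```

The indirect method would instead rebuild each study's reproduced correlation matrix from its loadings, pool those matrices, and re-factor the pooled matrix.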
USDA-ARS's Scientific Manuscript database
A new chemometric method based on absorbance ratios from Fourier transform infrared spectra was devised to analyze multicomponent biodegradable plastics. The method uses the Beer-Lambert law to directly compute individual component concentrations and weight losses before and after biodegradation of c...
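For a linear mixture, computing individual component concentrations from the Beer-Lambert law reduces to a least-squares problem. The sketch below uses invented absorptivities and a noise-free system; it illustrates the linear-algebra core only, not the absorbance-ratio formulation of the actual method.

```python
import numpy as np

# Beer-Lambert for a mixture: A(band) = sum_i eps_i(band) * c_i * path.
# With pure-component spectra as columns of E, concentrations follow from
# linear least squares: c = argmin ||E c - A||.
E = np.array([[0.9, 0.1],
              [0.4, 0.6],
              [0.1, 0.8]])        # hypothetical absorptivities: 3 bands x 2 components
c_true = np.array([0.25, 0.50])  # true concentrations (path length folded in)
A = E @ c_true                   # "measured" absorbances at the 3 bands

c_est, *_ = np.linalg.lstsq(E, A, rcond=None)
```

With more bands than components the system is overdetermined, which is what makes the estimate robust to measurement noise in practice.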
Principal Component Analysis for pulse-shape discrimination of scintillation radiation detectors
NASA Astrophysics Data System (ADS)
Alharbi, T.
2016-01-01
In this paper, we report on the application of Principal Component Analysis (PCA) for pulse-shape discrimination (PSD) with scintillation radiation detectors. The details of the method are described, and its performance is experimentally examined by discriminating between neutrons and gamma-rays with a liquid scintillation detector in a mixed radiation field. The performance is also compared against that of the conventional charge-comparison method, demonstrating superior results, particularly in the low light-output range. PCA has the important advantage of automatically extracting the pulse-shape characteristics, which makes the PSD method directly applicable to various scintillation detectors without the need to adjust a PSD parameter.
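A toy version of PCA-based pulse-shape discrimination, using simulated two-decay-constant pulses rather than real detector data; the decay constants, pulse length and noise level are arbitrary choices for illustration.

```python
import numpy as np

# Simulated detector pulses: exponentials with two decay constants
# (fast ~ gamma-like, slow ~ neutron-like), plus a little noise.
rng = np.random.default_rng(3)
t = np.arange(64)

def pulses(tau, n):
    return np.exp(-t / tau) + 0.01 * rng.standard_normal((n, t.size))

X = np.vstack([pulses(5.0, 100), pulses(12.0, 100)])  # 100 of each class

# Normalise away total light output, mean-centre, then project onto the
# first principal component; the score separates the two pulse shapes.
Xn = X / X.sum(axis=1, keepdims=True)
Xc = Xn - Xn.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
score = Xc @ Vt[0]
```

No decay-window boundaries are tuned here, which mirrors the abstract's point: the discriminating feature is extracted automatically from the data rather than set by a PSD parameter.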
Nie, Honggang; Li, Xianjiang; Hua, Zhendong; Pan, Wei; Bai, Yanping; Fu, Xiaofang
2016-08-01
With the amounts and types of new psychoactive substances (NPSs) increasing rapidly in recent years, a high-throughput method for the analysis of these compounds is urgently needed. In this article, a rapid screening method and a quantitative analysis method for 11 NPSs are described and compared. A simple direct analysis in real time mass spectrometry (DART-MS) method was developed for the analysis of 11 NPSs spanning three categories of substances present on the global market: four cathinones, one phenylethylamine, and six synthetic cannabinoids. In order to analyze these compounds quantitatively with better accuracy and sensitivity, another rapid analytical method with a low limit of detection (LOD) was also developed using liquid chromatography/electrospray ionization quadrupole time-of-flight mass spectrometry (LC/QTOFMS). The 11 NPSs could be determined within 0.5 min by DART-MS. Furthermore, they could also be separated and determined within 5 min by the LC/QTOFMS method. Both methods showed good linearity, with correlation coefficients (r²) higher than 0.99. The LODs for the target NPSs by DART-MS and LC/QTOFMS ranged from 5 to 40 ng mL⁻¹ and 0.1 to 1 ng mL⁻¹, respectively. Confiscated samples, labeled "music vanilla" and "bath salt", and 11 spiked samples were first screened by DART-MS and then determined by LC/QTOFMS. The identification of NPSs in confiscated materials was successfully achieved, and the proposed analytical methodology offers rapid screening and accurate analysis. Copyright © 2016 John Wiley & Sons, Ltd.
Miniaturized and direct spectrophotometric multi-sample analysis of trace metals in natural waters.
Albendín, Gemma; López-López, José A; Pinto, Juan J
2016-03-15
Trends in the analysis of trace metals in natural waters are mainly based on the development of sample treatment methods to isolate and pre-concentrate the metal from the matrix in a simpler extract for further instrumental analysis. However, direct analysis is often possible using more accessible techniques such as spectrophotometry. In this case a proper ligand is required to form a complex that absorbs radiation in the ultraviolet-visible (UV-Vis) spectrum. In this sense, the hydrazone derivative di-2-pyridylketone benzoylhydrazone (dPKBH) forms complexes with copper (Cu) and vanadium (V) that absorb light at 370 and 395 nm, respectively. Although spectrophotometric methods are considered time- and reagent-consuming, this work focused on their miniaturization by reducing the sample volume as well as the time and cost of analysis. In both methods, a micro-amount of sample is placed into a microplate reader with a capacity for 96 samples, which can be analyzed in times ranging from 5 to 10 min. The proposed methods have been optimized using a Box-Behnken design of experiments. For Cu determination, the concentrations of the phosphate buffer solution at pH 8.33, the masking agents (ammonium fluoride and sodium citrate), and dPKBH were optimized. For V analysis, the sample pH (4.5) was set using an acetic acid/sodium acetate buffer, and the masking agents were ammonium fluoride and 1,2-cyclohexanediaminetetraacetic acid. Under optimal conditions, both methods were applied to the analysis of the certified reference materials TMDA-62 (lake water), LGC-6016 (estuarine water), and LGC-6019 (river water). In all cases, the results proved the accuracy of the method. Copyright © 2015 Elsevier Inc. All rights reserved.
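The Box-Behnken design used for optimization has a simple combinatorial structure: a 2² factorial on each pair of factors with all other factors at the centre level, plus centre points. A generic sketch of generating such a design matrix follows; this is a textbook construction, not the authors' software.

```python
import itertools
import numpy as np

def box_behnken(n_factors, n_center=3):
    """Box-Behnken design in coded units: for each pair of factors, a
    2^2 factorial at levels -1/+1 with all other factors at 0, plus
    n_center centre-point runs."""
    runs = []
    for i, j in itertools.combinations(range(n_factors), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0] * n_factors
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0] * n_factors] * n_center
    return np.array(runs, dtype=float)

# Three factors (e.g. buffer, masking agent and ligand concentrations):
# 3 pairs x 4 corner runs = 12 edge runs, plus 3 centre points.
design = box_behnken(3)
```

Each coded row is then mapped to real concentration levels, and a quadratic response-surface model is fitted to the measured responses.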
LES, DNS and RANS for the analysis of high-speed turbulent reacting flows
NASA Technical Reports Server (NTRS)
Givi, Peyman
1994-01-01
The objective of this research is to continue our efforts in advancing the state of knowledge in Large Eddy Simulation (LES), Direct Numerical Simulation (DNS), and Reynolds Averaged Navier Stokes (RANS) methods for the analysis of high-speed reacting turbulent flows. In the first phase of this research, conducted within the past six months, efforts focused in three directions: RANS of turbulent reacting flows by Probability Density Function (PDF) methods, RANS of non-reacting turbulent flows by advanced turbulence closures, and LES of mixing-dominated reacting flows by a dynamic subgrid closure. A summary of our efforts within the past six months of this research is provided in this semi-annual progress report.
Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1989-01-01
An inverse wing design method was developed around an existing transonic wing analysis code. The original analysis code, TAWFIVE, has as its core the numerical potential flow solver, FLO30, developed by Jameson and Caughey. Features of the analysis code include a finite-volume formulation; wing and fuselage fitted, curvilinear grid mesh; and a viscous boundary layer correction that also accounts for viscous wake thickness and curvature. The development of the inverse methods as an extension of previous methods existing for design in Cartesian coordinates is presented. Results are shown for inviscid wing design cases in super-critical flow regimes. The test cases selected also demonstrate the versatility of the design method in designing an entire wing or discontinuous sections of a wing.
Flux-gate magnetometer spin axis offset calibration using the electron drift instrument
NASA Astrophysics Data System (ADS)
Plaschke, Ferdinand; Nakamura, Rumi; Leinweber, Hannes K.; Chutter, Mark; Vaith, Hans; Baumjohann, Wolfgang; Steller, Manfred; Magnes, Werner
2014-10-01
Spin-stabilization of spacecraft immensely supports the in-flight calibration of on-board flux-gate magnetometers (FGMs). Of the 12 calibration parameters in total, 8 can easily be obtained by spectral analysis. Of the remaining 4, the spin axis offset is known to be particularly variable. It is usually determined by analysis of Alfvénic fluctuations that are embedded in the solar wind. In the absence of solar wind observations, the spin axis offset may be obtained by comparison of FGM and electron drift instrument (EDI) measurements. The aim of our study is to develop methods that are readily usable for routine FGM spin axis offset calibration with EDI. This paper represents a major step forward in this direction. We improve an existing method to determine FGM spin axis offsets from EDI time-of-flight measurements by providing it with a comprehensive error analysis. In addition, we introduce a new, complementary method that uses EDI beam direction data instead of time-of-flight data. Using Cluster data, we show that both methods yield similarly accurate results, which are comparable to, yet more stable than, those from a commonly used solar wind-based method.
NASA Astrophysics Data System (ADS)
Vasuki, Yathunanthan; Holden, Eun-Jung; Kovesi, Peter; Micklethwaite, Steven
2014-08-01
Recent advances in data acquisition technologies, such as Unmanned Aerial Vehicles (UAVs), have led to a growing interest in capturing high-resolution rock surface images. However, due to the large volumes of data that can be captured in a short flight, efficient analysis of this data brings new challenges, especially the time it takes to digitise maps and extract orientation data. We outline a semi-automated method that allows efficient mapping of geological faults using photogrammetric data of rock surfaces, which was generated from aerial photographs collected by a UAV. Our method harnesses advanced automated image analysis techniques and human data interaction to rapidly map structures and then calculate their dip and dip directions. Geological structures (faults, joints and fractures) are first detected from the primary photographic dataset and the equivalent three dimensional (3D) structures are then identified within a 3D surface model generated by structure from motion (SfM). From this information the location, dip and dip direction of the geological structures are calculated. A structure map generated by our semi-automated method obtained a recall rate of 79.8% when compared against a fault map produced using expert manual digitising and interpretation methods. The semi-automated structure map was produced in 10 min whereas the manual method took approximately 7 h. In addition, the dip and dip direction calculations using our automated method show a mean ± standard error of 1.9° ± 2.2° and 4.4° ± 2.6°, respectively, relative to field measurements. This shows the potential of using our semi-automated method for accurate and efficient mapping of geological structures, particularly from remote, inaccessible or hazardous sites.
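Once a planar structure has been fitted in the 3D surface model, its dip and dip direction follow from the plane's unit normal. A minimal sketch in an east-north-up frame follows, with a made-up 30°-east-dipping plane as the check; this illustrates the geometry only, not the authors' pipeline.

```python
import numpy as np

def dip_and_dip_direction(normal):
    """Dip (degrees from horizontal) and dip direction (azimuth, degrees
    clockwise from north) of a plane, given its normal vector in an
    east-north-up (ENU) coordinate frame."""
    e, n, u = normal / np.linalg.norm(normal)
    if u < 0:                        # use the upward-pointing normal
        e, n, u = -e, -n, -u
    dip = np.degrees(np.arccos(u))   # tilt of the normal from vertical
    # the upward normal leans towards the dip direction
    dip_dir = np.degrees(np.arctan2(e, n)) % 360.0
    return dip, dip_dir

# A plane dipping 30 degrees towards due east (dip direction 090):
# its upward normal is tilted 30 degrees from vertical, towards east.
normal = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
dip, dd = dip_and_dip_direction(normal)
```

In practice the normal would come from a least-squares plane fit to the 3D points of each digitised structure trace.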
Baek, Soo Kyoung; Lee, Seung Seok; Park, Eun Jeon; Sohn, Dong Hwan; Lee, Hye Suk
2003-02-05
A rapid and sensitive column-switching semi-micro high-performance liquid chromatography method was developed for the direct analysis of tiropramide in human plasma. The plasma sample (100 microl) was directly injected onto Capcell Pak MF Ph-1 precolumn where deproteinization and analyte fractionation occurred. Tiropramide was then eluted into an enrichment column (Capcell Pak UG C(18)) using acetonitrile-potassium phosphate (pH 7.0, 50 mM) (12:88, v/v) and was analyzed on a semi-micro C(18) analytical column using acetonitrile-potassium phosphate (pH 7.0, 10 mM) (50:50, v/v). The method showed excellent sensitivity (limit of quantification 5 ng/ml), and good precision (C.V.
Ion-beam apparatus and method for analyzing and controlling integrated circuits
Campbell, A.N.; Soden, J.M.
1998-12-01
An ion-beam apparatus and method for analyzing and controlling integrated circuits are disclosed. The ion-beam apparatus comprises a stage for holding one or more integrated circuits (ICs); a source means for producing a focused ion beam; and a beam-directing means for directing the focused ion beam to irradiate a predetermined portion of the IC for sufficient time to provide an ion-beam-generated electrical input signal to a predetermined element of the IC. The apparatus and method have applications to failure analysis and developmental analysis of ICs and permit an alteration, control, or programming of logic states or device parameters within the IC either separate from or in combination with applied electrical stimulus to the IC for analysis thereof. Preferred embodiments of the present invention including a secondary particle detector and an electron floodgun further permit imaging of the IC by secondary ions or electrons, and allow at least a partial removal or erasure of the ion-beam-generated electrical input signal. 4 figs.
Photothermal method of determining calorific properties of coal
Amer, N.M.
1983-05-16
Predetermined amounts of heat are generated within a coal sample by directing pump light pulses of predetermined energy content into a small surface region of the sample. A beam of probe light is directed along the sample surface, and deflection of the probe beam caused by thermally induced changes of the index of refraction in the fluid medium adjacent to the heated region is detected. The deflection amplitude and the phase lag of the deflection, relative to the initiating pump light pulse, are indicative of the calorific value and the porosity of the sample. The method provides rapid, accurate and nondestructive analysis of the heat producing capabilities of coal samples. In the preferred form, sequences of pump light pulses of increasing durations are directed into the sample at each of a series of minute regions situated along a raster scan path, enabling detailed analysis of variations of thermal properties at different areas of the sample and at different depths.
Load Weight Classification of The Quayside Container Crane Based On K-Means Clustering Algorithm
NASA Astrophysics Data System (ADS)
Zhang, Bingqian; Hu, Xiong; Tang, Gang; Wang, Yide
2017-07-01
Precise knowledge of the load weight of each operation of a quayside container crane is important for accurately assessing the service life of the crane. The load weight is directly related to the vibration intensity. By studying the vibration of the crane's hoist motor in the radial and axial directions, the load can be classified using the K-means clustering algorithm and quantitative statistical analysis. Correlation analysis shows that vibration in the radial direction is significantly and positively correlated with that in the axial direction, which means that data from only one of the directions can be used, improving efficiency without degrading the accuracy of load classification. The proposed method can well represent the real-time working condition of the crane.
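A minimal K-means sketch on one-dimensional vibration-intensity features: the data are synthetic three-group values standing in for light, medium and heavy lifts, the deterministic quantile initialisation is a simplifying choice, and none of this reproduces the crane measurements.

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Plain K-means with deterministic quantile initialisation
    (a reasonable choice for one-dimensional intensity features;
    assumes no cluster empties out during iteration)."""
    q = (np.arange(k) + 0.5) / k
    centroids = np.quantile(X, q, axis=0)
    for _ in range(iters):
        # assign each sample to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned samples
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Hypothetical radial-direction RMS vibration per lift: three load classes.
rng = np.random.default_rng(4)
X = np.concatenate([rng.normal(0.5, 0.05, 50),    # light
                    rng.normal(1.5, 0.05, 50),    # medium
                    rng.normal(3.0, 0.05, 50)]    # heavy
                   ).reshape(-1, 1)
centroids, labels = kmeans(X, k=3)
```

Because radial and axial vibration are strongly correlated, a single direction's intensity feature, as here, is enough to recover the load classes.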
Barker, Steven A; Borjigin, Jimo; Lomnicka, Izabela; Strassman, Rick
2013-12-01
We report a qualitative liquid chromatography-tandem mass spectrometry (LC/MS/MS) method for the simultaneous analysis of the three known N,N-dimethyltryptamine endogenous hallucinogens, their precursors and metabolites, as well as melatonin and its metabolic precursors. The method was characterized using artificial cerebrospinal fluid (aCSF) as the matrix and was subsequently applied to the analysis of rat brain pineal gland-aCSF microdialysate. The method describes the simultaneous analysis of 23 chemically diverse compounds plus a deuterated internal standard by direct injection, requiring no dilution or extraction of the samples. The results demonstrate that this is a simple, sensitive, specific and direct approach to the qualitative analysis of these compounds in this matrix. The protocol also employs stringent MS confirmatory criteria for the detection and confirmation of the compounds examined, including exact mass measurements. The excellent limits of detection and broad scope make it a valuable research tool for examining the endogenous hallucinogen pathways in the central nervous system. We report here, for the first time, the presence of N,N-dimethyltryptamine in pineal gland microdialysate obtained from the rat. Copyright © 2013 John Wiley & Sons, Ltd.
Evaluation of the quality of herbal teas by DART/TOF-MS.
Prchalová, J; Kovařík, F; Rajchl, A
2017-02-01
The paper focuses on the optimization, settings and validation of direct analysis in real time coupled with a time-of-flight detector for evaluating the quality of selected herbal teas (fennel, chamomile, nettle, linden, peppermint, thyme, lemon balm, marigold, sage, rose hip and St. John's wort). The ionization mode, the optimal ionization temperature and the type of solvent for sample extraction were optimized. The characteristic compounds of the analysed herbal teas (glycosides, flavonoids and phenolic and terpenic substances, such as chamazulene, anethole, menthol, thymol, salviol and hypericin) were detected. The obtained mass spectra were evaluated by multidimensional chemometric methods, such as cluster analysis, linear discriminant analysis and principal component analysis. The chemometric methods showed that the single-variety herbal teas were grouped according to their taxonomic affiliation. The developed method is suitable for quick identification of herbs and can potentially be used for assessing the quality and authenticity of herbal teas. Direct analysis in real time/time-of-flight-MS is also suitable for the evaluation of selected substances contained in the mentioned herbs and herbal products. Copyright © 2017 John Wiley & Sons, Ltd.
Irei, Satoshi
2016-01-01
Molecular marker analysis of environmental samples often requires time-consuming preseparation steps. Here, analysis of low-volatility nonpolar molecular markers (5-6 ring polycyclic aromatic hydrocarbons or PAHs, hopanoids, and n-alkanes) without the preseparation procedure is presented. Artificial sample extracts were analyzed directly by gas chromatography-mass spectrometry (GC-MS). After every sample injection, a standard mixture was also analyzed to correct for variation in instrumental sensitivity caused by the unfavorable matrix contained in the extract. The method was further validated for the PAHs using the NIST standard reference materials (SRMs) and then applied to airborne particulate matter samples. Tests with the SRMs showed that overall our methodology was validated with an uncertainty of ~30%. Measurements of airborne particulate matter (PM) filter samples showed a strong correlation between the PAHs, implying contributions from the same emission source. Analysis of size-segregated PM filter samples showed that the size distributions were concentrated in PM smaller than 0.4 μm aerodynamic diameter. These observations were consistent with our expectation of the possible sources. Thus, the method was found to be useful for molecular marker studies. PMID:27127511
Push-through direct injection NMR: an optimized automation method applied to metabolomics
There is a pressing need to increase the throughput of NMR analysis in fields such as metabolomics and drug discovery. Direct injection (DI) NMR automation is recognized to have the potential to meet this need due to its suitability for integration with the 96-well plate format. ...
Directional Dependence in Developmental Research
ERIC Educational Resources Information Center
von Eye, Alexander; DeShon, Richard P.
2012-01-01
In this article, we discuss and propose methods that may be of use to determine direction of dependence in non-normally distributed variables. First, it is shown that standard regression analysis is unable to distinguish between explanatory and response variables. Then, skewness and kurtosis are discussed as tools to assess deviation from…
NASA Astrophysics Data System (ADS)
Yoon, Heechul; Lee, Hyuntaek; Jung, Haekyung; Lee, Mi-Young; Won, Hye-Sung
2015-03-01
The objective of the paper is to introduce a novel method for nuchal translucency (NT) boundary detection and thickness measurement; NT is one of the most significant markers in the early screening of chromosomal defects, notably Down syndrome. To improve the reliability and reproducibility of NT measurements, several automated methods have been introduced. However, the performance of these methods degrades when NT borders are tilted due to varying fetal movements. We therefore propose a principal-direction-estimation-based NT measurement method that provides reliable and consistent performance regardless of both fetal position and NT direction. First, the Radon transform and a cost function are used to estimate the principal direction of the NT borders. Then, in the estimated angle bin, i.e., the main direction of the NT, gradient-based features are employed to find initial NT lines, which serve as starting points for the active contour fitting that locates the real NT borders. Finally, the maximum thickness is measured from the distances between the upper and lower NT borders by searching along the lines orthogonal to the main NT direction. To evaluate the performance, 89 in vivo fetal images were collected and the ground-truth database was measured by clinical experts. Quantitative results using intraclass correlation coefficients and difference analysis verify that the proposed method can improve the reliability and reproducibility of maximum NT thickness measurement.
An incremental strategy for calculating consistent discrete CFD sensitivity derivatives
NASA Technical Reports Server (NTRS)
Korivi, Vamshi Mohan; Taylor, Arthur C., III; Newman, Perry A.; Hou, Gene W.; Jones, Henry E.
1992-01-01
In this preliminary study involving advanced computational fluid dynamic (CFD) codes, an incremental formulation, also known as the 'delta' or 'correction' form, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For typical problems in 2D, a direct solution method can be applied to these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods appear to be needed for future 3D applications, however, because direct solver methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form result in certain difficulties, such as ill-conditioning of the coefficient matrix, which can be overcome when these equations are cast in the incremental form; these and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two laminar sample problems: (1) transonic flow through a double-throat nozzle; and (2) flow over an isolated airfoil.
NASA Astrophysics Data System (ADS)
Ying, Shen; Li, Lin; Gao, Yurong
2009-10-01
Spatial visibility analysis is an important direction in the study of pedestrian behavior, because visual perception of space is the most direct way to obtain environmental information and guide one's actions. Based on agent modeling and a top-down method, the paper develops a framework for analyzing visibility-dependent pedestrian flow. We use viewsheds in the visibility analysis and impose the resulting parameters on agent simulation to direct agents' motion in urban space. We analyze pedestrian behavior at both the micro and macro scales of urban open space. At the micro scale, an individual agent uses visual affordance to determine its direction of motion along a street or within a district. At the macro scale, we compare the distribution of pedestrian flow with the spatial configuration of the urban environment and mine the relationship between pedestrian flow and the distribution of urban facilities and functions. The paper first computes the visibility at vantage points in urban open space, such as a street network, and quantifies the visibility parameters. The multiple agents use these parameters to decide their directions of motion, and pedestrian flow finally reaches a stable state through multi-agent simulation. We then compare the morphology of the visibility parameters and the pedestrian distribution with the layout of urban functions and facilities to confirm their consistency, which can support decision making in urban design.
Method for high resolution magnetic resonance analysis using magic angle technique
Wind, Robert A.; Hu, Jian Zhi
2003-12-30
A method of performing a magnetic resonance analysis of a biological object that includes placing the object in a main magnetic field (that has a static field direction) and in a radio frequency field; rotating the object at a frequency of less than about 100 Hz around an axis positioned at an angle of about 54.degree.44' relative to the main magnetic static field direction; pulsing the radio frequency to provide a sequence that includes a phase-corrected magic angle turning pulse segment; and collecting data generated by the pulsed radio frequency. The object may be reoriented about the magic angle axis between three predetermined positions that are related to each other by 120.degree.. The main magnetic field may be rotated mechanically or electronically. Methods for magnetic resonance imaging of the object are also described.
Method for high resolution magnetic resonance analysis using magic angle technique
Wind, Robert A.; Hu, Jian Zhi
2004-12-28
A method of performing a magnetic resonance analysis of a biological object that includes placing the object in a main magnetic field (that has a static field direction) and in a radio frequency field; rotating the object at a frequency of less than about 100 Hz around an axis positioned at an angle of about 54.degree.44' relative to the main magnetic static field direction; pulsing the radio frequency to provide a sequence that includes a phase-corrected magic angle turning pulse segment; and collecting data generated by the pulsed radio frequency. The object may be reoriented about the magic angle axis between three predetermined positions that are related to each other by 120.degree.. The main magnetic field may be rotated mechanically or electronically. Methods for magnetic resonance imaging of the object are also described.
NASA Astrophysics Data System (ADS)
Yang, Hong-tao; Cai, Chun-mei; Fang, Chuan-zhi; Wu, Tian-feng
2013-10-01
In order to develop a micro-nano probe with an error self-correcting function and a rigid structure, a new micro-nano probe system was developed based on the six-dimensional micro-force measuring principle. The structure and working principle of the probe are introduced in detail. A static nonlinear decoupling method based on a BP neural network was established to remove the dimensional coupling present in the force measurements in each direction. The optimal parameters of the BP neural network were selected and decoupling simulation experiments were performed. The maximum coupling rate after decoupling is 0.039% in the X direction, 0.025% in the Y direction and 0.027% in the Z direction. The static measurement sensitivity of the probe reaches 10.76 μɛ/mN in the Z direction and 14.55 μɛ/mN in the X and Y directions. Modal analysis and harmonic response analysis of the probe under three-dimensional harmonic load were performed using the finite element method. The natural frequencies under different vibration modes were obtained and the working frequency of the probe was determined to be higher than 10,000 Hz. Transient response analysis indicates that the response time of the probe reaches 0.4 ms. These results show that the developed probe meets the triggering requirements of micro-nano probing: the three-dimensional measuring force can be measured precisely and used to predict and correct the force deformation error and the touch error of the measuring ball and measuring rod.
Tooth shape optimization of brushless permanent magnet motors for reducing torque ripples
NASA Astrophysics Data System (ADS)
Hsu, Liang-Yi; Tsai, Mi-Ching
2004-11-01
This paper presents a tooth shape optimization method based on a genetic algorithm to reduce the torque ripple of brushless permanent magnet motors under two different magnetization directions. The analysis of this design method mainly focuses on magnetic saturation and cogging torque, and the computation in the optimization process is based on an equivalent magnetic network circuit. The simulation results, obtained from finite element analysis, are used to confirm the accuracy and performance. Finite element analysis results for different tooth shapes are compared to show the effectiveness of the proposed method.
Ma, Qiang; Bai, Hua; Li, Wentao; Wang, Chao; Li, Xinshi; Cooks, R Graham; Ouyang, Zheng
2016-03-17
Significantly simplified workflows were developed for rapid analysis of various types of cosmetic and foodstuff samples by employing a miniature mass spectrometry system and ambient ionization methods. A desktop Mini 12 ion trap mass spectrometer was coupled with paper spray ionization, extraction spray ionization and slug-flow microextraction for direct analysis of Sudan Reds, parabens, antibiotics, steroids, bisphenol and plasticizer from raw samples with complex matrices. Limits of detection as low as 5 μg/kg were obtained for target analytes. On-line derivatization was also implemented for the analysis of steroids in cosmetics. The developed methods offer a potential analytical approach for outside-the-lab screening of cosmetic and foodstuff products for the presence of illegal substances. Copyright © 2016 Elsevier B.V. All rights reserved.
Conducting qualitative research in mental health: Thematic and content analyses.
Crowe, Marie; Inder, Maree; Porter, Richard
2015-07-01
The objective of this paper is to describe two methods of qualitative analysis - thematic analysis and content analysis - and to examine their use in a mental health context. A description of the processes of thematic analysis and content analysis is provided. These processes are then illustrated by conducting two analyses of the same qualitative data. Transcripts of qualitative interviews are analysed using each method to illustrate these processes. The illustration of the processes highlights the different outcomes from the same set of data. Thematic and content analyses are qualitative methods that serve different research purposes. Thematic analysis provides an interpretation of participants' meanings, while content analysis is a direct representation of participants' responses. These methods provide two ways of understanding meanings and experiences and provide important knowledge in a mental health context. © The Royal Australian and New Zealand College of Psychiatrists 2015.
Goldade, Mary Patricia; O'Brien, Wendy Pott
2014-01-01
At asbestos-contaminated sites, exposure assessment requires measurement of airborne asbestos concentrations; however, the choice of preparation steps employed in the analysis has been debated vigorously among members of the asbestos exposure and risk assessment communities for many years. This study finds that the choice of preparation technique used in estimating airborne amphibole asbestos exposures for risk assessment is generally not a significant source of uncertainty. Conventionally, the indirect preparation method has been less preferred by some because it is purported to result in false elevations in airborne asbestos concentrations, when compared to direct analysis of air filters. However, airborne asbestos sampling in non-occupational settings is challenging because non-asbestos particles can interfere with the asbestos measurements, sometimes necessitating analysis via indirect preparation. To evaluate whether exposure concentrations derived from direct versus indirect preparation techniques differed significantly, paired measurements of airborne Libby-type amphibole, prepared using both techniques, were compared. For the evaluation, 31 paired direct and indirect preparations originating from the same air filters were analyzed for Libby-type amphibole using transmission electron microscopy. On average, the total Libby-type amphibole airborne exposure concentration was 3.3 times higher for indirect preparation analysis than for its paired direct preparation analysis (standard deviation = 4.1), a difference which is not statistically significant (p = 0.12, two-tailed, Wilcoxon signed rank test). The results suggest that the magnitude of the difference may be larger for shorter particles. Overall, neither preparation technique (direct or indirect) preferentially generates more precise and unbiased data for airborne Libby-type amphibole concentration estimates. 
The indirect preparation method is reasonable for estimating Libby-type amphibole exposure and may be necessary given the challenges of sampling in environmental settings. Relative to the larger context of uncertainties inherent in the risk assessment process, uncertainties associated with the use of airborne Libby-type amphibole exposure measurements derived from indirect preparation analysis are low. Use of exposure measurements generated by either direct or indirect preparation analyses is reasonable to estimate Libby-type Amphibole exposures in a risk assessment.
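The amphibole study above rests on a paired comparison: the same air filters prepared two ways, tested with a two-tailed Wilcoxon signed-rank test. A self-contained Python sketch of that test (normal approximation for the p-value) is below; the paired concentration values are invented for illustration and are not the study's data.

```python
# Two-sided Wilcoxon signed-rank test on paired measurements, using the
# normal approximation for the p-value (reasonable for n around 30, and
# adequate as a sketch for the fabricated n=10 pairs below).
import math

def wilcoxon_signed_rank(x, y):
    """Returns (W+, two-sided p) for paired samples x and y."""
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    n = len(diffs)
    # rank |d|, averaging ranks over ties
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_plus, p

# hypothetical paired airborne concentrations from the same filters
indirect = [3.1, 0.8, 5.2, 1.9, 4.4, 0.6, 2.8, 7.0, 1.2, 3.5]
direct   = [1.0, 1.1, 1.3, 0.9, 1.2, 1.0, 0.7, 1.5, 1.4, 1.1]

ratio = sum(i / d for i, d in zip(indirect, direct)) / len(direct)
w, p = wilcoxon_signed_rank(indirect, direct)
print(round(ratio, 2), round(p, 3))
```

In practice a library routine such as scipy.stats.wilcoxon would be used; the point here is only to make the "paired, signed-rank" logic of the abstract concrete.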
[EXPRESS IDENTIFICATION OF POSITIVE BLOOD CULTURES USING DIRECT MALDI-TOF MASS SPECTROMETRY].
Popov, D A; Ovseenko, S T; Vostrikova, T Yu
2015-01-01
To evaluate the effectiveness of direct identification of bacteremia pathogens by direct matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) compared with the routine method. A prospective study included 211 positive blood cultures obtained from 116 patients (106 adults and 10 children, aged from 2 weeks to 77 years) in the ICU after open heart surgery. Incubation was carried out in aerobic vials with an antibiotic-binding sorbent in the BacT/ALERT 3D 120 analyzer (bioMerieux, France), in parallel with primary plating of the blood cultures on solid nutrient media and subsequent identification of pure cultures on the Vitek MS MALDI-TOF mass spectrometer (bioMerieux, France; the routine method). After appropriate sample preparation, direct (without subculture) MALDI-TOF mass spectrometric analysis of monocomponent blood cultures (n = 201) was carried out. Using the routine method, 23 species of microorganisms were identified in the 211 positive blood cultures (Staphylococcus (n = 87), Enterobacteriaceae (n = 71), enterococci (n = 20), non-fermenting Gram-negative bacteria (n = 18), others (n = 5)). The average incubation time to a positive blood culture signal was 16.2 ± 7.4 h (range 3.75 to 51 h). During the first 12 hours of incubation, growth was obtained in 32.4% of the samples, and within the first day in 92.2%. Direct mass spectrometric analysis of the monocomponent blood cultures (n = 201) identified 153 of the samples to the species level (76.1%); the share of successful identifications was higher for Gram-negative than for Gram-positive bacteria (85.4% and 69.1%, respectively; p = 0.01). There was a high degree of consistency between the results of the standard and direct methods of identifying blood cultures using MALDI-TOF mass spectrometry (κ = 0.96, p < 0.001; the calculation included samples for which both approaches gave a result). The duration of the direct mass spectrometric analysis, including sample preparation, was no longer than 1 hour. The direct MALDI-TOF mass spectrometry method thus significantly speeds up the identification of blood cultures, which may contribute to the earliest possible initiation of effective antibiotic therapy.
Zahedi, Razieh; Noroozi, Alireza; Hajebi, Ahmad; Haghdoost, Ali Akbar; Baneshi, Mohammad Reza; Sharifi, Hamid; Mirzazadeh, Ali
2018-04-30
This study aimed to estimate the prevalence of substance use among university students measured by direct and indirect methods, and to calculate the visibility factor (VF), defined as the ratio of the indirect to the direct estimate of substance use prevalence. A cross-sectional study. Using a multistage non-random sampling approach, we recruited 2157 students from three universities in Kerman, Iran, in 2016. We collected data on substance use by individual face-to-face interview using direct (i.e., self-report of their own behaviors) and indirect (NSU: network scale-up) methods. All estimates from the direct and indirect methods were weighted based on the inverse probability weight of sampling university. The response rate was 83.6%. The last-year prevalence of water pipe, alcohol, and cigarette use by the direct method was 44.6%, 18.1%, and 13.2%, respectively. The corresponding figures in the NSU analysis were 36.4%, 18.2%, and 16.5%, respectively. Among female students, the VF for all types of substances was lower than among male students. Considerable numbers of university students used substances such as water pipe, alcohol, and cigarettes. NSU seems a promising method, especially among male students. Among female students, the direct method provided more reliable results, mainly because of transmission and prestige biases affecting the NSU estimates.
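The indirect estimator named in the abstract above, the basic network scale-up (NSU) method, has a simple arithmetic core: prevalence is estimated as the total number of substance users respondents report knowing divided by the total size of their personal networks. A hedged sketch, with invented respondent data, is below; only the direct-method figure of 44.6% is taken from the abstract.

```python
# Basic network scale-up (NSU) prevalence estimate and the visibility
# factor VF = indirect / direct, as defined in the study. Respondent
# counts and network sizes are illustrative.

def nsu_prevalence(known_users, network_sizes):
    """p_hat = total reported users among alters / total alters known."""
    return sum(known_users) / sum(network_sizes)

# per respondent: (acquaintances who used water pipe last year,
#                  estimated personal network size)
known = [30, 12, 45, 8, 20]
nets  = [90, 40, 120, 30, 60]

indirect = nsu_prevalence(known, nets)  # NSU estimate
direct = 0.446                          # direct self-report prevalence (abstract)
vf = indirect / direct
print(round(indirect, 3), round(vf, 2))
```

A VF below 1, as here, corresponds to the under-reporting through the network that the study attributes to transmission and prestige biases.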
Code of Federal Regulations, 2010 CFR
2010-01-01
... Requirements; State Agricultural Loan Mediation Programs; Right of First Refusal § 614.4510 General. Direct... for maintaining control, for the proper analysis of such data, and prompt action as needed; (ii... objectives, financing programs, organizational structure, and operating methods, and appropriate analysis of...
Lüdeke, Catharina H M; Fischer, Markus; LaFon, Patti; Cooper, Kara; Jones, Jessica L
2014-07-01
Vibrio parahaemolyticus is the leading cause of infectious illness associated with seafood consumption in the United States. Molecular fingerprinting of strains has become a valuable research tool for understanding this pathogen. However, there are many subtyping methods available and little information on how they compare to one another. For this study, a collection of 67 oyster and 77 clinical V. parahaemolyticus isolates was analyzed by three subtyping methods--intergenic spacer region (ISR-1), direct genome restriction analysis (DGREA), and pulsed-field gel electrophoresis (PFGE)--to determine the utility of these methods for discriminatory subtyping. ISR-1 analysis, run as previously described, provided the lowest discrimination of all the methods (discriminatory index [DI]=0.8665). However, using a broader analytical range than previously reported, ISR-1 clustered isolates based on origin (oyster versus clinical) and had a DI=0.9986. DGREA provided a DI=0.9993-0.9995, but did not consistently cluster the isolates by any identifiable characteristic (origin, serotype, or virulence genotype) and ~15% of isolates were untypeable by this method. PFGE provided a DI=0.9998 when using the combined pattern analysis of both restriction enzymes, SfiI and NotI. This analysis was more discriminatory than using either enzyme pattern alone and primarily grouped isolates by serotype, regardless of strain origin (clinical or oyster) or presence of currently accepted virulence markers. These results indicate that PFGE and ISR-1 are more reliable methods for subtyping V. parahaemolyticus than DGREA. Additionally, ISR-1 may provide an indication of pathogenic potential; however, more detailed studies are needed. These data highlight the diversity within V. parahaemolyticus and the need for appropriate selection of subtyping methods depending on the study objectives.
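The discriminatory index (DI) used to compare the subtyping methods above is, in this literature, usually the Simpson-based index of Hunter and Gaston: DI = 1 - (1/(N(N-1))) * sum over types of n_j(n_j - 1), where n_j is the number of isolates assigned to type j. A short sketch with illustrative cluster counts:

```python
# Hunter-Gaston discriminatory index: probability that two isolates drawn
# at random (without replacement) fall into different subtypes.

def discriminatory_index(type_counts):
    n = sum(type_counts)
    return 1 - sum(c * (c - 1) for c in type_counts) / (n * (n - 1))

# 20 isolates split over hypothetical subtype clusters
print(round(discriminatory_index([1] * 20), 4))       # every isolate unique
print(round(discriminatory_index([5, 5, 5, 5]), 4))   # four equal clusters
print(round(discriminatory_index([17, 1, 1, 1]), 4))  # one dominant cluster
```

This makes the abstract's comparison concrete: a method that splits every isolate into its own pattern approaches DI = 1, which is why PFGE's DI of 0.9998 edges out ISR-1's 0.9986.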
Yamashita, Yasunobu; Ueda, Kazuki; Kawaji, Yuki; Tamura, Takashi; Itonaga, Masahiro; Yoshida, Takeichi; Maeda, Hiroki; Magari, Hirohito; Maekita, Takao; Iguchi, Mikitaka; Tamai, Hideyuki; Ichinose, Masao; Kato, Jun
2016-07-15
Transpapillary forceps biopsy is an effective diagnostic technique in patients with biliary stricture. This prospective study aimed to determine the usefulness of the wire-grasping method as a new technique for forceps biopsy. Consecutive patients with biliary stricture or irregularities of the bile duct wall were randomly allocated to either the direct or the wire-grasping method group. In the wire-grasping method, forceps in the duodenum grasp a guidewire placed into the bile duct beforehand, and the forceps are then pushed through the papilla without endoscopic sphincterotomy. In the direct method, forceps are pushed directly into the bile duct alongside a guidewire. The primary endpoint was the success rate of obtaining specimens suitable for adequate pathological examination. In total, 32 patients were enrolled, and 28 (14 in each group) were eligible for analysis. The success rate was significantly higher using the wire-grasping method than the direct method (100% vs 50%, p=0.016). Sensitivity and accuracy for the diagnosis of cancer were comparable between the two methods in patients with successful procurement of biopsy specimens (91% vs 83% and 93% vs 86%, respectively). The wire-grasping method is useful for diagnosing patients with biliary stricture or irregularities of the bile duct wall.
Nannemann, David P; Birmingham, William R; Scism, Robert A; Bachmann, Brian O
2011-01-01
To address the synthesis of increasingly structurally diverse small-molecule drugs, methods for the generation of efficient and selective biological catalysts are becoming increasingly important. ‘Directed evolution’ is an umbrella term referring to a variety of methods for improving or altering the function of enzymes using a nature-inspired twofold strategy of mutagenesis followed by selection. This article provides an objective assessment of the effectiveness of directed evolution campaigns in generating enzymes with improved catalytic parameters for new substrates from the last decade, excluding studies that aimed to select for only improved physical properties and those that lack kinetic characterization. An analysis of the trends of methodologies and their success rates from 81 qualifying examples in the literature reveals the average fold improvement for kcat (or Vmax), Km and kcat/Km to be 366-, 12- and 2548-fold, respectively, whereas the median fold improvements are 5.4, 3 and 15.6. Further analysis by enzyme class, library-generation methodology and screening methodology explores relationships between successful campaigns and the methodologies employed. PMID:21644826
Osera, Shozo; Yano, Tomonori; Odagaki, Tomoyuki; Oono, Yasuhiro; Ikematsu, Hiroaki; Ohtsu, Atsushi; Kaneko, Kazuhiro
2015-10-01
Percutaneous endoscopic gastrostomy (PEG) using the direct method is generally indicated for cancer patients. However, there are few available data on peritonitis related to this method. The aim of this retrospective analysis was to assess peritonitis related to PEG using the direct method in patients with cancer. We assessed the prevalence of peritonitis and the relationship between peritonitis and patients' backgrounds, as well as the clinical course after peritonitis. Between December 2008 and December 2011, peritonitis was found in 9 (2.1%) of 421 patients. Of the 9 patients with peritonitis, 4 had received PEG prior to chemoradiotherapy. Emergency surgical drainage was required in 1 patient, and the remaining 8 recovered with conservative treatment. Peritonitis occurred within 8 days of PEG for 8 of the 9 patients and within 2 days of suture removal for 4 of the 9 patients. Peritonitis related to PEG using the direct method was relatively infrequent in cancer patients. Peritonitis tended to occur within a few days after removal of the securing suture and in patients at the palliative stage.
Kangani, Cyrous O.; Kelley, David E.; DeLany, James P.
2008-01-01
A simple, direct and accurate method for the determination of concentration and enrichment of free fatty acids in human plasma was developed. The validation and comparison to a conventional method are reported. Three amide derivatives, dimethyl, diethyl and pyrrolidide, were investigated in order to achieve optimal resolution of the individual fatty acids. This method involves the use of dimethylamine/Deoxo-Fluor to derivatize plasma free fatty acids to their dimethylamides. This derivatization method is very mild and efficient, and is selective only towards free fatty acids so that no separation from a total lipid extract is required. The direct method gave lower concentrations for palmitic acid and stearic acid and increased concentrations for oleic acid and linoleic acid in plasma as compared to methylester derivative after thin-layer chromatography. The [13C]palmitate isotope enrichment measured using direct method was significantly higher than that observed with the BF3/MeOH-TLC method. The present method provided accurate and precise measures of concentration as well as enrichment when analyzed with gas chromatography combustion-isotope ratio-mass spectrometry. PMID:18757250
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niemeyer, Kyle E.; Sung, Chih-Jen; Raju, Mandhapati P.
2010-09-15
A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination ofmore » the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane generated using the DRGEP, DRGASA, and DRGEPSA methods are presented and compared to illustrate the improvement of DRGEPSA. From a detailed reaction mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions, two skeletal mechanisms for n-decane generated using DRGEPSA, one covering a comprehensive range of temperature, pressure, and equivalence ratio conditions for autoignition and the other limited to high temperatures, are presented and validated. The comprehensive skeletal mechanism consists of 202 species and 846 reactions and the high-temperature skeletal mechanism consists of 51 species and 256 reactions. Both mechanisms are further demonstrated to well reproduce the results of the detailed mechanism in perfectly-stirred reactor and laminar flame simulations over a wide range of conditions. 
The comprehensive and high-temperature n-decane skeletal mechanisms are included as supplementary material with this article.
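The error-propagation step in DRGEP gives each species an overall interaction coefficient equal to the maximum, over all graph paths from a search-initiating target species, of the product of direct interaction coefficients along the path; species whose coefficient falls below a user-set threshold are removed. A minimal sketch of that propagation, assuming the direct interaction coefficients have already been computed (the dictionary layout and species names are illustrative):

```python
import heapq

def drgep_coefficients(direct, target):
    """Overall interaction coefficients by maximum-product path search.

    direct: dict mapping species -> {neighbor: direct interaction
    coefficient in [0, 1]}.  Because coefficients are <= 1, path
    products are non-increasing, so a Dijkstra-style best-first
    search finds the maximum without revisiting settled species."""
    overall = {target: 1.0}
    heap = [(-1.0, target)]                 # max-heap via negated values
    while heap:
        neg, sp = heapq.heappop(heap)
        if -neg < overall.get(sp, 0.0):
            continue                        # stale heap entry
        for nb, r in direct.get(sp, {}).items():
            cand = -neg * r                 # product along extended path
            if cand > overall.get(nb, 0.0):
                overall[nb] = cand
                heapq.heappush(heap, (-cand, nb))
    return overall
```

Species below the error-controlled threshold would then be dropped before the sensitivity-analysis stage.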
[Development of an Excel spreadsheet for meta-analysis of indirect and mixed treatment comparisons].
Tobías, Aurelio; Catalá-López, Ferrán; Roqué, Marta
2014-01-01
Meta-analyses in clinical research usually aim to evaluate treatment efficacy and safety in direct comparison with a single comparator. Indirect comparisons, using Bucher's method, can summarize primary data when information from direct comparisons is limited or nonexistent. Mixed comparisons allow combining estimates from direct and indirect comparisons, increasing statistical power. There is a need for simple applications for meta-analysis of indirect and mixed comparisons. These can easily be conducted using a Microsoft Office Excel spreadsheet. We developed a user-friendly spreadsheet for indirect and mixed comparisons aimed at clinical researchers who are interested in systematic reviews but not familiar with more advanced statistical packages. The proposed Excel spreadsheet for indirect and mixed comparisons can be of great use in clinical epidemiology, extending the knowledge provided by traditional meta-analysis when evidence from direct comparisons is limited or nonexistent.
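Bucher's adjusted indirect comparison amounts to subtracting two log effect estimates that share a common comparator and adding their variances; the mixed estimate then pools direct and indirect evidence by inverse-variance weighting. A minimal sketch of both calculations (function names are illustrative, not taken from the spreadsheet):

```python
import math

def bucher_indirect(log_ab, se_ab, log_cb, se_cb):
    """Adjusted indirect comparison of A vs C via common comparator B.

    log_ab, log_cb: log odds (or risk) ratios from the direct A-vs-B
    and C-vs-B meta-analyses, with their standard errors."""
    log_ac = log_ab - log_cb
    se_ac = math.sqrt(se_ab**2 + se_cb**2)   # variances add
    return log_ac, se_ac

def mixed_estimate(log_direct, se_direct, log_indirect, se_indirect):
    """Inverse-variance weighted combination of direct and indirect evidence."""
    w_d, w_i = 1 / se_direct**2, 1 / se_indirect**2
    log_mixed = (w_d * log_direct + w_i * log_indirect) / (w_d + w_i)
    se_mixed = math.sqrt(1 / (w_d + w_i))
    return log_mixed, se_mixed
```

Because the indirect variance is the sum of two direct variances, indirect estimates are always less precise than either contributing direct comparison, which is why mixing them with direct evidence raises statistical power.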
Kim, Hoon; Jang, Dong-Kyu; Han, Young-Min; Sung, Jae Hoon; Park, Ik Seong; Lee, Kwan-Sung; Yang, Ji-Ho; Huh, Pil Woo; Park, Young Sup; Kim, Dal-Soo; Han, Kyung-Do
2016-10-01
It remains controversial which bypass methods are optimal for treating adult moyamoya angiopathy patients. This study aimed to analyze the literature on whether different bypass methods lead to different outcomes in adult moyamoya patients with symptoms or hemodynamic instability. A systematic search of the PubMed, Embase, and Cochrane Central databases was performed for articles published between 1990 and 2015. Comparative studies on the effect of direct or combined bypass (direct bypass group) and indirect bypass (indirect bypass group) in patients with moyamoya angiopathy at 18 years of age or older were selected. For stroke incidence at the end of the follow-up period, the degree of angiographic revascularization, hemodynamic improvement, and perioperative complication rates within 30 days, pooled relative risks were calculated between the 2 groups with a 95% confidence interval. A total of 8 articles (including 536 patients and 732 treated hemispheres) were included in the meta-analysis. There were no significant differences between the 2 groups when we compared the overall stroke rate, the hemodynamic improvement rate, or the perioperative complication rate at the end of the follow-up period. The direct bypass group, however, had a lower risk than the indirect bypass group of obtaining a poor angiographic revascularization rate (risk ratio, 0.35; 95% confidence interval, 0.15-0.84; P = 0.02). The current meta-analysis suggests that the direct or combined bypass surgical method is better for angiographic revascularization in adult moyamoya patients with symptoms or hemodynamic instability. Future studies may be necessary to confirm these findings. Copyright © 2016 Elsevier Inc. All rights reserved.
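The pooled relative risks described above are conventionally computed on the log scale with inverse-variance weights; a minimal fixed-effect sketch (the study counts below are made up, and the authors' software and any random-effects handling are not reproduced here):

```python
import math

def pooled_relative_risk(events_a, total_a, events_b, total_b):
    """Fixed-effect (inverse-variance) pooled relative risk with 95% CI.

    Each argument is a per-study list; group a vs group b (e.g. direct
    vs indirect bypass, as illustrative labels)."""
    num = den = 0.0
    for ea, na, eb, nb in zip(events_a, total_a, events_b, total_b):
        log_rr = math.log((ea / na) / (eb / nb))
        var = 1/ea - 1/na + 1/eb - 1/nb   # large-sample variance of log RR
        w = 1 / var
        num += w * log_rr
        den += w
    est = num / den
    se = math.sqrt(1 / den)
    ci = (math.exp(est - 1.96 * se), math.exp(est + 1.96 * se))
    return math.exp(est), ci
```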
Pano-Farias, Norma S; Ceballos-Magaña, Silvia G; Muñiz-Valencia, Roberto; Jurado, Jose M; Alcázar, Ángela; Aguayo-Villarreal, Ismael A
2017-12-15
Due to the negative effects of pesticides on the environment and human health, more efficient and environmentally friendly methods are needed. In this sense, a simple, fast, economical direct-immersion single-drop microextraction (SDME) method, free from memory effects, combined with GC-MS was developed for multi-class pesticide determination in mango samples. Sample pretreatment using ultrasound-assisted solvent extraction and factors affecting the SDME procedure (extractant solvent, drop volume, stirring rate, ionic strength, time, pH and temperature) were optimized using factorial experimental design. The method presented high sensitivity (LOD: 0.14-169.20 μg kg⁻¹), acceptable precision (RSD: 0.7-19.1%), satisfactory recovery (69-119%) and high enrichment factors (20-722). Several of the obtained LOQs are below the MRLs established by the European Commission; therefore, the method could be applied for pesticide determination in routine analysis and customs laboratories. Moreover, this method has been shown to be suitable for the determination of some of the studied pesticides in lime, melon, papaya, banana, tomato, and lettuce. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yu, Zhan; Chen, Lee Chuin; Mandal, Mridul Kanti; Yoshimura, Kentaro; Takeda, Sen; Hiraoka, Kenzo
2013-10-01
This study presents a novel direct analysis strategy for rapid mass spectrometric profiling of biochemicals in real-world samples via a direct sampling probe (DSP) without sample pretreatment. Chemical modification is applied to a disposable stainless steel acupuncture needle to enhance its surface area and hydrophilicity. After insertion into a real-world sample, biofluid adheres to the DSP surface. In the presence of a high DC voltage, and with solvent vapor condensing on the tip of the DSP, the analyte can be dissolved and electrosprayed. The simplicity of design, versatility of application, and other advantages such as low cost and disposability make this new method a competitive tool for direct analysis of real-world samples.
Liu, Guangxin; Wang, Pei; Li, Chan; Wang, Jing; Sun, Zhenyu; Zhao, Xinfeng; Zheng, Xiaohui
2017-07-01
Drug-protein interaction analysis plays an important role in designing new leads during drug discovery. We prepared a stationary phase containing immobilized β2-adrenoceptor (β2-AR) by linkage of the receptor to a macroporous silica gel surface through the N,N'-carbonyldiimidazole method. The stationary phase was applied in identifying the antiasthmatic target of protopine, guided by the prediction of site-directed molecular docking. Subsequent application of immobilized β2-AR in exploring the binding of protopine to the receptor was realized by frontal analysis and the injection amount-dependent method. The association constants of protopine to β2-AR by the 2 methods were (1.00 ± 0.06) × 10⁵ M⁻¹ and (1.52 ± 0.14) × 10⁴ M⁻¹. The numbers of binding sites were (1.23 ± 0.07) × 10⁻⁷ M and (9.09 ± 0.06) × 10⁻⁷ M, respectively. These results indicated that β2-AR is the specific target for the therapeutic action of protopine in vivo. The target-drug binding occurred at Ser169 in the crystal structure of the receptor. Compared with frontal analysis, the injection amount-dependent method is advantageous in drug saving, improved sampling efficiency, and speed. It has great potential in high-throughput drug-receptor interaction analysis. Copyright © 2017 John Wiley & Sons, Ltd.
Item Factor Analysis: Current Approaches and Future Directions
ERIC Educational Resources Information Center
Wirth, R. J.; Edwards, Michael C.
2007-01-01
The rationale underlying factor analysis applies to continuous and categorical variables alike; however, the models and estimation methods for continuous (i.e., interval or ratio scale) data are not appropriate for item-level data that are categorical in nature. The authors provide a targeted review and synthesis of the item factor analysis (IFA)…
Developments in Cylindrical Shell Stability Analysis
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Starnes, James H., Jr.
1998-01-01
Today high-performance computing systems and new analytical and numerical techniques enable engineers to explore the use of advanced materials for shell design. This paper reviews some of the historical developments of shell buckling analysis and design. The paper concludes by identifying key research directions for reliable and robust methods development in shell stability analysis and design.
Directional pair distribution function for diffraction line profile analysis of atomistic models
Leonardi, Alberto; Leoni, Matteo; Scardi, Paolo
2013-01-01
The concept of the directional pair distribution function is proposed to describe line broadening effects in powder patterns calculated from atomistic models of nano-polycrystalline microstructures. The approach provides at the same time a description of the size effect for domains of any shape and a detailed explanation of the strain effect caused by the local atomic displacement. The latter is discussed in terms of different strain types, also accounting for strain field anisotropy and grain boundary effects. The results can in addition be directly read in terms of traditional line profile analysis, such as that based on the Warren–Averbach method. PMID:23396818
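A directional pair distribution function restricts the ordinary pair-distance histogram to atom pairs whose separation vector lies within a cone around a chosen direction; a minimal sketch for an atomistic model (the cone half-angle and binning are illustrative choices, not the authors' parameters):

```python
import math

def directional_pdf(positions, direction, r_max=10.0, nbins=50, cone_deg=10.0):
    """Histogram of pair distances restricted to pairs whose separation
    vector lies within cone_deg of `direction`.

    positions: list of (x, y, z) atom coordinates; direction: any
    non-zero vector (normalized internally)."""
    norm = math.sqrt(sum(d * d for d in direction))
    d_hat = [d / norm for d in direction]
    cos_min = math.cos(math.radians(cone_deg))
    hist = [0] * nbins
    n = len(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            v = [positions[j][k] - positions[i][k] for k in range(3)]
            r = math.sqrt(sum(c * c for c in v))
            if r == 0 or r >= r_max:
                continue
            cosang = sum(c * d for c, d in zip(v, d_hat)) / r
            if cosang >= cos_min:           # pair aligned with direction
                hist[int(r / r_max * nbins)] += 1
    return hist
```

Comparing such histograms along different directions exposes the anisotropic strain and shape effects that the abstract discusses.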
ERIC Educational Resources Information Center
Mulik, James D.; Sawicki, Eugene
1979-01-01
Accurate for the analysis of ions in solution, this form of analysis enables the analyst to directly assay many compounds that previously were difficult or impossible to analyze. The method is a combination of the methodologies of ion exchange, liquid chromatography, and conductimetric determination with eluant suppression. (Author/RE)
NASA Astrophysics Data System (ADS)
Kamińska, Anna
2010-01-01
The relationship between karst of chalk and tectonics in the interfluve of the middle Wieprz and Bug Rivers has already been examined by Maruszczak (1966), Harasimiuk (1980) and Dobrowolski (1998). In investigating the connection between the course of karst formation and the substratum structure, the directions of the landforms and their spatial pattern were analysed and later compared to the structural pattern. The natural complement to the collected data is a quantitative analysis using statistical methods. This paper deals with the characteristics of such a quantitative analysis. Using the tools of directional statistics, the following indexes have been calculated: the mean vector orientation, the length of the mean vector, the strength of the mean vector, and the Batschelet variance, as well as confidence intervals for the mean vector. In order to examine the distribution structure of these forms, selected methods of spatial statistics have been used: angular wavelet analysis (Rosenberg 2004) and semivariogram analysis (Namysłowska-Wilczyńska 2006). On the basis of the conducted analyses, it is possible to describe in detail the regularities in the spatial distribution of the surface karst forms in the interfluve of the middle Wieprz and Bug Rivers. The orientation analysis reveals an important feature of their direction: along with a rise in the size of surface karst forms, the level of concentration around the mean vector orientation increases. Primary karst forms show weak concentration along the longitudinal direction, whereas complex forms are clearly concentrated along the WNW-ESE direction. Hence, only after the primary forms clump into complex ones does the convergence of the surface karst form directions with the direction of the main faults in the Meso-Cenozoic complex become visible (after A. Henkiel 1984). The results of the wavelet analysis modified by Rosenberg (2004) have indicated significant directions of clumping of the surface karst forms.
A clear difference in the distribution of these forms between the west and east areas is noticed. Whereas the west area is dominated by the W-E, NW-SE and N-S directions, the karst forms in the east are concentrated along the NE-SW direction. The semivariogram analysis has confirmed the importance of the W-E and NE-SW directions. Moreover, this analysis has indicated which areas are characterized by poorly aligned karst form directions. One is the region where the Kock-Wasylów dislocation zone crosses the Święcica dislocation zone in the north-east part of the analysed area; the south-east region is the second such area. The picture of the spatial pattern confirms the previous results (Dobrowolski 1998) and refers clearly to the structural pattern of this area. Nevertheless, the analyses mentioned above have shown the dominance of the W-E direction over the NW-SE one. The obtained results of the spatial and directional analyses expand and confirm existing information about the relation between the spatial pattern of the karst landforms and the tectonics in the interfluve of the middle Wieprz and Bug Rivers.
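The basic directional-statistics quantities used above, mean vector orientation and mean resultant length, can be sketched for axial data such as landform orientations via the standard angle-doubling device (a generic illustration, not the authors' exact computation):

```python
import math

def mean_orientation(angles_deg):
    """Mean vector orientation and mean resultant length for axial data
    (orientations in [0, 180)).  Angles are doubled so that an
    orientation and its reverse count as the same direction."""
    doubled = [math.radians(2 * a) for a in angles_deg]
    c = sum(math.cos(t) for t in doubled) / len(doubled)
    s = sum(math.sin(t) for t in doubled) / len(doubled)
    r = math.hypot(c, s)                       # 1 = tight concentration, 0 = none
    mean = (math.degrees(math.atan2(s, c)) / 2) % 180
    return mean, r
```

The resultant length r is the concentration measure the abstract refers to: larger karst forms yield r closer to 1 around the mean orientation.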
David C. Chojnacky; Randolph H. Wynne; Christine E. Blinn
2009-01-01
Methodology is lacking to easily map Forest Inventory and Analysis (FIA) inventory statistics for all attribute variables without having to develop separate models and methods for each variable. We developed a mapping method that can directly transfer tabular data to a map on which pixels can be added any way desired to estimate carbon (or any other variable) for a...
ERIC Educational Resources Information Center
Suen, Hoi K.; And Others
The applicability of the Bayesian random-effect analysis of variance (ANOVA) model developed by G. C. Tiao and W. Y. Tan (1966), and of a method suggested by H. K. Suen and P. S. Lee (1987), is explored for the generalizability analysis of autocorrelated data. According to Tiao and Tan, if time series data could be described as a first-order…
DNA Extraction from Soils: Old Bias for New Microbial Diversity Analysis Methods
Martin-Laurent, F.; Philippot, L.; Hallet, S.; Chaussod, R.; Germon, J. C.; Soulas, G.; Catroux, G.
2001-01-01
The impact of three different soil DNA extraction methods on bacterial diversity was evaluated using PCR-based 16S ribosomal DNA analysis. DNA extracted directly from three soils showing contrasting physicochemical properties was subjected to amplified ribosomal DNA restriction analysis and ribosomal intergenic spacer analysis (RISA). The obtained RISA patterns revealed clearly that both the phylotype abundance and the composition of the indigenous bacterial community are dependent on the DNA recovery method used. In addition, this effect was also shown in the context of an experimental study aiming to estimate the impact on soil biodiversity of the application of farmyard manure or sewage sludge onto a monoculture of maize for 15 years. PMID:11319122
Minty, B; Ramsey, E D; Davies, I
2000-12-01
A direct aqueous supercritical fluid extraction (SFE) system was developed which can be directly interfaced to an infrared spectrometer for the determination of oil in water. The technique is designed to provide an environmentally clean, automated alternative to established IR methods for oil in water analysis which require the use of restricted organic solvents. The SFE-FTIR method involves minimum sample handling stages, with on-line analysis of a 500 ml water sample being complete within 15 min. Method accuracy for determining water samples spiked with gasoline, white spirit, kerosene, diesel or engine oil was 81-100% with precision (RSD) ranging from 3 to 17%. An independent evaluation determined a 2 ppm limit of quantification for diesel in industrial effluents. The results of a comparative study involving an established IR method and the SFE-FTIR method indicate that oil levels calculated using an accepted equation which includes coefficients derived from reference hydrocarbon standards may result in significant errors. A new approach permitted the derivation of quantification coefficients for the SFE-FTIR analyses which provided improved results. In situations where the identity of the oil to be analysed is known, a rapid off-line SFE-FTIR system calibration procedure was developed and successfully applied to various oils. An optional in-line silica gel clean-up procedure incorporated within the SFE-FTIR system enables the same water sample to be analysed for total oil content including vegetable oils and selectively for petroleum oil content within a total of 20 min. At the end of an analysis the SFE system is cleaned using an in situ 3 min clean cycle.
NASA Technical Reports Server (NTRS)
Ratcliffe, James G.; Jackson, Wade C.
2008-01-01
A simple analysis method has been developed for predicting the residual compressive strength of impact-damaged sandwich panels. The method is tailored for honeycomb core-based sandwich specimens that exhibit an indentation growth failure mode under axial compressive loading, which is driven largely by the crushing behavior of the core material. The analysis method is in the form of a finite element model, where the impact-damaged facesheet is represented using shell elements and the core material is represented using spring elements, aligned in the thickness direction of the core. The nonlinear crush response of the core material used in the analysis is based on data from flatwise compression tests. A comparison with a previous analysis method and some experimental data shows good agreement with results from this new approach.
USDA-ARS?s Scientific Manuscript database
The butanol-HCl spectrophotometric assay is widely used to quantify extractable and insoluble forms of condensed tannin (CT, syn. proanthocyanidin) in foods, feeds, and foliage of herbaceous and woody plants. However, this method underestimates total CT content when applied directly to plant materia...
The Relationship between Food Insecurity and Obesity in Rural Childbearing Women
ERIC Educational Resources Information Center
Olson, Christine M.; Strawderman, Myla S.
2008-01-01
Context: While food insecurity and obesity have been shown to be positively associated in women, little is known about the direction of the causal relationship between these 2 constructs. Purpose: To clarify the direction of the causal relationship between food insecurity and obesity. Methods: Chi-square and logistic regression analysis of data…
A Comparison of Pyramidal Staff Training and Direct Staff Training in Community-Based Day Programs
ERIC Educational Resources Information Center
Haberlin, Alayna T.; Beauchamp, Ken; Agnew, Judy; O'Brien, Floyd
2012-01-01
This study evaluated two methods of training staff who were working with individuals with developmental disabilities: pyramidal training and consultant-led training. In the pyramidal training, supervisors were trained in the principles of applied behavior analysis (ABA) and in delivering feedback. The supervisors then trained their direct-care…
Donnelly, Aoife; Misstear, Bruce; Broderick, Brian
2011-02-15
Background concentrations of nitrogen dioxide (NO(2)) are not constant but vary temporally and spatially. The current paper presents a powerful tool for the quantification of the effects of wind direction and wind speed on background NO(2) concentrations, particularly in cases where monitoring data are limited. In contrast to previous studies which applied similar methods to sites directly affected by local pollution sources, the current study focuses on background sites with the aim of improving methods for predicting background concentrations adopted in air quality modelling studies. The relationship between measured NO(2) concentration in air at three such sites in Ireland and locally measured wind direction has been quantified using nonparametric regression methods. The major aim was to analyse a method for quantifying the effects of local wind direction on background levels of NO(2) in Ireland. The method was expanded to include wind speed as an added predictor variable. A Gaussian kernel function is used in the analysis and circular statistics employed for the wind direction variable. Wind direction and wind speed were both found to have a statistically significant effect on background levels of NO(2) at all three sites. Frequently environmental impact assessments are based on short term baseline monitoring producing a limited dataset. The presented non-parametric regression methods, in contrast to the frequently used methods such as binning of the data, allow concentrations for missing data pairs to be estimated and distinction between spurious and true peaks in concentrations to be made. The methods were found to provide a realistic estimation of long term concentration variation with wind direction and speed, even for cases where the data set is limited. 
Accurate identification of the actual variation at each location and causative factors could be made, thus supporting the improved definition of background concentrations for use in air quality modelling studies. Copyright © 2010 Elsevier B.V. All rights reserved.
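A Nadaraya-Watson kernel regression with a Gaussian kernel and circular treatment of wind direction, as described above, can be sketched as follows (the bandwidth and data are illustrative; the published method also includes wind speed as a second predictor):

```python
import math

def circ_diff_deg(a, b):
    """Smallest signed difference between two wind directions (degrees)."""
    return (a - b + 180) % 360 - 180

def kernel_estimate(target_dir, directions, concentrations, bandwidth=20.0):
    """Nadaraya-Watson estimate of mean concentration at a wind
    direction, using a Gaussian kernel on the circular distance so that
    355 deg and 5 deg are treated as neighbours."""
    weights = [math.exp(-0.5 * (circ_diff_deg(d, target_dir) / bandwidth) ** 2)
               for d in directions]
    return sum(w * c for w, c in zip(weights, concentrations)) / sum(weights)
```

Evaluating the estimator on a fine grid of directions yields a smooth concentration-versus-direction curve even where the monitoring record has gaps, which is the advantage over binning that the abstract notes.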
An accuracy measurement method for star trackers based on direct astronomic observation
Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping
2016-01-01
The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such an accuracy remains a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker will eventually determine the satellite performance. A new and robust accuracy measurement method for a star tracker based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted to account for the precise movements of the Earth, and the error curves of directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion has been proposed in this paper, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. Such a measurement environment is close to the in-orbit conditions and it can satisfy the stringent requirement for high-accuracy star trackers. PMID:26948412
NASA Astrophysics Data System (ADS)
Tapimo, Romuald; Tagne Kamdem, Hervé Thierry; Yemele, David
2018-03-01
A discrete spherical harmonics method is developed for the radiative transfer problem in an inhomogeneous polarized planar atmosphere illuminated at the top by collimated sunlight while the bottom reflects the radiation. The method expands both the Stokes vector and the phase matrix in a finite series of generalized spherical functions, and the resulting vector radiative transfer equation is expressed in a set of polar directions. Hence, the polarized characteristics of the radiance within the atmosphere at any polar direction and azimuthal angle can be determined without linearization and/or interpolation. The spatial dependence of the problem is solved using the spectral Chebyshev method. The emergent and transmitted radiative intensity and the degree of polarization are predicted for both Rayleigh and Mie scattering. The discrete spherical harmonics method predictions for an optically thin atmosphere using 36 streams are found to be in good agreement with benchmark literature results. The maximum deviation between the proposed method and literature results for polar directions |μ| ≥ 0.1 is less than 0.5% and 0.9% for Rayleigh and Mie scattering, respectively. These deviations for directions close to zero are about 3% and 10% for Rayleigh and Mie scattering, respectively.
Woo, Kang-Lyung
2005-01-01
Low molecular weight alcohols including fusel oil were determined using diethyl ether extraction and capillary gas chromatography. Twelve kinds of alcohols were successfully resolved on the HP-FFAP (polyethylene glycol) capillary column. The diethyl ether extraction method was very useful for the analysis of alcohols in alcoholic beverages and biological samples with excellent cleanliness of the resulting chromatograms and high sensitivity compared to the direct injection method. Calibration graphs for all standard alcohols showed good linearity in the concentration range used, 0.001-2% (w/v) for all alcohols. Salting out effects were significant (p < 0.01) for the low molecular weight alcohols methanol, isopropanol, propanol, 2-butanol, n-butanol and ethanol, but not for the relatively high molecular weight alcohols amyl alcohol, isoamyl alcohol, and heptanol. The coefficients of variation of the relative molar responses were less than 5% for all of the alcohols. The limits of detection and quantitation were 1-5 and 10-60 microg/L for the diethyl ether extraction method, and 10-50 and 100-350 microg/L for the direct injection method, respectively. The retention times and relative retention times of standard alcohols were significantly shifted in the direct injection method when the injection volumes were changed, even with the same analysis conditions, but they were not influenced in the diethyl ether extraction method. The recoveries by the diethyl ether extraction method were greater than 95% for all samples and greater than 97% for biological samples.
A round robin approach to the analysis of bisphenol a (BPA) in human blood samples
2014-01-01
Background Human exposure to bisphenol A (BPA) is ubiquitous, yet there are concerns about whether BPA can be measured in human blood. This Round Robin was designed to address this concern through three goals: 1) to identify collection materials, reagents and detection apparatuses that do not contribute BPA to serum; 2) to identify sensitive and precise methods to accurately measure unconjugated BPA (uBPA) and BPA-glucuronide (BPA-G), a metabolite, in serum; and 3) to evaluate whether inadvertent hydrolysis of BPA-G occurs during sample handling and processing. Methods Four laboratories participated in this Round Robin. Laboratories screened materials to identify BPA contamination in collection and analysis materials. Serum was spiked with concentrations of uBPA and/or BPA-G ranging from 0.09-19.5 (uBPA) and 0.5-32 (BPA-G) ng/mL. Additional samples were preserved unspiked as ‘environmental’ samples. Blinded samples were provided to laboratories that used LC/MSMS to simultaneously quantify uBPA and BPA-G. To determine whether inadvertent hydrolysis of BPA metabolites occurred, samples spiked with only BPA-G were analyzed for the presence of uBPA. Finally, three laboratories compared direct and indirect methods of quantifying BPA-G. Results We identified collection materials and reagents that did not introduce BPA contamination. In the blinded spiked sample analysis, all laboratories were able to distinguish low from high values of uBPA and BPA-G, for the whole spiked sample range and for those samples spiked with the three lowest concentrations (0.5-3.1 ng/ml). By completion of the Round Robin, three laboratories had verified methods for the analysis of uBPA and two verified for the analysis of BPA-G (verification determined by: 4 of 5 samples within 20% of spiked concentrations). In the analysis of BPA-G only spiked samples, all laboratories reported BPA-G was the majority of BPA detected (92.2 – 100%). 
Finally, laboratories were more likely to be verified using direct methods than indirect ones using enzymatic hydrolysis. Conclusions Sensitive and accurate methods for the direct quantification of uBPA and BPA-G were developed in multiple laboratories and can be used for the analysis of human serum samples. BPA contamination can be controlled during sample collection and inadvertent hydrolysis of BPA conjugates can be avoided during sample handling. PMID:24690217
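The verification criterion quoted above (at least 4 of 5 blinded samples within 20% of the spiked concentration) can be expressed directly; a minimal sketch with made-up concentrations:

```python
def lab_verified(measured, spiked, tol=0.20, required=4):
    """True if at least `required` measurements fall within `tol`
    relative error of the corresponding spiked concentrations."""
    hits = sum(abs(m - s) / s <= tol for m, s in zip(measured, spiked))
    return hits >= required
```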
Sánchez, Beatriz; Rodríguez, Mar; Casado, Eva M; Martín, Alberto; Córdoba, Juan J
2008-12-01
A variety of previously established mechanical and chemical treatments to achieve fungal cell lysis combined with a semiautomatic system operated by a vacuum pump were tested to obtain DNA extract to be directly used in randomly amplified polymorphic DNA (RAPD)-PCR to differentiate cyclopiazonic acid-producing and -nonproducing mold strains. A DNA extraction method that includes digestion with proteinase K and lyticase prior to using a mortar and pestle grinding and a semiautomatic vacuum system yielded DNA of high quality in all the fungal strains and species tested, at concentrations ranging from 17 to 89 ng/microl in 150 microl of the final DNA extract. Two microliters of DNA extracted with this method was directly used for RAPD-PCR using primer (GACA)4. Reproducible RAPD fingerprints showing high differences between producer and nonproducer strains were observed. These differences in the RAPD patterns did not differentiate all the strains tested in clusters by cyclopiazonic acid production but may be very useful to distinguish cyclopiazonic acid producer strains from nonproducer strains by a simple RAPD analysis. Thus, the DNA extracts obtained could be used directly without previous purification and quantification for RAPD analysis to differentiate cyclopiazonic acid producer from nonproducer mold strains. This combined analysis could be adaptable to other toxigenic fungal species to enable differentiation of toxigenic and non-toxigenic molds, a procedure of great interest in food safety.
Method for high resolution magnetic resonance analysis using magic angle technique
Wind, Robert A.; Hu, Jian Zhi
2003-11-25
A method of performing a magnetic resonance analysis of a biological object that includes placing the biological object in a main magnetic field and in a radio frequency field, the main magnetic field having a static field direction; rotating the biological object at a rotational frequency of less than about 100 Hz around an axis positioned at an angle of about 54°44' relative to the main magnetic static field direction; pulsing the radio frequency to provide a sequence that includes a magic angle turning pulse segment; and collecting data generated by the pulsed radio frequency. According to another embodiment, the radio frequency is pulsed to provide a sequence capable of producing a spectrum that is substantially free of spinning sideband peaks.
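The rotation-axis angle quoted, about 54°44', is the magic angle arccos(1/√3), the zero of the second Legendre polynomial P2(cos θ) that governs dipolar and chemical-shift-anisotropy line broadening; a quick numerical check:

```python
import math

# Magic angle: the zero of P2(cos t) = (3 cos^2 t - 1) / 2
magic = math.degrees(math.acos(1 / math.sqrt(3)))     # ~54.7356 degrees
p2 = (3 * math.cos(math.radians(magic)) ** 2 - 1) / 2  # vanishes at the magic angle
```

Spinning the sample about this axis averages the orientation-dependent broadening to zero, which is why the claimed spectra are substantially free of sidebands.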
Zhu, Hongbin; Wang, Chunyan; Qi, Yao; Song, Fengrui; Liu, Zhiqiang; Liu, Shuying
2012-11-08
This study presents a novel and rapid method to identify chemical markers for the quality control of Radix Aconiti Preparata, a widely used traditional herbal medicine. In the method, samples prepared with a fast extraction procedure were analyzed using direct analysis in real time mass spectrometry (DART MS) combined with multivariate data analysis. At present, the quality assessment of Radix Aconiti Preparata is based on the two processing methods recorded in the Chinese Pharmacopoeia for the purpose of reducing the toxicity of Radix Aconiti and ensuring its clinical therapeutic efficacy. To ensure safety and efficacy in clinical use, the degree of processing of Radix Aconiti should be well controlled and assessed. In this paper, hierarchical cluster analysis and principal component analysis were performed to evaluate the DART MS data of Radix Aconiti Preparata samples at different processing times. The results showed that well-processed Radix Aconiti Preparata, insufficiently processed samples and raw Radix Aconiti could be clustered reasonably according to their constituents. The loading plot shows that the chemical markers with the most influence on the discrimination between qualified and unqualified samples were mainly monoester and diester diterpenoid aconitines, i.e. benzoylmesaconine, hypaconitine, mesaconitine, neoline, benzoylhypaconine, benzoylaconine, fuziline, aconitine and 10-OH-mesaconitine. The established DART MS approach in combination with multivariate data analysis provides a flexible and reliable method for quality assessment of toxic herbal medicines. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Xiao; Gao, Feng; Dong, Junyu; Qi, Qiang
2018-04-01
Synthetic aperture radar (SAR) imagery is independent of atmospheric conditions, making it an ideal image source for change detection. Existing methods directly analyze all regions of the speckle-noise-contaminated difference image, so their performance is easily degraded by small noisy regions. In this paper, we propose a novel saliency-guided change detection framework based on pattern and intensity distinctiveness analysis. The saliency analysis step removes small noisy regions and therefore makes the proposed method more robust to speckle noise. In the proposed method, the log-ratio operator is first applied to obtain a difference image (DI). Then, saliency detection based on pattern and intensity distinctiveness analysis is used to obtain the changed-region candidates. Finally, principal component analysis and k-means clustering are employed to analyze pixels in the changed-region candidates, and the final change map is obtained by classifying these pixels into changed and unchanged classes. Experimental results on two real SAR image datasets demonstrate the effectiveness of the proposed method.
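The DI → candidate screening → PCA → k-means pipeline can be sketched on synthetic data. This is a minimal stand-in: the simple thresholding below replaces the paper's pattern/intensity-distinctiveness saliency step, and all data and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two co-registered toy "SAR" intensity images with multiplicative
# speckle; a block of img2 is brightened to simulate a change.
img1 = rng.gamma(4.0, 25.0, (32, 32))
img2 = img1 * rng.gamma(16.0, 1.0 / 16.0, (32, 32))
img2[8:20, 8:20] *= 6.0

# Step 1: log-ratio operator gives the difference image (DI).
di = np.abs(np.log(img2 + 1.0) - np.log(img1 + 1.0))

# Step 2: changed-region candidates (crude stand-in for saliency).
cand = di > di.mean()
rows, cols = np.nonzero(cand)

# Step 3: PCA (via SVD) on 3x3 neighbourhood features of candidates.
pad = np.pad(di, 1, mode="edge")
feats = np.array([pad[i:i + 3, j:j + 3].ravel() for i, j in zip(rows, cols)])
feats -= feats.mean(axis=0)
_, _, vt = np.linalg.svd(feats, full_matrices=False)
proj = feats @ vt[:2].T                     # two leading components

# Step 4: 2-means clustering labels candidate pixels changed/unchanged.
c = proj[[np.argmin(proj[:, 0]), np.argmax(proj[:, 0])]].copy()
for _ in range(20):
    lab = np.argmin(((proj[:, None] - c[None]) ** 2).sum(-1), axis=1)
    c = np.array([proj[lab == k].mean(axis=0) if np.any(lab == k) else c[k]
                  for k in range(2)])

# The cluster with the larger mean DI value is the "changed" class.
changed_cluster = int(np.argmax([di[cand][lab == k].mean() for k in range(2)]))
change_map = np.zeros(di.shape, dtype=bool)
change_map[rows, cols] = lab == changed_cluster
```

On this toy scene the change map concentrates on the brightened block; the real method's saliency analysis is what suppresses isolated speckle-induced candidates before clustering.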
A Direct Cell Quenching Method for Cell-Culture Based Metabolomics
A crucial step in metabolomic analysis of cellular extracts is the cell quenching process. The conventional method first uses trypsin to detach cells from their growth surface. This inevitably changes the profile of cellular metabolites since the detachment of cells from the extr...
DETERMINING BERYLLIUM IN DRINKING WATER BY GRAPHITE FURNACE ATOMIC ABSORPTION SPECTROSCOPY
A direct graphite furnace atomic absorption spectroscopy method for the analysis of beryllium in drinking water has been derived from a method for determining beryllium in urine. Ammonium phosphomolybdate and ascorbic acid were employed as matrix modifiers. The matrix modifiers s...
Determination of residual solvents in pharmaceuticals by thermal desorption-GC/MS.
Hashimoto, K; Urakami, K; Fujiwara, Y; Terada, S; Watanabe, C
2001-05-01
A novel method for the determination of residual solvents in pharmaceuticals by thermal desorption (TD)-GC/MS has been established. A programmed-temperature pyrolyzer (double-shot pyrolyzer) is applied for the TD. This method requires no sample pretreatment and only very small amounts of sample. Solvents desorbed directly from intact pharmaceuticals (ca. 1 mg) in the desorption cup (5 mm x 3.8 mm i.d.) were cryofocused at the head of a capillary column prior to GC/MS analysis. The desorption temperature was set individually for each sample at a point about 20 degrees C above its melting point, and held for 3 min. The analytical results for 7 different pharmaceuticals were in agreement with those obtained by direct injection (DI) of solutions following USP XXIII. The proposed TD-GC/MS method was demonstrated to be very useful for the identification and quantification of residual solvents. Furthermore, the method is simple, allows rapid analysis and gives good repeatability.
Interferometric Laser Scanner for Direction Determination
Kaloshin, Gennady; Lukin, Igor
2016-01-01
In this paper, we explore the potential capabilities of a new laser-scanning-based method for direction determination. The method for fully coherent beams is extended to the case in which the interference pattern is produced in the turbulent atmosphere by two partially coherent sources. The theoretical analysis identifies the conditions under which a stable pattern may form on extended paths of 0.5–10 km in length. We describe a method for selecting laser scanner parameters that ensures the necessary operating range in the atmosphere for any possible turbulence characteristics. The method is based on analysis of the mean intensity of the interference pattern formed by two partially coherent sources of optical radiation. Visibility of the interference pattern is estimated as a function of propagation path length, the structure parameter of atmospheric turbulence, and the spacing of the radiation sources producing the pattern. It is shown that, when atmospheric turbulence is moderately strong, the contrast of the interference pattern of the laser scanner may ensure its applicability at ranges up to 10 km. PMID:26805841
Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.
Kwak, Nojun
2016-05-20
Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centering step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement its kernel version. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are applied to problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
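The core of the (batch) NPT can be illustrated in a few lines: factor the positive semi-definite kernel matrix as K = XXᵀ, so each row of X is an explicit coordinate vector whose Euclidean inner products reproduce the kernel values. A minimal sketch, with an arbitrary RBF kernel and random data; the paper's incremental update is not shown:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(10, 3))            # ten arbitrary samples

# Gaussian (RBF) kernel matrix of the training samples.
sq = ((data[:, None] - data[None]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)

# Nonlinear projection trick: factor K = X X^T via eigendecomposition,
# so each row of X is an explicit coordinate in the kernel feature space.
w, v = np.linalg.eigh(K)
keep = w > 1e-10                           # drop numerically null modes
X = v[:, keep] * np.sqrt(w[keep])

# Any linear algorithm run on X now behaves as its kernel version:
# the coordinates reproduce all kernel inner products.
assert np.allclose(X @ X.T, K)
```

The incremental difficulty the abstract describes is visible here: adding a sample enlarges K, and a naive approach would redo the whole eigendecomposition, which is what INPT avoids.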
Rapid analysis of fertilizers by the direct-reading thermometric method.
Sajó, I; Sipos, B
1972-05-01
The authors have developed rapid methods for the determination of the main components of fertilizers, namely phosphate, potassium and nitrogen fixed in various forms. In the absence of magnesium ions phosphate is precipitated with magnesia mixture; in the presence of magnesium ions ammonium phosphomolybdate is precipitated and the excess of molybdate is reacted with hydrogen peroxide. Potassium is determined by precipitation with silico-fluoride. For nitrogen fixed as ammonium salts the ammonium ions are condensed in a basic solution with formalin to hexamethylenetetramine; for nitrogen fixed as carbamide the latter is decomposed with sodium nitrite; for nitrogen fixed as nitrate the latter is reduced with titanium(III). In each case the temperature change of the test solution is measured. Practically all essential components of fertilizers may be determined by direct-reading thermometry; with this method and special apparatus the time of analysis is reduced to at most about 15 min for any determination.
Electro-quasistatic analysis of an electrostatic induction micromotor using the cell method.
Monzón-Verona, José Miguel; Santana-Martín, Francisco Jorge; García-Alonso, Santiago; Montiel-Nelson, Juan Antonio
2010-01-01
An electro-quasistatic analysis of an induction micromotor has been carried out using the Cell Method. We employed the direct Finite Formulation (FF) of the electromagnetic laws, thereby avoiding a further discretization step. The Cell Method (CM) is used to solve the field equations over the entire domain (2D space) of the micromotor. We have reformulated the field laws in a direct FF and analyzed the physical quantities to make explicit the relationship between magnitudes and laws. We applied a primal-dual barycentric discretization of the 2D space. The electric potential has been calculated at each node of the primal mesh using CM. For verification purposes, an analytical electric potential equation is introduced as a reference. In the frequency domain, the results demonstrate that the error in the calculated potential is negligible (<3‰). In the time domain, the potential value in the transient state tends to the steady-state value. PMID:22163397
Hughes, I
1998-09-24
The direct analysis of selected components from combinatorial libraries by sensitive methods such as mass spectrometry is potentially more efficient than deconvolution and tagging strategies since additional steps of resynthesis or introduction of molecular tags are avoided. A substituent selection procedure is described that eliminates the mass degeneracy commonly observed in libraries prepared by "split-and-mix" methods, without recourse to high-resolution mass measurements. A set of simple rules guides the choice of substituents such that all components of the library have unique nominal masses. Additional rules extend the scope by ensuring that characteristic isotopic mass patterns distinguish isobaric components. The method is applicable to libraries having from two to four varying substituent groups and can encode from a few hundred to several thousand components. No restrictions are imposed on the manner in which the "self-coded" library is synthesized or screened.
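The idea behind the substituent-selection rules can be illustrated with a toy two-position library: choose nominal-mass increments whose pairwise sums are all distinct, so every component is uniquely identified by its nominal mass. The mass values below are invented for illustration, not taken from the paper:

```python
from itertools import product

# Hypothetical nominal-mass increments (Da) for two substituent
# positions; the second set is unevenly spaced so no two sums coincide.
r1 = [15, 29, 43, 57]            # an evenly spaced homologous series
r2 = [77, 92, 108, 125]          # uneven spacing breaks degeneracy

core = 200                        # nominal mass of the common scaffold
masses = sorted(core + a + b for a, b in product(r1, r2))

# Every one of the 16 library members has a unique nominal mass, so a
# single mass measurement identifies the component without tags.
assert len(set(masses)) == len(masses) == 16
```

Had the second set shared the 14 Da spacing of the first (e.g. 77, 91, 105, 119), many sums would coincide (15 + 91 = 29 + 77); that is exactly the mass degeneracy the rules are designed to avoid.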
Development of an Empirical Method for Predicting Jet Mixing Noise of Cold Flow Rectangular Jets
NASA Technical Reports Server (NTRS)
Russell, James W.
1999-01-01
This report presents an empirical method for predicting the jet mixing noise levels of cold flow rectangular jets. The report presents a detailed analysis of the methodology used in development of the prediction method. The empirical correlations used are based on narrow band acoustic data for cold flow rectangular model nozzle tests conducted in the NASA Langley Jet Noise Laboratory. There were 20 separate nozzle test operating conditions. For each operating condition 60 Hz bandwidth microphone measurements were made over a frequency range from 0 to 60,000 Hz. Measurements were performed at 16 polar directivity angles ranging from 45 degrees to 157.5 degrees. At each polar directivity angle, measurements were made at 9 azimuth directivity angles. The report shows the methods employed to remove screech tones and shock noise from the data in order to obtain the jet mixing noise component. The jet mixing noise was defined in terms of one third octave band spectral content, polar and azimuth directivity, and overall power level. Empirical correlations were performed over the range of test conditions to define each of these jet mixing noise parameters as a function of aspect ratio, jet velocity, and polar and azimuth directivity angles. The report presents the method for predicting the overall power level, the average polar directivity, the azimuth directivity and the location and shape of the spectra for jet mixing noise of cold flow rectangular jets.
Yamashita, Yasunobu; Ueda, Kazuki; Kawaji, Yuki; Tamura, Takashi; Itonaga, Masahiro; Yoshida, Takeichi; Maeda, Hiroki; Magari, Hirohito; Maekita, Takao; Iguchi, Mikitaka; Tamai, Hideyuki; Ichinose, Masao; Kato, Jun
2016-01-01
Background/Aims Transpapillary forceps biopsy is an effective diagnostic technique in patients with biliary stricture. This prospective study aimed to determine the usefulness of the wire-grasping method as a new technique for forceps biopsy. Methods Consecutive patients with biliary stricture or irregularities of the bile duct wall were randomly allocated to either the direct or wire-grasping method group. In the wire-grasping method, forceps in the duodenum grasps a guide-wire placed into the bile duct beforehand, and then, the forceps are pushed through the papilla without endoscopic sphincterotomy. In the direct method, forceps are directly pushed into the bile duct alongside a guide-wire. The primary endpoint was the success rate of obtaining specimens suitable for adequate pathological examination. Results In total, 32 patients were enrolled, and 28 (14 in each group) were eligible for analysis. The success rate was significantly higher using the wire-grasping method than the direct method (100% vs 50%, p=0.016). Sensitivity and accuracy for the diagnosis of cancer were comparable in patients with the successful procurement of biopsy specimens between the two methods (91% vs 83% and 93% vs 86%, respectively). Conclusions The wire-grasping method is useful for diagnosing patients with biliary stricture or irregularities of the bile duct wall. PMID:27021502
NASA Astrophysics Data System (ADS)
Sboev, A. G.; Ilyashenko, A. S.; Vetrova, O. A.
1997-02-01
The method of buckling evaluation implemented in the Monte Carlo code MCS is described. This method was applied to a computational analysis of the well-known light water experiments TRX-1 and TRX-2. The comparison shows that the Monte Carlo results obtained in different ways do not coincide: the MCS calculations with the given experimental bucklings; the MCS calculations with bucklings evaluated from full-core MCS direct simulations; the full-core MCNP and MCS direct simulations; and the MCNP and MCS calculations in which the results of cell calculations are corrected by coefficients taking into account the leakage from the core. The buckling values evaluated by full-core MCS calculations also differed from the experimental ones, especially in the case of TRX-1, where the difference corresponded to a 0.5 percent increase of the Keff value.
NASA Technical Reports Server (NTRS)
Padavala, Satyasrinivas; Palazzolo, Alan B.; Vallely, Pat; Ryan, Steve
1994-01-01
An improved dynamic analysis for liquid annular seals with arbitrary profile, based on a method first proposed by Nelson and Nguyen, is presented. An improved first-order solution that incorporates a continuous interpolation of perturbed quantities in the circumferential direction is developed. The original method approximates circumferential gradients with a scheme based on Fast Fourier Transforms (FFT). A simpler scheme based on cubic splines is found to be computationally more efficient, with better convergence at higher eccentricities. A new approach for computing dynamic coefficients based on an externally specified load is introduced. The improved analysis is extended to account for arbitrarily varying seal profiles in both the axial and circumferential directions. An example case of an elliptical seal with varying degrees of axial curvature is analyzed. A case study based on actual operating clearances of an interstage seal of the Space Shuttle Main Engine High Pressure Oxygen Turbopump is presented.
Fast ℓ1-regularized space-time adaptive processing using alternating direction method of multipliers
NASA Astrophysics Data System (ADS)
Qin, Lilong; Wu, Manqing; Wang, Xuan; Dong, Zhen
2017-04-01
Motivated by the sparsity of filter coefficients in full-dimension space-time adaptive processing (STAP) algorithms, this paper proposes a fast ℓ1-regularized STAP algorithm based on the alternating direction method of multipliers (ADMM) to accelerate convergence and reduce computation. The proposed algorithm uses a splitting variable to obtain an equivalent optimization formulation, which is addressed with an augmented Lagrangian method. Using this alternating recursion, the method rapidly reaches a low minimum mean-square error without a large number of calculations. Through theoretical analysis and experimental verification, we demonstrate that the proposed algorithm provides a better output signal-to-clutter-plus-noise ratio than competing algorithms.
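The splitting described above is the standard ADMM recipe for an ℓ1-regularized least-squares problem: an x-update that solves a quadratic subproblem, a z-update that is soft thresholding (the ℓ1 proximal operator), and a dual ascent step. A generic sketch on synthetic data, not the STAP clutter model; dimensions and parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse ground-truth coefficient vector and noisy linear measurements.
n, m = 40, 100
x_true = np.zeros(n)
x_true[[3, 17, 31]] = [2.0, -1.5, 1.0]
A = rng.normal(size=(m, n))
b = A @ x_true + 0.01 * rng.normal(size=m)

lam, rho = 0.5, 1.0
x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)

# Cache the Cholesky factor used by every x-update.
AtA = A.T @ A
Atb = A.T @ b
L = np.linalg.cholesky(AtA + rho * np.eye(n))

for _ in range(200):
    # x-update: quadratic subproblem with the splitting variable fixed.
    rhs = Atb + rho * (z - u)
    x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
    # z-update: soft thresholding enforces sparsity.
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
    # dual update on the scaled multiplier.
    u = u + x - z
```

Because the factorization of AᵀA + ρI is computed once and reused, each iteration costs only triangular solves and a thresholding pass, which is the source of the speed-up the abstract claims.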
Pseudorange error analysis for precise indoor positioning system
NASA Astrophysics Data System (ADS)
Pola, Marek; Bezoušek, Pavel
2017-05-01
A system for indoor localization of a transmitter, intended for fire fighters or members of rescue corps, is currently under development. In this system, the position of a transmitter of an ultra-wideband orthogonal frequency-division multiplexing signal is determined by the time-difference-of-arrival method. The position measurement accuracy depends strongly on the accuracy of the direct-path signal time-of-arrival estimation, which is degraded by severe multipath in complicated environments such as buildings. The aim of this article is to assess errors in the determination of the direct-path signal time of arrival caused by multipath signal propagation and noise. Two methods of direct-path time-of-arrival estimation are compared here: the cross-correlation method and the spectral estimation method.
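The cross-correlation estimator mentioned above can be sketched in a few lines: correlate the received signal with the known template and take the lag of the maximum. The synthetic signal below includes a weaker delayed echo to mimic multipath; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

template = rng.normal(size=64)          # known transmitted waveform
true_delay = 25                         # direct-path delay, in samples

rx = np.zeros(256)
rx[true_delay:true_delay + 64] += template               # direct path
rx[true_delay + 40:true_delay + 104] += 0.5 * template   # multipath echo
rx += 0.1 * rng.normal(size=256)                         # receiver noise

# Cross-correlation method: the TOA estimate is the lag maximising the
# correlation of the received signal with the template.
corr = np.correlate(rx, template, mode="valid")
toa = int(np.argmax(corr))

print(toa)  # -> 25 (the direct path wins over the weaker echo)
```

When the echo is stronger than, or very close to, the direct path, the correlation peak can shift; that bias is the kind of error the article evaluates.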
15 CFR 806.13 - Miscellaneous.
Code of Federal Regulations, 2010 CFR
2010-01-01
... ECONOMIC ANALYSIS, DEPARTMENT OF COMMERCE DIRECT INVESTMENT SURVEYS § 806.13 Miscellaneous. (a) Accounting methods and records. Generally accepted U.S. accounting principles should be followed. Corporations should... filed with the Bureau of Economic Analysis; this should be the copy with the address label if such a...
Methods for Estimating the Uncertainty in Emergy Table-Form Models
Emergy studies have suffered criticism due to the lack of uncertainty analysis and this shortcoming may have directly hindered the wider application and acceptance of this methodology. Recently, to fill this gap, the sources of uncertainty in emergy analysis were described and an...
Wei, Zhenwei; Xiong, Xingchuang; Guo, Chengan; Si, Xingyu; Zhao, Yaoyao; He, Muyi; Yang, Chengdui; Xu, Wei; Tang, Fei; Fang, Xiang; Zhang, Sichun; Zhang, Xinrong
2015-11-17
We developed pulsed direct current electrospray ionization mass spectrometry (pulsed-dc-ESI-MS) for systematically profiling and determining components in small-volume samples. Pulsed-dc-ESI uses a constant high voltage to remotely induce the generation of a single-polarity pulsed electrospray. This method significantly improves sample economy, yielding several minutes of MS signal from a sample of merely picoliter volume. The prolonged MS signal duration enables the collection of abundant MS(2) information on components of interest in a small-volume sample for systematic analysis. The method was successfully applied to single-cell metabolomics. We obtained 2-D profiles of metabolites (including exact mass and MS(2) data) from single plant and mammalian cells, covering 1034 components for Allium cepa cells and 656 components for HeLa cells. Further identification found 162 compounds and 28 different modification groups of 141 saccharides in a single Allium cepa cell, indicating that pulsed-dc-ESI is a powerful tool for the systematic analysis of small-volume samples.
Kelley, Laura C.; Wang, Zheng; Hagedorn, Elliott J.; Wang, Lin; Shen, Wanqing; Lei, Shijun; Johnson, Sam A.; Sherwood, David R.
2018-01-01
Cell invasion through basement membrane (BM) barriers is crucial during development, leukocyte trafficking, and the spread of cancer. Despite its importance in normal and diseased states, the mechanisms that direct invasion are poorly understood, in large part because of the inability to visualize dynamic cell-basement membrane interactions in vivo. This protocol describes multi-channel time-lapse confocal imaging of anchor cell invasion in live C. elegans. Methods presented include slide preparation and worm growth synchronization (15 min), mounting (20 min), image acquisition (20-180 min), image processing (20 min), and quantitative analysis (variable timing). The images acquired enable direct measurement of invasive dynamics, including invadopodia formation, cell membrane protrusions, and BM removal. This protocol can be combined with genetic analysis, molecular activity probes, and optogenetic approaches to uncover the molecular mechanisms underlying cell invasion. These methods can also be readily adapted for real-time analysis of cell migration, basement membrane turnover, and cell membrane dynamics by any worm laboratory. PMID:28880279
An edge-directed interpolation method for fetal spine MR images.
Yu, Shaode; Zhang, Rui; Wu, Shibin; Hu, Jiani; Xie, Yaoqin
2013-10-10
Fetal spinal magnetic resonance imaging (MRI) is a prenatal routine for proper assessment of fetal development, especially when suspected spinal malformations occur while ultrasound fails to provide details. Limited by hardware, fetal spine MR images suffer from low resolution. High-resolution MR images can directly enhance readability and improve diagnosis accuracy. Image interpolation for higher resolution is required in clinical situations, yet many methods fail to preserve edge structures. Edges carry heavy structural messages of objects in visual scenes, helping doctors to detect suspicious findings, classify malformations and make correct diagnoses. Effective interpolation with well-preserved edge structures is still challenging. In this paper, we propose an edge-directed interpolation (EDI) method and apply it to a group of fetal spine MR images to evaluate its feasibility and performance. The method takes edge messages from a Canny edge detector to guide further pixel modification. First, low-resolution (LR) images of the fetal spine are interpolated into high-resolution (HR) images at the target factor by the bilinear method. Then edge information from the LR and HR images is put into a twofold strategy to sharpen or soften edge structures. Finally an HR image with well-preserved edge structures is generated. The HR images obtained from the proposed method are validated and compared with those from four other EDI methods. Performance is evaluated with six metrics, and subjective analysis of visual quality is based on regions of interest (ROI). All five EDI methods are able to generate HR images with enriched details. In the quantitative analysis of the six metrics, the proposed method outperforms the other four in signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), structure similarity index (SSIM), feature similarity index (FSIM) and mutual information (MI), with seconds-level time consumption (TC). Visual analysis of ROI shows that the proposed method maintains better consistency of edge structures with the original images. The proposed method classifies edge orientations into four categories and preserves structures well. It generates convincing HR images with fine details and is suitable for real-time situations. The iterative curvature-based interpolation (ICBI) method may result in crisper edges, while the other three methods are sensitive to noise and artifacts.
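The first stage of the pipeline, bilinear upsampling to the target factor, can be sketched directly; the Canny-guided sharpening/softening stage is omitted. The function below is a generic bilinear resampler written for illustration, not the authors' implementation:

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Bilinear interpolation of a 2-D image onto a denser grid."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)       # clamp at the border
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]              # fractional offsets
    wx = (xs - x0)[None, :]
    # Blend the four surrounding low-resolution pixels.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

lr = np.arange(16.0).reshape(4, 4)       # toy low-resolution image
hr = bilinear_upscale(lr, 2)
assert hr.shape == (8, 8)
```

Bilinear blending averages across edges, which is exactly why an edge-directed second pass is needed to restore sharp structures.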
A noninvasive, direct real-time PCR method for sex determination in multiple avian species
Brubaker, Jessica L.; Karouna-Renier, Natalie K.; Chen, Yu; Jenko, Kathryn; Sprague, Daniel T.; Henry, Paula F.P.
2011-01-01
Polymerase chain reaction (PCR)-based methods to determine the sex of birds are well established and have seen few modifications since they were first introduced in the 1990s. Although these methods allowed for sex determination in species that were previously difficult to analyse, they were not conducive to high-throughput analysis because of the laboriousness of DNA extraction and gel electrophoresis. We developed a high-throughput real-time PCR-based method for analysis of sex in birds, which uses noninvasive sample collection and avoids DNA extraction and gel electrophoresis.
An evaluation method for nanoscale wrinkle
NASA Astrophysics Data System (ADS)
Liu, Y. P.; Wang, C. G.; Zhang, L. M.; Tan, H. F.
2016-06-01
In this paper, a spectrum-based wrinkling analysis method using the two-dimensional Fourier transform is proposed to address the difficulty of nanoscale wrinkle evaluation. It evaluates wrinkle characteristics, including wrinkling wavelength and direction, simply from a single wrinkling image. Using this method, the evaluated nanoscale wrinkle characteristics agree with published experimental results to within an error of 6%. It is also verified to be appropriate for macro-scale wrinkle evaluation, without scale limitations. Spectrum-based wrinkling analysis is thus an effective method for nanoscale evaluation and helps reveal the mechanism of nanoscale wrinkling.
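The spectrum-based evaluation reduces to locating the dominant off-centre peak of the 2-D Fourier magnitude: the peak's radius gives the wrinkling wavelength and its orientation the wrinkling direction. A self-check on a synthetic wrinkle field, with invented parameters:

```python
import numpy as np

# Synthetic wrinkle field: sinusoidal ridges with known wavelength
# (16 px) and known wavevector direction (30 degrees).
n = 128
y, x = np.mgrid[0:n, 0:n]
theta = np.deg2rad(30.0)
k = 2 * np.pi / 16.0
img = np.sin(k * (x * np.cos(theta) + y * np.sin(theta)))

# Dominant off-centre peak of the 2-D Fourier magnitude spectrum.
spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
spec[n // 2, n // 2] = 0.0                    # suppress the DC term
iy, ix = np.unravel_index(np.argmax(spec), spec.shape)
fy, fx = (iy - n // 2) / n, (ix - n // 2) / n  # cycles per pixel

est_wavelength = 1.0 / np.hypot(fx, fy)
est_direction = np.degrees(np.arctan2(abs(fy), abs(fx)))

print(est_wavelength, est_direction)  # ~16 px, ~30 deg
```

The residual error here comes from the finite frequency-bin spacing (spectral leakage), which is one reason sub-percent accuracy requires peak interpolation on real images.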
Methods for Human Dehydration Measurement
NASA Astrophysics Data System (ADS)
Trenz, Florian; Weigel, Robert; Hagelauer, Amelie
2018-03-01
The aim of this article is to give a broad overview of current methods for the identification and quantification of the human dehydration level. Starting from the most common clinical assessments, including vital parameters and the patient's general appearance, more quantifiable results from chemical laboratory and electromagnetic measurement methods are reviewed. Different analysis methods across the electromagnetic spectrum, ranging from direct current (DC) conductivity measurements up to neutron activation analysis (NAA), are discussed on the basis of published results. Finally, promising technologies that allow the integration of a dehydration assessment system in a compact and portable form are highlighted.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1993-01-01
In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form) together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
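The benefit of the incremental (delta) form can be sketched abstractly: rather than solving Ax = b with the exact operator, repeatedly solve ÂΔx = b − Ax with a cheap approximate operator Â and accumulate the corrections; the fixed point is the solution of the exact system, so the approximation only affects the convergence rate, not the answer. A toy sketch in which Â is the diagonal of A, standing in for an approximate factorization; all data are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# A diagonally dominant system standing in for the large sparse
# sensitivity equations; the off-diagonal part is a small perturbation.
n = 50
A = np.diag(np.arange(1.0, n + 1)) + 0.3 * rng.normal(size=(n, n)) / n
b = rng.normal(size=n)

A_hat = np.diag(np.diag(A))       # approximate operator, trivial to invert

x = np.zeros(n)
for _ in range(200):
    r = b - A @ x                 # residual of the *standard* form
    dx = np.linalg.solve(A_hat, r)  # incremental ("delta") correction
    x = x + dx                    # accumulate the correction

assert np.linalg.norm(b - A @ x) < 1e-8
```

Note the residual is always evaluated with the exact A; only the correction solve uses the approximation, which mirrors how the incremental form sidesteps the ill-conditioning of iterating on the standard form directly.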
Li, Yongtao; Whitaker, Joshua S; McCarty, Christina L
2012-07-06
A large volume direct aqueous injection method was developed for the analysis of iodinated haloacetic acids in drinking water by using reversed-phase liquid chromatography/electrospray ionization/tandem mass spectrometry in the negative ion mode. Both the external and internal standard calibration methods were studied for the analysis of monoiodoacetic acid, chloroiodoacetic acid, bromoiodoacetic acid, and diiodoacetic acid in drinking water. The use of a divert valve technique for the mobile phase solvent delay, along with isotopically labeled analogs used as internal standards, effectively reduced and compensated for the ionization suppression typically caused by coexisting common inorganic anions. Under the optimized method conditions, the mean absolute and relative recoveries resulting from the replicate fortified deionized water and chlorinated drinking water analyses were 83-107% with a relative standard deviation of 0.7-11.7% and 84-111% with a relative standard deviation of 0.8-12.1%, respectively. The method detection limits resulting from the external and internal standard calibrations, based on seven fortified deionized water replicates, were 0.7-2.3 ng/L and 0.5-1.9 ng/L, respectively. Copyright © 2012 Elsevier B.V. All rights reserved.
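Method detection limits of the kind quoted are conventionally computed from replicate fortified analyses as MDL = t(0.99, n−1) × s, with s the replicate standard deviation. A sketch with invented replicate values; the t value for 6 degrees of freedom is the tabulated 3.143:

```python
import statistics

# Seven hypothetical fortified-replicate results (ng/L) for one analyte.
replicates = [5.1, 4.8, 5.4, 5.0, 4.7, 5.2, 4.9]

# Student's t at the 99% confidence level for n - 1 = 6 degrees of
# freedom, times the sample standard deviation of the replicates.
t_99_df6 = 3.143                     # tabulated value
s = statistics.stdev(replicates)
mdl = t_99_df6 * s

print(round(mdl, 2))  # -> 0.76
```

Seven replicates is the usual minimum for this procedure, which is consistent with the "seven fortified deionized water replicates" the abstract describes.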
Multivariate analysis of longitudinal rates of change.
Bryan, Matthew; Heagerty, Patrick J
2016-12-10
Longitudinal data allow direct comparison of the change in patient outcomes associated with treatment or exposure. Frequently, several longitudinal measures are collected that either reflect a common underlying health status, or characterize processes that are influenced in a similar way by covariates such as exposure or demographic characteristics. Statistical methods that can combine multivariate response variables into common measures of covariate effects have been proposed in the literature. Current methods for characterizing the relationship between covariates and the rate of change in multivariate outcomes are limited to select models. For example, 'accelerated time' methods have been developed which assume that covariates rescale time in longitudinal models for disease progression. In this manuscript, we detail an alternative multivariate model formulation that directly structures longitudinal rates of change and that permits a common covariate effect across multiple outcomes. We detail maximum likelihood estimation for a multivariate longitudinal mixed model. We show via asymptotic calculations the potential gain in power that may be achieved with a common analysis of multiple outcomes. We apply the proposed methods to the analysis of a trivariate outcome for infant growth and compare rates of change for HIV infected and uninfected infants. Copyright © 2016 John Wiley & Sons, Ltd.
Zhi, Ruicong; Zhao, Lei; Shi, Jingye
2016-07-01
Developing innovative products that satisfy various groups of consumers helps a company maintain a leading market share. The hedonic scale and just-about-right (JAR) scale are 2 popular methods for hedonic assessment and product diagnostics. In this paper, we chose to study flavored liquid milk because it is one of the most necessary nutrient sources in China. The hedonic scale and JAR scale methods were combined to provide directional information for flavored liquid milk optimization. Two methods of analysis (penalty analysis and partial least squares regression on dummy variables) were used and the results were compared. This paper had 2 aims: (1) to investigate consumer preferences of basic flavor attributes of milk from various cities in China; and (2) to determine the improvement direction for specific products and the ideal overall liking for consumers in various cities. The results showed that consumers in China have local-specific requirements for characteristics of flavored liquid milk. Furthermore, we provide a consumer-oriented product design method to improve sensory quality according to the preference of particular consumers. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Computer-assisted techniques to evaluate fringe patterns
NASA Astrophysics Data System (ADS)
Sciammarella, Cesar A.; Bhat, Gopalakrishna K.
1992-01-01
Strain measurement using interferometry requires an efficient way to extract the desired information from interferometric fringes. Availability of digital image processing systems makes it possible to use digital techniques for the analysis of fringes. In the past, there have been several developments in the area of one dimensional and two dimensional fringe analysis techniques, including the carrier fringe method (spatial heterodyning) and the phase stepping (quasi-heterodyning) technique. This paper presents some new developments in the area of two dimensional fringe analysis, including a phase stepping technique supplemented by the carrier fringe method and a two dimensional Fourier transform method to obtain the strain directly from the discontinuous phase contour map.
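The phase-stepping (quasi-heterodyning) technique mentioned above can be sketched with the classic four-step algorithm, in which four fringe frames shifted by pi/2 recover the wrapped phase via an arctangent. This is a generic textbook sketch, not the authors' implementation:

```python
import numpy as np

# Four-step phase stepping: frames I_k = a + b*cos(phi + k*pi/2), k = 0..3.
# Then I3 - I1 = 2b*sin(phi) and I0 - I2 = 2b*cos(phi), so atan2 recovers phi.
def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four pi/2-stepped fringe intensity frames."""
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic fringes from a known phase map (kept inside (-pi, pi)).
x = np.linspace(-1.0, 1.0, 128)
true_phase = 2.0 * x                  # radians, range (-2, 2)
a, b = 1.0, 0.5                       # background and modulation
frames = [a + b * np.cos(true_phase + k * np.pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)
```

For phase excursions beyond one cycle, the recovered map is discontinuous (wrapped), which is exactly the situation the paper's two-dimensional Fourier transform method addresses.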
NASA Astrophysics Data System (ADS)
Ganiev, R. F.; Reviznikov, D. L.; Rogoza, A. N.; Slastushenskiy, Yu. V.; Ukrainskiy, L. E.
2017-03-01
We describe an integrated approach to the investigation of nonlinear wave processes in the human cardiovascular system, based on a combination of high-precision pulse-wave measurement, mathematical methods for processing the empirical data, and direct numerical modeling of hemodynamic processes in the arterial tree.
Currently there are no promulgated EPA reference sampling methods for measuring stack emissions of Hg from coal combustion sources; however, EPA Method 29 is most commonly applied. The draft ASTM Ontario Hydro Method for measuring oxidized, elemental, particulate-b...
NASA Astrophysics Data System (ADS)
Calvín, Pablo; Villalaín, Juan J.; Casas-Sainz, Antonio M.; Tauxe, Lisa; Torres-López, Sara
2017-12-01
The Small Circle (SC) methods are founded upon two main starting hypotheses: (i) the analyzed sites were remagnetized contemporaneously, acquiring the same paleomagnetic direction; (ii) the deviation of the acquired paleomagnetic signal from its original direction is due only to tilting around the bedding strike, and therefore the remagnetization direction must be located on a small circle (SC) whose axis is the strike of bedding and which contains the in situ paleomagnetic direction. Therefore, if we analyze several sites (with different bedding strikes), their SCs will intersect at the remagnetization direction. The SC methods have two applications: (1) the Small Circle Intersection (SCI) method can provide adequate approximations to the expected paleomagnetic direction when dealing with synfolding remagnetizations; by comparing the SCI direction with that predicted from an apparent polar wander path, the (re)magnetization can be dated. (2) Once the remagnetization direction is known, the attitude of the beds (at each site) can be restored to the moment of acquisition of the remagnetization, yielding a palinspastic reconstruction of the structure. Some caveats are necessary under more complex tectonic scenarios, in which SC-based methods can lead to erroneous interpretations. However, the graphical output of the methods helps avoid 'black-box' effects and can minimize misleading interpretations or even help, for example, to identify local or regional vertical-axis rotations. In any case, the methods must be used with caution, always taking into account knowledge of the tectonic frame. In this paper, some utilities for SC analysis are automated by means of a new Python code, and a new technique for defining the uncertainty of the solution is proposed. With pySCu the SC methods can be easily and quickly applied, first obtaining a set of text files containing all calculated information and subsequently generating a graphical output on the fly.
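The SCI idea described above can be sketched numerically: each site constrains the remagnetization direction to a small circle around its bedding-strike axis at the in situ direction's angular distance, and a grid search finds the direction best fitting all circles. This is a hypothetical illustration of the concept, not the pySCu code:

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def rotate(v, axis, angle):
    """Rodrigues rotation of v around a unit axis (preserves angle to axis)."""
    axis = unit(axis)
    return (v * np.cos(angle) + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1 - np.cos(angle)))

def sci(axes, in_situ, step_deg=1.0):
    """Grid search over the upper hemisphere for the SC intersection."""
    target = [np.arccos(np.clip(np.dot(unit(a), unit(d)), -1, 1))
              for a, d in zip(axes, in_situ)]
    best, best_cost = None, np.inf
    for dec in np.radians(np.arange(0.0, 360.0, step_deg)):
        for inc in np.radians(np.arange(0.0, 90.5, step_deg)):
            c = np.array([np.cos(inc) * np.cos(dec),
                          np.cos(inc) * np.sin(dec),
                          np.sin(inc)])
            cost = sum((np.arccos(np.clip(np.dot(unit(a), c), -1, 1)) - t) ** 2
                       for a, t in zip(axes, target))
            if cost < best_cost:
                best, best_cost = c, cost
    return best

# Synthetic check: tilt a known direction around three different horizontal
# strike axes; the SCI search should recover the original direction.
true_dir = unit(np.array([0.3, 0.4, 0.87]))
axes = [unit(np.array([np.cos(a), np.sin(a), 0.0])) for a in (0.2, 1.3, 2.4)]
in_situ = [rotate(true_dir, ax, t) for ax, t in zip(axes, (0.4, -0.6, 0.9))]
est = sci(axes, in_situ)
```

With at least three sites of distinct strike, the circles intersect at (approximately) a single direction, which is the basis of the dating application described in the abstract.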
Sensitivity analysis of discrete structural systems: A survey
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Haftka, R. T.
1984-01-01
Methods for calculating sensitivity derivatives for discrete structural systems are surveyed, primarily covering literature published during the past two decades. Methods are described for calculating derivatives of static displacements and stresses, eigenvalues and eigenvectors, transient structural response, and derivatives of optimum structural designs with respect to problem parameters. The survey is focused on publications addressed to structural analysis, but also includes a number of methods developed in nonstructural fields such as electronics, controls, and physical chemistry which are directly applicable to structural problems. Most notable among the nonstructural-based methods are the adjoint variable technique from control theory, and the Green's function and FAST methods from physical chemistry.
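The adjoint variable technique highlighted in the survey can be shown on a toy static problem (illustrative, not taken from the paper): for K(p) u = f with output g = c^T u, one extra linear solve K^T lambda = c yields dg/dp = -lambda^T (dK/dp) u, independent of how many parameters p there are.

```python
import numpy as np

def adjoint_sensitivity(K, dK_dp, f, c):
    """Sensitivity dg/dp of g = c.T u where K(p) u = f, via the adjoint."""
    u = np.linalg.solve(K, f)        # primal solve
    lam = np.linalg.solve(K.T, c)    # adjoint solve
    return -lam @ dK_dp @ u

# 2-DOF spring chain whose stiffness matrix depends on a parameter p.
p = 2.0
K = np.array([[p + 1.0, -p], [-p, p]])
dK_dp = np.array([[1.0, -1.0], [-1.0, 1.0]])
f = np.array([0.0, 1.0])
c = np.array([0.0, 1.0])             # output: tip displacement

dg = adjoint_sensitivity(K, dK_dp, f, c)

# Central finite-difference check of the same sensitivity.
h = 1e-6
def g_of(pv):
    Kp = np.array([[pv + 1.0, -pv], [-pv, pv]])
    return c @ np.linalg.solve(Kp, f)
fd = (g_of(p + h) - g_of(p - h)) / (2 * h)
```

For this system g(p) = 1 + 1/p, so the exact sensitivity is -1/p^2 = -0.25, which both routes reproduce.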
Zhu, Zhaozhong; Anttila, Verneri; Smoller, Jordan W; Lee, Phil H
2018-01-01
Advances in recent genome-wide association studies (GWAS) suggest that pleiotropic effects on human complex traits are widespread. A number of classic and recent meta-analysis methods have been used to identify genetic loci with pleiotropic effects, but the overall performance of these methods is not well understood. In this work, we use extensive simulations and case studies of GWAS datasets to investigate the power and type-I error rates of ten meta-analysis methods. We specifically focus on three conditions commonly encountered in studies of multiple traits: (1) extensive heterogeneity of genetic effects; (2) characterization of trait-specific association; and (3) inflated correlation of GWAS due to overlapping samples. Although statistical power is highly variable under distinct study conditions, several methods showed superior power under diverse heterogeneity. In particular, the classic fixed-effects model showed surprisingly good performance when a variant is associated with more than half of the study traits. As the number of traits with null effects increases, ASSET performed best, with competitive specificity and sensitivity. With opposite directional effects, CPASSOC showed first-rate power. However, caution is advised when using CPASSOC for studying genetically correlated traits with overlapping samples. We conclude with a discussion of unresolved issues and directions for future research.
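The classic fixed-effects model benchmarked here is the standard inverse-variance pooling of per-study estimates. A minimal sketch with hypothetical per-trait effect estimates for one variant (not the paper's data):

```python
import math

# Inverse-variance fixed-effects meta-analysis: weight each study by
# 1/SE^2, pool, and report the pooled standard error and z-score.
def fixed_effects(effects, ses):
    w = [1.0 / s ** 2 for s in ses]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return pooled, se

# Hypothetical per-trait GWAS effect sizes and standard errors.
effects = [0.10, 0.12, 0.08, 0.11]
ses = [0.02, 0.03, 0.02, 0.04]
pooled, se = fixed_effects(effects, ses)
z = pooled / se
```

When effects are concordant across most traits, as in this example, the pooled z-score grows well beyond any single study's, which is the power advantage the simulations document.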
NASA Astrophysics Data System (ADS)
Xu, Guanshui
2000-12-01
A direct finite-element model is developed for the full-scale analysis of the electromechanical phenomena involved in surface acoustic wave (SAW) devices. The equations of wave propagation in piezoelectric materials are discretized using the Galerkin method, in which an implicit algorithm of the Newmark family with unconditional stability is implemented. The Rayleigh damping coefficients are included in the elements near the boundary to reduce the influence of the reflection of waves. The performance of the model is demonstrated by the analysis of the frequency response of a Y-Z lithium niobate filter with two uniform ports, with emphasis on the influence of the number of electrodes. The frequency response of the filter is obtained through the Fourier transform of the impulse response, which is solved directly from the finite-element simulation. It shows that the finite-element results are in good agreement with the characteristic frequency response of the filter predicted by the simple phase-matching argument. The ability of the method to evaluate the influence of the bulk waves at the high-frequency end of the filter passband and the influence of the number of electrodes on insertion loss is noteworthy. We conclude that the direct finite-element analysis of SAW devices can be used as an effective tool for the design of high-performance SAW devices. Some practical computational challenges of finite-element modeling of SAW devices are discussed.
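The unconditionally stable Newmark scheme the paper uses can be sketched on a single undamped oscillator rather than the full piezoelectric FE system (average-acceleration parameters gamma = 1/2, beta = 1/4; this is a generic textbook sketch, not the paper's code):

```python
import math

# Newmark time integration for m*u'' + k*u = 0 (free vibration).
def newmark_free_vibration(m, k, u0, v0, dt, steps, beta=0.25, gamma=0.5):
    u, v = u0, v0
    a = -k * u / m                          # initial acceleration
    for _ in range(steps):
        # Newmark predictors for displacement and velocity.
        u_pred = u + dt * v + dt * dt * (0.5 - beta) * a
        v_pred = v + dt * (1.0 - gamma) * a
        # Enforce m*a_new + k*(u_pred + beta*dt^2*a_new) = 0.
        a_new = -k * u_pred / (m + k * beta * dt * dt)
        u = u_pred + beta * dt * dt * a_new
        v = v_pred + gamma * dt * a_new
        a = a_new
    return u, v

m, k = 1.0, 4.0                             # natural frequency omega = 2 rad/s
dt = 0.001
period = 2 * math.pi / 2.0
u_end, _ = newmark_free_vibration(m, k, 1.0, 0.0, dt, round(period / dt))
```

After one full period the displacement returns to its initial value to within the scheme's small period-elongation error, consistent with the unconditional stability the paper relies on.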
Kaniu, M I; Angeyo, K H; Mwala, A K; Mangala, M J
2012-06-04
Precision agriculture depends on the knowledge and management of soil quality (SQ), which calls for affordable, simple and rapid but accurate analysis of bioavailable soil nutrients. Conventional SQ analysis methods are tedious and expensive. We demonstrate the utility of a new chemometrics-assisted energy dispersive X-ray fluorescence and scattering (EDXRFS) spectroscopy method we have developed for direct rapid analysis of trace 'bioavailable' macronutrients (i.e. C, N, Na, Mg, P) in soils. The method exploits, in addition to X-ray fluorescence, the scatter peaks detected from soil pellets to develop a model for SQ analysis. Spectra were acquired from soil samples held in a Teflon holder analyzed using (109)Cd isotope source EDXRF spectrometer for 200 s. Chemometric techniques namely principal component analysis (PCA), partial least squares (PLS) and artificial neural networks (ANNs) were utilized for pattern recognition based on fluorescence and Compton scatter peaks regions, and to develop multivariate quantitative calibration models based on Compton scatter peak respectively. SQ analyses were realized with high CMD (R(2)>0.9) and low SEP (0.01% for N and Na, 0.05% for C, 0.08% for Mg and 1.98 μg g(-1) for P). Comparison of predicted macronutrients with reference standards using a one-way ANOVA test showed no statistical difference at 95% confidence level. To the best of the authors' knowledge, this is the first time that an XRF method has demonstrated utility in trace analysis of macronutrients in soil or related matrices. Copyright © 2012 Elsevier B.V. All rights reserved.
Methods for Force Analysis of Overconstrained Parallel Mechanisms: A Review
NASA Astrophysics Data System (ADS)
Liu, Wen-Lan; Xu, Yun-Dou; Yao, Jian-Tao; Zhao, Yong-Sheng
2017-11-01
The force analysis of overconstrained PMs is relatively complex and difficult, for which the methods have always been a research hotspot. However, few literatures analyze the characteristics and application scopes of the various methods, which is not convenient for researchers and engineers to master and adopt them properly. A review of the methods for force analysis of both passive and active overconstrained PMs is presented. The existing force analysis methods for these two kinds of overconstrained PMs are classified according to their main ideas. Each category is briefly demonstrated and evaluated from such aspects as the calculation amount, the comprehensiveness of considering limbs' deformation, and the existence of explicit expressions of the solutions, which provides an important reference for researchers and engineers to quickly find a suitable method. The similarities and differences between the statically indeterminate problem of passive overconstrained PMs and that of active overconstrained PMs are discussed, and a universal method for these two kinds of overconstrained PMs is pointed out. The existing deficiencies and development directions of the force analysis methods for overconstrained systems are indicated based on the overview.
Kriston, Levente; Meister, Ramona
2014-03-01
Judging applicability (relevance) of meta-analytical findings to particular clinical decision-making situations remains challenging. We aimed to describe an evidence synthesis method that accounts for possible uncertainty regarding applicability of the evidence. We conceptualized uncertainty regarding applicability of the meta-analytical estimates to a decision-making situation as the result of uncertainty regarding applicability of the findings of the trials that were included in the meta-analysis. This trial-level applicability uncertainty can be directly assessed by the decision maker and allows for the definition of trial inclusion probabilities, which can be used to perform a probabilistic meta-analysis with unequal probability resampling of trials (adaptive meta-analysis). A case study with several fictitious decision-making scenarios was performed to demonstrate the method in practice. We present options to elicit trial inclusion probabilities and perform the calculations. The result of an adaptive meta-analysis is a frequency distribution of the estimated parameters from traditional meta-analysis that provides individually tailored information according to the specific needs and uncertainty of the decision maker. The proposed method offers a direct and formalized combination of research evidence with individual clinical expertise and may aid clinicians in specific decision-making situations. Copyright © 2014 Elsevier Inc. All rights reserved.
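The resampling scheme described above can be sketched as follows. The details (effect sizes, probabilities, the choice of inverse-variance pooling per resample) are assumptions for illustration, not the authors' implementation:

```python
import random

# Adaptive meta-analysis sketch: each trial has an inclusion probability
# reflecting its judged applicability; trials are resampled by those
# probabilities, and a fixed-effects pooled estimate is computed per
# resample, yielding a frequency distribution of pooled effects.
def adaptive_meta(effects, ses, incl_prob, n_resamples=2000, seed=7):
    rng = random.Random(seed)
    pooled = []
    for _ in range(n_resamples):
        sample = [(e, s) for e, s, p in zip(effects, ses, incl_prob)
                  if rng.random() < p]
        if not sample:
            continue                      # all trials excluded this round
        w = [1.0 / s ** 2 for _, s in sample]
        pooled.append(sum(wi * e for wi, (e, _) in zip(w, sample)) / sum(w))
    return pooled

effects = [0.30, 0.10, 0.50, 0.20]        # hypothetical trial effects
ses = [0.05, 0.08, 0.10, 0.06]
incl_prob = [0.9, 0.5, 0.2, 0.9]          # applicability judgments
dist = adaptive_meta(effects, ses, incl_prob)
mean_pooled = sum(dist) / len(dist)
```

The resulting distribution, rather than a single pooled number, is what carries the decision maker's applicability uncertainty into the final estimate.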
Jiao, Jiao; Gai, Qing-Yan; Zhang, Lin; Wang, Wei; Luo, Meng; Zu, Yuan-Gang; Fu, Yu-Jie
2015-06-01
A new, simple and efficient analysis method for fresh plant in vitro cultures-namely, high-speed homogenization coupled with microwave-assisted extraction (HSH-MAE) followed by liquid chromatography-tandem mass spectrometry (LC-MS/MS)-was developed for simultaneous determination of six alkaloids and eight flavonoids in Isatis tinctoria hairy root cultures (ITHRCs). Compared with traditional methods, the proposed HSH-MAE offers the advantages of easy manipulation, higher efficiency, energy saving, and reduced waste. Cytohistological studies were conducted to clarify the mechanism of HSH-MAE at cellular/tissue levels. Moreover, the established LC-MS/MS method showed excellent linearity, precision, repeatability, and reproducibility. The HSH-MAE-LC-MS/MS method was also successfully applied for screening high-productivity ITHRCs. Overall, this study opened up a new avenue for the direct determination of secondary metabolic profiles from fresh plant in vitro cultures, which is valuable for improving quality control of plant cell/organ cultures and sheds light on the metabolomic analysis of biological samples. Graphical Abstract HSH-MAE-LC-MS/MS opened up a new avenue for the direct determination of alkaloids and flavonoids in fresh Isatis tinctoria hairy root cultures.
NASA Astrophysics Data System (ADS)
Simpson, R. N.; Liu, Z.; Vázquez, R.; Evans, J. A.
2018-06-01
We outline the construction of compatible B-splines on 3D surfaces that satisfy the continuity requirements for electromagnetic scattering analysis with the boundary element method (method of moments). Our approach makes use of Non-Uniform Rational B-splines to represent model geometry and compatible B-splines to approximate the surface current, and adopts the isogeometric concept in which the basis for analysis is taken directly from CAD (geometry) data. The approach allows for high-order approximations and crucially provides a direct link with CAD data structures that allows for efficient design workflows. After outlining the construction of div- and curl-conforming B-splines defined over 3D surfaces we describe their use with the electric and magnetic field integral equations using a Galerkin formulation. We use Bézier extraction to accelerate the computation of NURBS and B-spline terms and employ H-matrices to provide accelerated computations and memory reduction for the dense matrices that result from the boundary integral discretization. The method is verified using the well known Mie scattering problem posed over a perfectly electrically conducting sphere and the classic NASA almond problem. Finally, we demonstrate the ability of the approach to handle models with complex geometry directly from CAD without mesh generation.
NASA Astrophysics Data System (ADS)
Mallah, Muhammad Ali; Sherazi, Syed Tufail Hussain; Bhanger, Muhammad Iqbal; Mahesar, Sarfaraz Ahmed; Bajeer, Muhammad Ashraf
2015-04-01
A transmission FTIR spectroscopic method was developed for direct, inexpensive and fast quantification of the paracetamol content of solid pharmaceutical formulations. In this method the paracetamol content is analyzed directly, without solvent extraction. KBr pellets were formulated for the acquisition of FTIR spectra in transmission mode. Two chemometric models, simple Beer's law and partial least squares, employed over the spectral region of 1800-1000 cm-1 for quantification of paracetamol content, had a regression coefficient (R2) of 0.999. The limits of detection and quantification using FTIR spectroscopy were 0.005 mg g-1 and 0.018 mg g-1, respectively. An interference study was also carried out to check the effect of the excipients; there was no significant interference from the sample matrix. These results clearly demonstrate the sensitivity of the transmission FTIR spectroscopic method for pharmaceutical analysis. The method is green in the sense that it does not require large volumes of hazardous solvents or long run times and avoids prior sample preparation.
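Detection and quantification limits like those reported are conventionally derived from the calibration slope and the spread of the blank response, LOD = 3.3*sigma/slope and LOQ = 10*sigma/slope. A hedged sketch with invented example numbers (not the paper's data):

```python
import statistics

# ICH-style limits from blank replicates and a calibration slope.
def lod_loq(blank_responses, slope):
    sigma = statistics.stdev(blank_responses)
    return 3.3 * sigma / slope, 10.0 * sigma / slope

blanks = [0.0021, 0.0019, 0.0023, 0.0020, 0.0018]  # absorbance of blanks
slope = 0.15                                        # absorbance per (mg/g)
lod, loq = lod_loq(blanks, slope)
```

By construction LOQ/LOD = 10/3.3, close to the roughly 3.6x ratio between the paper's 0.018 and 0.005 mg g-1 figures.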
Numerical Investigation of Laminar-Turbulent Transition in a Flat Plate Wake
1990-03-02
[Garbled front-matter and reference fragments. Recoverable details: the numerical method uses a Fourier spectral approximation in the spanwise direction and a combination of ADI, Crank-Nicolson, and Adams-Bashforth methods for the temporal discretization of the streamwise and transverse directions; the fragments cite P. N. Swarztrauber (1977), "The Methods of Cyclic Reduction, Fourier Analysis and the FACR Algorithm," and a text on difference methods from Oxford University Press.]
New Analysis Methods Estimate a Critical Property of Ethanol Fuel Blends
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-03-01
To date there have been no adequate methods for measuring the heat of vaporization of complex mixtures. This research developed two separate methods for measuring this key property of ethanol and gasoline blends, including the ability to estimate heat of vaporization at multiple temperatures. Methods for determining heat of vaporization of gasoline-ethanol blends by calculation from a compositional analysis and by direct calorimetric measurement were developed. Direct measurement produced values for pure compounds in good agreement with literature. A range of hydrocarbon gasolines were shown to have heats of vaporization of 325 kJ/kg to 375 kJ/kg. The effect of adding ethanol at 10 vol percent to 50 vol percent was significantly larger than the variation between hydrocarbon gasolines (E50 blends at 650 kJ/kg to 700 kJ/kg). The development of these new and accurate methods allows researchers to begin to both quantify the effect of fuel evaporative cooling on knock resistance, and exploit this effect for combustion of hydrocarbon-ethanol fuel blends in high-efficiency SI engines.
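The compositional route can be sketched as a back-of-envelope mass-weighted average: convert the ethanol volume fraction to a mass fraction via component densities, then mass-average the component heats of vaporization. The property values below are assumed round numbers for illustration, not the report's data:

```python
# Blend heat of vaporization (kJ/kg) from composition.
# Assumed properties: ethanol ~789 kg/m3 and ~919 kJ/kg HOV;
# a generic gasoline ~745 kg/m3 and ~350 kJ/kg HOV.
def blend_hov(vol_frac_etoh, rho_etoh=789.0, rho_gas=745.0,
              hov_etoh=919.0, hov_gas=350.0):
    m_etoh = vol_frac_etoh * rho_etoh
    m_gas = (1.0 - vol_frac_etoh) * rho_gas
    w = m_etoh / (m_etoh + m_gas)      # ethanol mass fraction
    return w * hov_etoh + (1.0 - w) * hov_gas

hov_e50 = blend_hov(0.50)
```

A 50 vol percent blend comes out in the mid-600s of kJ/kg under these assumptions, in the neighborhood of the 650-700 kJ/kg the report measured for E50, showing why ethanol content dominates the blend's evaporative cooling.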
The AMIDAS Website: An Online Tool for Direct Dark Matter Detection Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shan, Chung-Lin
2010-02-10
Following our long-term work on the development of model-independent data analysis methods for reconstructing the one-dimensional velocity distribution function of halo WIMPs, as well as for determining their mass and couplings on nucleons by using data from direct Dark Matter detection experiments directly, we combined the simulation programs into a compact system: AMIDAS (A Model-Independent Data Analysis System). For users' convenience, an online system has also been established at the same time. AMIDAS has the ability to do full Monte Carlo simulations and faster theoretical estimations, as well as to analyze (real) data sets recorded in direct detection experiments, without modifying the source code. In this article, I give an overview of the functions of the AMIDAS code based on the use of its website.
A study of commuter airplane design optimization
NASA Technical Reports Server (NTRS)
Keppel, B. V.; Eysink, H.; Hammer, J.; Hawley, K.; Meredith, P.; Roskam, J.
1978-01-01
The usability of the general aviation synthesis program (GASP) was enhanced by the development of separate computer subroutines which can be added as a package to this assembly of computerized design methods or used as a separate subroutine program to compute the dynamic longitudinal, lateral-directional stability characteristics for a given airplane. Currently available analysis methods were evaluated to ascertain those most appropriate for the design functions which the GASP computerized design program performs. Methods for providing proper constraint and/or analysis functions for GASP were developed as well as the appropriate subroutines.
A direct potential fitting RKR method: Semiclassical vs. quantal comparisons
NASA Astrophysics Data System (ADS)
Tellinghuisen, Joel
2016-12-01
Quantal and semiclassical (SC) eigenvalues are compared for three diatomic molecular potential curves: the X state of CO, the X state of Rb2, and the A state of I2. The comparisons show higher levels of agreement than generally recognized, when the SC calculations incorporate a quantum defect correction to the vibrational quantum number, in keeping with the Kaiser modification. One particular aspect of this is better agreement between quantal and SC estimates of the zero-point vibrational energy, supporting the need for the Y00 correction in this context. The pursuit of a direct-potential-fitting (DPF) RKR method is motivated by the notion that some of the limitations of RKR potentials may be innate, from their generation by an exact inversion of approximate quantities: the vibrational energy Gυ and rotational constant Bυ from least-squares analysis of spectroscopic data. In contrast, the DPF RKR method resembles the quantal DPF methods now increasingly used to analyze diatomic spectral data, but with the eigenvalues obtained from SC phase integrals. Application of this method to the analysis of 9500 assigned lines in the I2 A ← X spectrum fails to alter the quantal-SC disparities found for the A-state RKR curve from a previous analysis. On the other hand, the SC method can be much faster than the quantal method in exploratory work with different potential functions, where it is convenient to use finite-difference methods to evaluate the partial derivatives required in nonlinear fitting.
ERIC Educational Resources Information Center
Hazell, Philip L.; Kohn, Michael R.; Dickson, Ruth; Walton, Richard J.; Granger, Renee E.; van Wyk, Gregory W.
2011-01-01
Objective: Previous studies comparing atomoxetine and methylphenidate to treat ADHD symptoms have been equivocal. This noninferiority meta-analysis compared core ADHD symptom response between atomoxetine and methylphenidate in children and adolescents. Method: Selection criteria included randomized, controlled design; duration 6 weeks; and…
Cost Analysis of Online Courses. AIR 2000 Annual Forum Paper.
ERIC Educational Resources Information Center
Milam, John H., Jr.
This paper presents a complex, hybrid, method of cost analysis of online courses, which incorporates data on expenditures; student/course enrollment; departmental consumption/contribution; space utilization/opportunity costs; direct non-personnel costs; computing support; faculty/staff workload; administrative overhead at the department, dean, and…
A Mathematics Education Comparative Analysis of ALEKS Technology and Direct Classroom Instruction
ERIC Educational Resources Information Center
Mertes, Emily Sue
2013-01-01
Assessment and LEarning in Knowledge Spaces (ALEKS), a technology-based mathematics curriculum, was piloted in the 2012-2013 school year at a Minnesota rural public middle school. The goal was to find an equivalent or more effective mathematics teaching method than traditional direct instruction. The purpose of this quantitative study was to…
USDA-ARS?s Scientific Manuscript database
The butanol-HCl spectrophotometric assay is widely used for quantifying extractable and insoluble condensed tannins (CT, syn. proanthocyanidins) in foods, feeds, and foliage of herbaceous and woody plants, but the method underestimates total CT content when applied directly to plant material. To imp...
Inductively coupled plasma mass spectrometry (ICP/MS) with direct injection nebulization (DIN) was used to evaluate novel impinger solution compositions capable of capturing elemental mercury (Hg0) in EPA Method 5 type sampling. An iodine-based impinger solution proved to be ver...
Berardi, Alberto; Bisharat, Lorina; Blaibleh, Anaheed; Pavoni, Lucia; Cespi, Marco
2018-06-20
Tablet disintegration is often the result of a size expansion of the tablet. In this study, we quantified the extent and direction of size expansion of tablets during disintegration using readily available techniques, i.e. a digital camera and a public-domain image analysis software package. After validating the method, the influence of disintegrant concentration and diluent type on the kinetics and mechanisms of disintegration was studied. Tablets containing diluent, disintegrant (sodium starch glycolate-SSG, crospovidone-PVPP or croscarmellose sodium-CCS) and lubricant were prepared by direct compression. The projected area and aspect ratio of the tablets were monitored using image analysis techniques. The developed method could describe the kinetics and mechanisms of disintegration qualitatively and quantitatively. SSG and PVPP acted purely by swelling and shape recovery mechanisms. Instead, CCS worked by a combination of both mechanisms, the extent of which changed depending on its concentration and the diluent type. We anticipate that the method described here could provide a framework for the routine screening of tablet disintegration using readily available equipment. Copyright © 2018. Published by Elsevier Inc.
A Self-Directed Method for Cell-Type Identification and Separation of Gene Expression Microarrays
Zuckerman, Neta S.; Noam, Yair; Goldsmith, Andrea J.; Lee, Peter P.
2013-01-01
Gene expression analysis is generally performed on heterogeneous tissue samples consisting of multiple cell types. Current methods developed to separate heterogeneous gene expression rely on prior knowledge of the cell-type composition and/or signatures; these are not available in most public datasets. We present a novel method to identify the cell-type composition, signatures and proportions per sample without the need for a priori information. The method was successfully tested on controlled and semi-controlled datasets and performed as accurately as current methods that do require additional information. As such, this method enables the analysis of cell-type-specific gene expression using existing large pools of publicly available microarray datasets. PMID:23990767
NASA Astrophysics Data System (ADS)
Juretzek, Carina; Hadziioannou, Céline
2014-05-01
Our knowledge about the common and different origins of Love and Rayleigh waves observed in the microseism band of the ambient seismic noise field is still limited, including the understanding of source locations and source mechanisms. Multi-component array methods are suitable to address this issue. In this work we use a 3-component beamforming algorithm to obtain source directions and polarization states of the ambient seismic noise field within the primary and secondary microseism bands recorded at the Gräfenberg array in southern Germany. The method makes it possible to distinguish between differently polarized waves present in the seismic noise field and estimates Love and Rayleigh wave source directions and their seasonal variations using one year of array data. We find mainly coinciding directions for the strongest acting sources of both wave types in the primary microseism band and different source directions in the secondary microseism band.
Ohkubo, Hirotsugu; Nakagawa, Hiroaki; Niimi, Akio
2018-01-01
Idiopathic pulmonary fibrosis (IPF) is the most common type of progressive idiopathic interstitial pneumonia in adults. Many computer-based image analysis methods of chest computed tomography (CT) used in patients with IPF include the mean CT value of the whole lungs, density histogram analysis, density mask technique, and texture classification methods. Most of these methods offer good assessment of pulmonary functions, disease progression, and mortality. Each method has merits that can be used in clinical practice. One of the texture classification methods is reported to be superior to visual CT scoring by radiologist for correlation with pulmonary function and prediction of mortality. In this mini review, we summarize the current literature on computer-based CT image analysis of IPF and discuss its limitations and several future directions. Copyright © 2017 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.
Kittell, David E; Mares, Jesus O; Son, Steven F
2015-04-01
Two time-frequency analysis methods based on the short-time Fourier transform (STFT) and continuous wavelet transform (CWT) were used to determine time-resolved detonation velocities with microwave interferometry (MI). The results were directly compared to well-established analysis techniques consisting of a peak-picking routine as well as a phase unwrapping method (i.e., quadrature analysis). The comparison is conducted on experimental data consisting of transient detonation phenomena observed in triaminotrinitrobenzene and ammonium nitrate-urea explosives, representing high and low quality MI signals, respectively. Time-frequency analysis proved much more capable of extracting useful and highly resolved velocity information from low quality signals than the phase unwrapping and peak-picking methods. Additionally, control of the time-frequency methods is mainly constrained to a single parameter which allows for a highly unbiased analysis method to extract velocity information. In contrast, the phase unwrapping technique introduces user based variability while the peak-picking technique does not achieve a highly resolved velocity result. Both STFT and CWT methods are proposed as improved additions to the analysis methods applied to MI detonation experiments, and may be useful in similar applications.
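The STFT route to time-resolved velocity can be sketched with a plain windowed FFT: the interferometer fringe signal beats at a Doppler frequency f, and the front velocity is v = f * lambda / 2, where lambda is the microwave wavelength in the explosive. The parameters below are assumptions for a synthetic signal, not the authors' experimental values:

```python
import numpy as np

# Track the STFT peak frequency window by window and convert to velocity.
def stft_peak_velocity(signal, fs, lam, win=256, hop=64):
    window = np.hanning(win)
    vels = []
    for start in range(0, len(signal) - win, hop):
        seg = signal[start:start + win] * window
        spec = np.abs(np.fft.rfft(seg))
        # Skip the DC bin, then map the peak bin to a frequency.
        f_peak = np.fft.rfftfreq(win, 1.0 / fs)[np.argmax(spec[1:]) + 1]
        vels.append(f_peak * lam / 2.0)
    return np.array(vels)

# Synthetic steady 7 km/s detonation: lam = 8 mm gives f = 2*v/lam = 1.75 MHz.
fs, lam, v_true = 50e6, 8e-3, 7000.0
t = np.arange(0.0, 2e-4, 1.0 / fs)
sig = np.cos(2 * np.pi * (2 * v_true / lam) * t)
v_est = stft_peak_velocity(sig, fs, lam)
```

The velocity resolution here is set by the FFT bin width; the single window-length parameter is the trade-off the paper highlights between time and frequency resolution.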
Shielding of substations against direct lightning strokes by shield wires
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhuri, P.
1994-01-01
A new analysis for shielding outdoor substations against direct lightning strokes by shield wires is proposed. The basic assumption of this proposed method is that any lightning stroke which penetrates the shields will cause damage. The second assumption is that a certain level of risk of failure must be accepted, such as one or two failures per 100 years. The proposed method, using electrogeometric model, was applied to design shield wires for two outdoor substations: (1) 161-kV/69-kV station, and (2) 500-kV/161-kV station. The results of the proposed method were also compared with the shielding data of two other substations.
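Electrogeometric models of the kind used here relate stroke current to striking distance; a commonly used form (an assumption for illustration, not necessarily the paper's exact equations) is r_s = 8 * I^0.65, with r_s in meters and I in kA:

```python
# Striking distance in the electrogeometric model: a stroke of current
# i_ka (kA) is intercepted by whatever conducting object lies within
# r_s of its descending leader.
def striking_distance(i_ka, a=8.0, b=0.65):
    return a * i_ka ** b

# Smaller currents have shorter striking distances, so they can slip
# between shield wires and are the hardest strokes to shield against.
r_5 = striking_distance(5.0)    # ~23 m
r_20 = striking_distance(20.0)  # ~56 m
```

Sweeping this relation over the stroke-current distribution is what lets a design accept a quantified risk, such as the one or two failures per 100 years mentioned in the abstract.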
Multivariate Autoregressive Modeling and Granger Causality Analysis of Multiple Spike Trains
Krumin, Michael; Shoham, Shy
2010-01-01
Recent years have seen the emergence of microelectrode arrays and optical methods allowing simultaneous recording of spiking activity from populations of neurons in various parts of the nervous system. The analysis of multiple neural spike train data could benefit significantly from existing methods for multivariate time-series analysis which have proven to be very powerful in the modeling and analysis of continuous neural signals like EEG signals. However, those methods have not generally been well adapted to point processes. Here, we use our recent results on correlation distortions in multivariate Linear-Nonlinear-Poisson spiking neuron models to derive generalized Yule-Walker-type equations for fitting "hidden" Multivariate Autoregressive models. We use this new framework to perform Granger causality analysis in order to extract the directed information flow pattern in networks of simulated spiking neurons. We discuss the relative merits and limitations of the new method. PMID:20454705
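The Granger causality machinery that the paper extends to spike trains can be shown in its standard linear form on continuous-valued series (a textbook VAR sketch, not the authors' point-process method): if x drives y, adding x's past to y's autoregression shrinks the residual variance, and the log variance ratio is a causality index.

```python
import numpy as np

# Simulate a bivariate system in which x drives y but not vice versa.
rng = np.random.default_rng(1)
n = 5000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()

def resid_var(target, predictors):
    """Residual variance of a least-squares autoregression."""
    X = np.column_stack(predictors)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.var(target - X @ beta)

# Restricted model uses the target's own past; full model adds the other
# series' past. Granger index = log(restricted variance / full variance).
gc_x_to_y = np.log(resid_var(y[1:], [y[:-1]])
                   / resid_var(y[1:], [y[:-1], x[:-1]]))
gc_y_to_x = np.log(resid_var(x[1:], [x[:-1]])
                   / resid_var(x[1:], [x[:-1], y[:-1]]))
```

The index is large in the driving direction and near zero in the reverse direction, recovering the simulated information flow; the paper's contribution is making this estimation valid when the observations are spikes rather than continuous signals.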
Nagare, Mukund B; Patil, Bhushan D; Holambe, Raghunath S
2017-02-01
B-mode ultrasound images are degraded by an inherent noise called speckle, which has a considerable impact on image quality. This noise reduces the accuracy of image analysis and interpretation; reduction of speckle noise is therefore an essential task that improves the accuracy of clinical diagnostics. In this paper, a multi-directional perfect-reconstruction (PR) filter bank is proposed based on a 2-D eigenfilter approach. The proposed method is used to design two-dimensional (2-D) two-channel linear-phase FIR perfect-reconstruction filter banks, including fan-shaped, diamond-shaped, and checkerboard-shaped filters. The quadratic measure of the error function between the passband and stopband of the filter is used as the objective function. First, the low-pass analysis filter is designed, and the PR condition is then expressed as a set of linear constraints on the corresponding synthesis low-pass filter. Subsequently, the synthesis filter is designed using the eigenfilter design method with linear constraints. The newly designed 2-D filters are used in a translation-invariant pyramidal directional filter bank (TIPDFB) for reduction of speckle noise in ultrasound images. The proposed 2-D filters give better symmetry, regularity, and frequency selectivity than existing design methods. The proposed method is validated on synthetic and real ultrasound data, confirming improvement in the quality of ultrasound images and efficient suppression of speckle noise compared to existing methods.
Compensation of hospital-based physicians.
Steinwald, B
1983-01-01
This study is concerned with methods of compensating hospital-based physicians (HBPs) in five medical specialties: anesthesiology, pathology, radiology, cardiology, and emergency medicine. Data on 2232 nonfederal, short-term general hospitals came from a mail questionnaire survey conducted in Fall 1979. The data indicate that numerous compensation methods exist, but these methods can be reduced, without much loss of precision, to salary, percentage of department revenue, and fee-for-service. When HBPs are compensated by salary or percentage methods, most patient billing is conducted by the hospital. In contrast, most fee-for-service HBPs bill their patients directly. Determinants of HBP compensation methods are investigated via multinomial logit analysis. This analysis indicates that the choice of HBP compensation method is sensitive to a number of hospital characteristics and attributes of both the hospital and physicians' services markets. The empirical findings are discussed in light of past conceptual and empirical research on physician compensation, and current policy issues in the health services sector. PMID:6841112
Extracellular matrix directions estimation of the heart on micro-focus x-ray CT volumes
NASA Astrophysics Data System (ADS)
Oda, Hirohisa; Oda, Masahiro; Kitasaka, Takayuki; Akita, Toshiaki; Mori, Kensaku
2017-03-01
In this paper we propose a method for estimating extracellular matrix directions in the heart. Myofibers are surrounded by myocardial cell sheets whose directions correlate strongly with heart failure. Estimating the myocardial cell sheet directions directly is difficult because the sheets are very thin. We therefore estimate the directions of the extracellular matrices, which lie against the sheets as if piled upon them. First, we segment the extracellular matrices using Hessian analysis. Each extracellular matrix region has a sheet-like shape. We estimate the direction of each extracellular matrix region by principal component analysis (PCA). In our experiments, the mean inclination angles of two normal canine hearts were 50.6 and 46.2 degrees, while the angle of a failing canine heart was 57.4 degrees. These results fit well with the anatomical knowledge that failing hearts tend to have vertical myocardial cell sheets.
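The PCA step can be sketched on synthetic data: for a thin, sheet-like point cloud, the eigenvector of the covariance matrix with the smallest eigenvalue approximates the sheet normal, from which an inclination angle follows. The 50-degree tilt below is an arbitrary illustrative value, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sheet-like point cloud: a thin slab tilted 50 degrees from
# the horizontal (angle chosen arbitrarily for illustration).
tilt = np.deg2rad(50.0)
u = rng.uniform(-1, 1, (1000, 2))             # in-plane coordinates
thickness = 0.01 * rng.standard_normal(1000)  # sheet is nearly flat
e1 = np.array([np.cos(tilt), 0.0, np.sin(tilt)])  # in-plane basis vector
e2 = np.array([0.0, 1.0, 0.0])                    # in-plane basis vector
normal = np.cross(e1, e2)                         # unit sheet normal
pts = u[:, :1] * e1 + u[:, 1:] * e2 + thickness[:, None] * normal

# PCA: the eigenvector with the smallest eigenvalue of the covariance
# matrix is the sheet normal; the other two span the sheet plane.
cov = np.cov(pts.T)
w, v = np.linalg.eigh(cov)
est_normal = v[:, 0]                          # smallest eigenvalue first

# Inclination of the sheet = angle between its normal and the vertical.
incl = np.degrees(np.arccos(abs(est_normal[2])))
print(incl)
```

For this slab the recovered inclination is close to the 50-degree ground truth, mirroring how per-region PCA yields the inclination angles reported in the abstract.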
NASA Technical Reports Server (NTRS)
Cruse, T. A.
1987-01-01
The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Burnside, O. H.; Wu, Y.-T.; Polch, E. Z.; Dias, J. B.
1988-01-01
The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.
Masood, Athar; Stark, Ken D; Salem, Norman
2005-10-01
Conventional sample preparation for fatty acid analysis is a complicated, multiple-step process, and gas chromatography (GC) analysis alone can require >1 h per sample to resolve fatty acid methyl esters (FAMEs). Fast GC analysis was adapted to human plasma FAME analysis using a modified polyethylene glycol column with smaller internal diameters, thinner stationary phase films, increased carrier gas linear velocity, and faster temperature ramping. Our results indicated that fast GC analyses were comparable to conventional GC in peak resolution. A conventional transesterification method based on Lepage and Roy was simplified to a one-step method with the elimination of the neutralization and centrifugation steps. A robotics-amenable method was also developed, with lower methylation temperatures and in an open-tube format using multiple reagent additions. The simplified methods produced results that were quantitatively similar and with similar coefficients of variation as compared with the original Lepage and Roy method. The present streamlined methodology is suitable for the direct fatty acid analysis of human plasma, is appropriate for research studies, and will facilitate large clinical trials and make possible population studies.
Oka, Megan; Whiting, Jason
2013-01-01
In Marriage and Family Therapy (MFT), as in many clinical disciplines, concern surfaces about the clinician/researcher gap. This gap includes a lack of accessible, practical research for clinicians. MFT clinical research often borrows from the medical tradition of randomized control trials, which typically use linear methods, or follow procedures distanced from "real-world" therapy. We review traditional research methods and their use in MFT and propose increased use of methods that are more systemic in nature and more applicable to MFTs: process research, dyadic data analysis, and sequential analysis. We will review current research employing these methods, as well as suggestions and directions for further research. © 2013 American Association for Marriage and Family Therapy.
Ueland, Maiken; Blanes, Lucas; Taudte, Regina V; Stuart, Barbara H; Cole, Nerida; Willis, Peter; Roux, Claude; Doble, Philip
2016-03-04
A novel microfluidic paper-based analytical device (μPAD) was designed to filter, extract, and pre-concentrate explosives from soil for direct analysis by a lab on a chip (LOC) device. The explosives were extracted via immersion of wax-printed μPADs directly into methanol soil suspensions for 10 min, whereby dissolved explosives travelled upwards into the μPAD circular sampling reservoir. A chad was punched from the sampling reservoir and inserted into a LOC well containing the separation buffer for direct analysis, avoiding any further extraction step. Eight target explosives were separated and identified by fluorescence quenching. The minimum detectable amounts for all eight explosives were between 1.4 and 5.6 ng, with recoveries ranging from 53 to 82% from the paper chad, and 12 to 40% from soil. This method provides a robust and simple extraction method for rapid identification of explosives in complex soil samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Vibration mode analysis of the proton exchange membrane fuel cell stack
NASA Astrophysics Data System (ADS)
Liu, B.; Liu, L. F.; Wei, M. Y.; Wu, C. W.
2016-11-01
Proton exchange membrane fuel cell (PEMFC) stacks usually undergo vibration during packing, transportation, and service, particularly those used in automobiles or portable equipment. To study the stack vibration response, a mode analysis based on the finite element method (FEM) is carried out in the present paper. Using this method, we can distinguish local vibration from the stack's global modes, predict vibration responses such as deformed shape and direction, and discuss the effects of the clamping configuration and clamping force magnitude on the vibration modes. It is found that when the total clamping force remains the same, increasing the bolt number strengthens the stack's resistance to vibration in the clamping direction, but does not appreciably strengthen its resistance to translations perpendicular to the clamping direction or to the three axis rotations. Increasing the total clamping force increases both the stack global-mode and bolt local-mode frequencies, but decreases the gasket local-mode frequency.
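A toy modal analysis conveys the FEM eigenproblem at the core of such a study (the real stack model is far larger): natural frequencies come from the generalized eigenproblem K x = ω² M x, and scaling the stiffness, as a higher clamping force would, raises the frequencies. The 2-DOF chain and stiffness values are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import eigh

# Minimal FEM-style modal analysis sketch (not the paper's stack model):
# a fixed-free 2-DOF spring-mass chain, eigenproblem K x = w^2 M x.
def natural_freqs(k):
    """Natural frequencies (Hz) of a 2-DOF fixed-free spring-mass chain."""
    K = np.array([[2 * k, -k],
                  [-k,     k]], float)  # assembled stiffness matrix
    M = np.eye(2)                       # unit lumped masses
    w2, _ = eigh(K, M)                  # generalized symmetric eigenproblem
    return np.sqrt(w2) / (2 * np.pi)

# Stiffening the contacts (e.g. a higher clamping force) raises all modes:
f_low = natural_freqs(1.0e4)
f_high = natural_freqs(4.0e4)
print(f_low, f_high)
```

Quadrupling the stiffness doubles every natural frequency, the same qualitative trend the abstract reports for increased clamping force.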
Photothermal method of determining calorific properties of coal
Amer, Nabil M.
1985-01-01
Predetermined amounts of heat are generated within a coal sample (11) by directing pump light pulses (14) of predetermined energy content into a small surface region (16) of the sample (11). A beam (18) of probe light is directed along the sample surface (19) and deflection of the probe beam (18) from thermally induced changes of index of refraction in the fluid medium adjacent the heated region (16) are detected. Deflection amplitude and the phase lag of the deflection, relative to the initiating pump light pulse (14), are indicative of the calorific value and the porosity of the sample (11). The method provides rapid, accurate and non-destructive analysis of the heat producing capabilities of coal samples (11). In the preferred form, sequences of pump light pulses (14) of increasing durations are directed into the sample (11) at each of a series of minute regions (16) situated along a raster scan path (21) enabling detailed analysis of variations of thermal properties at different areas of the sample (11) and at different depths.
Estimating the change in asymptotic direction due to secular changes in the geomagnetic field
NASA Technical Reports Server (NTRS)
Flueckiger, E. O.; Smart, D. F.; Shea, M. A.; Gentile, L. C.; Bathurat, A. A.
1985-01-01
The concept of geomagnetic optics, as described by the asymptotic directions of approach, is extremely useful in the analysis of cosmic radiation data. However, when changes in cutoff occur as a result of evolution in the geomagnetic field, there are corresponding changes in the asymptotic cones of acceptance. A method is introduced of estimating the change in the asymptotic direction of approach for vertically incident cosmic ray particles from a reference set of directions at a specific epoch by considering the change in the geomagnetic cutoff.
Edge directed image interpolation with Bamberger pyramids
NASA Astrophysics Data System (ADS)
Rosiles, Jose Gerardo
2005-08-01
Image interpolation is a standard feature in digital image editing software, digital camera systems, and printers. Classical methods for resizing produce blurred images with unacceptable quality. Bamberger pyramids and filter banks have been successfully used for texture and image analysis; they provide excellent multiresolution and directional selectivity. In this paper we present an edge-directed image interpolation algorithm which takes advantage of the simultaneous spatial-directional edge localization at the subband level. The proposed algorithm outperforms classical schemes such as bilinear and bicubic interpolation from both visual and numerical points of view.
Angeyo, K H; Gari, S; Mustapha, A O; Mangala, J M
2012-11-01
The greatest challenge to material characterization by the XRF technique is encountered in direct trace analysis of complex matrices. We exploited partial least squares (PLS) in conjunction with energy dispersive X-ray fluorescence and scattering (EDXRFS) spectrometry to rapidly (200 s) analyze lubricating oils. The PLS-EDXRFS method affords non-invasive quality assurance (QA) analysis of complex matrix liquids, as it gave promising results for both heavy- and low-Z metal additives. Scatter peaks may further be used for QA characterization via the light elements. Copyright © 2012 Elsevier Ltd. All rights reserved.
Accuracy of Automatic Cephalometric Software on Landmark Identification
NASA Astrophysics Data System (ADS)
Anuwongnukroh, N.; Dechkunakorn, S.; Damrongsri, S.; Nilwarat, C.; Pudpong, N.; Radomsutthisarn, W.; Kangern, S.
2017-11-01
The aim of this study was to assess the accuracy of automatic cephalometric analysis software in the identification of cephalometric landmarks. Thirty randomly selected digital lateral cephalograms of patients undergoing orthodontic treatment were used. Thirteen landmarks (S, N, Or, A-point, U1T, U1A, B-point, Gn, Pog, Me, Go, L1T, and L1A) were identified on the digital image by the automatic cephalometric software and on cephalometric tracings by the manual method. Superimposition of the printed image and manual tracing was done by registration at the soft tissue profiles. The accuracy of landmarks located by the automatic method was compared with that of the manually identified landmarks by measuring the mean differences in distance of each landmark on the Cartesian plane, with the X and Y coordinate axes passing through the center of the ear rod. A one-sample t-test was used to evaluate the mean differences. Statistically significant mean differences (p<0.05) were found in 5 landmarks (Or, A-point, Me, L1T, and L1A) in the horizontal direction and 7 landmarks (Or, A-point, U1T, U1A, B-point, Me, and L1A) in the vertical direction. Four landmarks (Or, A-point, Me, and L1A) showed significant (p<0.05) mean differences in both horizontal and vertical directions. Small mean differences (<0.5 mm) were found for S, N, B-point, Gn, and Pog in the horizontal direction and N, Gn, Me, and L1T in the vertical direction. Large mean differences were found for A-point (3.0–3.5 mm) in the horizontal direction and L1A (>4 mm) in the vertical direction. Only 5 of 13 landmarks (38.46%; S, N, Gn, Pog, and Go) showed no significant mean difference between the automatic and manual landmarking methods. It is concluded that if this automatic cephalometric analysis software is used for orthodontic diagnosis, the orthodontist must correct or modify the positions of landmarks in order to increase the accuracy of the cephalometric analysis.
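The statistical step can be sketched as follows, with hypothetical automatic-minus-manual landmark differences standing in for the study's measurements: a one-sample t-test asks whether each mean difference departs significantly from zero.

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(2)

# Hypothetical automatic-minus-manual coordinate differences (mm) for
# two landmarks over 30 cephalograms; the study's criterion is whether
# the mean difference departs significantly from zero.
diff_x = rng.normal(0.1, 0.4, 30)   # small bias landmark
diff_y = rng.normal(1.2, 0.4, 30)   # large bias landmark

t_x, p_x = ttest_1samp(diff_x, popmean=0.0)
t_y, p_y = ttest_1samp(diff_y, popmean=0.0)
print(p_x, p_y)
```

A landmark with a systematic ~1.2 mm offset yields a vanishingly small p-value, while one with only a ~0.1 mm offset typically does not reach the p<0.05 threshold at n=30.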
Directivity analysis of meander-line-coil EMATs with a wholly analytical method.
Xie, Yuedong; Liu, Zenghua; Yin, Liyuan; Wu, Jiande; Deng, Peng; Yin, Wuliang
2017-01-01
This paper presents the simulation and experimental study of the radiation pattern of a meander-line-coil EMAT. A wholly analytical method, which involves the coupling of two models, an analytical EM model and an analytical UT model, has been developed to build EMAT models and analyse the Rayleigh waves' beam directivity. For a specific sensor configuration, Lorentz forces are calculated using the EM analytical method, which is adapted from the classic Deeds and Dodd solution. The calculated Lorentz force densities are imported into an analytical ultrasonic model as driving point sources, which produce the Rayleigh waves within a layered medium. The effect of the length of the meander-line-coil on the Rayleigh waves' beam directivity is analysed quantitatively and verified experimentally. Copyright © 2016 Elsevier B.V. All rights reserved.
Trapping penguins with entangled B mesons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dadisman, Ryan; Gardner, Susan; Yan, Xinshuai
2016-01-08
The first direct observation of time-reversal (T) violation in the B-meson system was reported by the BaBar Collaboration, employing the method of Bañuls and Bernabéu. Given this, we generalize their analysis of the time-dependent T-violating asymmetry (AT) to consider different choices of CP tags for which the dominant amplitudes have the same weak phase. As one application, we find that it is possible to measure departures from the universality of sin(2β) directly. If sin(2β) is universal, as in the Standard Model, the method permits the direct determination of penguin effects in these channels. This method, although no longer a strict test of T, can yield tests of sin(2β) universality, or, alternatively, of penguin effects, of much improved precision even with existing data sets.
High-resolution electron microscopy and its applications.
Li, F H
1987-12-01
A review of research on high-resolution electron microscopy (HREM) carried out at the Institute of Physics, the Chinese Academy of Sciences, is presented. Apart from the direct observation of crystal and quasicrystal defects for some alloys, oxides, minerals, etc., and the structure determination for some minute crystals, an approximate image-contrast theory named pseudo-weak-phase object approximation (PWPOA), which shows the image contrast change with crystal thickness, is described. Within the framework of PWPOA, the image contrast of lithium ions in the crystal of R-Li2Ti3O7 has been observed. The usefulness of diffraction analysis techniques such as the direct method and Patterson method in HREM is discussed. Image deconvolution and resolution enhancement for weak-phase objects by use of the direct method are illustrated. In addition, preliminary results of image restoration for thick crystals are given.
Lancaster, Cady; Espinoza, Edgard
2012-05-15
International trade of several Dalbergia wood species is regulated by The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). In order to supplement morphological identification of these species, a rapid chemical method of analysis was developed. Using Direct Analysis in Real Time (DART) ionization coupled with Time-of-Flight (TOF) Mass Spectrometry (MS), selected Dalbergia and common trade species were analyzed. Each of the 13 wood species was classified using principal component analysis and linear discriminant analysis (LDA). These statistical data clusters served as reliable anchors for species identification of unknowns. Analysis of 20 or more samples from the 13 species studied in this research indicates that the DART-TOFMS results are reproducible. Statistical analysis of the most abundant ions gave good classifications that were useful for identifying unknown wood samples. DART-TOFMS and LDA analysis of 13 species of selected timber samples and the statistical classification allowed for the correct assignment of unknown wood samples. This method is rapid and can be useful when anatomical identification is difficult but needed in order to support CITES enforcement. Published 2012. This article is a US Government work and is in the public domain in the USA.
Survey of Legionella spp. in Mud Spring Recreation Area
NASA Astrophysics Data System (ADS)
Hsu, B.-M.; Ma, P.-H.; Su, I.-Z.; Chen, N.-S.
2009-04-01
Legionella genera are parasites of free-living amoebae (FLA), and intracellular bacterial replication within the FLA plays a major role in the transmission of disease. At least 13 FLA species—including Acanthamoeba spp., Naegleria spp., and Hartmannella spp.—support intracellular bacterial replication. In this study, Legionellae were detected with microbial culture or by direct DNA extraction and analysis from concentrated water samples or cultured free-living amoebae, combined with molecular methods that allow the taxonomic identification of these pathogens. The water samples were taken from a mud spring recreation area located in a mud-rock-formation area in southern Taiwan. Legionella were detected in 15 of the 34 samples (44.1%). Four of the 34 samples analyzed by Legionella culture were positive for Legionella, five of 34 were positive for Legionella when analyzed by direct DNA extraction and analysis, and 11 of 34 were positive for amoebae-resistant Legionella when analyzed by FLA culture. Ten samples were shown to be positive for Legionella by one analysis method and five samples were shown to be positive by two analysis methods. However, no sample was positive for Legionella by all three analysis methods. This suggests that the three analysis methods should be used together to detect Legionella in aquatic environments. In this study, L. pneumophila serotype 6 coexisted with A. polyphaga, and two uncultured Legionella spp. coexisted with either H. vermiformis or N. australiensis. Of the unnamed Legionella genotypes detected in six FLA culture samples, three were closely related to L. waltersii and the other three were closely related to L. pneumophila serotype 6. Legionella pneumophila serotype 6, L. drancourtii, and L. waltersii are noted endosymbionts of FLA and are categorized as pathogenic bacteria. This is significant for human health because these Legionella exist within FLA and thus come into contact with typically immunocompromised people.
NASA Astrophysics Data System (ADS)
Valencia-Mora, Ricardo A.; Zavala-Lagunes, Edgar; Bucio, Emilio
2016-07-01
The modification of silicone rubber films (SR) was performed by radiation-induced graft polymerization of thermosensitive poly(N-vinylcaprolactam) (PNVCL) using gamma rays from a Co-60 source. The graft polymerization was obtained by a direct radiation method with doses from 5 to 70 kGy, at monomer concentrations between 5% and 70% in toluene. Grafting was confirmed by infrared, differential scanning calorimetry, thermogravimetric analysis, and swelling studies. The lower critical solution temperature (LCST) of the grafted SR was measured by swelling and differential scanning calorimetry.
Scientist Honored by DOE for Outstanding Research Accomplishments,
…passive design tools. … The American Society of Heating, Refrigeration and Air Conditioning Engineers … mixed systems. This accomplishment gave the solar energy design community a direct, verifiable method of design. The manual, Passive Solar Heating Analysis, is an outgrowth of this method. Dr. Balcomb's involvement…
Razalas' Grouping Method and Mathematics Achievement
ERIC Educational Resources Information Center
Salazar, Douglas A.
2015-01-01
This study aimed to raise the achievement level of students in Integral Calculus using Direct Instruction with Razalas' Method of Grouping. The study employed qualitative and quantitative analysis relative to data generated by the Achievement Test and Math journal with follow-up interview. Within the framework of the limitations of the study, the…
Child-Parent Interventions for Childhood Anxiety Disorders: A Systematic Review and Meta-Analysis
ERIC Educational Resources Information Center
Brendel, Kristen Esposito; Maynard, Brandy R.
2014-01-01
Objective: This study compared the effects of direct child-parent interventions to the effects of child-focused interventions on anxiety outcomes for children with anxiety disorders. Method: Systematic review methods and meta-analytic techniques were employed. Eight randomized controlled trials examining effects of family cognitive behavior…
Analysis of method of polarization surveying of water surface oil pollution
NASA Technical Reports Server (NTRS)
Zhukov, B. S.
1979-01-01
A method of polarization surveying of oil films on the water surface is analyzed. Model calculations of the contrast between oil and water obtained with different orientations of the analyzer are discussed. The model depends on the spectral range, the transparency of the water and the oil film, and the selection of the observation direction.
Miyazaki, Ryoji; Myougo, Naomi; Mori, Hiroyuki; Akiyama, Yoshinori
2018-01-12
Many proteins form multimeric complexes that play crucial roles in various cellular processes. Studying how proteins are correctly folded and assembled into such complexes in a living cell is important for understanding the physiological roles and the qualitative and quantitative regulation of the complex. However, few methods are suitable for analyzing these rapidly occurring processes. Site-directed in vivo photo-cross-linking is an elegant technique that enables analysis of protein-protein interactions in living cells with high spatial resolution. However, the conventional site-directed in vivo photo-cross-linking method is unsuitable for analyzing dynamic processes. Here, by combining an improved site-directed in vivo photo-cross-linking technique with a pulse-chase approach, we developed a new method that can analyze the folding and assembly of a newly synthesized protein with high spatiotemporal resolution. We demonstrate that this method, named the pulse-chase and in vivo photo-cross-linking experiment (PiXie), enables the kinetic analysis of the formation of an Escherichia coli periplasmic (soluble) protein complex (PhoA). We also used our new technique to investigate assembly/folding processes of two membrane complexes (SecD-SecF in the inner membrane and LptD-LptE in the outer membrane), which provided new insights into the biogenesis of these complexes. Our PiXie method permits analysis of the dynamic behavior of various proteins and enables examination of protein-protein interactions at the level of individual amino acid residues. We anticipate that our new technique will have valuable utility for studies of protein dynamics in many organisms. © 2018 by The American Society for Biochemistry and Molecular Biology, Inc.
Shear, principal, and equivalent strains in equal-channel angular deformation
NASA Astrophysics Data System (ADS)
Xia, K.; Wang, J.
2001-10-01
The shear and principal strains involved in equal channel angular deformation (ECAD) were analyzed using a variety of methods. A general expression for the total shear strain calculated by integrating infinitesimal strain increments gave the same result as that from simple geometric considerations. The magnitude and direction of the accumulated principal strains were calculated based on a geometric and a matrix algebra method, respectively. For an intersecting angle of π/2, the maximum normal strain is 0.881 in the direction at π/8 (22.5 deg) from the longitudinal direction of the material in the exit channel. The direction of the maximum principal strain should be used as the direction of grain elongation. Since the principal direction of strain rotates during ECAD, the total shear strain and principal strains so calculated do not have the same meaning as those in a strain tensor. Consequently, the “equivalent” strain based on the second invariant of a strain tensor is no longer an invariant. Indeed, the equivalent strains calculated using the total shear strain and that using the total principal strains differed as the intensity of deformation increased. The method based on matrix algebra is potentially useful in mathematical analysis and computer calculation of ECAD.
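The quoted numbers can be verified numerically: for an intersecting angle of Φ = π/2, ECAD imposes a simple shear of γ = 2 cot(Φ/2) = 2 per pass, and the eigen-decomposition of the left Cauchy-Green tensor reproduces the maximum logarithmic principal strain of 0.881 at 22.5 degrees (π/8) from the longitudinal direction.

```python
import numpy as np

# Verify the quoted values for a channel intersection angle of pi/2:
# one ECAD pass imposes a simple shear gamma = 2*cot(pi/4) = 2.
gamma = 2.0
F = np.array([[1.0, gamma],
              [0.0, 1.0]])           # simple-shear deformation gradient

# Left Cauchy-Green tensor; its eigenvectors give the principal
# stretch directions in the deformed (exit-channel) frame.
B = F @ F.T
w, v = np.linalg.eigh(B)
stretch_max = np.sqrt(w[-1])         # largest principal stretch
strain_max = np.log(stretch_max)     # logarithmic principal strain

vec = v[:, -1]                       # direction of maximum stretch
angle = np.degrees(np.arctan2(abs(vec[1]), abs(vec[0])))
print(strain_max, angle)
```

The computation yields ln(1 + √2) ≈ 0.881 at exactly 22.5 degrees, matching the values stated in the abstract.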
Systems Analysis - a new paradigm and decision support tools for the water framework directive
NASA Astrophysics Data System (ADS)
Bruen, M.
2008-05-01
In the early days of Systems Analysis the focus was on providing tools for optimisation, modelling and simulation for use by experts. Now there is a recognition of the need to develop and disseminate tools to assist in making decisions, negotiating compromises and communicating preferences that can easily be used by stakeholders without the need for specialist training. The Water Framework Directive (WFD) requires public participation and thus provides a strong incentive for progress in this direction. This paper places the new paradigm in the context of the classical one and discusses some of the new approaches which can be used in the implementation of the WFD. These include multi-criteria decision support methods suitable for environmental problems, adaptive management, cognitive mapping, social learning and cooperative design and group decision-making. Concordance methods (such as ELECTRE) and the Analytical Hierarchy Process (AHP) are identified as multi-criteria methods that can be readily integrated into Decision Support Systems (DSS) that deal with complex environmental issues with very many criteria, some of which are qualitative. The expanding use of the new paradigm provides an opportunity to observe and learn from the interaction of stakeholders with the new technology and to assess its effectiveness.
Wang, Xinyu; Gao, Jing-Lin; Du, Chaohui; An, Jing; Li, MengJiao; Ma, Haiyan; Zhang, Lina; Jiang, Ye
2017-01-01
Concerns about biosafety risks in clinical bioanalysis have grown, and a safe, simple, and effective sample preparation method is urgently needed. To improve the biosafety of clinical analysis, we used the antiviral drugs adefovir and tenofovir as model drugs and developed a safe pretreatment method combining a sealing technique with a direct injection technique. The inter- and intraday precision (RSD%) of the method were <4%, and the extraction recoveries ranged from 99.4 to 100.7%. The results also showed that a standard solution could be used to prepare the calibration curve instead of spiked plasma, yielding more accurate results. Compared with traditional methods, the novel method not only significantly improved the biosafety of the pretreatment, but also achieved higher precision, favorable sensitivity, and satisfactory recovery. With these highly practical and desirable characteristics, the novel method may become a feasible platform in bioanalysis.
Load calculation on the nozzle in a flue gas desulphurization system
NASA Astrophysics Data System (ADS)
Róbert, Olšiak; Zoltán, Fuszko; Zoltán, Csuka
2017-09-01
The desulphurization system removes sulfur oxides from exhaust (so-called flue) gases by absorbing them into a sprayed suspension. The suspension delivered from the pump system to the atmospheric bi-directional double hollow cone nozzle has the prescribed working pressure. An unknown mechanical load acts on the solid body of the nozzle through the change of momentum due to the flow of the suspension through the bi-directional outflow areas [1], [4]. The forces and torques acting in the three directions were calculated with the methods of computational fluid dynamics (CFD) in the software ANSYS Fluent. The geometric model of the flow areas of the nozzle was created with the methods of reverse engineering. The computational mesh required by the CFD solver was created, and its quality was verified with the standard criteria. The boundary conditions used were defined by the hydraulic parameters of the pump system, and the properties of the suspension present in the hydraulic system were specified by sample analysis. The post-processed and analyzed results of the CFD calculation, in particular the pressure field and the velocity magnitudes in particular directions, were further used as input parameters for the mechanical analysis of the load on the bi-directional nozzle.
Sensitivity analysis of dynamic biological systems with time-delays.
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2010-10-15
Mathematical modeling has long been applied to the study and analysis of complex biological systems. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of the model and sensitivity equations with time-delays. The major effort is the computation of the Jacobian matrix when computing the solution of the sensitivity equations. The computation of partial derivatives of complex equations, either by the analytic method or by symbolic manipulation, is time consuming, inconvenient, and prone to human error. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. We have proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). Here, the adaptive direct-decoupled algorithm is extended to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid errors in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time-delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis of DDE models with less user intervention. By theoretical comparison with direct-coupled methods, the extended algorithm is efficient, accurate, and easy for end users without a programming background to apply to dynamic sensitivity analysis of complex biological systems with time-delays.
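The direct-method coupling of model and sensitivity equations described above can be illustrated with a toy scalar DDE. This is a minimal fixed-step Euler sketch, not the paper's adaptive direct-decoupled algorithm; the model y' = -k·y(t - τ) and all parameter values are assumed examples:

```python
# Minimal direct-method dynamic sensitivity for a scalar delay
# differential equation dy/dt = -k * y(t - tau) (illustrative toy model,
# not the paper's adaptive direct-decoupled algorithm). The sensitivity
# s(t) = dy/dk obeys the coupled DDE
#     ds/dt = -y(t - tau) - k * s(t - tau),
# integrated here alongside the model with fixed-step Euler.

def integrate(k, tau=1.0, y0=1.0, t_end=5.0, dt=0.001):
    n_delay = int(round(tau / dt))
    y = [y0]          # constant history y(t) = y0 for t <= 0
    s = [0.0]         # dy/dk is zero over the history
    steps = int(round(t_end / dt))
    for i in range(steps):
        # delayed values: fall back to the history for early steps
        yd = y[i - n_delay] if i >= n_delay else y0
        sd = s[i - n_delay] if i >= n_delay else 0.0
        y.append(y[i] + dt * (-k * yd))
        s.append(s[i] + dt * (-yd - k * sd))
    return y[-1], s[-1]

if __name__ == "__main__":
    k = 0.5
    y1, sens = integrate(k)
    # cross-check the direct sensitivity against a finite difference in k
    h = 1e-6
    y2, _ = integrate(k + h)
    print(abs((y2 - y1) / h - sens) < 1e-3)  # the two estimates agree
```

Because the sensitivity recurrence is the exact derivative of the Euler update with respect to k, the finite-difference check matches the computed sensitivity closely.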
[The SWOT analysis and strategic considerations for the present medical devices' procurement].
Li, Bin; He, Meng-qiao; Cao, Jian-wen
2006-05-01
In this paper, the SWOT analysis method is used to identify the internal strengths and weaknesses and the external opportunities and threats of present medical device procurement in hospitals, and some strategic considerations are suggested as "one direction, two expansions, three changes and four countermeasures".
Methamphetamine (meth) residues from meth syntheses or habitual meth smoking pose human health hazards. State health departments require remediation of meth labs before properties are sold. NIOSH methods for meth analysis require wipe sampling, extraction, cleanup, solvent excha...
Familiarizing with Toy Food: Preliminary Research and Future Directions
ERIC Educational Resources Information Center
Lynch, Meghan
2012-01-01
Objective: A qualitative content analysis of children and parents interacting with toy food in their homes in view of recommendations for developing healthful food preferences. Methods: YouTube videos (n = 101) of children and parents interacting in toy kitchen settings were analyzed using qualitative content analysis. Toy food was categorized…
A STRINGENT COMPARISON OF SAMPLING AND ANALYSIS METHODS FOR VOCS IN AMBIENT AIR
A carefully designed study was conducted during the summer of 1998 to simultaneously collect samples of ambient air by canisters and compare the analysis results to direct sorbent preconcentration results taken at the time of sample collection. A total of 32 1-h sample sets we...
USDA-ARS?s Scientific Manuscript database
A genome-wide association study (GWAS) is the foremost strategy used for finding genes that control human diseases and agriculturally important traits, but it often reports false positives. In contrast, its complementary method, linkage analysis, provides direct genetic confirmation, but with limite...
Content-based fused off-axis object illumination direct-to-digital holography
Price, Jeffery R.
2006-05-02
Systems and methods are described for content-based fused off-axis illumination direct-to-digital holography. A method includes calculating an illumination angle with respect to an optical axis defined by a focusing lens as a function of data representing a Fourier-analyzed spatially heterodyne hologram; reflecting a reference beam from a reference mirror at a non-normal angle; reflecting an object beam from an object, the object beam being incident upon the object at the illumination angle; focusing the reference beam and the object beam at a focal plane of a digital recorder to form the content-based off-axis illuminated spatially heterodyne hologram, including spatially heterodyne fringes for Fourier analysis; and digitally recording the content-based off-axis illuminated spatially heterodyne hologram, including spatially heterodyne fringes for Fourier analysis.
An efficient classification method based on principal component and sparse representation.
Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang
2016-01-01
As an important application of optical imaging, palmprint recognition is affected by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are implemented by the blockwise bi-directional two-dimensional principal component analysis of palmprint images to extract feature matrices, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residuals between testing and reconstructed images. Experiments carried out on a palmprint database show that this method is more robust against position and illumination changes in palmprint images and achieves a higher palmprint recognition rate.
Cuddy, L L; Thompson, W F
1992-01-01
In a probe-tone experiment, two groups of listeners--one trained, the other untrained, in traditional music theory--rated the goodness of fit of each of the 12 notes of the chromatic scale to four-voice harmonic sequences. Sequences were 12 simplified excerpts from Bach chorales, 4 nonmodulating, and 8 modulating. Modulations occurred either one or two steps in either the clockwise or the counterclockwise direction on the cycle of fifths. A consistent pattern of probe-tone ratings was obtained for each sequence, with no significant differences between listener groups. Two methods of analysis (Fourier analysis and regression analysis) revealed a directional asymmetry in the perceived key movement conveyed by modulating sequences. For a given modulation distance, modulations in the counterclockwise direction effected a clearer shift in tonal organization toward the final key than did clockwise modulations. The nature of the directional asymmetry was consistent with results reported for identification and rating of key change in the sequences (Thompson & Cuddy, 1989a). Further, according to the multiple-regression analysis, probe-tone ratings did not merely reflect the distribution of tones in the sequence. Rather, ratings were sensitive to the temporal structure of the tonal organization in the sequence.
Józwa, Wojciech; Czaczyk, Katarzyna
2012-04-02
Flow cytometry constitutes an alternative to traditional methods of microorganism identification and analysis, including methods requiring a cultivation step. It enables the detection of pathogens and other microbial contaminants without the need to culture microbial cells, meaning that the sample (water, waste, or food, e.g. milk, wine, beer) may be analysed directly. This leads to a significant reduction of the time required for analysis, allowing the monitoring of production processes and immediate reaction should contamination or any disruption occur. Apart from the analysis of raw materials or products at different stages of the manufacturing process, flow cytometry seems to constitute an ideal tool for the assessment of microbial contamination on the surfaces of technological lines. In the present work, samples comprising smears from 3 different surfaces of technological lines from a fruit and vegetable processing company in Greater Poland were analysed directly with a flow cytometer. The measured parameters were forward and side scatter of laser light, allowing the estimation of microbial cell content in each sample. Flow cytometric analysis of the surfaces of food industry production lines enables a preliminary evaluation of microbial contamination within a few minutes of sample arrival, without the need for sample pretreatment. The presented method of flow cytometric initial evaluation of the microbial state of food industry technological lines demonstrated its potential for development into a robust, routine method for the rapid and labor-saving detection of microbial contamination in the food industry.
Attitude determination using vector observations: A fast optimal matrix algorithm
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1993-01-01
The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate. The method also provides an estimate of the attitude error covariance matrix. Analysis of the special case of two vector observations identifies those cases for which the TRIAD or algebraic method minimizes Wahba's loss function.
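For context, the TRIAD (algebraic) method that the abstract compares against can be sketched compactly. This is the standard textbook construction from two vector observations, not Markley's fast optimal-matrix algorithm, and the test vectors are invented:

```python
# A compact TRIAD attitude solver (the classical "algebraic method" the
# abstract mentions); a textbook sketch, not Markley's optimal algorithm.
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def triad(b1, b2, r1, r2):
    """Attitude matrix A with b ~= A r, from two vector observations.

    b1, b2: observations in the body frame; r1, r2: reference vectors.
    """
    tb1 = normalize(b1)
    tb2 = normalize(cross(b1, b2))
    tb3 = cross(tb1, tb2)
    tr1 = normalize(r1)
    tr2 = normalize(cross(r1, r2))
    tr3 = cross(tr1, tr2)
    # A = sum over k of the outer products tb_k tr_k^T
    return [[sum(tb[i] * tr[j] for tb, tr in
                 zip((tb1, tb2, tb3), (tr1, tr2, tr3)))
             for j in range(3)] for i in range(3)]

if __name__ == "__main__":
    # 90-degree rotation about z: the body frame sees reference x as y, etc.
    r1, r2 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
    b1, b2 = (0.0, 1.0, 0.0), (-1.0, 0.0, 0.0)
    A = triad(b1, b2, r1, r2)
    Ar1 = tuple(sum(A[i][j] * r1[j] for j in range(3)) for i in range(3))
    print(all(abs(x - y) < 1e-12 for x, y in zip(Ar1, b1)))  # A r1 == b1
```

Note that TRIAD satisfies the first observation exactly and splits the residual error onto the second, which is precisely the special case the abstract's covariance analysis examines.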
NASA Astrophysics Data System (ADS)
Guojun, He; Lin, Guo; Zhicheng, Yu; Xiaojun, Zhu; Lei, Wang; Zhiqiang, Zhao
2017-03-01
To reduce the stochastic volatility of supply and demand and to maintain power-system stability after large-scale stochastic renewable energy sources are connected to the grid, their development and consumption should be promoted by market means. The bilateral contract transaction model for large users' direct power purchase conforms to the actual situation in China. This paper analyzes the trading patterns of large users' direct power purchase, summarizes the characteristics of each type of power generation, and focuses on the centralized matching mode. By establishing a priority evaluation index system for power generation enterprises and analyzing their priority with fuzzy clustering, a method for ranking power generation enterprises within the trading patterns of large users' direct power purchase is proposed. The method yields suggestions for the trading mechanism and helps to further promote large users' direct power purchase.
Seismic signal time-frequency analysis based on multi-directional window using greedy strategy
NASA Astrophysics Data System (ADS)
Chen, Yingpin; Peng, Zhenming; Cheng, Zhuyuan; Tian, Lin
2017-08-01
Wigner-Ville distribution (WVD) is an important time-frequency analysis technique with high energy concentration, used in seismic signal processing. However, it suffers from many cross terms. To suppress the cross terms of the WVD while keeping its high energy concentration, an adaptive multi-directional filtering window in the ambiguity domain is proposed. Starting from the relationship between the Cohen class distribution and the Gabor transform, and combining the greedy strategy with the rotational invariance property of the fractional Fourier transform, the proposed multi-directional window extends the one-dimensional, one-directional optimal window function of the optimal fractional Gabor transform (OFrGT) to a two-dimensional, multi-directional window in the ambiguity domain. In this way, the multi-directional window matches the main auto terms of the WVD more precisely. Using the greedy strategy, the proposed window takes into account the optimal and other suboptimal directions, which also solves the problem of the OFrGT, called the local concentration phenomenon, when encountering a multi-component signal. Experiments on different types of both signal models and real seismic signals reveal that the proposed window can overcome the drawbacks of the WVD and the OFrGT mentioned above. Finally, the proposed method is applied to a seismic signal's spectral decomposition. The results show that the proposed method can explore the spatial distribution of a reservoir more precisely.
Mrowka, Ralf; Cimponeriu, Laura; Patzak, Andreas; Rosenblum, Michael G
2003-12-01
Activity of many physiological subsystems has a well-expressed rhythmic character. Often, a dependency between physiological rhythms is established due to interaction between the corresponding subsystems. Traditional methods of data analysis allow one to quantify the strength of interaction but not the causal interrelation that is indispensable for understanding the mechanisms of interaction. Here we present a recently developed method for quantification of coupling direction and apply it to an important problem. Namely, we study the mutual influence of respiratory and cardiovascular rhythms in healthy newborns within the first 6 mo of life in quiet and active sleep. We find an age-related change of the coupling direction: the interaction is nearly symmetric during the first days and becomes practically unidirectional (from respiration to heart rhythm) at the age of 6 mo. Next, we show that the direction of interaction is mainly determined by respiratory frequency. If the latter is less than approximately 0.6 Hz, the interaction occurs dominantly from respiration to heart. With higher respiratory frequencies that only occur at very young ages, the dominating direction is less pronounced or even abolished. The observed dependencies are not related to sleep stage, suggesting that the coupling direction is determined by system-inherent dynamical processes, rather than by functional modulations. The directional analysis may be applied to other interacting narrow band oscillatory systems, e.g., in the central nervous system. Thus it is an important step forward in revealing and understanding causal mechanisms of interactions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeung, Yu-Hong; Pothen, Alex; Halappanavar, Mahantesh
We present an augmented matrix approach to update the solution to a linear system of equations when the coefficient matrix is modified by a few elements within a principal submatrix. This problem arises in the dynamic security analysis of a power grid, where operators need to perform N−x contingency analysis, i.e., determine the state of the system when up to x links from N fail. Our algorithms augment the coefficient matrix to account for the changes in it, and then compute the solution to the augmented system without refactoring the modified matrix. We provide two algorithms, a direct method and a hybrid direct-iterative method, for solving the augmented system. We also exploit the sparsity of the matrices and vectors to accelerate the overall computation. Our algorithms are compared on three power grids with PARDISO, a parallel direct solver, and CHOLMOD, a direct solver with the ability to modify the Cholesky factors of the coefficient matrix. We show that our augmented algorithms outperform PARDISO (by two orders of magnitude) and CHOLMOD (by a factor of up to 5). Further, our algorithms scale better than CHOLMOD as the number of elements updated increases. The solutions are computed with high accuracy. Our algorithms are capable of computing N−x contingency analysis on a 778K-bus grid, updating a solution with x = 20 elements in 1.6 × 10⁻² seconds on an Intel Xeon processor.
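The flavor of updating a solution without refactoring can be conveyed by the classical Sherman-Morrison identity for a rank-1 modification. This is a much simpler special case than the paper's sparse augmented-matrix algorithms, and the small dense solver below is purely illustrative:

```python
# Updating the solution of (A + u v^T) x = b using only solves against A,
# via the Sherman-Morrison identity:
#   x = A^{-1} b - (A^{-1} u) (v^T A^{-1} b) / (1 + v^T A^{-1} u).
# Illustrative special case only; the paper's augmented-matrix method
# handles general sparse multi-element updates.

def solve(A, b):
    """Dense Gaussian elimination with partial pivoting (no deps)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sherman_morrison_solve(A, b, u, v):
    """Solve (A + u v^T) x = b without ever forming the modified matrix."""
    x0 = solve(A, b)   # A^{-1} b
    w = solve(A, u)    # A^{-1} u
    alpha = dot(v, x0) / (1.0 + dot(v, w))
    return [xi - alpha * wi for xi, wi in zip(x0, w)]

if __name__ == "__main__":
    # modify A[0][1] by +1 (u v^T with u = e0, v = e1) and re-solve
    A = [[4.0, 1.0], [1.0, 3.0]]
    x = sherman_morrison_solve(A, [1.0, 2.0], [1.0, 0.0], [0.0, 1.0])
    print(x)  # matches a direct solve of [[4, 2], [1, 3]] x = [1, 2]
```

In production the factorization of A would be computed once and reused for both solves, which is exactly the refactoring-avoidance the paper generalizes.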
Shin, Jeong-Sook; Peng, Lei; Kang, Kyungsu; Choi, Yongsoo
2016-09-09
Direct analysis of prostaglandin-E2 (PGE2) and -D2 (PGD2) produced in a RAW264.7 cell-based reaction was performed by liquid chromatography high-resolution mass spectrometry (LC-HRMS) coupled online with turbulent flow chromatography (TFC). The capability of this method to accurately measure PG levels in cell reaction medium containing cytokines or proteins as reaction byproducts was cross-validated by two conventional methods: an LC-HRMS method after liquid-liquid extraction (LLE) of the sample and a commercial PGE2 enzyme-linked immunosorbent assay (ELISA). Both showed PGE2 and/or PGD2 levels almost identical to those obtained by TFC LC-HRMS over the reaction time after LPS stimulation. After the cross-validation, significant analytical throughput, allowing simultaneous screening and potency evaluation of 80 natural products (60 phytochemicals and 20 natural product extracts) for inhibition of the PGD2 produced in the cell-based inflammatory reaction, was achieved using the TFC LC-HRMS method developed. Among the 60 phytochemicals screened, licochalcone A and formononetin inhibited PGD2 production the most, with IC50 values of 126 and 151 nM, respectively. As reference activities, indomethacin and diclofenac gave IC50 values of 0.64 and 0.21 nM, respectively. This method also identified a butanol extract of Akebia quinata Decne (AQ) stem as a promising natural product for PGD2 inhibition. Direct and accurate analysis of PGs in the inflammatory cell reaction using the TFC LC-HRMS method enables high-throughput screening and potency evaluation of as many as 320 samples in less than 48 h without changing the TFC column. Copyright © 2016 Elsevier B.V. All rights reserved.
Brittnacher, Mitchell J; Heltshe, Sonya L; Hayden, Hillary S; Radey, Matthew C; Weiss, Eli J; Damman, Christopher J; Zisman, Timothy L; Suskind, David L; Miller, Samuel I
2016-01-01
Comparative analysis of gut microbiomes in clinical studies of human diseases typically relies on identification and quantification of species or genes. In addition to exploring specific functional characteristics of the microbiome and the potential significance of species diversity or expansion, microbiome similarity is also calculated to study change in response to therapies directed at altering the microbiome. Established ecological measures of similarity can be constructed from species abundances; however, methods for calculating these commonly used ecological measures of similarity directly from whole genome shotgun (WGS) metagenomic sequence are lacking. We present an alignment-free method for calculating similarity of WGS metagenomic sequences that is analogous to the Bray-Curtis index for species, implemented by the General Utility for Testing Sequence Similarity (GUTSS) software application. This method was applied to intestinal microbiomes of healthy young children to measure developmental changes toward an adult microbiome during the first 3 years of life. We also calculate similarity of donor and recipient microbiomes to measure establishment, or engraftment, of donor microbiota in fecal microbiota transplantation (FMT) studies focused on mild to moderate Crohn's disease. We show how a relative index of similarity to donor can be calculated as a measure of change in a patient's microbiome toward that of the donor in response to FMT. Because clinical efficacy of the transplant procedure cannot be fully evaluated without analysis methods to quantify actual FMT engraftment, we developed a method for detecting change in the gut microbiome that is independent of species identification and database bias, sensitive to changes in relative abundance of the microbial constituents, and can be formulated as an index for correlating engraftment success with clinical measures of disease.
More generally, this method may be applied to clinical evaluation of human microbiomes and provide potential diagnostic determination of individuals who may be candidates for specific therapies directed at alteration of the microbiome.
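The species-level Bray-Curtis index that GUTSS mirrors at the sequence level is itself straightforward to compute from paired abundance profiles. A minimal sketch (the sequence-level GUTSS computation is more involved; the abundance vectors here are invented):

```python
# Bray-Curtis similarity for two abundance profiles over a shared
# species list: twice the shared abundance divided by the total.
# (Similarity = 1 - Bray-Curtis dissimilarity.)

def bray_curtis_similarity(x, y):
    """x, y: paired species abundance vectors (same species order)."""
    shared = sum(min(a, b) for a, b in zip(x, y))
    total = sum(x) + sum(y)
    return 2.0 * shared / total if total else 1.0

if __name__ == "__main__":
    print(bray_curtis_similarity([5, 3, 2], [5, 3, 2]))  # -> 1.0 (identical)
    print(bray_curtis_similarity([1, 0], [0, 1]))        # -> 0.0 (disjoint)
```

A donor-relative engraftment index like the one described above can then be formed by tracking how this similarity to the donor profile changes before and after FMT.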
2012-05-01
methods demonstrated that desorption into solvents suitable for subsequent chemical analysis (into acetonitrile for HPLC analysis or hexane for GC…SPME. Analysis by HPLC with EPA 8310 with fluorescent detection. a) surface water quality criteria (NRWQC) are given for comparison to detection… analysis) or hexane (for PCB analysis) was added to the inserts. The vials were then analyzed directly by HPLC (PAHs) or GC-ECD (PCBs). Fiber achieved
Mirabelli, Mario F; Gionfriddo, Emanuela; Pawliszyn, Janusz; Zenobi, Renato
2018-02-12
We evaluated the performance of a dielectric barrier discharge ionization (DBDI) source for pesticide analysis in grape juice, a fairly complex matrix due to its high content of sugars (≈20% w/w) and pigments. A fast sample preparation method based on direct immersion solid-phase microextraction (SPME) was developed, and novel matrix-compatible SPME fibers were used to reduce in-source matrix suppression effects. A high-resolution LTQ Orbitrap mass spectrometer allowed rapid quantification in full scan mode. This direct SPME-DBDI-MS approach proved effective for the rapid and direct analysis of complex sample matrices, with limits of detection in the parts-per-trillion (ppt) range and inter- and intra-day precision below 30% relative standard deviation (RSD) for samples spiked at 1, 10 and 10 ng mL−1, with overall performance comparable or even superior to existing chromatographic approaches.
A rapid method for quantification of 242Pu in urine using extraction chromatography and ICP-MS
Gallardo, Athena Marie; Than, Chit; Wong, Carolyn; ...
2017-01-01
Occupational exposure to plutonium is generally monitored through analysis of urine samples. Typically, plutonium is separated from the sample and other actinides, and the concentration is determined using alpha spectroscopy. Current methods for separations and analysis are lengthy and require long count times. A new method for monitoring occupational exposure levels of plutonium has been developed, which requires fewer steps and overall less time than the alpha spectroscopy method. In this method, the urine is acidified, and a 239Pu internal standard is added. The urine is digested in a microwave oven, and plutonium is separated using an Eichrom TRU Resin column. The plutonium is eluted, and the eluant is injected directly into the Inductively Coupled Plasma-Mass Spectrometer (ICP-MS). Compared to a direct “dilute and shoot” method, a 30-fold improvement in sensitivity is achieved. This method was validated by analyzing several batches of spiked samples. Based on these analyses, a combined standard uncertainty plot, which relates uncertainty to concentration, was produced. As a result, the MDA95 was calculated to be 7.0 × 10⁻⁷ μg L⁻¹, and the Lc95 was calculated to be 3.5 × 10⁻⁷ μg L⁻¹ for this method.
Morgenstern, Hai; Rafaely, Boaz
2018-02-01
Spatial analysis of room acoustics is an ongoing research topic. Microphone arrays have been employed for spatial analyses with an important objective being the estimation of the direction-of-arrival (DOA) of direct sound and early room reflections using room impulse responses (RIRs). An optimal method for DOA estimation is the multiple signal classification algorithm. When RIRs are considered, this method typically fails due to the correlation of room reflections, which leads to rank deficiency of the cross-spectrum matrix. Preprocessing methods for rank restoration, which may involve averaging over frequency, for example, have been proposed exclusively for spherical arrays. However, these methods fail in the case of reflections with equal time delays, which may arise in practice and could be of interest. In this paper, a method is proposed for systems that combine a spherical microphone array and a spherical loudspeaker array, referred to as multiple-input multiple-output systems. This method, referred to as modal smoothing, exploits the additional spatial diversity for rank restoration and succeeds where previous methods fail, as demonstrated in a simulation study. Finally, combining modal smoothing with a preprocessing method is proposed in order to increase the number of DOAs that can be estimated using low-order spherical loudspeaker arrays.
Evaluation of Direct Vapour Equilibration for Stable Isotope Analysis of Plant Water.
NASA Astrophysics Data System (ADS)
Millar, C. B.; McDonnell, J.; Pratt, D.
2017-12-01
The stable isotopes of water (2H and 18O) extracted from plants have been utilized in a variety of ecohydrological, biogeochemical and climatological studies. The array of methods used to extract water from plants is as varied as the studies themselves. Here we perform a comprehensive inter-method comparison of six plant water extraction techniques: direct vapour equilibration, microwave extraction, two distinct versions of cryogenic extraction, centrifugation, and high-pressure mechanical squeezing. We applied these methods to four isotopically distinct plant portions (heads, stems, leaves and root crown) of spring wheat (Triticum aestivum L.). The spring wheat was grown under controlled conditions with irrigation inputs of known isotopic composition. Our results show that the extraction methods return significantly different plant water isotopic signals. Centrifugation, microwave extraction, direct vapour equilibration, and squeezing returned more enriched results; both cryogenic systems and squeezing returned more depleted results, depending upon the plant portion extracted. While cryogenic extraction is currently the most widely used method in the literature, our results suggest that the direct vapour equilibration method outperforms it in terms of accuracy, sample throughput and replicability. More research is now needed with other plant species (especially woody plants) to see how far the findings from this study can be extended.
Aramendía, Maite; Rello, Luis; Vanhaecke, Frank; Resano, Martín
2012-10-16
Collection of biological fluids on clinical filter papers offers important logistic advantages, although analysis of these specimens is far from straightforward. For urine analysis, and particularly when direct trace elemental analysis by laser ablation-inductively coupled plasma mass spectrometry (LA-ICPMS) is the goal, several problems arise, such as lack of sensitivity or uneven distribution of the analytes on the filter paper, making it difficult to obtain reliable quantitative results. In this paper, a novel approach for urine collection is proposed that circumvents many of these problems. The methodology consists of using precut filter paper discs on which large amounts of sample can be retained in a single deposition. This provides higher amounts of the target analytes and thus sufficient sensitivity, and it allows addition of an adequate internal standard at the clinical lab prior to analysis, making the approach suitable for unsupervised sample collection and subsequent analysis at referral centers. On the basis of this sampling methodology, an analytical method was developed for the direct determination of several elements in urine (Be, Bi, Cd, Co, Cu, Ni, Sb, Sn, Tl, Pb, and V) at the low μg L(-1) level by means of LA-ICPMS. The method provides good results in terms of accuracy and LODs (≤1 μg L(-1) for most of the analytes tested), with a precision in the range of 15%, fit for purpose for clinical control analysis.
Ullah, Md Ahsan; Kim, Ki-Hyun; Szulejko, Jan E; Cho, Jinwoo
2014-04-11
The production of short-chain volatile fatty acids (VFAs) by anaerobic bacterial digestion of sewage (wastewater) affords an excellent opportunity for alternative, greener bio-energy fuels (e.g., microbial fuel cells). VFAs in wastewater (sewage) samples are commonly quantified by direct injection (DI) into a gas chromatograph with a flame ionization detector (GC-FID). In this study, the reliability of VFA analysis by the DI-GC method was examined against a thermal desorption (TD-GC) method. The results indicate that the VFA concentrations determined from an aliquot of each wastewater sample by the DI-GC method were generally underestimated, e.g., reductions of 7% (acetic acid) to 93.4% (hexanoic acid) relative to the TD-GC method. The observed differences between the two methods suggest a possibly important role of matrix effects in producing the negative biases of DI-GC analysis. To further explore this possibility, an ancillary experiment was performed to examine the bias patterns of three DI-GC approaches. For instance, the results of the standard addition (SA) method confirm the definite role of matrix effects when analyzing wastewater samples by DI-GC. More importantly, the biases tend to increase systematically with increasing molecular weight and decreasing VFA concentration. As such, the DI-GC method, if applied to the analysis of samples with a complicated matrix, needs thorough validation to improve the reliability of data acquisition. Copyright © 2014 Elsevier B.V. All rights reserved.
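The standard addition (SA) quantification used in the ancillary experiment extrapolates a spiked calibration line to its x-intercept. A generic numerical sketch with synthetic data, not the paper's GC measurements:

```python
# Standard-addition quantification: spike known analyte amounts into
# aliquots of the sample, fit signal vs. added concentration, and read
# the unknown concentration off the x-intercept (|intercept / slope|).
# The data below are synthetic, not from the study.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def standard_addition(added, signals):
    """Unknown concentration in the un-spiked aliquot."""
    slope, intercept = linear_fit(added, signals)
    return intercept / slope

if __name__ == "__main__":
    # synthetic data: true concentration 2.0, response = 3 * (c + added)
    added = [0.0, 1.0, 2.0, 4.0]
    signals = [3.0 * (2.0 + a) for a in added]
    print(standard_addition(added, signals))  # -> 2.0
```

Because the calibration is built inside the sample matrix itself, the SA approach compensates for exactly the matrix-suppression effects blamed above for the DI-GC biases.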
Nejdl, Lukas; Kynicky, Jindrich; Brtnicky, Martin; Vaculovicova, Marketa; Adam, Vojtech
2017-01-01
Toxic metal contamination of the environment is a global issue. In this paper, we present low-cost, rapidly produced amalgam electrodes used for determination of Cd(II) and Pb(II) in environmental samples (soils and wastewaters) by on-site analysis using differential pulse voltammetry. Changes in the electrochemical signals were recorded with a miniaturized potentiostat (width: 80 mm, depth: 54 mm, height: 23 mm) and a portable computer. The limit of detection (LOD) was calculated for a working-electrode geometric surface of 15 mm², which can be varied as required for analysis. The LODs were 80 ng·mL−1 for Cd(II) and 50 ng·mL−1 for Pb(II), with a relative standard deviation RSD ≤ 8% (n = 3). The area of interest (Dolni Rozinka, Czech Republic) was selected because it contains a uranium ore deposit and extreme anthropogenic activity. Environmental samples were taken directly on-site and immediately analysed; a single analysis took approximately two minutes. The average concentrations of Cd(II) and Pb(II) in this area were below the global average. The obtained values were verified (correlated) against standard electrochemical methods based on hanging drop electrodes and were in good agreement. The advantages of this method are its cost and time efficiency (approximately two minutes per sample) and the direct analysis of turbid samples (soil leachate) in a 2 M HNO3 environment; this type of sample cannot be analyzed by classical analytical methods without pretreatment. PMID:28792458
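A common convention for computing a voltammetric LOD (the abstract does not state the authors' exact formula, so this is an assumption) is three times the standard deviation of low-level replicate signals divided by the calibration slope:

```python
# Hypothetical illustration of LOD = 3 * s / slope, a widely used
# convention; the replicate signals and slope below are invented numbers,
# not the study's Cd(II)/Pb(II) data.
import statistics

def limit_of_detection(replicate_signals, slope):
    """LOD in concentration units, from replicate signals and a
    calibration slope (signal per concentration unit)."""
    return 3.0 * statistics.stdev(replicate_signals) / slope

if __name__ == "__main__":
    # three replicate peak currents and an assumed calibration slope
    print(limit_of_detection([1.0, 1.2, 0.8], 0.01))  # approximately 60
```

The factor 3 corresponds to roughly 99% confidence that a signal at the LOD is distinguishable from the low-level noise.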
Segmentation Of Polarimetric SAR Data
NASA Technical Reports Server (NTRS)
Rignot, Eric J. M.; Chellappa, Rama
1994-01-01
Report presents one in continuing series of studies of segmentation of polarimetric synthetic-aperture-radar, SAR, image data into regions. Studies directed toward refinement of method of automated analysis of SAR data.
Pan, Hong-Wei; Li, Wei; Li, Rong-Guo; Li, Yong; Zhang, Yi; Sun, En-Hua
2018-01-01
Rapid identification and determination of the antibiotic susceptibility profiles of the infectious agents in patients with bloodstream infections are critical steps in choosing an effective targeted antibiotic for treatment. However, there has been minimal effort focused on developing combined methods for the simultaneous direct identification and antibiotic susceptibility determination of bacteria in positive blood cultures. In this study, we constructed a lysis-centrifugation-wash procedure to prepare a bacterial pellet from positive blood cultures, which can be used directly for identification by matrix-assisted laser desorption/ionization-time-of-flight mass spectrometry (MALDI-TOF MS) and antibiotic susceptibility testing by the Vitek 2 system. The method was evaluated using a total of 129 clinical bacteria-positive blood cultures. The whole sample preparation process could be completed in <15 min. The rate of correct direct MALDI-TOF MS identification was 96.49% for gram-negative bacteria and 97.22% for gram-positive bacteria. Vitek 2 antimicrobial susceptibility testing of gram-negative bacteria showed a category agreement rate of 96.89%, with minor error, major error, and very major error rates of 2.63, 0.24, and 0.24%, respectively. Category agreement of antimicrobials against gram-positive bacteria was 92.81%, with minor error, major error, and very major error rates of 4.51, 1.22, and 1.46%, respectively. These results indicated that our direct antibiotic susceptibility analysis method worked well compared to the conventional culture-dependent laboratory method. Overall, this fast, easy, and accurate method can facilitate the direct identification and antibiotic susceptibility testing of bacteria in positive blood cultures.
Anderson, Neil W; Buchan, Blake W; Riebe, Katherine M; Parsons, Lauren N; Gnacinski, Stacy; Ledeboer, Nathan A
2012-03-01
Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) is a rapid method for the identification of bacteria. Factors that may alter protein profiles, including growth conditions and the presence of exogenous substances, could hinder identification. Bacterial isolates identified by conventional methods were grown on various media and identified using the MALDI Biotyper (Bruker Daltonics, Billerica, MA) using a direct smear method and an acid extraction method. Specimens included 23 Pseudomonas isolates grown on blood agar, Pseudocel (CET), and MacConkey agar (MAC); 20 Staphylococcus isolates grown on blood agar, colistin-nalidixic acid agar (CNA), and mannitol salt agar (MSA); and 25 enteric isolates grown on blood agar, xylose lysine deoxycholate agar (XLD), Hektoen enteric agar (HE), salmonella-shigella agar (SS), and MAC. For Pseudomonas spp., the identification rate to genus using the direct method was 83% from blood, 78% from MAC, and 94% from CET. For Staphylococcus isolates, the identification rate to genus using the direct method was 95% from blood, 75% from CNA, and 95% from MSA. For enteric isolates, the identification rate to genus using the direct method was 100% from blood, 100% from MAC, 100% from XLD, 92% from HE, and 87% from SS. Extraction enhanced identification rates. The direct method of MALDI-TOF analysis of bacteria from selective and differential media yields identifications of varied confidence. Notably, Staphylococcus spp. from CNA exhibit low identification rates. Extraction enhances identification rates and is recommended for colonies from this medium.
Barteneva, Natasha S; Vorobjev, Ivan A
2018-01-01
In this paper, we review some of the recent advances in cellular heterogeneity and single-cell analysis methods. In modern research on cellular heterogeneity, there are four major approaches: analysis of pooled samples, single-cell analysis, high-throughput single-cell analysis, and, lately, integrated analysis of a cellular population at the single-cell level. Recently developed high-throughput single-cell genetic analysis methods such as RNA-Seq require a purification step and destruction of the analyzed cell, and often provide only a snapshot of the investigated cell without spatiotemporal context. Correlative analysis of multiparameter morphological, functional, and molecular information is important for differentiating more uniform groups within the spectrum of different cell types. Simplified distributions (histograms and 2D plots) can underrepresent biologically significant subpopulations. Future directions may include the development of nondestructive methods for dissecting molecular events in intact cells, simultaneous correlative cellular analysis of phenotypic and molecular features by hybrid technologies such as imaging flow cytometry, and further progress in supervised and unsupervised statistical analysis algorithms.
Bilateral foreign direct investment in forest industry between the U.S. and Canada
Rao V Nagubadi; Daowei Zhang
2011-01-01
In this study we examine the trends in and various factors influencing bilateral foreign direct investment (FDI) in the U.S. and Canadian forest industry between 1989 and 2008. Using panel data analysis methods, we find that bilateral FDI is positively influenced by depreciation of the host country's real exchange rate and by exchange rate volatility, and home country...
NASA Technical Reports Server (NTRS)
Gallis, Michael A.; LeBeau, Gerald J.; Boyles, Katie A.
2003-01-01
The Direct Simulation Monte Carlo method was used to provide 3-D simulations of the early entry phase of the Shuttle Orbiter. Undamaged and damaged scenarios were modeled to provide calibration points for engineering "bridging function" types of analysis. Currently, the simulation technology (software and hardware) is mature enough to allow realistic simulations of three-dimensional vehicles.
A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis
NASA Astrophysics Data System (ADS)
Jokhio, G. A.; Izzuddin, B. A.
2015-05-01
This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.
Polarization-direction correlation measurement --- Experimental test of the PDCO methods
NASA Astrophysics Data System (ADS)
Starosta, K.; Morek, T.; Droste, Ch.; Rohoziński, S. G.; Srebrny, J.; Bergstrem, M.; Herskind, B.
1998-04-01
Information about spins and parities of excited states is crucial for nuclear structure studies. In "in-beam" gamma-ray spectroscopy, directional correlation (DCO) or angular distribution measurements are widely used tools for multipolarity assignment, although it is known that neither of these methods is sensitive to the electric or magnetic character of gamma radiation. The multipolarity of gamma rays may be determined when the results of the DCO analysis are combined with the results of linear polarization measurements. The large total efficiency of modern multidetector arrays allows one to carry out coincidence measurements between the polarimeter and the remaining detectors. The aim of the present study was to test experimentally the possibility of polarization-direction correlation measurements using the EUROGAM II array. The studied nucleus was ^164Yb, produced in the ^138Ba(^30Si,4n) reaction at beam energies of 150 and 155 MeV. The angular correlation, linear polarization, and direction-polarization correlation were measured for the strong transitions in yrast and non-yrast cascades. Application of the PDCO analysis to a transition connecting a side band with the yrast band allowed one to rule out most of the ambiguities in multipolarity assignment that occur when angular correlations alone are used.
Laser-based methods for the analysis of low molecular weight compounds in biological matrices.
Kiss, András; Hopfgartner, Gérard
2016-07-15
Laser-based desorption and/or ionization methods play an important role in the analysis of low-molecular-weight compounds (LMWCs) because they allow direct analysis with high-throughput capabilities. In recent years, there have been several improvements in ionization methods, with the emergence of novel atmospheric ion sources such as laser ablation electrospray ionization or laser diode thermal desorption with atmospheric pressure chemical ionization, and in sample preparation methods, with the development of new matrix compounds for matrix-assisted laser desorption/ionization (MALDI). The combination of ion mobility separation with laser-based ionization methods is also starting to gain popularity with access to commercial systems. These developments have been driven mainly by the emergence of new application fields such as MS imaging and non-chromatographic analytical approaches for quantification. This review aims to present these new developments in laser-based methods for the analysis of low-molecular-weight compounds by MS, along with several potential applications. Copyright © 2016 Elsevier Inc. All rights reserved.
Cochran, Kristin H.; Barry, Jeremy A.; Muddiman, David C.; Hinks, David
2012-01-01
The forensic analysis of textile fibers uses a variety of techniques from microscopy to spectroscopy. One such technique that is often used to identify the dye(s) within the fiber is mass spectrometry (MS). In the traditional MS method, the dye must be extracted from the fabric and the dye components are separated by chromatography prior to mass spectrometric analysis. Direct analysis of the dye from the fabric allows the omission of the lengthy sample preparation involved in extraction, thereby significantly reducing the overall analysis time. Herein, a direct analysis of dyed textile fabric was performed using the infrared matrix-assisted laser desorption electrospray ionization (IR-MALDESI) source for MS. In MALDESI, an IR laser with wavelength tuned to 2.94 μm is used to desorb the dye from the fabric sample with the aid of water as the matrix. The desorbed dye molecules are then post-ionized by electrospray ionization (ESI). A variety of dye classes were analyzed from various fabrics with little to no sample preparation allowing for the identification of the dye mass and in some cases the fiber polymer. Those dyes that were not detected using MALDESI were also not observed by direct infusion ESI of the dye standard. PMID:23237031
An Integrated Low-Speed Performance and Noise Prediction Methodology for Subsonic Aircraft
NASA Technical Reports Server (NTRS)
Olson, E. D.; Mavris, D. N.
2000-01-01
An integrated methodology has been assembled to compute the engine performance, takeoff and landing trajectories, and community noise levels for a subsonic commercial aircraft. Where feasible, physics-based noise analysis methods have been used to make the results more applicable to newer, revolutionary designs and to allow for a more direct evaluation of new technologies. The methodology is intended to be used with approximation methods and risk analysis techniques to allow for the analysis of a greater number of variable combinations while retaining the advantages of physics-based analysis. Details of the methodology are described and limited results are presented for a representative subsonic commercial aircraft.
Detection of Bi-Directionality in Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert
2012-01-01
An indicator variable was developed for both visualization and detection of bi-directionality in wind tunnel strain-gage balance calibration data. First, the calculation of the indicator variable is explained in detail. Then, a criterion is discussed that may be used to decide which gage outputs of a balance have bi-directional behavior. The result of this analysis could be used, for example, to justify the selection of certain absolute value or other even function terms in the regression model of gage outputs whenever the Iterative Method is chosen for the balance calibration data analysis. Calibration data of NASA's MK40 Task balance is analyzed to illustrate both the calculation of the indicator variable and the application of the proposed criterion. Finally, bi-directionality characteristics of typical multi-piece, hybrid, single-piece, and semispan balances are determined and discussed.
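The report's actual indicator variable is not reproduced here; as a loose illustration only, the idea of detecting an even (absolute-value) component in a gage response can be sketched by regressing a toy output on both x and |x|, with invented coefficients:

```python
import numpy as np

# Toy gage output: a linear response plus an even |x| component that
# mimics bi-directional behavior (coefficients are made up).
x = np.linspace(-1.0, 1.0, 41)
y = 2.0 * x + 0.3 * np.abs(x)

# Regress on [x, |x|]; a non-negligible |x| coefficient would justify
# including even-function terms in the calibration regression model.
X = np.column_stack([x, np.abs(x)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```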
Dynamic sensitivity analysis of biological systems
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2008-01-01
Background A mathematical model to understand, predict, control, or even design a real biological system is a central theme in systems biology. A dynamic biological system is always modeled as a nonlinear ordinary differential equation (ODE) system. How to simulate the dynamic behavior and dynamic parameter sensitivities of systems described by ODEs efficiently and accurately is a critical job. In many practical applications, e.g., fed-batch fermentation systems, the admissible system input (corresponding to independent variables of the system) can be time-dependent. The main difficulty in investigating the dynamic log gains of these systems is the infinite dimension due to the time-dependent input. Classical dynamic sensitivity analysis does not take this case into account for the dynamic log gains. Results We present an algorithm with adaptive step size control that can be used for computing the solution and dynamic sensitivities of an autonomous ODE system simultaneously. Although our algorithm is one of the decoupled direct methods for computing dynamic sensitivities of an ODE system, the step size determined by the model equations can be used in the computation of the time profile and dynamic sensitivities with moderate accuracy, even when the sensitivity equations are stiffer than the model equations. To show that this algorithm can perform dynamic sensitivity analysis on very stiff ODE systems with moderate accuracy, it was implemented and applied to two sets of chemical reactions: pyrolysis of ethane and oxidation of formaldehyde. The accuracy of the algorithm is demonstrated by comparing the dynamic parameter sensitivities obtained from this new algorithm with those obtained from the direct method with a Rosenbrock stiff integrator based on the indirect method.
The same dynamic sensitivity analysis was performed on an ethanol fed-batch fermentation system with a time-varying feed rate to evaluate the applicability of the algorithm to realistic models with time-dependent admissible input. Conclusion By combining the accuracy shown here with the efficiency of being a decoupled direct method, our algorithm is an excellent method for computing dynamic parameter sensitivities in stiff problems. We extend the scope of classical dynamic sensitivity analysis to the investigation of dynamic log gains of models with time-dependent admissible input. PMID:19091016
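As a minimal illustration of direct-method sensitivity analysis (here coupled rather than decoupled, and on a toy ODE rather than the chemical systems studied), the model and its parameter-sensitivity equation can be integrated together:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy model dy/dt = -p*y with sensitivity s = dy/dp obeying
# ds/dt = (df/dy)*s + df/dp = -p*s - y.  All values are illustrative.
p = 0.5

def augmented(t, z):
    y, s = z
    return [-p * y, -p * s - y]

sol = solve_ivp(augmented, (0.0, 2.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
y_end, s_end = sol.y[:, -1]
# Analytic check: y(t) = exp(-p*t) and dy/dp = -t*exp(-p*t).
```

A decoupled direct method instead advances the sensitivity system separately at each step, reusing the step size (as the abstract notes) chosen for the model equations.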
da Cunha, Keith C; Riat, Arnaud; Normand, Anne-Cecile; Bosshard, Philipp P; de Almeida, Margarete T G; Piarroux, Renaud; Schrenzel, Jacques; Fontao, Lionel
2018-05-15
Dermatophytes cause human infections limited to keratinized tissues. We previously showed that the direct transfer method allows reliable identification of non-dermatophyte moulds and yeasts by MALDI-TOF/MS. We aimed to assess whether the direct transfer method can be used for dermatophytes and whether an in-house mass spectra library would be superior to the Bruker library. We used the Bruker Biotyper to build a dermatophyte mass spectra library and assessed its performance by (1) testing a panel of mass spectra produced with genotypically identified strains and (2) comparing MALDI-TOF/MS identification to morphology-based methods. Identification of dermatophytes using the Bruker library is poor. Our library provided 97% concordance between ITS sequencing and MALDI-TOF/MS analysis with a panel of 1104 spectra corresponding to 276 strains. The direct transfer method using unpolished target plates allowed proper identification of 85% of dermatophyte clinical isolates, most of which were common dermatophytes. A homemade dermatophyte MSP library is a prerequisite for accurate identification of species absent from the Bruker library, but it also improves identification of species already listed in the database. The direct deposit method can be used to identify the most commonly found dermatophytes, such as T. rubrum and T. interdigitale/mentagrophytes, by MALDI-TOF/MS. This article is protected by copyright. All rights reserved.
Darwish, Hany W; Bakheit, Ahmed H; Abdelhameed, Ali S
2016-03-01
Simultaneous spectrophotometric analysis of a multi-component dosage form of olmesartan, amlodipine and hydrochlorothiazide used for the treatment of hypertension has been carried out using various chemometric methods. Multivariate calibration methods include classical least squares (CLS) executed by net analyte processing (NAP-CLS), orthogonal signal correction (OSC-CLS) and direct orthogonal signal correction (DOSC-CLS) in addition to multivariate curve resolution-alternating least squares (MCR-ALS). Results demonstrated the efficiency of the proposed methods as quantitative tools of analysis as well as their qualitative capability. The three analytes were determined precisely using the aforementioned methods in an external data set and in a dosage form after optimization of experimental conditions. Finally, the efficiency of the models was validated via comparison with the partial least squares (PLS) method in terms of accuracy and precision.
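The plain classical-least-squares (CLS) step underlying the NAP/OSC/DOSC variants mentioned above can be sketched on synthetic, noise-free Beer-Lambert data; the spectra and concentrations below are random stand-ins, not the actual drug spectra:

```python
import numpy as np

# Synthetic system: 3 analytes x 50 wavelengths, additive Beer-Lambert mixing.
rng = np.random.default_rng(0)
K = rng.random((3, 50))                  # pure-component spectra
C_train = rng.random((10, 3))            # known training concentrations
A_train = C_train @ K                    # noise-free mixture spectra

# CLS calibration: estimate the pure spectra from training data,
# then invert them to predict the concentrations of a new mixture.
K_hat = np.linalg.lstsq(C_train, A_train, rcond=None)[0]
c_true = np.array([0.2, 0.5, 0.3])
a_new = c_true @ K
c_pred = a_new @ np.linalg.pinv(K_hat)
```

The NAP, OSC and DOSC preprocessing steps of the paper act on the spectra before this solve to remove variation orthogonal to the analytes of interest.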
Chromý, Vratislav; Vinklárková, Bára; Šprongl, Luděk; Bittová, Miroslava
2015-01-01
We found previously that albumin-calibrated total protein in certified reference materials causes unacceptable positive bias in analysis of human sera. The simplest way to cure this defect is the use of human-based serum/plasma standards calibrated by the Kjeldahl method. Such standards, commutative with serum samples, will compensate for bias caused by lipids and bilirubin in most human sera. To find a suitable primary reference procedure for total protein in reference materials, we reviewed Kjeldahl methods adopted by laboratory medicine. We found two methods recommended for total protein in human samples: an indirect analysis based on total Kjeldahl nitrogen corrected for its nonprotein nitrogen and a direct analysis made on isolated protein precipitates. The methods found will be assessed in a subsequent article.
DISTMIX: direct imputation of summary statistics for unmeasured SNPs from mixed ethnicity cohorts.
Lee, Donghyung; Bigdeli, T Bernard; Williamson, Vernell S; Vladimirov, Vladimir I; Riley, Brien P; Fanous, Ayman H; Bacanu, Silviu-Alin
2015-10-01
To increase the signal resolution for large-scale meta-analyses of genome-wide association studies, genotypes at unmeasured single nucleotide polymorphisms (SNPs) are commonly imputed using large multi-ethnic reference panels. However, the ever-increasing size and ethnic diversity of both reference panels and cohorts make genotype imputation computationally challenging for moderately sized computer clusters. Moreover, genotype imputation requires subject-level genetic data, which, unlike the summary statistics provided by virtually all studies, are not publicly available. While there are much less demanding methods that avoid the genotype imputation step by directly imputing SNP statistics, e.g., Directly Imputing summary STatistics (DIST) proposed by our group, their implicit assumptions make them applicable only to ethnically homogeneous cohorts. To decrease the computational and access requirements for the analysis of cosmopolitan cohorts, we propose DISTMIX, which extends the capabilities of DIST to the analysis of mixed-ethnicity cohorts. The method uses a relevant reference panel to directly impute unmeasured SNP statistics based only on statistics at measured SNPs and estimated/user-specified ethnic proportions. Simulations show that the proposed method adequately controls the Type I error rates. The 1000 Genomes panel imputation of summary statistics from the ethnically diverse Psychiatric Genetic Consortium Schizophrenia Phase 2 suggests that, when compared to genotype imputation methods, DISTMIX offers comparable imputation accuracy for only a fraction of the computational resources. DISTMIX software, its reference population data, and usage examples are publicly available at http://code.google.com/p/distmix. dlee4@vcu.edu Supplementary Data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
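The conditional-mean imputation idea behind DIST-style methods can be sketched for a toy LD matrix; the correlations and z-scores below are invented, and DISTMIX itself additionally mixes reference-panel LD by ethnic proportions:

```python
import numpy as np

# Toy LD (correlation) matrix for [unmeasured, measured1, measured2].
R = np.array([[1.0, 0.8, 0.6],
              [0.8, 1.0, 0.5],
              [0.6, 0.5, 1.0]])
z_measured = np.array([3.0, 2.0])

# Under a multivariate-normal model for z-scores, the unmeasured SNP's
# statistic is imputed as the conditional mean: z_u = R_um @ R_mm^{-1} @ z_m.
z_imputed = R[0, 1:] @ np.linalg.solve(R[1:, 1:], z_measured)
```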
Van der Bij, Sjoukje; Vermeulen, Roel C H; Portengen, Lützen; Moons, Karel G M; Koffijberg, Hendrik
2016-05-01
Exposure to asbestos fibres increases the risk of mesothelioma and lung cancer. Although the vast majority of mesothelioma cases are caused by asbestos exposure, the number of asbestos-related lung cancers is less clear. This number cannot be determined directly as lung cancer causes are not clinically distinguishable but may be estimated using varying modelling methods. We applied three different modelling methods to the Dutch population supplemented with uncertainty ranges (UR) due to uncertainty in model input values. The first method estimated asbestos-related lung cancer cases directly from observed and predicted mesothelioma cases in an age-period-cohort analysis. The second method used evidence on the fraction of lung cancer cases attributable (population attributable risk (PAR)) to asbestos exposure. The third method incorporated risk estimates and population exposure estimates to perform a life table analysis. The three methods varied substantially in incorporated evidence. Moreover, the estimated number of asbestos-related lung cancer cases in the Netherlands between 2011 and 2030 depended crucially on the actual method applied, as the mesothelioma method predicts 17 500 expected cases (UR 7000-57 000), the PAR method predicts 12 150 cases (UR 6700-19 000), and the life table analysis predicts 6800 cases (UR 6800-33 850). The three different methods described resulted in absolute estimates varying by a factor of ∼2.5. These results show that accurate estimation of the impact of asbestos exposure on the lung cancer burden remains a challenge. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
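The PAR method mentioned above rests on Levin's attributable-fraction formula; here is a minimal sketch with invented inputs, not the Dutch figures:

```python
# Population attributable risk (Levin's formula) for a binary exposure.
# The prevalence, relative risk, and case count are illustrative only.
def attributable_cases(total_cases, prevalence, relative_risk):
    excess = prevalence * (relative_risk - 1.0)
    par = excess / (1.0 + excess)      # fraction of cases attributable
    return par * total_cases

cases = attributable_cases(60000, prevalence=0.1, relative_risk=3.0)
```

Uncertainty ranges like those reported would follow from propagating ranges of the prevalence and relative-risk inputs through this same formula.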
Meex, Cécile; Neuville, Florence; Descy, Julie; Huynen, Pascale; Hayette, Marie-Pierre; De Mol, Patrick; Melin, Pierrette
2012-11-01
In cases of bacteraemia, a rapid species identification of the causal agent directly from positive blood culture broths could assist clinicians in the timely targeting of empirical antimicrobial therapy. For this purpose, we evaluated the direct identification of micro-organisms from BacT/ALERT (bioMérieux) anaerobic positive blood cultures without charcoal using the Microflex matrix-assisted laser desorption/ionization (MALDI) time of flight MS (Bruker), after bacterial extraction by using two different methods: the MALDI Sepsityper kit (Bruker) and an in-house saponin lysis method. Bruker's recommended criteria for identification were expanded in this study, with acceptance of the species identification when the first three results with the best matches with the MALDI Biotyper database were identical, whatever the scores were. In total, 107 monobacterial cultures and six polymicrobial cultures from 77 different patients were included in this study. Among monomicrobial cultures, we identified up to the species level 67 and 66 % of bacteria with the MALDI Sepsityper kit and the saponin method, respectively. There was no significant difference between the two extraction methods. The direct species identification was particularly inconclusive for Gram-positive bacteria, as only 58 and 52 % of them were identified to the species level with the MALDI Sepsityper kit and the saponin method, respectively. Results for Gram-negative bacilli were better, with 82.5 and 90 % of correct identification to the species level with the MALDI Sepsityper kit and the saponin method, respectively. No misidentifications were given by the direct procedures when compared with identifications provided by the conventional method. Concerning the six polymicrobial blood cultures, whatever the extraction method used, a correct direct identification was only provided for one of the isolated bacteria on solid medium in all cases. 
The analysis of the time-to-result demonstrated a reduction in the turnaround time for identification ranging from 1 h 06 min to 24 h 44 min when performing direct identification from blood cultures in comparison with the conventional method, whatever the extraction method used.
Krüger, S; Hüsken, L; Fornasari, R; Scainelli, I; Morlock, G E
2017-12-22
Quantitative effect-directed profiles of 77 industrially and freshly extracted botanicals such as herbs, spices, vegetables and fruits, widely used as food ingredients, dietary supplements or traditional medicine, gave relevant information on their quality. Such profiling allows the assessment of food, dietary supplements and phytomedicines with regard to potential health-promoting activities. In contrast to sum parameter assays and targeted analysis, chromatography combined with effect-directed analysis allows fast assignment of single active compounds and evaluation of their contribution to the overall activity originating from a food or botanical sample. High-performance thin-layer chromatography was hyphenated with UV/Vis/FLD detection and effect-directed analysis, using the 2,2-diphenyl-1-picrylhydrazyl radical, Gram-negative Aliivibrio fischeri, Gram-positive Bacillus subtilis, acetylcholinesterase and tyrosinase assays. Bioactive compounds of interest were eluted using an elution head-based interface and further characterized by electrospray ionization (high-resolution) mass spectrometry. This highly streamlined workflow resulted in a hyphenated HPTLC-UV/Vis/FLD-EDA-ESI+/ESI−-(HR)MS method. The excellent quantification power of the method was shown for three compounds. For rosmarinic acid, contents ranged from 4.5 mg/g (rooibos) to 32.6 mg/g (rosemary); for kaempferol-3-glucoside, from 0.6 mg/g (caraway) to 4.4 mg/g (wine leaves); and for quercetin-3-glucoside, from 1.1 mg/g (hawthorn leaves) to 17.7 mg/g (thyme). The three mean repeatabilities (%RSD) over 18 quantifications for the three compounds were ≤2.2%, and the mean intermediate precision over three different days (%RSD, n=3) was 5.2%. Copyright © 2017 Elsevier B.V. All rights reserved.
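Repeatability figures such as the %RSD values reported above can be computed as in this minimal sketch, using hypothetical replicate quantifications:

```python
import numpy as np

# Hypothetical replicate quantifications of one compound (mg/g).
replicates = np.array([32.0, 33.0, 32.5])

# Percent relative standard deviation: 100 * sample SD / mean.
rsd = 100.0 * np.std(replicates, ddof=1) / np.mean(replicates)
```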
An improved adaptive weighting function method for State Estimation in Power Systems with VSC-MTDC
NASA Astrophysics Data System (ADS)
Zhao, Kun; Yang, Xiaonan; Lang, Yansheng; Song, Xuri; Wang, Minkun; Luo, Yadi; Wu, Lingyun; Liu, Peng
2017-04-01
This paper presents an effective approach for state estimation in power systems that include multi-terminal voltage-source-converter-based high-voltage direct current (VSC-MTDC) links, called the improved adaptive weighting function method. The proposed approach is simplified in that the VSC-MTDC system is solved first, followed by the AC system, because the new state estimation method only changes the weights and keeps the matrix dimensions unchanged. Accurate and fast convergence for the AC/DC system can be realized by the adaptive weighting function method. This method also provides technical support for the simulation analysis and accurate regulation of AC/DC systems. Both theoretical analysis and numerical tests verify the practicability, validity and convergence of the new method.
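The weighted-least-squares core that an adaptive weighting scheme would iterate can be sketched on a toy linear measurement model; H, W, and z below are invented for illustration, not a real network:

```python
import numpy as np

# Toy measurement model z = H x + e with per-measurement weights.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
W = np.diag([1.0, 1.0, 0.25])
z = np.array([1.0, 2.0, 3.5])

# Weighted least-squares state estimate: x = (H^T W H)^{-1} H^T W z.
# An adaptive weighting function would recompute W from the residuals
# z - H x and re-solve, leaving the matrix dimensions unchanged.
G = H.T @ W @ H
x = np.linalg.solve(G, H.T @ W @ z)
```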
Research progress in Asia on methods of processing laser-induced breakdown spectroscopy data
NASA Astrophysics Data System (ADS)
Guo, Yang-Min; Guo, Lian-Bo; Li, Jia-Ming; Liu, Hong-Di; Zhu, Zhi-Hao; Li, Xiang-You; Lu, Yong-Feng; Zeng, Xiao-Yan
2016-10-01
Laser-induced breakdown spectroscopy (LIBS) has attracted much attention in terms of both scientific research and industrial application. An important branch of LIBS research in Asia, the development of data processing methods for LIBS, is reviewed. First, the basic principle of LIBS and the characteristics of spectral data are briefly introduced. Next, two aspects of research on and problems with data processing methods are described: i) the basic principles of data preprocessing methods are elaborated in detail on the basis of the characteristics of spectral data; ii) the performance of data analysis methods in qualitative and quantitative analysis of LIBS is described. Finally, a direction for future development of data processing methods for LIBS is also proposed.
NASA Astrophysics Data System (ADS)
Li, Da; Cheung, Chifai; Zhao, Xing; Ren, Mingjun; Zhang, Juan; Zhou, Liqiu
2016-10-01
Autostereoscopy-based three-dimensional (3D) digital reconstruction has been widely applied in the fields of medical science, entertainment, design, industrial manufacture and precision measurement, among many other areas. The 3D digital model of a target can be reconstructed from the series of two-dimensional (2D) information acquired by the autostereoscopic system, which consists of multiple lenses and can provide information about the target from multiple angles. This paper presents a generalized and precise autostereoscopic 3D digital reconstruction method based on Direct Extraction of Disparity Information (DEDI), which can be applied to any autostereoscopic system and provides accurate 3D reconstruction results through an error-elimination process based on statistical analysis. The feasibility of the DEDI method has been successfully verified through a series of optical 3D digital reconstruction experiments on different autostereoscopic systems; the method is highly efficient in performing direct full 3D digital model construction based on a tomography-like operation upon every depth plane, with the exclusion of defocused information. With the in-focus information processed by the DEDI method, the 3D digital model of the target can be directly and precisely formed along the axial direction with the depth information.
Directional dual-tree complex wavelet packet transforms for processing quadrature signals.
Serbes, Gorkem; Gulcur, Halil Ozcan; Aydin, Nizamettin
2016-03-01
Quadrature signals containing in-phase and quadrature-phase components are used in many signal processing applications in every field of science and engineering. Specifically, Doppler ultrasound systems used to evaluate cardiovascular disorders noninvasively also produce signals in quadrature format. In order to obtain directional blood flow information, the quadrature outputs have to be preprocessed using methods such as asymmetrical and symmetrical phasing filter techniques. The resulting directional signals can be employed to detect asymptomatic embolic signals caused by small emboli, which are indicators of a possible future stroke, in the cerebral circulation. Various transform-based methods such as Fourier and wavelet transforms have frequently been used in processing embolic signals. However, most of the time, the Fourier and discrete wavelet transforms are not appropriate for the analysis of embolic signals due to their non-stationary time-frequency behavior. Alternatively, the discrete wavelet packet transform can perform an adaptive decomposition of the time-frequency axis. In this study, directional discrete wavelet packet transforms, which have the ability to map directional information while processing quadrature signals and have less computational complexity than existing wavelet packet-based methods, are introduced. The performance of the proposed methods is examined in detail using single-frequency, synthetic narrow-band, and embolic quadrature signals.
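As a minimal, frequency-domain illustration of how quadrature signals encode flow direction (not the phasing-filter or wavelet-packet methods of the paper), the sign of the dominant spectral peaks of I + jQ separates forward from reverse components; the frequencies and amplitudes are invented:

```python
import numpy as np

# Synthetic quadrature Doppler signal: forward flow at +100 Hz and
# a weaker reverse component at -50 Hz.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
iq = np.exp(2j * np.pi * 100.0 * t) + 0.5 * np.exp(-2j * np.pi * 50.0 * t)

# Positive spectral frequencies correspond to one flow direction,
# negative frequencies to the other.
spec = np.abs(np.fft.fft(iq))
freqs = np.fft.fftfreq(len(t), 1.0 / fs)
forward_peak = freqs[np.argmax(spec * (freqs > 0))]
reverse_peak = freqs[np.argmax(spec * (freqs < 0))]
```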
Silva, Arlene S; Brandao, Geovani C; Matos, Geraldo D; Ferreira, Sergio L C
2015-11-01
The present work proposed an analytical method for the direct determination of chromium in infant formulas employing high-resolution continuum source electrothermal atomic absorption spectrometry combined with solid sample analysis (SS-HR-CS ET AAS). Sample masses up to 2.0 mg were directly weighed on a solid sampling platform and introduced into the graphite tube. In order to minimize the formation of carbonaceous residues and to improve the contact of the modifier solution with the solid sample, a volume of 10 µL of a solution containing 6% (v/v) H2O2, 20% (v/v) ethanol and 1% (v/v) HNO3 was added. The pyrolysis and atomization temperatures established were 1600 and 2400 °C, respectively, using magnesium as the chemical modifier. The calibration technique was evaluated by comparing the slopes of calibration curves established using aqueous and solid standards. This test revealed that chromium can be determined employing the external calibration technique with aqueous standards. Under these conditions, the method developed allows the direct determination of chromium with a limit of quantification of 11.5 ng g(-1), precision expressed as relative standard deviation (RSD) in the range of 4.0-17.9% (n=3) and a characteristic mass of 1.2 pg of chromium. The accuracy was confirmed by analysis of a certified reference material of tomato leaves furnished by the National Institute of Standards and Technology. The method proposed was applied for the determination of chromium in five different infant formula samples. The chromium content found varied in the range of 33.9-58.1 ng g(-1) (n=3). These samples were also analyzed employing ICP-MS. A statistical test demonstrated that there is no significant difference between the results found by the two methods. The chromium concentrations found are lower than the maximum limit permitted for chromium in foods by Brazilian legislation. Copyright © 2015. Published by Elsevier B.V.
Mazivila, Sarmento Júnior
2018-04-01
Discriminating the biodiesel feedstock present in a diesel-biodiesel blend is challenging because the spectral and digital-image profiles of the different feedstocks used in biodiesel production are very similar. Because the marketed diesel-biodiesel blend is subsidized, adulteration of the biofuel blend with cheaper, highly soluble supplies is motivated by the profits associated with those subsidies. Non-destructive analytical methods based on qualitative and quantitative analysis for detecting adulteration of marketed diesel-biodiesel blends are reviewed. The advantage of qualitative over quantitative analysis is then discussed for situations requiring immediate decisions, such as determining whether a marketed blend is adulterated, in order to help the analyst select the most appropriate green analytical procedure for rapidly detecting diesel-biodiesel blend adulteration. This critical review surveys non-destructive analytical methods reported in the scientific literature that couple first-order multivariate calibration models with spectroscopic and digital-image data to identify the type of biodiesel feedstock present in a diesel-biodiesel blend, in line with the strategies adopted by European Commission Directive 2012/0288/EC, and to monitor diesel-biodiesel adulteration. According to that Directive, from 2020 biodiesel produced from first-generation feedstocks, that is, oils used in human food such as sunflower, soybean, rapeseed and palm oil, should not be subsidized. The non-destructive analytical methods reviewed here are therefore helpful both for discriminating the biodiesel feedstock present in a diesel-biodiesel blend according to Directive 2012/0288/EC and for detecting diesel-biodiesel blend adulteration. Copyright © 2017 Elsevier B.V. All rights reserved.
Covariance analysis for evaluating head trackers
NASA Astrophysics Data System (ADS)
Kang, Donghoon
2017-10-01
Existing methods for evaluating the performance of head trackers usually rely on publicly available face databases, which contain facial images and the ground truths of their corresponding head orientations. However, most existing publicly available face databases are constructed by assuming that a frontal head orientation can be obtained by compelling the person under examination to look straight ahead at the camera on the first video frame. Since nobody can perfectly direct their head toward the camera, this assumption may be unrealistic. Rather than computing estimation errors directly, we present a method for computing the covariance of estimation error rotations to evaluate the reliability of head trackers. As an uncertainty measure of estimators, the Schatten 2-norm of a square root of the error covariance (or the algebraic average of relative error angles) can be used. The merit of the proposed method is that it does not disturb the person under examination by asking them to turn their head in specified directions. Experimental results using real data validate the usefulness of our method.
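The proposed uncertainty measure can be sketched by representing error rotations as small-angle rotation vectors. A useful identity makes the matrix square root unnecessary: for a positive semidefinite covariance C, the Schatten 2-norm (Frobenius norm) of C^(1/2) equals sqrt(tr C). The Gaussian error model below is an illustrative assumption, not the paper's exact parametrization.

```python
import numpy as np

def tracker_uncertainty(err_rotvecs):
    """Schatten 2-norm of the square root of the error-rotation covariance.
    For PSD C, ||C^(1/2)||_F = sqrt(sum of eigenvalues of C) = sqrt(tr C)."""
    E = np.asarray(err_rotvecs)              # rows: small-angle rotation vectors (rad)
    D = E - E.mean(axis=0)
    C = D.T @ D / len(E)                     # 3x3 error covariance
    return np.sqrt(np.trace(C))

# synthetic tracker errors: ~0.57 deg standard deviation per axis
rng = np.random.default_rng(0)
errs = rng.normal(scale=0.01, size=(500, 3))
print(tracker_uncertainty(errs))            # about sqrt(3) * 0.01
```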
CS_TOTR: A new vertex centrality method for directed signed networks based on status theory
NASA Astrophysics Data System (ADS)
Ma, Yue; Liu, Min; Zhang, Peng; Qi, Xingqin
Measuring the importance (or centrality) of vertices in a network is a significant topic in complex network analysis, with important applications in diverse domains such as disease control, the spread of rumors, and viral marketing. Existing studies mainly focus on social networks with only positive (or friendship) relations, while signed networks that also contain negative (or enemy) relations are seldom studied. Signed networks are common in the real world, e.g. networks indicating friendship/enmity, love/hate or trust/mistrust relationships. In this paper, we propose a new centrality method named CS_TOTR to rank vertices in directed signed networks. To design this new method, we use the "status theory" for signed networks, and also adopt the vertex ranking algorithm for a tournament and the topological sorting algorithm for a general directed graph. We apply this new centrality method to the famous Sampson Monastery dataset and obtain a convincing ranking that demonstrates its validity.
Effect decomposition in the presence of an exposure-induced mediator-outcome confounder
VanderWeele, Tyler J.; Vansteelandt, Stijn; Robins, James M.
2014-01-01
Methods from causal mediation analysis have generalized the traditional approach to direct and indirect effects in the epidemiologic and social science literature by allowing for interaction and non-linearities. However, the methods from the causal inference literature have themselves been subject to a major limitation: the so-called natural direct and indirect effects that are employed are not identified from data whenever there is a variable that is affected by the exposure and that also confounds the relationship between the mediator and the outcome. In this paper we describe three alternative approaches to effect decomposition that give quantities interpretable as direct and indirect effects, and that can be identified from data even in the presence of an exposure-induced mediator-outcome confounder. We describe a simple weighting-based estimation method for each of these three approaches, illustrated with data from perinatal epidemiology. The methods described here can shed light on pathways and questions of mediation even when an exposure-induced mediator-outcome confounder is present. PMID:24487213
NASA Technical Reports Server (NTRS)
Hou, Gene
2004-01-01
The focus of this research is on the development of analysis and sensitivity analysis equations for nonlinear, transient heat transfer problems modeled by a p-version, time-discontinuous finite element approximation. The resulting matrix equation of the state equation is simply of the form A(x)x = c, representing a single-step, time-marching scheme. The Newton-Raphson method is used to solve the nonlinear equation. Examples are first provided to demonstrate the accuracy characteristics of the resultant finite element approximation. A direct differentiation approach is then used to compute the thermal sensitivities of a nonlinear heat transfer problem. The report shows that only minimal coding effort is required to enhance the analysis code with the sensitivity analysis capability.
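The single-step scheme A(x)x = c solved by Newton-Raphson can be illustrated with a scalar sketch. The coefficient a(x) = 1 + x^2 below is an invented stand-in for the temperature-dependent system matrix, not the report's actual model.

```python
def solve_newton(c, x0=1.0, tol=1e-12, max_iter=50):
    """Solve a(x)*x = c with a(x) = 1 + x**2, a scalar stand-in for the
    temperature-dependent system matrix A(x), by Newton-Raphson."""
    x = x0
    for _ in range(max_iter):
        r = x + x**3 - c           # residual a(x)*x - c
        dr = 1.0 + 3.0 * x**2      # scalar Jacobian dr/dx
        dx = -r / dr
        x += dx
        if abs(dx) < tol:
            break
    return x

# a(x)*x = 10 has the exact root x = 2, since 2 + 2**3 = 10
print(solve_newton(10.0))
```

Newton's quadratic convergence means only a handful of iterations are needed per time step, which is what makes the time-marching scheme practical.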
Particle Streak Anemometry: A New Method for Proximal Flow Sensing from Aircraft
NASA Astrophysics Data System (ADS)
Nichols, T. W.
Accurate sensing of relative air flow direction from fixed-wing small unmanned aircraft (sUAS) is challenging with existing multi-hole pitot-static and vane systems. Sub-degree direction accuracy is generally not available on such systems, and disturbances to the local flow field, induced by the airframe, introduce an additional error source. An optical imaging approach to make a relative air velocity measurement with high directional accuracy is presented. Optical methods offer the capability to make a proximal measurement in undisturbed air outside of the local flow field without the need to place sensors on vulnerable probes extended ahead of the aircraft. Current imaging flow analysis techniques for laboratory use rely on relatively thin imaged volumes, sophisticated hardware, and intensity thresholding in low-background conditions. A new method is derived and assessed using a particle streak imaging technique that can be implemented with low-cost commercial cameras and illumination systems, and can function in imaged volumes of arbitrary depth with complex background signal. The new technique, referred to as particle streak anemometry (PSA) (to differentiate it from particle streak velocimetry, which makes a field measurement rather than a single bulk flow measurement), utilizes a modified Canny edge detection algorithm with connected component analysis and principal component analysis to detect streak ends in complex imaging conditions. A linear solution for the air velocity direction is then implemented with a random sample consensus (RANSAC) solution approach. A single-DOF non-linear, non-convex optimization problem is then solved for the air speed through an iterative approach. The technique was tested through simulation and wind tunnel tests, yielding angular accuracies under 0.2 degrees, superior to the performance of existing commercial systems. Air speed error standard deviations varied from 1.6 to 2.2 m/s depending on the implementation technique. While air speed sensing is secondary to accurate flow direction measurement, the air speed results were in line with commercial pitot-static systems at low speeds.
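The RANSAC vote for the flow direction can be sketched as follows. The 2-degree inlier tolerance, iteration count and synthetic streak vectors are illustrative assumptions, not the paper's parameters, and the sketch ignores angle wrap-around near ±180 degrees.

```python
import math
import random

def ransac_direction(vectors, angle_tol=math.radians(2.0), iters=200, seed=0):
    """Estimate the dominant 2D flow direction from streak end-point
    vectors (dx, dy), tolerating outlier streaks, via a RANSAC vote.
    No angle-wrap handling: fine for this demo, not for production."""
    rng = random.Random(seed)
    angles = [math.atan2(dy, dx) for dx, dy in vectors]
    best_inliers = []
    for _ in range(iters):
        a0 = rng.choice(angles)                       # hypothesize one streak's angle
        inliers = [a for a in angles if abs(a - a0) < angle_tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return sum(best_inliers) / len(best_inliers)      # refine on the consensus set

# 8 streaks along ~30 degrees plus two gross outliers (e.g. background edges)
true = math.radians(30.0)
vecs = [(math.cos(true + e), math.sin(true + e))
        for e in (-0.01, -0.005, 0.0, 0.004, 0.008, -0.002, 0.006, 0.001)]
vecs += [(1.0, -1.0), (0.0, 1.0)]
est = ransac_direction(vecs)
print(math.degrees(est))
```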
Kusano, Miyako; Iizuka, Yumiko; Kobayashi, Makoto; Fukushima, Atsushi; Saito, Kazuki
2013-01-01
Plants produce various volatile organic compounds (VOCs), which are thought to be a crucial factor in their interactions with harmful insects, plants and animals. The composition of VOCs may differ when plants are grown under different nutrient conditions, e.g. macronutrient-deficient conditions. However, the relationships between macronutrient assimilation and VOC composition in plants remain unclear. In order to identify the kinds of VOCs that can be emitted when plants are grown under various environmental conditions, we established a convenient method for VOC profiling in Arabidopsis thaliana (Arabidopsis) involving headspace solid-phase microextraction gas chromatography time-of-flight mass spectrometry (HS-SPME-GC-TOF-MS). We grew Arabidopsis seedlings in an HS vial in order to perform HS analysis directly. To maximize the analytical performance for VOCs, we optimized the extraction method and the analytical conditions of HS-SPME-GC-TOF-MS. Using the optimized method, we conducted VOC profiling of Arabidopsis seedlings grown under two different nutrient conditions: nutrient-rich and nutrient-deficient. The VOC profiles clearly showed a distinct pattern with respect to each condition. This study suggests that HS-SPME-GC-TOF-MS analysis has immense potential to detect changes in the levels of VOCs not only in Arabidopsis, but in other plants grown under various environmental conditions. PMID:24957989
Particle sizing of pharmaceutical aerosols via direct imaging of particle settling velocities.
Fishler, Rami; Verhoeven, Frank; de Kruijf, Wilbur; Sznitman, Josué
2018-02-15
We present a novel method for characterizing in near real-time the aerodynamic particle size distributions from pharmaceutical inhalers. The proposed method is based on direct imaging of airborne particles followed by a particle-by-particle measurement of settling velocities using image analysis and particle tracking algorithms. Due to the simplicity of its principle of operation, this method has the potential to circumvent biases of current real-time particle analyzers (e.g. Time-of-Flight analysis), while offering a cost-effective solution. The simple device can also be constructed in laboratory settings from off-the-shelf materials for research purposes. To demonstrate the feasibility and robustness of the measurement technique, we conducted benchmark experiments in which aerodynamic particle size distributions were obtained from several commercially available dry powder inhalers (DPIs). Our measurements yield size distributions (i.e. MMAD and GSD) closely in line with those obtained from Time-of-Flight analysis and cascade impactors, suggesting that our imaging-based method may embody an attractive methodology for rapid inhaler testing and characterization. In a final step, we discuss some of the ongoing limitations of the current prototype and conceivable routes for improving the technique. Copyright © 2017 Elsevier B.V. All rights reserved.
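The settling-velocity-to-size inversion underlying such a device can be sketched with Stokes drag. This is a minimal sketch assuming the low-Reynolds regime and neglecting the Cunningham slip correction; the viscosity value and the 3 µm test size are illustrative.

```python
import math

RHO0 = 1000.0      # unit density convention for aerodynamic diameter, kg/m^3
G = 9.81           # gravitational acceleration, m/s^2
MU = 1.81e-5       # dynamic viscosity of air near 20 C, Pa*s

def settling_velocity(d_aero):
    """Stokes terminal settling velocity (m/s) for aerodynamic diameter
    d_aero (m): v = rho0 * d^2 * g / (18 * mu)."""
    return RHO0 * d_aero**2 * G / (18.0 * MU)

def aerodynamic_diameter(v_settle):
    """Invert Stokes' law to recover d_aero (m) from a measured settling
    velocity, as an imaging-based sizer would do streak by streak."""
    return math.sqrt(18.0 * MU * v_settle / (RHO0 * G))

v = settling_velocity(3.0e-6)              # a 3 micron particle
print(aerodynamic_diameter(v) * 1e6)       # recovers 3.0 (microns)
```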
Rainville, Paul D; Simeone, Jennifer L; Root, Dan S; Mallet, Claude R; Wilson, Ian D; Plumb, Robert S
2015-03-21
The emergence of micro sampling techniques holds great potential to improve pharmacokinetic data quality, reduce animal usage, and save costs in safety assessment studies. The analysis of these samples presents new challenges for bioanalytical scientists, both in terms of sample processing and analytical sensitivity. The use of two-dimensional LC/MS with at-column dilution for the direct analysis of highly organic extracts prepared from biological fluids such as dried blood spots and plasma is demonstrated. This technique negated the need to dry down and reconstitute, or to dilute samples with water/aqueous buffer solutions, prior to injection onto a reversed-phase LC system. A mixture of model drugs, including bromhexine, triprolidine, enrofloxacin, and procaine, was used to test the feasibility of the method. Finally, an LC/MS assay for the probe pharmaceutical rosuvastatin was developed from dried blood spots and protein-precipitated plasma. The assays showed acceptable recovery, accuracy and precision according to US FDA guidelines. The resulting analytical method showed an increase in assay sensitivity of up to forty-fold compared to conventional methods by maximizing the amount loaded onto the system and the MS response for rosuvastatin from small-volume samples.
Multivariate Analysis of Longitudinal Rates of Change
Bryan, Matthew; Heagerty, Patrick J.
2016-01-01
Longitudinal data allow direct comparison of the change in patient outcomes associated with treatment or exposure. Frequently, several longitudinal measures are collected that either reflect a common underlying health status, or characterize processes that are influenced in a similar way by covariates such as exposure or demographic characteristics. Statistical methods that can combine multivariate response variables into common measures of covariate effects have been proposed by Roy and Lin [1]; Proust-Lima, Letenneur and Jacqmin-Gadda [2]; and Gray and Brookmeyer [3] among others. Current methods for characterizing the relationship between covariates and the rate of change in multivariate outcomes are limited to select models. For example, Gray and Brookmeyer [3] introduce an “accelerated time” method which assumes that covariates rescale time in longitudinal models for disease progression. In this manuscript we detail an alternative multivariate model formulation that directly structures longitudinal rates of change, and that permits a common covariate effect across multiple outcomes. We detail maximum likelihood estimation for a multivariate longitudinal mixed model. We show via asymptotic calculations the potential gain in power that may be achieved with a common analysis of multiple outcomes. We apply the proposed methods to the analysis of a trivariate outcome for infant growth and compare rates of change for HIV infected and uninfected infants. PMID:27417129
ERIC Educational Resources Information Center
LaSota, Robin Rae
2013-01-01
My dissertation utilizes an explanatory, sequential mixed-methods research design to assess factors influencing community college students' transfer probability to baccalaureate-granting institutions and to present promising practices in colleges and states directed at improving upward transfer, particularly for low-income and first-generation…
Transient analysis of an adaptive system for optimization of design parameters
NASA Technical Reports Server (NTRS)
Bayard, D. S.
1992-01-01
Averaging methods are applied to analyzing and optimizing the transient response associated with the direct adaptive control of an oscillatory second-order minimum-phase system. The analytical design methods developed for a second-order plant can be applied with some approximation to a MIMO flexible structure having a single dominant mode.
Wang, Shunhai; Liu, Anita P; Yan, Yuetian; Daly, Thomas J; Li, Ning
2018-05-30
The traditional SDS-PAGE method and its modern equivalent, the CE-SDS method, are both widely applied to assess the purity of therapeutic monoclonal antibody (mAb) drug products. However, structural identification of low molecular weight (LMW) impurities using those methods has been challenging and largely based on empirical knowledge. In this paper, we show that hydrophilic interaction chromatography (HILIC) coupled with mass spectrometry is a novel and orthogonal method for characterizing such LMW impurities present in a purified mAb drug product sample. We show here that after removal of N-linked glycans, the HILIC method separates mAb-related LMW impurities with a size-based elution order. The subsequent mass measurement from a high-resolution accurate mass spectrometer provides direct and unambiguous identification of a variety of low-abundance LMW impurities within a single LC-MS analysis. Free light chain, half antibody, H2L species (antibody possessing a single light chain) and protein backbone-truncated species can all be confidently identified and elucidated in great detail, including the truncation sites and associated post-translational modifications. It is worth noting that this study provides the first example in which H2L species are directly detected in a mAb drug product sample by intact mass analysis without prior enrichment. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Directional spatial frequency analysis of lipid distribution in atherosclerotic plaque
NASA Astrophysics Data System (ADS)
Korn, Clyde; Reese, Eric; Shi, Lingyan; Alfano, Robert; Russell, Stewart
2016-04-01
Atherosclerosis is characterized by the growth of fibrous plaques due to the retention of cholesterol and lipids within the artery wall, which can lead to vessel occlusion and cardiac events. One way to evaluate arterial disease is to quantify the amount of lipid present in these plaques, since a higher disease burden is characterized by a higher concentration of lipid. Although therapeutic stimulation of reverse cholesterol transport to reduce cholesterol deposits in plaque has not produced significant results, this may be due to current image analysis methods, which use averaging techniques to calculate the total amount of lipid in the plaque without regard to spatial distribution, thereby discarding information that may have significance in marking response to therapy. Here we use Directional Fourier Spatial Frequency (DFSF) analysis to generate a characteristic spatial frequency spectrum for atherosclerotic plaques from C57 Black 6 mice, both treated and untreated with a cholesterol-scavenging nanoparticle. We then use the Cauchy product of these spectra to classify the images with a support vector machine (SVM). Our results indicate that treated plaque can be distinguished from untreated plaque using this method, where no difference is seen using the spatial averaging method. This work has the potential to increase the effectiveness of current in vivo methods of plaque detection that also use averaging methods, such as laser speckle imaging and Raman spectroscopy.
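A simplified stand-in for a directional spatial-frequency signature, without the paper's Cauchy-product and SVM steps, is to integrate 2D FFT power over angular sectors. The bin count and the synthetic striped test image below are illustrative assumptions.

```python
import numpy as np

def directional_spectrum(img, n_bins=18):
    """Sum 2D FFT power over angular sectors of the frequency plane,
    yielding a directional spatial-frequency signature of the image."""
    F = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(F) ** 2
    h, w = img.shape
    y, x = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    theta = np.mod(np.arctan2(y, x), np.pi)        # fold directions to [0, pi)
    power[h // 2, w // 2] = 0.0                    # drop the DC term
    bins = (theta / np.pi * n_bins).astype(int) % n_bins
    return np.bincount(bins.ravel(), weights=power.ravel(), minlength=n_bins)

# vertical stripes vary along x, so their energy lands in the 0-radian sector
img = np.tile(np.sin(2 * np.pi * np.arange(64) / 8.0), (64, 1))
spec = directional_spectrum(img)
print(int(np.argmax(spec)))
```

A spatially averaged lipid measure would be identical for any rotation of the image; this sector histogram is exactly the information such averaging discards.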
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1996-01-01
An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when the equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions the methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
NASA Astrophysics Data System (ADS)
Shi, W.; Liu, Y.; Shi, X.
2017-12-01
Critical transitions of farming-pastoral ecotone (FPE) boundaries can be affected by climate change and human activities, yet current studies have not adequately analyzed the spatially explicit contributions of climate change to FPE boundary shifts, particularly those in different regions and periods. In this study, we present a series of analyses at the point (gravity center analysis), line (boundary shifts detected using two methods) and area (spatial analysis) levels to quantify climate contributions at the 1 km scale in each ecological functional region during three study periods from the 1970s to the 2000s using climate and land use data. Both gravity center analysis and boundary shift detection reveal similar spatial patterns, with more extensive boundary shifts in the northeastern and southeastern parts of the FPE in northern China, especially during the 1970s-1980s and 1990s-2000s. Climate contributions in the X- and Y-coordinate directions and in the directions of transects along boundaries show that significant differences in climate contributions to FPE boundary shifts exist among ecological regions during the three periods. Additionally, the results in different directions exhibit good agreement in most of the ecological functional regions during most of the periods. However, the contribution values in the directions of transects along the boundaries (1-17%) were always smaller than those in the X- and Y-coordinate directions (4-56%), which suggests that the analysis in the transect directions is more stable and reasonable. Thus, this approach provides an alternative method for detecting the climate contributions to boundary shifts associated with land use changes. Spatial analysis of the relationship between climate change and land use change in the context of FPE boundary shifts in northern China provides further evidence and explanation of the driving forces of climate change.
Our findings suggest that an improved understanding of the quantitative contributions of climate change to the formation and transition of the FPE in northern China is essential for addressing current and future adaptation and mitigation measures and regional land use management.
NASA Astrophysics Data System (ADS)
Hui, Wei-Hua; Bao, Fu-Ting; Wei, Xiang-Geng; Liu, Yang
2015-12-01
In this paper, a new method for measuring ablation rate is proposed based on X-ray three-dimensional (3D) reconstruction. The ablation of 4-direction carbon/carbon composite nozzles was investigated in the combustion environment of a solid rocket motor, and the macroscopic ablation and linear recession rate were studied through the X-ray 3D reconstruction method. The results showed that the maximum relative error of the X-ray 3D reconstruction was 0.0576%, which satisfied the accuracy requirement of the ablation analysis. Along the nozzle axial direction, from the convergence segment through the throat to the expansion segment, ablation gradually weakened; in terms of defect ablation, ablation in the middle was weak, while ablation on both sides was more severe. In summary, the proposed X-ray-based reconstruction method for C/C nozzle ablation can construct a clear model of the ablated nozzle that captures details of micro-cracks, deposition, pores and surfaces, and can produce the ablation curve of any surface.
Chericoni, Silvio; Stefanelli, Fabio; Da Valle, Ylenia; Giusiani, Mario
2015-09-01
A sensitive and reliable method for the extraction and quantification of benzoylecgonine (BZE) and cocaine (COC) in urine is presented. Propyl chloroformate was used as the derivatizing agent and was added directly to the urine sample; the propyl derivative and COC were then recovered by a liquid-liquid extraction procedure. Gas chromatography-mass spectrometry in selected ion monitoring mode was used to detect the analytes. The method proved to be precise for BZE and COC in terms of both intraday and interday analysis, with a coefficient of variation (CV) < 6%. Limits of detection (LOD) were 2.7 ng/mL for BZE and 1.4 ng/mL for COC. The calibration curve showed a linear relationship for BZE and COC (r2 > 0.999 and > 0.997, respectively) within the range investigated. The method, applied to thirty authentic samples, proved to be very simple, fast and reliable, so it can easily be applied in routine analysis for the quantification of BZE and COC in urine samples. © 2015 American Academy of Forensic Sciences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Haitao, E-mail: liaoht@cae.ac.cn
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which depends on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers leads to better performance with respect to convergence and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when using the direct differentiation sensitivity analysis method.
Direct system parameter identification of mechanical structures with application to modal analysis
NASA Technical Reports Server (NTRS)
Leuridan, J. M.; Brown, D. L.; Allemang, R. J.
1982-01-01
In this paper a method is described to estimate mechanical structure characteristics in terms of mass, stiffness and damping matrices using measured force input and response data. The estimated matrices can be used to calculate a consistent set of damped natural frequencies and damping values, mode shapes and modal scale factors for the structure. The proposed technique is attractive as an experimental modal analysis method since the estimation of the matrices does not require previous estimation of frequency responses and since the method can be used, without any additional complications, for multiple force input structure testing.
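The core idea, estimating mass, damping and stiffness directly from measured force and response histories, can be sketched for a single degree of freedom by least squares on m·a + c·v + k·x = f. The synthetic noise-free motion below is an assumption for the demo; the paper's method works with matrices and multiple force inputs.

```python
import numpy as np

def identify_mck(acc, vel, disp, force):
    """Fit mass, damping and stiffness by least squares on the
    equation of motion m*a + c*v + k*x = f (single-DOF analogue
    of direct system parameter identification)."""
    A = np.column_stack([acc, vel, disp])
    coeffs, *_ = np.linalg.lstsq(A, force, rcond=None)
    return coeffs                      # (m, c, k)

# synthesize the response of a known system m=2, c=0.5, k=80
m, c, k = 2.0, 0.5, 80.0
t = np.linspace(0.0, 5.0, 2000)
x = np.sin(3.0 * t) + 0.3 * np.cos(7.0 * t)            # assumed motion
v = 3.0 * np.cos(3.0 * t) - 2.1 * np.sin(7.0 * t)      # its derivative
a = -9.0 * np.sin(3.0 * t) - 14.7 * np.cos(7.0 * t)    # second derivative
f = m * a + c * v + k * x                              # consistent force
print(np.round(identify_mck(a, v, x, f), 6))
```

With the fitted m, c, k in hand, natural frequency and damping ratio follow directly (e.g. omega_n = sqrt(k/m)), which is the route to the consistent modal parameters the abstract mentions.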
Sample preparation of metal alloys by electric discharge machining
NASA Technical Reports Server (NTRS)
Chapman, G. B., II; Gordon, W. A.
1976-01-01
Electric discharge machining was investigated as a noncontaminating method of comminuting alloys for subsequent chemical analysis. Particulate dispersions in water were produced from bulk alloys at a rate of about 5 mg/min by using a commercially available machining instrument. The utility of this approach was demonstrated by results obtained when acidified dispersions were substituted for true acid solutions in an established spectrochemical method. The analysis results were not significantly different for the two sample forms. Particle size measurements and preliminary results from other spectrochemical methods which require direct aspiration of liquid into flame or plasma sources are reported.
Gagnon, Marie-Pierre; Gagnon, Johanne; Desmartis, Marie; Njoya, Merlin
2013-01-01
This study aimed to assess the effectiveness of a blended-teaching intervention using Internet-based tutorials coupled with traditional lectures in an introduction to research undergraduate nursing course. Effects of the intervention were compared with conventional, face-to-face classroom teaching on three outcomes: knowledge, satisfaction, and self-learning readiness. A two-group, randomized, controlled design was used, involving 112 participants. Descriptive statistics and analysis of covariance (ANCOVA) were performed. The teaching method was found to have no direct impact on knowledge acquisition, satisfaction, and self-learning readiness. However, motivation and teaching method had an interaction effect on knowledge acquisition by students. Among less motivated students, those in the intervention group performed better than those who received traditional training. These findings suggest that this blended-teaching method could better suit some students, depending on their degree of motivation and level of self-directed learning readiness.
[Spatial distribution pattern of Chilo suppressalis analyzed by classical method and geostatistics].
Yuan, Zheming; Fu, Wei; Li, Fangyi
2004-04-01
Two original samples of Chilo suppressalis and their grid, random and sequence subsamples were analyzed by the classical method and by geostatistics to characterize the spatial distribution pattern of C. suppressalis. The limitations of spatial distribution analysis with the classical method, especially its sensitivity to the original position of the grid, were summarized comprehensively. In contrast, geostatistics characterized well the spatial distribution pattern, aggregation intensity and spatial heterogeneity of C. suppressalis. According to geostatistics, the population followed a Poisson distribution at low density. At higher density, the distribution was aggregative, with an aggregation intensity of 0.1056 and a dependence range of 193 cm. Spatial heterogeneity was also found in the higher-density population: its spatial correlation in the line direction was stronger than that in the row direction, and the dependence ranges in the line and row directions were 115 and 264 cm, respectively.
A variation of the Davis-Smith method for in-flight determination of spacecraft magnetic fields.
NASA Technical Reports Server (NTRS)
Belcher, J. W.
1973-01-01
A variation of a procedure developed by Davis and Smith (1968) is presented for the in-flight determination of spacecraft magnetic fields. Both methods take statistical advantage of the observation that fluctuations in the interplanetary magnetic field over short periods of time are primarily changes in direction rather than in magnitude. During typical solar wind conditions between 0.8 and 1.0 AU, a statistical analysis of 2-3 days of continuous interplanetary field measurements yields an estimate of a constant spacecraft field with an uncertainty of plus or minus 0.25 gamma in the direction radial to the sun and plus or minus 15 gammas in the directions transverse to the radial. The method is also useful for estimating variable spacecraft fields with gradients of the order of 0.1 gamma/day or less, and in other special circumstances.
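The constant-magnitude assumption can be turned into a linear estimation problem: if the measured field is M = B + O with |B| roughly constant, then |M|^2 = 2 M·O + const, so the offset O follows from a linear regression of |M|^2 on the components of M. This is a simplified sketch of the Davis-Smith idea, not the specific variation of the paper; the synthetic data and numbers are illustrative assumptions:

```python
import numpy as np

def davis_smith_offset(M):
    """Estimate a constant spacecraft field offset O from measured field
    vectors M (N x 3), assuming the ambient field changes direction but
    not magnitude. Expanding |M - O|^2 = const gives
    |M|^2 = 2 M . O + const, i.e. a linear least-squares fit of |M|^2
    against the three components of M plus an intercept."""
    y = np.sum(M * M, axis=1)
    A = np.column_stack([2.0 * M, np.ones(len(M))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:3]  # the fitted offset vector

# Synthetic check: constant-|B| field (5 gammas, varying direction)
# plus a known constant offset.
rng = np.random.default_rng(1)
dirs = rng.normal(size=(2000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
true_offset = np.array([0.8, -1.5, 2.0])
M = 5.0 * dirs + true_offset
est = davis_smith_offset(M)  # recovers true_offset
```

With noise-free, perfectly constant |B| the regression is exact; with real data the residual scatter sets the quoted uncertainties.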
Direct bonded HOPG - Analyzer support without background source
NASA Astrophysics Data System (ADS)
Groitl, Felix; Kitaura, Hidetoshi; Nishiki, Naomi; Rønnow, Henrik M.
2018-04-01
A new production process allows direct bonding of HOPG crystals onto Si wafers. This method makes it possible to produce analyzer crystals with a support structure without additional, background-inducing fixation material, e.g. glue, wax, or screws. It is especially interesting for the upcoming generation of CAMEA-type multiplexing spectrometers, which offer a drastic performance increase due to their increased angular coverage and multiple-energy analysis. Exploiting the transparency of HOPG for cold neutrons, a consecutive arrangement of HOPG analyzer crystals per Q-channel can be achieved. This implies that neutrons travel through up to 10 arrays of analyzer crystals before reaching the analyzer corresponding to their energy. Hence, the fixation method for the analyzer crystals must be chosen carefully with regard to transparency and background. Here, we present first results on the diffraction and mechanical performance of directly bonded analyzer crystals.
Guo, Xiangyu; Bai, Hua; Lv, Yueguang; Xi, Guangcheng; Li, Junfang; Ma, Xiaoxiao; Ren, Yue; Ouyang, Zheng; Ma, Qiang
2018-04-01
Rapid, on-site analysis was achieved through significantly simplified operating procedures for a wide variety of toy samples (crayon, temporary tattoo sticker, finger paint, modeling clay, and bubble solution) using a miniature mass spectrometry system with ambient ionization capability. The labor-intensive analytical protocols involving sample workup and chemical separation, traditionally required for MS-based analysis, were replaced by direct sampling analysis using ambient ionization methods. A Mini β ion trap miniature mass spectrometer was coupled with versatile ambient ionization methods, e.g. paper spray, extraction spray, and slug-flow microextraction nanoESI, for direct identification of prohibited colorants, carcinogenic primary aromatic amines, allergenic fragrances, preservatives, and plasticizers from raw toy samples. The use of paper substrates coated with Co3O4 nanoparticles greatly increased the sensitivity of paper spray. Limits of detection as low as 5 μg/kg were obtained for target analytes. The methods developed, based on the integration of ambient ionization with a miniature mass spectrometer, represent alternatives to current in-lab MS analysis and would enable fast, outside-the-lab screening of toy products to ensure children's safety and health. Copyright © 2017 Elsevier B.V. All rights reserved.
Fast determination of the current loss mechanisms in textured crystalline Si-based solar cells
NASA Astrophysics Data System (ADS)
Nakane, Akihiro; Fujimoto, Shohei; Fujiwara, Hiroyuki
2017-11-01
A general device analysis method that allows the direct evaluation of optical and recombination losses in crystalline silicon (c-Si)-based solar cells has been developed. By applying this technique, the current loss mechanisms of state-of-the-art solar cells with ~20% efficiencies have been revealed. In the established method, the optical and electrical losses are characterized from the analysis of an experimental external quantum efficiency (EQE) spectrum at very low computational cost. In particular, we have performed EQE analyses of textured c-Si solar cells by employing the experimental reflectance spectra obtained directly from the actual devices while using flat optical models without any fitting parameters. We find that the developed method provides almost perfect fits to EQE spectra reported for various textured c-Si solar cells, including c-Si heterojunction solar cells, a dopant-free c-Si solar cell with a MoOx layer, and an n-type passivated emitter and rear locally diffused (PERL) solar cell. The modeling of the recombination loss further allows the extraction of the minority carrier diffusion length and the surface recombination velocity from the EQE analysis. Based on the EQE analysis results, the current loss mechanisms in different types of c-Si solar cells are discussed.
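The link between an EQE spectrum and a current loss is the standard integral J = q ∫ EQE(λ) φ(λ) dλ over the incident photon flux φ; the loss is the gap between that integral and the one for a perfect EQE of unity. The sketch below illustrates only this bookkeeping step, not the authors' full optical/recombination model, and the flat photon flux and step-shaped EQE are purely illustrative assumptions:

```python
import numpy as np

def jsc_from_eqe(wavelength_nm, eqe, photon_flux):
    """Short-circuit current density from an EQE spectrum:
    J_sc = q * integral(EQE * photon_flux dlambda), trapezoidal rule.
    photon_flux in photons m^-2 s^-1 nm^-1; result in mA cm^-2."""
    q = 1.602176634e-19  # elementary charge, C
    f = eqe * photon_flux
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(wavelength_nm))
    return q * integral * 0.1  # A m^-2 -> mA cm^-2

wl = np.linspace(300.0, 1200.0, 901)
flux = np.full_like(wl, 3e18)  # flat synthetic flux, illustrative only
eqe = np.where((wl > 400.0) & (wl < 1100.0), 0.95, 0.0)

j_real = jsc_from_eqe(wl, eqe, flux)                # measured-cell current
j_ideal = jsc_from_eqe(wl, np.ones_like(wl), flux)  # lossless reference
loss = j_ideal - j_real                             # total current loss
```

Decomposing `loss` among reflection, parasitic absorption, and recombination channels is then a matter of replacing the ideal reference with partial models, which is where the paper's EQE fitting comes in.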
Krishnan, Prakash; Tarricone, Arthur; K-Raman, Purushothaman; Majeed, Farhan; Kapur, Vishal; Gujja, Karthik; Wiley, Jose; Vasquez, Miguel; Lascano, Rheoneil A.; Quiles, Katherine G.; Distin, Tashanne; Fontenelle, Ran; Atallah-Lajam, Farah; Kini, Annapoorna; Sharma, Samin
2017-01-01
Background: The aim of this study was to compare 1-year outcomes for patients with femoropopliteal in-stent restenosis using directional atherectomy guided by intravascular ultrasound (IVUS) versus directional atherectomy guided by angiography. Methods and results: This was a retrospective analysis for patients with femoropopliteal in-stent restenosis treated with IVUS-guided directional atherectomy versus directional atherectomy guided by angiography from a single center between March 2012 and February 2016. Clinically driven target lesion revascularization was the primary endpoint and was evaluated through medical chart review as well as phone call follow up. Conclusions: Directional atherectomy guided by IVUS reduces clinically driven target lesion revascularization for patients with femoropopliteal in-stent restenosis. PMID:29265002
Craig, J C; Waller, C W; Billets, S; Elsohly, M A
1978-04-01
Methods are presented for the direct GLC analysis of the catechol C15 alkenyl side-chain congeners contained in the urushiol fraction of poison ivy (Toxicodendron radicans) and the C17 homologs of poison oak (Toxicodendron diversilobum). A number of liquid phases were investigated and demonstrated varying degrees of separation. The methods developed were applied to the analysis of the urushiol fractions obtained from different plant parts of poison ivy. The effects of extraction before and after drying demonstrated that a larger percentage of urushiol was obtained when the fresh plant material was extracted with ethanol.
Sociocultural Meanings of Nanotechnology: Research Methodologies
NASA Astrophysics Data System (ADS)
Bainbridge, William Sims
2004-06-01
This article identifies six social-science research methodologies that will be useful for charting the sociocultural meaning of nanotechnology: web-based questionnaires, vignette experiments, analysis of web linkages, recommender systems, quantitative content analysis, and qualitative textual analysis. Data from a range of sources are used to illustrate how the methods can delineate the intellectual content and institutional structure of the emerging nanotechnology culture. Such methods will make it possible in future to test hypotheses such as that there are two competing definitions of nanotechnology - the technical-scientific and the science-fiction - that are influencing public perceptions by different routes and in different directions.
Method and Apparatus for Concentrating Vapors for Analysis
Grate, Jay W.; Baldwin, David L.; Anheier, Jr., Norman C.
2008-10-07
An apparatus and method are disclosed for pre-concentrating gaseous vapors for analysis. The invention finds application in conjunction with, e.g., analytical instruments where low detection limits for gaseous vapors are desirable. Vapors sorbed and concentrated within the bed of the apparatus can be thermally desorbed achieving at least partial separation of vapor mixtures. The apparatus is suitable, e.g., for preconcentration and sample injection, and provides greater resolution of peaks for vapors within vapor mixtures, yielding detection levels that are 10-10,000 times better than for direct sampling and analysis systems. Features are particularly useful for continuous unattended monitoring applications.
MEG-SIM: a web portal for testing MEG analysis methods using realistic simulated and empirical data.
Aine, C J; Sanfratello, L; Ranken, D; Best, E; MacArthur, J A; Wallace, T; Gilliam, K; Donahue, C H; Montaño, R; Bryant, J E; Scott, A; Stephen, J M
2012-04-01
MEG and EEG measure electrophysiological activity in the brain with exquisite temporal resolution. Because of this unique strength relative to noninvasive hemodynamic-based measures (fMRI, PET), the complementary nature of hemodynamic and electrophysiological techniques is becoming more widely recognized (e.g., Human Connectome Project). However, the available analysis methods for solving the inverse problem for MEG and EEG have not been compared and standardized to the extent that they have for fMRI/PET. A number of factors, including the non-uniqueness of the solution to the inverse problem for MEG/EEG, have led to multiple analysis techniques which have not been tested on consistent datasets, making direct comparisons of techniques challenging (or impossible). Since each of the methods is known to have their own set of strengths and weaknesses, it would be beneficial to quantify them. Toward this end, we are announcing the establishment of a website containing an extensive series of realistic simulated data for testing purposes (http://cobre.mrn.org/megsim/). Here, we present: 1) a brief overview of the basic types of inverse procedures; 2) the rationale and description of the testbed created; and 3) cases emphasizing functional connectivity (e.g., oscillatory activity) suitable for a wide assortment of analyses including independent component analysis (ICA), Granger Causality/Directed transfer function, and single-trial analysis. PMID:22068921
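The non-uniqueness mentioned above arises because far fewer sensors than candidate sources exist, so every linear inverse procedure must regularize. A minimal sketch of one of the basic linear procedures, the Tikhonov-regularized minimum-norm estimate, is given below; the gain matrix, dimensions, and regularization value are hypothetical placeholders, not part of the MEG-SIM testbed:

```python
import numpy as np

def minimum_norm_estimate(G, y, lam):
    """Minimum-norm inverse solution for y = G x:
    x_hat = G^T (G G^T + lam I)^-1 y.
    G is the (sensors x sources) gain/lead-field matrix; lam > 0
    regularizes the underdetermined system."""
    n_sensors = G.shape[0]
    return G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_sensors), y)

# Hypothetical setup: 64 sensors, 500 candidate sources, one active source.
rng = np.random.default_rng(2)
G = rng.normal(size=(64, 500))
x_true = np.zeros(500)
x_true[42] = 1.0            # a single active source
y = G @ x_true              # noiseless sensor measurements
x_hat = minimum_norm_estimate(G, y, lam=1e-3)
```

Even noiselessly, `x_hat` is a blurred version of `x_true` (the estimate lives in the row space of G), which is exactly the kind of method-dependent distortion the simulated testbed lets one quantify.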
Lattice Boltzmann methods for global linear instability analysis
NASA Astrophysics Data System (ADS)
Pérez, José Miguel; Aguilar, Alfonso; Theofilis, Vassilis
2017-12-01
Modal global linear instability analysis is performed using, for the first time, the lattice Boltzmann method (LBM) to analyze incompressible flows with two and three inhomogeneous spatial directions. Four linearization models have been implemented in order to recover the linearized Navier-Stokes equations in the incompressible limit. Two of those models employ the single relaxation time (SRT) and have been proposed previously in the literature as linearizations of the collision operator of the lattice Boltzmann equation. Two additional models are derived herein for the first time by linearizing the local equilibrium probability distribution function. Instability analysis results are obtained for three benchmark problems, two in closed geometries and one in open flow, namely the square and cubic lid-driven cavity flows and flow in the wake of a circular cylinder. Comparisons with results delivered by classic spectral element methods verify the accuracy of the proposed methodologies and point to potential limitations particular to the LBM approach. The known appearance of numerical instabilities when the SRT model is used in direct numerical simulations employing the LBM is shown to be reflected in a spurious global eigenmode of the SRT-based instability analysis. Although this mode is absent in the multiple relaxation times model, other spurious instabilities can also arise and are documented herein. Areas of potential improvement toward making the proposed methodology competitive with established approaches for global instability analysis are discussed.
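Modal global analysis ultimately reduces to extracting leading eigenvalues of the linearized operator, usually matrix-free (the operator is only available as "apply the linearized time-stepper to a perturbation"). The paper's LBM linearizations are not reproduced here; this is only a generic power-iteration sketch of that eigenvalue step, with a toy diagonal operator standing in for the linearized flow map:

```python
import numpy as np

def dominant_mode(apply_L, n, iters=500, seed=0):
    """Matrix-free power iteration: returns the dominant eigenvalue and
    eigenvector of a linear operator given only as a function v -> L v.
    Production instability codes use Krylov methods (Arnoldi) instead,
    but the principle, repeated application of the linearized operator,
    is the same."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=n)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        w = apply_L(v)
        lam = v @ w               # Rayleigh quotient estimate
        v = w / np.linalg.norm(w)
    return lam, v

# Toy "linearized time-stepper" with known leading multiplier 0.9;
# a multiplier > 1 would signal a globally unstable mode.
A = np.diag([0.9, 0.5, 0.1])
lam, v = dominant_mode(lambda x: A @ x, 3)
```

Spurious modes of the kind discussed above show up in exactly this step: an eigenvalue of the discrete operator that has no counterpart in the continuous linearized Navier-Stokes spectrum.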
Determination of Diethyl Phthalate and Polyhexamethylene Guanidine in Surrogate Alcohol from Russia
Monakhova, Yulia B.; Kuballa, Thomas; Leitz, Jenny; Lachenmeier, Dirk W.
2011-01-01
Analytical methods based on spectroscopic techniques were developed and validated for the determination of diethyl phthalate (DEP) and polyhexamethylene guanidine (PHMG), which may occur in unrecorded alcohol. Analysis for PHMG was based on UV-VIS spectrophotometry after derivatization with Eosin Y and 1H NMR spectroscopy of the DMSO extract. Analysis of DEP was performed with direct UV-VIS and 1H NMR methods. Multivariate curve resolution and spectra computation methods were used to confirm the presence of PHMG and DEP in the investigated beverages. Of 22 analysed alcohol samples, two contained DEP or PHMG. 1H NMR analysis also revealed the presence of signals of hawthorn extract in three medicinal alcohols used as surrogate alcohol. The simple and cheap UV-VIS methods can be used for rapid screening of surrogate alcohol samples for impurities, while 1H NMR is recommended for specific confirmatory analysis if required. PMID:21647285