Selection of independent components based on cortical mapping of electromagnetic activity
NASA Astrophysics Data System (ADS)
Chan, Hui-Ling; Chen, Yong-Sheng; Chen, Li-Fen
2012-10-01
Independent component analysis (ICA) has been widely used to attenuate interference caused by noise components from the electromagnetic recordings of brain activity. However, the scalp topographies and associated temporal waveforms provided by ICA may be insufficient to distinguish functional components from artifactual ones. In this work, we proposed two component selection methods, both of which first estimate the cortical distribution of the brain activity for each component, and then determine the functional components based on the parcellation of brain activity mapped onto the cortical surface. Among all independent components, the first method can identify the dominant components, which have strong activity in the selected dominant brain regions, whereas the second method can identify those inter-regional associating components, which have similar component spectra between a pair of regions. For a targeted region, its component spectrum enumerates the amplitudes of its parceled brain activity across all components. The selected functional components can be remixed to reconstruct the focused electromagnetic signals for further analysis, such as source estimation. Moreover, the inter-regional associating components can be used to estimate the functional brain network. The accuracy of the cortical activation estimation was evaluated on the data from simulation studies, whereas the usefulness and feasibility of the component selection methods were demonstrated on the magnetoencephalography data recorded from a gender discrimination study.
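As a rough illustration of the component-spectrum idea described above, the following Python sketch assumes that each independent component has already been mapped onto the cortex and parceled into regions; the array name, shapes, and thresholds are hypothetical and not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
# region_activity[r, c]: amplitude of component c parceled into cortical region r (placeholder data)
region_activity = np.abs(rng.normal(size=(8, 20)))

def dominant_components(region_activity, region_idx, top_k=3):
    # the "component spectrum" of a region is its row of amplitudes across all components
    spectrum = region_activity[region_idx]
    return np.argsort(spectrum)[::-1][:top_k]

def associating_components(region_activity, r1, r2, top_k=3):
    # crude reading of "similar component spectra between a pair of regions":
    # rank components by their joint expression in both regions
    joint = region_activity[r1] * region_activity[r2]
    return np.argsort(joint)[::-1][:top_k]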
Evaluating the efficacy of fully automated approaches for the selection of eye blink ICA components
Pontifex, Matthew B.; Miskovic, Vladimir; Laszlo, Sarah
2017-01-01
Independent component analysis (ICA) offers a powerful approach for the isolation and removal of eye blink artifacts from EEG signals. Manual identification of the eye blink ICA component by inspection of scalp map projections, however, is prone to error, particularly when non-artifactual components exhibit topographic distributions similar to the blink. The aim of the present investigation was to determine the extent to which automated approaches for selecting eye blink related ICA components could be utilized to replace manual selection. We evaluated popular blink selection methods relying on spatial features [EyeCatch()], combined stereotypical spatial and temporal features [ADJUST()], and a novel method relying on time-series features alone [icablinkmetrics()] using both simulated and real EEG data. The results of this investigation suggest that all three methods of automatic component selection are able to accurately identify eye blink related ICA components at or above the level of trained human observers. However, icablinkmetrics(), in particular, appears to provide an effective means of automating ICA artifact rejection while at the same time eliminating human errors inevitable during manual component selection and false positive component identifications common in other automated approaches. Based upon these findings, best practices for 1) identifying artifactual components via automated means and 2) reducing the accidental removal of signal-related ICA components are discussed. PMID:28191627
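A minimal sketch of the time-series idea (correlating each ICA component's time course with a vertical EOG channel) is given below; it is not the icablinkmetrics(), EyeCatch(), or ADJUST() implementation, and the threshold and FastICA settings are illustrative assumptions.

import numpy as np
from sklearn.decomposition import FastICA

def flag_blink_components(eeg, veog, n_components=20, z_thresh=3.0):
    # eeg: (n_samples, n_channels) array; veog: (n_samples,) vertical EOG reference
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(eeg)                    # (n_samples, n_components)
    corr = np.array([abs(np.corrcoef(sources[:, k], veog)[0, 1])
                     for k in range(sources.shape[1])])
    z = (corr - corr.mean()) / corr.std()               # components that stand out from the rest
    return np.where(z > z_thresh)[0], ica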
Saad, Ahmed S; Abo-Talib, Nisreen F; El-Ghobashy, Mohamed R
2016-01-05
Different methods have been introduced to enhance the selectivity of UV-spectrophotometry, thus enabling accurate determination of co-formulated components; however, mixtures whose components exhibit wide variation in absorptivities have been an obstacle to the application of UV-spectrophotometry. The developed ratio difference at coabsorptive point (RDC) method represents a simple and effective solution to this problem, where the additive property of light absorbance enabled the two components to be considered as multiples of the lower-absorptivity component at a certain wavelength (the coabsorptive point), at which their total concentration in those multiples could be determined, whereas the other component was selectively determined by applying the ratio difference method in a single step. A mixture of perindopril arginine (PA) and amlodipine besylate (AM) exemplifies this problem, where the low absorptivity of PA relative to AM hinders selective spectrophotometric determination of PA. The developed method successfully determined both components in the overlapped region of their spectra with accuracies of 99.39±1.60 and 100.51±1.21 for PA and AM, respectively. The method was validated as per the USP guidelines and showed no significant difference upon statistical comparison with a reported chromatographic method. Copyright © 2015 Elsevier B.V. All rights reserved.
2011-01-01
Background: Bioinformatics data analysis often uses a linear mixture model, representing samples as additive mixtures of components. Properly constrained blind matrix factorization methods extract those components using mixture samples only. However, automatic selection of the extracted components to be retained for classification analysis remains an open issue. Results: The method proposed here is applied to well-studied protein and genomic datasets of ovarian, prostate and colon cancers to extract components for disease prediction. It achieves average sensitivities of 96.2% (sd = 2.7%), 97.6% (sd = 2.8%) and 90.8% (sd = 5.5%) and average specificities of 93.6% (sd = 4.1%), 99% (sd = 2.2%) and 79.4% (sd = 9.8%) in 100 independent two-fold cross-validations. Conclusions: We propose an additive mixture model of a sample for feature extraction using, in principle, sparseness-constrained factorization on a sample-by-sample basis. In contrast, existing methods factorize the complete dataset simultaneously. The sample model is composed of a reference sample representing control and/or case (disease) groups and a test sample. Each sample is decomposed into two or more components that are selected automatically (without using label information) as control specific, case specific and not differentially expressed (neutral). The number of components is determined by cross-validation. Automatic assignment of features (m/z ratios or genes) to a particular component is based on thresholds estimated from each sample directly. Due to the locality of decomposition, the strength of the expression of each feature across the samples can vary, yet features will still be allocated to the related disease- and/or control-specific component. Since label information is not used in the selection process, case- and control-specific components can be used for classification, which is not the case with standard factorization methods. Moreover, the component selected by the proposed method as disease specific can be interpreted as a sub-mode and retained for further analysis to identify potential biomarkers. As opposed to standard matrix factorization methods, this can be achieved on a sample (experiment)-by-sample basis. Postulating one or more components with indifferent features enables their removal from the disease- and control-specific components on a sample-by-sample basis. This yields selected components with reduced complexity and generally increases prediction accuracy. PMID:22208882
Hoskinson, Reed L [Rigby, ID]; Svoboda, John M [Idaho Falls, ID]; Bauer, William F [Idaho Falls, ID]; Elias, Gracy [Idaho Falls, ID]
2008-05-06
A method and apparatus is provided for monitoring a flow path having a plurality of different solid components flowing therethrough. For example, in the harvesting of a plant material, many factors surrounding the threshing, separating, or cleaning of the plant material may lead to the inadvertent inclusion of the component being selectively harvested with residual plant materials being discharged or otherwise processed. In accordance with the present invention, the detection of the selectively harvested component within residual materials may include the monitoring of a flow path of such residual materials by, for example, directing an excitation signal toward a flow path of material and then detecting a signal initiated by the presence of the selectively harvested component responsive to the excitation signal. The detected signal may be used to determine the presence or absence of a selected plant component within the flow path of residual materials.
NASA Astrophysics Data System (ADS)
Feng, Ximeng; Li, Gang; Yu, Haixia; Wang, Shaohui; Yi, Xiaoqing; Lin, Ling
2018-03-01
Noninvasive blood component analysis by spectroscopy has been a hotspot in biomedical engineering in recent years. Dynamic spectrum provides an excellent idea for noninvasive blood component measurement, but studies have been limited to the application of broadband light sources and high-resolution spectroscopy instruments. In order to remove redundant information, a more effective wavelength selection method is presented in this paper. In contrast to many common wavelength selection methods, this method is based on the sensing mechanism, which gives it a clear rationale and allows it to effectively avoid noise from the acquisition system. The spectral difference coefficient was theoretically proved to have guiding significance for wavelength selection. After theoretical analysis, the multi-band spectral difference coefficient wavelength selection method, combined with the dynamic spectrum, was proposed. An experimental analysis based on clinical trial data from 200 volunteers was conducted to illustrate the effectiveness of this method. The extreme learning machine was used to develop the calibration models between the dynamic spectrum data and hemoglobin concentration. The experimental results show that the prediction precision of hemoglobin concentration using the multi-band spectral difference coefficient wavelength selection method is higher than that of other methods.
NASA Astrophysics Data System (ADS)
Song, Yunquan; Lin, Lu; Jian, Ling
2016-07-01
The single-index varying-coefficient model is an important mathematical modeling method for nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric components and parametric components. Under defined regularity conditions, with appropriate selection of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, due to the robustness of the check loss function to outliers in finite samples, our proposed variable selection method is more robust than those based on the least squares criterion. Finally, the method is illustrated with numerical simulations.
Sinusoidal Analysis-Synthesis of Audio Using Perceptual Criteria
NASA Astrophysics Data System (ADS)
Painter, Ted; Spanias, Andreas
2003-12-01
This paper presents a new method for the selection of sinusoidal components for use in compact representations of narrowband audio. The method consists of ranking and selecting the most perceptually relevant sinusoids. The idea behind the method is to maximize the matching between the auditory excitation pattern associated with the original signal and the corresponding auditory excitation pattern associated with the modeled signal that is being represented by a small set of sinusoidal parameters. The proposed component-selection methodology is shown to outperform the maximum signal-to-mask ratio selection strategy in terms of subjective quality.
ERIC Educational Resources Information Center
Lipps, Leann E. T.
To investigate two measures which have been used to assess children's attention to stimulus dimensions (component selection and dimension preference), both measures were administered to 38 children aged 3 1/2 to 5 years and 20 children aged 5 to 6 1/2 years. Seven to ten days after the dimension preference task was given, the component selection measure was…
Modified ADALINE algorithm for harmonic estimation and selective harmonic elimination in inverters
NASA Astrophysics Data System (ADS)
Vasumathi, B.; Moorthi, S.
2011-11-01
In digital signal processing, algorithms for the estimation of harmonic components are very well developed. In power electronic applications, an objective such as fast system response is of primary importance. An effective method for the estimation of instantaneous harmonic components, along with a conventional harmonic elimination technique, is presented in this article. The primary function is to eliminate undesirable higher harmonic components from the selected signal (current or voltage), and it requires only the knowledge of the frequency of the component to be eliminated. A signal processing technique using a modified ADALINE algorithm has been proposed for harmonic estimation. The proposed method remains effective as it converges to a minimum error and yields a finer estimation. A conventional control based on pulse width modulation for selective harmonic elimination is used to eliminate harmonic components after their estimation. This method can be applied to a wide range of equipment. The validity of the proposed method to estimate and eliminate voltage harmonics is proved with a dc/ac inverter as a simulation example. The results are then compared with the existing ADALINE algorithm to illustrate its effectiveness.
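The sketch below shows a conventional ADALINE harmonic estimator with a Widrow-Hoff (LMS) update, not the modified algorithm proposed in the article; the sampling rate, learning rate, and test waveform are illustrative assumptions.

import numpy as np

def adaline_harmonics(signal, fs, f0, n_harmonics=7, lr=0.05):
    t = np.arange(len(signal)) / fs
    # basis: a sin/cos pair for each harmonic of the fundamental f0
    basis = np.column_stack([f(2 * np.pi * k * f0 * t)
                             for k in range(1, n_harmonics + 1)
                             for f in (np.sin, np.cos)])
    w = np.zeros(basis.shape[1])
    for x, y in zip(basis, signal):                  # Widrow-Hoff (LMS) weight update
        w += lr * (y - w @ x) * x
    amps = np.hypot(w[0::2], w[1::2])                # amplitude of each harmonic
    return w, amps

# usage: estimate harmonics of a synthetic inverter-like waveform (placeholder signal)
fs, f0 = 5000.0, 50.0
t = np.arange(0, 0.2, 1 / fs)
y = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 250 * t)
_, amplitudes = adaline_harmonics(y, fs, f0)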
Method and apparatus for selectively harvesting multiple components of a plant material
Hoskinson, Reed L.; Hess, Richard J.; Kenney, Kevin L.; Svoboda, John M.; Foust, Thomas D.
2004-05-04
A method and apparatus for selectively harvesting multiple components of a plant material. A grain component is separated from the plant material such as by processing the plant material through a primary threshing and separating mechanism. At least one additional component of the plant material is selectively harvested such as by subjecting the plant material to a secondary threshing and separating mechanism. For example, the stems of a plant material may be broken at a location adjacent one or more nodes thereof with the nodes and the internodal stem portions being subsequently separated for harvesting. The at least one additional component (e.g., the internodal stems) may then be consolidated and packaged for subsequent use or processing. The harvesting of the grain and of the at least one additional component may occur within a single harvesting machine, for example, during a single pass over a crop field.
[Use of adsorption methods for plasma component apheresis].
Bang, B; Heegaard, N H
1991-11-25
Plasma apheresis is a nonspecific and wasteful intervention requiring the use of potentially infectious and expensive replacement fluids. Selective removal of the unwanted plasma component circumvents most of these problems. For selective binding and removal of plasma components, adsorption methods based on the principles of affinity chromatography have been useful. The ideal adsorption column still does not exist, but the number of clinical applications is increasing. The results vary, but the treatment has been used with success in hypercholesterolemia, in patients with hemophilia with anti-factor antibodies, and in patients with antibodies directed towards HLA antigens awaiting renal transplantation. In conclusion, selective plasma component apheresis is an improvement over conventional plasma apheresis in some diseases. The technique is still being improved, but large clinical trials examining the effects of plasma component apheresis have not yet been published.
Method for treating a nuclear process off-gas stream
Pence, Dallas T.; Chou, Chun-Chao
1984-01-01
Disclosed is a method for selectively removing and recovering the noble gas and other gaseous components typically emitted during nuclear process operations. The method is adaptable and useful for treating dissolver off-gas effluents released during reprocessing of spent nuclear fuels, permitting radioactive contaminant recovery prior to releasing the remaining off-gases to the atmosphere. Briefly, the method sequentially comprises treating the off-gas stream to preliminarily remove NOx, hydrogen and carbon-containing organic compounds, and semivolatile fission product metal oxide components therefrom; adsorbing iodine components on silver-exchanged mordenite; removing water vapor carried by said stream by means of a molecular sieve; selectively removing the carbon dioxide components of said off-gas stream by means of a molecular sieve; selectively removing xenon in the gas phase by passing said stream through a molecular sieve comprising silver-exchanged mordenite; selectively separating krypton from oxygen by means of a molecular sieve comprising silver-exchanged mordenite; selectively separating krypton from the bulk nitrogen stream using a molecular sieve comprising silver-exchanged mordenite cooled to about -140 to -160 °C; concentrating the desorbed krypton upon a molecular sieve comprising silver-exchanged mordenite cooled to about -140 to -160 °C; and further cryogenically concentrating, and then recovering for storage, the desorbed krypton.
Variance Component Selection With Applications to Microbiome Taxonomic Data.
Zhai, Jing; Kim, Juhyun; Knox, Kenneth S; Twigg, Homer L; Zhou, Hua; Zhou, Jin J
2018-01-01
High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Microbiome data are summarized as counts or composition of the bacterial taxa at different taxonomic levels. An important problem is to identify the bacterial taxa that are associated with a response. One method is to test the association of a specific taxon with phenotypes in a linear mixed effect model, which incorporates phylogenetic information among bacterial communities. Another type of approach considers all taxa in a joint model and achieves selection via a penalization method, which ignores phylogenetic information. In this paper, we consider regression analysis by treating bacterial taxa at different levels as multiple random effects. For each taxon, a kernel matrix is calculated based on distance measures in the phylogenetic tree and acts as one variance component in the joint model. Taxonomic selection is then achieved by the lasso (least absolute shrinkage and selection operator) penalty on the variance components. Our method integrates biological information into the variable selection problem and greatly improves selection accuracy. Simulation studies demonstrate the superiority of our method over existing methods, for example, the group lasso. Finally, we apply our method to a longitudinal microbiome study of Human Immunodeficiency Virus (HIV) infected patients. We implement our method using the high-performance computing language Julia. Software and detailed documentation are freely available at https://github.com/JingZhai63/VCselection.
NASA Technical Reports Server (NTRS)
Rajagopal, K. R.
1992-01-01
The technical effort and computer code development are summarized. Several formulations for Probabilistic Finite Element Analysis (PFEA) are described, with emphasis on the selected formulation. The strategies being implemented in the first-version computer code to perform linear, elastic PFEA are described. The results of a series of select Space Shuttle Main Engine (SSME) component surveys are presented. These results identify the critical components and provide the information necessary for probabilistic structural analysis. Volume 2 is a summary of critical SSME components.
NASA Technical Reports Server (NTRS)
1991-01-01
The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models, including fatigue, fracture, creep, multi-factor interaction, and other important effects, is implemented in the Numerical Evaluations of Stochastic Structures Under Stress (NESSUS) code. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and was successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.
NASA Technical Reports Server (NTRS)
Spanos, John T.; Tsuha, Walter S.
1989-01-01
The assumed-modes method in multibody dynamics allows the elastic deformation of each component in the system to be approximated by a sum of products of spatial and temporal functions commonly known as modes and modal coordinates respectively. The choice of component modes used to model articulating and non-articulating flexible multibody systems is examined. Attention is directed toward three classical Component Mode Synthesis (CMS) methods whereby component normal modes are generated by treating the component interface (I/F) as either fixed, free, or loaded with mass and stiffness contributions from the remaining components. The fixed and free I/F normal modes are augmented by static shape functions termed constraint and residual modes respectively. A mode selection procedure is outlined whereby component modes are selected from the Craig-Bampton (fixed I/F plus constraint), MacNeal-Rubin (free I/F plus residual), or Benfield-Hruda (loaded I/F) mode sets in accordance with a modal ordering scheme derived from balance realization theory. The success of the approach is judged by comparing the actuator-to-sensor frequency response of the reduced order system with that of the full order system over the frequency range of interest. A finite element model of the Galileo spacecraft serves as an example in demonstrating the effectiveness of the proposed mode selection method.
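As a hedged illustration of the balanced-realization ordering idea, the sketch below computes Hankel singular values for a stable state-space model (A, B, C); it is not the Craig-Bampton, MacNeal-Rubin, or Benfield-Hruda selection procedure itself, and the toy matrices are placeholders.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def hankel_singular_values(A, B, C):
    # gramians of a stable system: A Wc + Wc A^T + B B^T = 0 and A^T Wo + Wo A + C^T C = 0
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    hsv = np.sqrt(np.abs(np.linalg.eigvals(Wc @ Wo)))
    return np.sort(hsv)[::-1]                        # largest values mark the most important states

# toy two-state example with illustrative numbers only
A = np.diag([-0.1, -0.5])
B = np.array([[1.0], [0.1]])
C = np.array([[1.0, 0.5]])
print(hankel_singular_values(A, B, C))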
VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS
Huang, Jian; Horowitz, Joel L.; Wei, Fengrong
2010-01-01
We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. PMID:21127739
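A self-contained sketch of the two-step idea (B-spline expansion per covariate, then a group lasso over the per-covariate coefficient blocks) follows; the hand-written proximal-gradient solver, spline settings, and penalty level are illustrative assumptions, and this is the plain group lasso rather than the adaptive version analyzed in the paper.

import numpy as np
from sklearn.preprocessing import SplineTransformer

def group_lasso(X_blocks, y, lam=0.05, n_iter=500):
    X = np.hstack(X_blocks)
    sizes = [b.shape[1] for b in X_blocks]
    idx = np.cumsum([0] + sizes)
    lr = len(y) / np.linalg.norm(X, 2) ** 2          # step size from the Lipschitz bound
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / len(y)
        z = beta - lr * grad
        for j in range(len(sizes)):                  # block soft-thresholding (proximal step)
            g = z[idx[j]:idx[j + 1]]
            norm = np.linalg.norm(g)
            z[idx[j]:idx[j + 1]] = max(0.0, 1 - lr * lam / norm) * g if norm > 0 else g
        beta = z
    return beta, idx

# usage: expand each covariate with B-splines; nonzero blocks are the selected components
rng = np.random.default_rng(1)
Xraw = rng.uniform(size=(200, 5))
y = np.sin(2 * np.pi * Xraw[:, 0]) + Xraw[:, 1] ** 2 + 0.1 * rng.normal(size=200)
spline = SplineTransformer(degree=3, n_knots=6)
blocks = [spline.fit_transform(Xraw[:, [j]]) for j in range(Xraw.shape[1])]
beta, idx = group_lasso(blocks, y - y.mean())
selected = [j for j in range(len(blocks))
            if np.linalg.norm(beta[idx[j]:idx[j + 1]]) > 1e-6]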
Method for Reducing Pumping Damage to Blood
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor); Akkerman, James W. (Inventor); Aber, Gregory S. (Inventor); VanDamm, George Arthur (Inventor); Bacak, James W. (Inventor); Svejkovsky, Robert J. (Inventor); Benkowski, Robert J. (Inventor)
1997-01-01
Methods are provided for minimizing damage to blood in a blood pump wherein the blood pump comprises a plurality of pump components that may affect blood damage such as clearance between pump blades and housing, number of impeller blades, rounded or flat blade edges, variations in entrance angles of blades, impeller length, and the like. The process comprises selecting a plurality of pump components believed to affect blood damage such as those listed herein before. Construction variations for each of the plurality of pump components are then selected. The pump components and variations are preferably listed in a matrix for easy visual comparison of test results. Blood is circulated through a pump configuration to test each variation of each pump component. After each test, total blood damage is determined for the blood pump. Preferably each pump component variation is tested at least three times to provide statistical results and check consistency of results. The least hemolytic variation for each pump component is preferably selected as an optimized component. If no statistical difference as to blood damage is produced for a variation of a pump component, then the variation that provides preferred hydrodynamic performance is selected. To compare the variation of pump components such as impeller and stator blade geometries, the preferred embodiment of the invention uses a stereolithography technique for realizing complex shapes within a short time period.
NASA Technical Reports Server (NTRS)
Wilson, R. B.; Banerjee, P. K.
1987-01-01
This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Sections Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of computer codes that permit more accurate and efficient three-dimensional analyses of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components.
NASA Technical Reports Server (NTRS)
1992-01-01
The technical effort and computer code developed during the first year are summarized. Several formulations for Probabilistic Finite Element Analysis (PFEA) are described, with emphasis on the selected formulation. The strategies being implemented in the first-version computer code to perform linear, elastic PFEA are described. The results of a series of select Space Shuttle Main Engine (SSME) component surveys are presented. These results identify the critical components and provide the information necessary for probabilistic structural analysis.
40 CFR 86.094-13 - Light-duty exhaust durability programs.
Code of Federal Regulations, 2011 CFR
2011-07-01
... and Heavy-Duty Engines, and for 1985 and Later Model Year New Gasoline Fueled, Natural Gas-Fueled... selection methods, durability data vehicle compliance requirements, in-use verification requirements... provisions of § 86.094-25. (3) Vehicle/component selection method. Durability data vehicles shall be selected...
Zhou, Yan; Cao, Hui
2013-01-01
We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis against component information loss. Raman spectral signals with low analyte concentration correlations were selected and used as substitutes for the unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set recorded from an experiment on analyte concentration determination using Raman spectroscopy. A 2-fold cross-validation with a Venetian blinds strategy was used to evaluate the predictive power of the proposed method. One-way analysis of variance (ANOVA) was used to assess the difference in predictive power between the proposed method and existing methods. Results indicated that the proposed method is effective at increasing the robust predictive power of the traditional CLS model against component information loss, and its predictive power is comparable to that of PLS or PCR.
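For reference, a minimal numpy sketch of classical least squares calibration is shown below, with a comment indicating where augmentation of the concentration matrix would enter; the array names are hypothetical and this is not the authors' code.

import numpy as np

def cls_calibrate(C, A):
    # C: (n_samples, n_analytes) concentrations; A: (n_samples, n_wavelengths) spectra
    K, *_ = np.linalg.lstsq(C, A, rcond=None)        # estimated pure-component spectra
    return K

def cls_predict(K, A_new):
    C_hat, *_ = np.linalg.lstsq(K.T, A_new.T, rcond=None)
    return C_hat.T

# augmented CLS: append extra columns (e.g. selected low-correlation spectral signals' scores)
# to C so that unmodeled components are absorbed there instead of biasing the analyte estimates:
# C_aug = np.hstack([C, augmentation_scores]); K_aug = cls_calibrate(C_aug, A)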
Absolute cosine-based SVM-RFE feature selection method for prostate histopathological grading.
Sahran, Shahnorbanun; Albashish, Dheeb; Abdullah, Azizi; Shukor, Nordashima Abd; Hayati Md Pauzi, Suria
2018-04-18
Feature selection (FS) methods are widely used in grading and diagnosing prostate histopathological images. In this context, FS is based on the texture features obtained from the lumen, nuclei, cytoplasm and stroma, all of which are important tissue components. However, it is difficult to represent the high-dimensional textures of these tissue components. To solve this problem, we propose a new FS method that enables the selection of features with minimal redundancy in the tissue components. We categorise tissue images based on the texture of individual tissue components via the construction of a single classifier and also construct an ensemble learning model by merging the values obtained by each classifier. Another issue that arises is overfitting due to the high-dimensional texture of individual tissue components. We propose a new FS method, SVM-RFE(AC), that integrates a Support Vector Machine-Recursive Feature Elimination (SVM-RFE) embedded procedure with an absolute cosine (AC) filter method to prevent redundancy in the selected features of the SVM-RFE and an unoptimised classifier in the AC. We conducted experiments on H&E histopathological prostate and colon cancer images with respect to three prostate classifications, namely benign vs. grade 3, benign vs. grade 4 and grade 3 vs. grade 4. The colon benchmark dataset requires a distinction between grades 1 and 2, which are the most difficult cases to distinguish in the colon domain. The results obtained by both the single and ensemble classification models (the latter using the product rule as its merging method) confirm that the proposed SVM-RFE(AC) is superior to the other SVM and SVM-RFE-based methods. We developed an FS method based on SVM-RFE and AC and successfully showed that its use enabled the identification of the most crucial texture feature of each tissue component. Thus, it makes possible the distinction between multiple Gleason grades (e.g. grade 3 vs. grade 4), and its performance is far superior to that of other reported FS methods. Copyright © 2018 Elsevier B.V. All rights reserved.
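A hedged sklearn sketch of combining an absolute-cosine redundancy filter with SVM-RFE is given below; the threshold, feature count, and greedy filtering order are illustrative and do not reproduce the exact SVM-RFE(AC) procedure.

import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.metrics.pairwise import cosine_similarity

def svm_rfe_ac(X, y, cos_thresh=0.9, n_features=20):
    sim = np.abs(cosine_similarity(X.T))             # |cosine| between feature vectors
    keep = []
    for j in range(X.shape[1]):                      # greedy redundancy filter
        if all(sim[j, k] < cos_thresh for k in keep):
            keep.append(j)
    rfe = RFE(SVC(kernel="linear"), n_features_to_select=min(n_features, len(keep)))
    rfe.fit(X[:, keep], y)
    return [keep[i] for i in np.where(rfe.support_)[0]]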
Ranking and averaging independent component analysis by reproducibility (RAICAR).
Yang, Zhi; LaConte, Stephen; Weng, Xuchu; Hu, Xiaoping
2008-06-01
Independent component analysis (ICA) is a data-driven approach that has exhibited great utility for functional magnetic resonance imaging (fMRI). Standard ICA implementations, however, do not provide the number and relative importance of the resulting components. In addition, ICA algorithms utilizing gradient-based optimization give decompositions that are dependent on initialization values, which can lead to dramatically different results. In this work, a new method, RAICAR (Ranking and Averaging Independent Component Analysis by Reproducibility), is introduced to address these issues for spatial ICA applied to fMRI. RAICAR utilizes repeated ICA realizations and relies on the reproducibility between them to rank and select components. Different realizations are aligned based on correlations, leading to aligned components. Each component is ranked and thresholded based on between-realization correlations. Furthermore, different realizations of each aligned component are selectively averaged to generate the final estimate of the given component. Reliability and accuracy of this method are demonstrated with both simulated and experimental fMRI data. Copyright 2007 Wiley-Liss, Inc.
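The reproducibility idea can be sketched as follows (illustrative only, not the RAICAR implementation): run FastICA several times with different seeds, match components across runs by absolute correlation, and rank them by how well they reproduce.

import numpy as np
from sklearn.decomposition import FastICA

def raicar_like_ranking(X, n_components=10, n_runs=5):
    runs = []
    for seed in range(n_runs):
        S = FastICA(n_components=n_components, random_state=seed).fit_transform(X)
        runs.append(S / S.std(axis=0))
    reference = runs[0]
    scores = np.zeros(n_components)
    for k in range(n_components):
        for S in runs[1:]:
            corr = np.abs([np.corrcoef(reference[:, k], S[:, j])[0, 1]
                           for j in range(n_components)])
            scores[k] += corr.max()                  # best-matching component in that run
    order = np.argsort(scores)[::-1]                 # most reproducible components first
    return order, scores / (n_runs - 1)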
Method and apparatus for component separation using microwave energy
Morrow, Marvin S.; Schechter, Donald E.; Calhoun, Jr., Clyde L.
2001-04-03
A method for separating and recovering components includes the steps of providing at least a first component bonded to a second component by a microwave absorbent adhesive bonding material at a bonding area to form an assembly, the bonding material disposed between the components. Microwave energy is directly and selectively applied to the assembly so that substantially only the bonding material absorbs the microwave energy until the bonding material is at a debonding state. A separation force is applied while the bonding material is at the debonding state to permit disengaging and recovering the components. In addition, an apparatus for practicing the method includes holders for the components.
Selective Sorbents For Purification Of Hydrocarbons
Yang, Ralph T.; Yang, Frances H.; Takahashi, Akira; Hernandez-Maldonado, Arturo J.
2006-04-18
A method for removing thiophene and thiophene compounds from liquid fuel includes contacting the liquid fuel with an adsorbent which preferentially adsorbs the thiophene and thiophene compounds. The adsorption takes place at a selected temperature and pressure, thereby producing a non-adsorbed component and a thiophene/thiophene compound-rich adsorbed component. The adsorbent includes either a metal or a metal ion that is adapted to form π-complexation bonds with the thiophene and/or thiophene compounds, and the preferential adsorption occurs by π-complexation. A further method includes selective removal of aromatic compounds from a mixture of aromatic and aliphatic compounds.
Selective sorbents for purification of hydrocarbons
Yang, Ralph T.; Hernandez-Maldonado, Arturo J.; Yang, Frances H.; Takahashi, Akira
2006-08-22
A method for removing thiophene and thiophene compounds from liquid fuel includes contacting the liquid fuel with an adsorbent which preferentially adsorbs the thiophene and thiophene compounds. The adsorption takes place at a selected temperature and pressure, thereby producing a non-adsorbed component and a thiophene/thiophene compound-rich adsorbed component. The adsorbent includes either a metal or a metal cation that is adapted to form π-complexation bonds with the thiophene and/or thiophene compounds, and the preferential adsorption occurs by π-complexation. A further method includes selective removal of aromatic compounds from a mixture of aromatic and aliphatic compounds.
Selective sorbents for purification of hydrocarbons
Yang, Ralph T.; Yang, Frances H.; Takahashi, Akira; Hernandez-Maldonado, Arturo J.
2006-05-30
A method for removing thiophene and thiophene compounds from liquid fuel includes contacting the liquid fuel with an adsorbent which preferentially adsorbs the thiophene and thiophene compounds. The adsorption takes place at a selected temperature and pressure, thereby producing a non-adsorbed component and a thiophene/thiophene compound-rich adsorbed component. The adsorbent includes either a metal or a metal cation that is adapted to form π-complexation bonds with the thiophene and/or thiophene compounds, and the preferential adsorption occurs by π-complexation. A further method includes selective removal of aromatic compounds from a mixture of aromatic and aliphatic compounds.
Selective sorbents for purification of hydrocarbons
Yang, Ralph T.; Yang, Frances H.; Takahashi, Akira; Hernandez-Maldonado, Arturo J.
2006-12-12
A method for removing thiophene and thiophene compounds from liquid fuel includes contacting the liquid fuel with an adsorbent which preferentially adsorbs the thiophene and thiophene compounds. The adsorption takes place at a selected temperature and pressure, thereby producing a non-adsorbed component and a thiophene/thiophene compound-rich adsorbed component. The adsorbent includes either a metal or a metal ion that is adapted to form π-complexation bonds with the thiophene and/or thiophene compounds, and the preferential adsorption occurs by π-complexation. A further method includes selective removal of aromatic compounds from a mixture of aromatic and aliphatic compounds.
Material selection and assembly method of battery pack for compact electric vehicle
NASA Astrophysics Data System (ADS)
Lewchalermwong, N.; Masomtob, M.; Lailuck, V.; Charoenphonphanich, C.
2018-01-01
Battery packs have become the key component in electric vehicles (EVs). Their main costs are the battery cells and the assembling processes. The battery cell price is set by battery manufacturers, while the assembling cost depends on the battery pack design. Battery pack designers need the overall cost to be as low as possible while still meeting requirements for high performance and safety. Material selection and assembly method, as well as component design, are very important in determining the cost-effectiveness of battery modules and battery packs. Therefore, this work presents a Decision Matrix that can aid in the decision-making process for component materials and assembly methods in battery module and battery pack design. The aim of this study is to take advantage of incorporating an Architecture Analysis method into decision matrix methods by capturing best practices for conducting design architecture analysis, taking full account of the key design components critical to ensuring efficient and effective development of the designs. The methodology also considers the impacts of choice alternatives along multiple dimensions. Various alternatives for materials and assembly techniques of the battery pack are evaluated, and some sample costs are presented. Because of the many components in a battery pack, only seven components, including the positive busbar and the Z busbar, are presented in this paper to illustrate the decision matrix methods.
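A minimal weighted decision-matrix sketch is shown below; the criteria, weights, alternatives, and scores are placeholders for illustration rather than values from the paper.

import numpy as np

criteria = ["cost", "strength", "serviceability", "process time"]
weights = np.array([0.4, 0.3, 0.2, 0.1])              # importance weights, summing to 1
alternatives = {                                      # score each option from 1 (poor) to 5 (good)
    "laser welding":   np.array([2, 5, 1, 4]),
    "bolted busbar":   np.array([4, 3, 5, 2]),
    "ultrasonic weld": np.array([3, 4, 2, 5]),
}
totals = {name: float(weights @ scores) for name, scores in alternatives.items()}
best = max(totals, key=totals.get)                    # alternative with the highest weighted score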
A Selective-Echo Method for Chemical-Shift Imaging of Two-Component Systems
NASA Astrophysics Data System (ADS)
Gerald, Rex E., II; Krasavin, Anatoly O.; Botto, Robert E.
A simple and effective method for selectively imaging either one of two chemical species in a two-component system is presented and demonstrated experimentally. The pulse sequence employed, selective-echo chemical-shift imaging (SECSI), is a hybrid (frequency-selective/T1-contrast) technique that is executed in a short period of time, utilizes the full Boltzmann magnetization of each chemical species to form the corresponding image, and requires only hard pulses of quadrature phase. This approach provides a direct and unambiguous representation of the spatial distribution of the two chemical species. In addition, the performance characteristics and the advantages of the SECSI sequence are compared on a common basis to those of other pulse sequences.
Li, Ziyi; Safo, Sandra E; Long, Qi
2017-07-11
Sparse principal component analysis (PCA) is a popular tool for dimensionality reduction, pattern recognition, and visualization of high dimensional data. It has been recognized that complex biological mechanisms occur through concerted relationships of multiple genes working in networks that are often represented by graphs. Recent work has shown that incorporating such biological information improves feature selection and prediction performance in regression analysis, but there has been limited work on extending this approach to PCA. In this article, we propose two new sparse PCA methods called Fused and Grouped sparse PCA that enable incorporation of prior biological information in variable selection. Our simulation studies suggest that, compared to existing sparse PCA methods, the proposed methods achieve higher sensitivity and specificity when the graph structure is correctly specified, and are fairly robust to misspecified graph structures. Application to a glioblastoma gene expression dataset identified pathways that are suggested in the literature to be related to glioblastoma. The proposed sparse PCA methods Fused and Grouped sparse PCA can effectively incorporate prior biological information in variable selection, leading to improved feature selection and more interpretable principal component loadings and potentially providing insights on the molecular underpinnings of complex diseases.
Inference on the Strength of Balancing Selection for Epistatically Interacting Loci
Buzbas, Erkan Ozge; Joyce, Paul; Rosenberg, Noah A.
2011-01-01
Existing inference methods for estimating the strength of balancing selection in multi-locus genotypes rely on the assumption that there are no epistatic interactions between loci. Complex systems in which balancing selection is prevalent, such as sets of human immune system genes, are known to contain components that interact epistatically. Therefore, current methods may not produce reliable inference on the strength of selection at these loci. In this paper, we address this problem by presenting statistical methods that can account for epistatic interactions in making inference about balancing selection. A theoretical result due to Fearnhead (2006) is used to build a multi-locus Wright-Fisher model of balancing selection, allowing for epistatic interactions among loci. Antagonistic and synergistic types of interactions are examined. The joint posterior distribution of the selection and mutation parameters is sampled by Markov chain Monte Carlo methods, and the plausibility of models is assessed via Bayes factors. As a component of the inference process, an algorithm to generate multi-locus allele frequencies under balancing selection models with epistasis is also presented. Recent evidence on interactions among a set of human immune system genes is introduced as a motivating biological system for the epistatic model, and data on these genes are used to demonstrate the methods. PMID:21277883
NASA Technical Reports Server (NTRS)
Nakazawa, S.
1988-01-01
This annual status report presents the results of work performed during the fourth year of the 3-D Inelastic Analysis Methods for Hot Section Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of new computer codes permitting more accurate and efficient 3-D analysis of selected hot section components, i.e., combustor liners, turbine blades and turbine vanes. The computer codes embody a progression of math models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components. Volume 1 of this report discusses the special finite element models developed during the fourth year of the contract.
Lievens, Filip; Sackett, Paul R
2017-01-01
Past reviews and meta-analyses typically conceptualized and examined selection procedures as holistic entities. We draw on the product design literature to propose a modular approach as a complementary perspective to conceptualizing selection procedures. A modular approach means that a product is broken down into its key underlying components. Therefore, we start by presenting a modular framework that identifies the important measurement components of selection procedures. Next, we adopt this modular lens for reviewing the available evidence regarding each of these components in terms of affecting validity, subgroup differences, and applicant perceptions, as well as for identifying new research directions. As a complement to the historical focus on holistic selection procedures, we posit that the theoretical contributions of a modular approach include improved insight into the isolated workings of the different components underlying selection procedures and greater theoretical connectivity among different selection procedures and their literatures. We also outline how organizations can put a modular approach into operation to increase the variety in selection procedures and to enhance the flexibility in designing them. Overall, we believe that a modular perspective on selection procedures will provide the impetus for programmatic and theory-driven research on the different measurement components of selection procedures. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Hughes, I
1998-09-24
The direct analysis of selected components from combinatorial libraries by sensitive methods such as mass spectrometry is potentially more efficient than deconvolution and tagging strategies since additional steps of resynthesis or introduction of molecular tags are avoided. A substituent selection procedure is described that eliminates the mass degeneracy commonly observed in libraries prepared by "split-and-mix" methods, without recourse to high-resolution mass measurements. A set of simple rules guides the choice of substituents such that all components of the library have unique nominal masses. Additional rules extend the scope by ensuring that characteristic isotopic mass patterns distinguish isobaric components. The method is applicable to libraries having from two to four varying substituent groups and can encode from a few hundred to several thousand components. No restrictions are imposed on the manner in which the "self-coded" library is synthesized or screened.
How Many Separable Sources? Model Selection In Independent Components Analysis
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
NASA Technical Reports Server (NTRS)
Boyd, R. K.; Brumfield, J. O.; Campbell, W. J.
1984-01-01
Three feature extraction methods, canonical analysis (CA), principal component analysis (PCA), and band selection, have been applied to Thematic Mapper Simulator (TMS) data in order to evaluate the relative performance of the methods. The results obtained show that CA is capable of providing a transformation of TMS data which leads to better classification results than provided by all seven bands, by PCA, or by band selection. A second conclusion drawn from the study is that TMS bands 2, 3, 4, and 7 (thermal) are most important for landcover classification.
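A small sklearn sketch of the comparison is given below, using linear discriminant analysis as a stand-in for canonical analysis and synthetic seven-band data in place of TMS imagery; all settings are illustrative.

from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# placeholder "seven-band" pixels with three land-cover classes
X, y = make_classification(n_samples=500, n_features=7, n_informative=4,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
for name, extractor in [("CA/LDA", LinearDiscriminantAnalysis(n_components=2)),
                        ("PCA", PCA(n_components=2))]:
    pipe = make_pipeline(extractor, KNeighborsClassifier())
    score = cross_val_score(pipe, X, y, cv=5).mean()
    print(name, round(score, 3))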
Papaemmanouil, Christina; Tsiafoulis, Constantinos G; Alivertis, Dimitrios; Tzamaloukas, Ouranios; Miltiadou, Despoina; Tzakos, Andreas G; Gerothanassis, Ioannis P
2015-06-10
We report a rapid, direct, and unequivocal spin-chromatographic separation and identification of minor components in the lipid fraction of milk and common dairy products with the use of selective one-dimensional (1D) total correlation spectroscopy (TOCSY) nuclear magnetic resonance (NMR) experiments. The method allows the complete backbone spin-coupling network to be elucidated even in strongly overlapped regions and in the presence of major components with 4 × 10² to 3 × 10³ times stronger NMR signal intensities. The proposed spin-chromatography method does not require any derivatization steps for the lipid fraction, is selective with excellent resolution, is sensitive with quantitation capability, and compares favorably to two-dimensional (2D) TOCSY and gas chromatography-mass spectrometry (GC-MS) methods of analysis. The results of the present study demonstrate that the 1D TOCSY NMR spin-chromatography method can become a procedure of primary interest in food analysis and, more generally, in complex mixture analysis.
EEG feature selection method based on decision tree.
Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun
2015-01-01
This paper aims to solve the automated feature selection problem in brain-computer interfaces (BCI). In order to automate the feature selection process, we propose a novel EEG feature selection method based on a decision tree (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on the decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are a series of non-linear signals, a generalized linear classifier named the support vector machine (SVM) was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on the decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
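A hedged sklearn sketch of the described pipeline (PCA features, selection driven by decision-tree importances, then an SVM classifier) follows; the dimensions, thresholds, and random placeholder data are illustrative and not the BCI Competition II settings.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

pipe = make_pipeline(
    PCA(n_components=30),
    SelectFromModel(DecisionTreeClassifier(random_state=0), threshold="mean"),
    SVC(kernel="linear"),
)
# usage with placeholder epochs: one row of flattened EEG features per trial
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 512))
y = rng.integers(0, 2, size=120)
print(cross_val_score(pipe, X, y, cv=5).mean())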
Li, Zhi; Chen, Weidong; Lian, Feiyu; Ge, Hongyi; Guan, Aihong
2017-12-01
Quantitative analysis of component mixtures is an important application of terahertz time-domain spectroscopy (THz-TDS) and has attracted broad interest in recent research. Although the accuracy of quantitative analysis using THz-TDS is affected by a host of factors, wavelength selection from the sample's THz absorption spectrum is the most crucial component. The raw spectrum consists of signals from the sample and scattering and other random disturbances that can critically influence the quantitative accuracy. For precise quantitative analysis using THz-TDS, the signal from the sample needs to be retained while the scattering and other noise sources are eliminated. In this paper, a novel wavelength selection method based on differential evolution (DE) is investigated. By performing quantitative experiments on a series of binary amino acid mixtures using THz-TDS, we demonstrate the efficacy of the DE-based wavelength selection method, which yields an error rate below 5%.
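A hedged scipy sketch of differential-evolution wavelength selection follows; the regressor, cross-validated objective, and synthetic placeholder spectra are assumptions for illustration and not the paper's exact formulation.

import numpy as np
from scipy.optimize import differential_evolution
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 60))                         # placeholder THz absorption spectra
y = X[:, 5] - 0.5 * X[:, 17] + 0.05 * rng.normal(size=80)

def objective(mask_real):
    mask = mask_real > 0.5                            # threshold the real vector into a wavelength mask
    if mask.sum() < 2:
        return 1e6
    score = cross_val_score(Ridge(alpha=1.0), X[:, mask], y,
                            scoring="neg_root_mean_squared_error", cv=3).mean()
    return -score                                     # DE minimizes the cross-validated RMSE

result = differential_evolution(objective, bounds=[(0, 1)] * X.shape[1],
                                maxiter=15, popsize=8, seed=0, polish=False)
selected = np.where(result.x > 0.5)[0]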
Nasu, Mamiko; Nemoto, Takayuki; Mimura, Hisashi; Sako, Kazuhiro
2013-01-01
Most pharmaceutical drug substances and excipients in formulations exist in a crystalline or amorphous form, and an understanding of their state during manufacture and storage is critically important, particularly in formulated products. Carbon-13 solid-state nuclear magnetic resonance (NMR) spectroscopy is useful for studying the chemical and physical state of pharmaceutical solids in a formulated product. We developed two new selective signal excitation methods in ¹³C solid-state NMR to extract the spectrum of a target component from such a mixture. These methods were based on equalization of the proton relaxation time in a single domain via rapid intraproton spin diffusion and the difference in proton spin-lattice relaxation time in the rotating frame (¹H T1ρ) of individual components in the mixture. Introduction of simple pulse sequences to one-dimensional experiments reduced data acquisition time and increased flexibility. We then demonstrated these methods in a commercially available drug and in a mixture of two saccharides, in which the ¹³C signals of the target components were selectively excited, and showed them to be applicable to the quantitative analysis of individual components in solid mixtures, such as formulated products, polymorphic mixtures, or mixtures of crystalline and amorphous phases. Copyright © 2012 Wiley Periodicals, Inc.
Multibody model reduction by component mode synthesis and component cost analysis
NASA Technical Reports Server (NTRS)
Spanos, J. T.; Mingori, D. L.
1990-01-01
The classical assumed-modes method is widely used in modeling the dynamics of flexible multibody systems. According to the method, the elastic deformation of each component in the system is expanded in a series of spatial and temporal functions known as modes and modal coordinates, respectively. This paper focuses on the selection of component modes used in the assumed-modes expansion. A two-stage component modal reduction method is proposed combining Component Mode Synthesis (CMS) with Component Cost Analysis (CCA). First, each component model is truncated such that the contribution of the high frequency subsystem to the static response is preserved. Second, a new CMS procedure is employed to assemble the system model and CCA is used to further truncate component modes in accordance with their contribution to a quadratic cost function of the system output. The proposed method is demonstrated with a simple example of a flexible two-body system.
Opportunity for natural selection among some selected population groups of Northeast India
Das, Farida Ahmed; Mithun, Sikdar
2010-01-01
BACKGROUND: Selection potential based on differential fertility and mortality has been computed for seven population groups inhabiting different geographical locations of Northeast India. MATERIALS AND METHODS: Crow's index as well as Johnston and Kensinger's index have been used for the present purpose. RESULTS AND CONCLUSION: Irrespective of the methodology, the total index of selection was found to be highest among the Deoris, followed by the Kaibartas. The lowest selection index was found among the Oraon population. If the relative contribution of the fertility and mortality components to the total index is considered to be multiplicative, it is observed that in all these communities the fertility component exceeds the mortality component, which may indicate the initiation of a demographic transitional phase in the selected populations with the betterment of healthcare and socioeconomic conditions within the last few decades. PMID:21031053
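For reference, Crow's index of the opportunity for selection is commonly written (standard textbook form, not quoted from this abstract) as I = I_m + I_f / P_s, where I_m = P_d / P_s is the mortality component, I_f = V_f / x̄² is the fertility component, P_d and P_s are the proportions dying before and surviving to reproductive age, and V_f and x̄ are the variance and mean of the number of live births; the fertility and mortality contributions discussed above correspond to I_f and I_m.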
Zhang, Xiao-Chao; Wei, Zhen-Wei; Gong, Xiao-Yun; Si, Xing-Yu; Zhao, Yao-Yao; Yang, Cheng-Dui; Zhang, Si-Chun; Zhang, Xin-Rong
2016-04-29
Integrating droplet-based microfluidics with mass spectrometry is essential to high-throughput and multiple analysis of single cells. Nevertheless, matrix effects such as the interference of culture medium and intracellular components influence the sensitivity and the accuracy of results in single-cell analysis. To resolve this problem, we developed a method that integrated droplet-based microextraction with single-cell mass spectrometry. Specific extraction solvent was used to selectively obtain intracellular components of interest and remove interference of other components. Using this method, UDP-Glc-NAc, GSH, GSSG, AMP, ADP and ATP were successfully detected in single MCF-7 cells. We also applied the method to study the change of unicellular metabolites in the biological process of dysfunctional oxidative phosphorylation. The method could not only realize matrix-free, selective and sensitive detection of metabolites in single cells, but also have the capability for reliable and high-throughput single-cell analysis.
Dual phase magnetic material component and method of forming
Dial, Laura Cerully; DiDomizio, Richard; Johnson, Francis
2017-04-25
A magnetic component having intermixed first and second regions, and a method of preparing that magnetic component are disclosed. The first region includes a magnetic phase and the second region includes a non-magnetic phase. The method includes mechanically masking pre-selected sections of a surface portion of the component by using a nitrogen stop-off material and heat-treating the component in a nitrogen-rich atmosphere at a temperature greater than about 900 °C. Both the first and second regions are substantially free of carbon, or contain only limited amounts of carbon; and the second region includes greater than about 0.1 weight % of nitrogen.
Semantic word impressions expressed by hue.
Shinomori, Keizo; Komatsu, Honami
2018-04-01
We investigated the possibility of whether impressions of semantic words showing complex concepts could be stably expressed by hues. Using a paired comparison method, we asked ten subjects to select from a pair of hues the one that more suitably matched a word impression. We employed nine Japanese semantic words and used twelve hues from vivid tones in the practical color coordinate system. As examples of the results, for the word "vigorous" the most frequently selected color was yellow and the least selected was blue to purple; for "tranquil" the most selected was yellow to green and the least selected was red. Principal component analysis of the selection data indicated that the cumulative contribution rate of the first two components was 94.6%, and in the two-dimensional space of the components, all hues were distributed as a hue-circle shape. In addition, comparison with additional data of color impressions measured by a semantic differential method suggested that most semantic word impressions can be stably expressed by hue, but the impression of some words, such as "magnificent" cannot. These results suggest that semantic word impression can be expressed reasonably well by color, and that hues are treated as impressions from the hue circle, not from color categories.
FOR Allocation to Distribution Systems based on Credible Improvement Potential (CIP)
NASA Astrophysics Data System (ADS)
Tiwary, Aditya; Arya, L. D.; Arya, Rajesh; Choube, S. C.
2017-02-01
This paper describes an algorithm for forced outage rate (FOR) allocation to each section of an electrical distribution system, subject to the satisfaction of reliability constraints at each load point. These constraints include threshold values of basic reliability indices at the load points, for example failure rate, interruption duration, and interruption duration per year. A component improvement potential measure has been used for FOR allocation. The component with the greatest magnitude of the credible improvement potential (CIP) measure is selected for improving reliability performance. The approach adopted is a monovariable method in which one component is selected for FOR allocation, and in the next iteration another component is selected based on the magnitude of CIP. The developed algorithm is implemented on a sample radial distribution system.
Grimbergen, M C M; van Swol, C F P; Kendall, C; Verdaasdonk, R M; Stone, N; Bosch, J L H R
2010-01-01
The overall quality of Raman spectra in the near-infrared region, where biological samples are often studied, has benefited from various improvements to optical instrumentation over the past decade. However, obtaining ample spectral quality for analysis is still challenging due to device requirements and short integration times required for (in vivo) clinical applications of Raman spectroscopy. Multivariate analytical methods, such as principal component analysis (PCA) and linear discriminant analysis (LDA), are routinely applied to Raman spectral datasets to develop classification models. Data compression is necessary prior to discriminant analysis to prevent or decrease the degree of over-fitting. The logical threshold for the selection of principal components (PCs) to be used in discriminant analysis is likely to be at a point before the PCs begin to introduce equivalent signal and noise and, hence, include no additional value. Assessment of the signal-to-noise ratio (SNR) at a certain peak or over a specific spectral region will depend on the sample measured. Therefore, the mean SNR over the whole spectral region (SNR(msr)) is determined in the original spectrum as well as for spectra reconstructed from an increasing number of principal components. This paper introduces a method of assessing the influence of signal and noise from individual PC loads and indicates a method of selection of PCs for LDA. To evaluate this method, two data sets with different SNRs were used. The sets were obtained with the same Raman system and the same measurement parameters on bladder tissue collected during white light cystoscopy (set A) and fluorescence-guided cystoscopy (set B). This method shows that the mean SNR over the spectral range in the original Raman spectra of these two data sets is related to the signal and noise contribution of principal component loads. The difference in mean SNR over the spectral range can also be appreciated since fewer principal components can reliably be used in the low SNR data set (set B) compared to the high SNR data set (set A). Despite the fact that no definitive threshold could be found, this method may help to determine the cutoff for the number of principal components used in discriminant analysis. Future analysis of a selection of spectral databases using this technique will allow optimum thresholds to be selected for different applications and spectral data quality levels.
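The PC-selection idea above lends itself to a simple numerical check. The following Python sketch (not the authors' implementation; the smoothing window and the SNR definition are illustrative assumptions) reconstructs spectra from an increasing number of principal components and tracks a mean signal-to-noise estimate over the whole spectral range; the curve typically flattens once additional PCs contribute mostly noise.

```python
import numpy as np
from sklearn.decomposition import PCA

def mean_snr(spectra, smooth_win=9):
    """Crude mean SNR: 'signal' = moving average, 'noise' = residual about it."""
    kernel = np.ones(smooth_win) / smooth_win
    smooth = np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"), 1, spectra)
    noise = spectra - smooth
    return np.mean(np.abs(smooth)) / np.std(noise)

def snr_vs_components(spectra, max_pcs=30):
    """Mean SNR of spectra reconstructed from the first k principal components."""
    pca = PCA(n_components=max_pcs).fit(spectra)
    scores = pca.transform(spectra)
    snrs = []
    for k in range(1, max_pcs + 1):
        recon = scores[:, :k] @ pca.components_[:k] + pca.mean_
        snrs.append(mean_snr(recon))
    return np.array(snrs)  # a plateau suggests later PCs add mostly noise
```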
NASA Technical Reports Server (NTRS)
Nakazawa, S.
1987-01-01
This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Section Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of new computer codes that permit more accurate and efficient three-dimensional analysis of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components. This report is presented in two volumes. Volume 1 describes effort performed under Task 4B, Special Finite Element Special Function Models, while Volume 2 concentrates on Task 4C, Advanced Special Functions Models.
Selecting and Using Mathematics Methods Texts: Nontrivial Tasks
ERIC Educational Resources Information Center
Harkness, Shelly Sheats; Brass, Amy
2017-01-01
Mathematics methods textbooks/texts are important components of many courses for preservice teachers. Researchers should explore how these texts are selected and used. Within this paper we report the findings of a survey administered electronically to 132 members of the Association of Mathematics Teacher Educators (AMTE) in order to answer the…
Feature Extraction and Selection Strategies for Automated Target Recognition
NASA Technical Reports Server (NTRS)
Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin
2010-01-01
Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concerns transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
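As a rough sketch of the extraction-plus-classification stages described above (simplified; the placeholder data and component counts are assumptions, not values from the study), PCA or ICA features can be fed to an SVM and compared by cross-validated accuracy:

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# X: ROI image chips flattened to feature vectors; y: target / non-target labels
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1024))      # placeholder data
y = rng.integers(0, 2, size=200)      # placeholder labels

pca_svm = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
ica_svm = make_pipeline(StandardScaler(), FastICA(n_components=20, max_iter=1000), SVC(kernel="rbf"))

for name, model in [("PCA+SVM", pca_svm), ("ICA+SVM", ica_svm)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy = {acc:.2f}")
```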
Feature extraction and selection strategies for automated target recognition
NASA Astrophysics Data System (ADS)
Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin
2010-04-01
Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concerns transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
Leukocyte-reduced blood components: patient benefits and practical applications.
Higgins, V L
1996-05-01
To review the various types of filters used for red blood cell and platelet transfusions and to explain the trend in the use of leukocyte removal filters, practical information about their use, considerations in the selection of a filtration method, and cost-effectiveness issues. Published articles, books, and the author's experience. Leukocyte removal filters are used to reduce complications associated with transfused white blood cells that are contained in units of red blood cells and platelets. These complications include nonhemolytic febrile transfusion reactions (NHFTRs), alloimmunization and refractoriness to platelet transfusion, transfusion-transmitted cytomegalovirus (CMV), and immunomodulation. Leukocyte removal filters may be used at the bedside, in a hospital blood bank, or in a blood collection center. Factors that affect the flow rate of these filters include the variations in the blood component, the equipment used, and filter priming. Studies on the cost-effectiveness of using leukocyte-reduced blood components demonstrate savings based on the reduction of NHFTRs, reduction in the number of blood components used, and the use of filtered blood components as the equivalent of CMV seronegative-screened products. The use of leukocyte-reduced blood components significantly diminishes or prevents many of the adverse transfusion reactions associated with donor white blood cells. Leukocyte removal filters are cost-effective, and filters should be selected based on their ability to consistently achieve low leukocyte residual levels as well as their ease of use. Physicians may order leukocyte-reduced blood components for specific patients, or the components may be used because of an established institutional transfusion policy. Nurses often participate in deciding on a filtration method, primarily based on ease of use. Understanding the considerations in selecting a filtration method will help nurses make appropriate decisions to ensure quality patient care.
Method and apparatus for wind turbine braking
Barbu, Corneliu [Laguna Hills, CA; Teichmann, Ralph [Nishkayuna, NY; Avagliano, Aaron [Houston, TX; Kammer, Leonardo Cesar [Niskayuna, NY; Pierce, Kirk Gee [Simpsonville, SC; Pesetsky, David Samuel [Greenville, SC; Gauchel, Peter [Muenster, DE
2009-02-10
A method for braking a wind turbine including at least one rotor blade coupled to a rotor. The method includes selectively controlling an angle of pitch of the at least one rotor blade with respect to a wind direction based on a design parameter of a component of the wind turbine to facilitate reducing a force induced into the wind turbine component as a result of braking.
Optimized Kernel Entropy Components.
Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau
2017-06-01
This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
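A minimal sketch of the underlying KECA ranking (the baseline the brief extends, not the OKECA rotation itself) is shown below; it sorts Gaussian-kernel eigenvectors by their Renyi-entropy contribution rather than by variance. The kernel width sigma is an assumed input.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def keca_features(X, n_components=2, sigma=1.0):
    """Rank kernel eigenvectors by entropy contribution (KECA-style) and project."""
    K = np.exp(-cdist(X, X, "sqeuclidean") / (2 * sigma ** 2))  # Gaussian kernel matrix
    eigvals, eigvecs = eigh(K)                                  # ascending eigenvalues
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    contrib = eigvals * (eigvecs.sum(axis=0) ** 2)              # entropy contribution per eigenvector
    top = np.argsort(contrib)[::-1][:n_components]
    return eigvecs[:, top] * np.sqrt(np.abs(eigvals[top]))      # kernel-PCA-style projections
```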
Spectrophotometric Determination of Phenolic Antioxidants in the Presence of Thiols and Proteins.
Avan, Aslı Neslihan; Demirci Çekiç, Sema; Uzunboy, Seda; Apak, Reşat
2016-08-12
Development of easy, practical, and low-cost spectrophotometric methods is required for the selective determination of phenolic antioxidants in the presence of other similar substances. As electron transfer (ET)-based total antioxidant capacity (TAC) assays generally measure the reducing ability of antioxidant compounds, thiols and phenols cannot be differentiated since they are both responsive to the probe reagent. In this study, three of the most common TAC determination methods, namely cupric ion reducing antioxidant capacity (CUPRAC), 2,2'-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) diammonium salt/trolox equivalent antioxidant capacity (ABTS/TEAC), and ferric reducing antioxidant power (FRAP), were tested for the assay of phenolics in the presence of selected thiol and protein compounds. Although the FRAP method is almost non-responsive to thiol compounds individually, surprising overoxidations with large positive deviations from additivity were observed when using this method for (phenols + thiols) mixtures. Among the tested TAC methods, CUPRAC gave the most additive results for all studied (phenol + thiol) and (phenol + protein) mixtures with minimal relative error. As ABTS/TEAC and FRAP methods gave small and large deviations, respectively, from additivity of absorbances arising from these components in mixtures, mercury(II) compounds were added to stabilize the thiol components in the form of Hg(II)-thiol complexes so as to enable selective spectrophotometric determination of phenolic components. This error compensation was most efficient for the FRAP method in testing (thiols + phenols) mixtures.
Ding, Shujing; Dudley, Ed; Plummer, Sue; Tang, Jiandong; Newton, Russell P; Brenton, A Gareth
2006-01-01
A reversed-phase high-performance liquid chromatography/electrospray ionisation mass spectrometry (RP-HPLC/ESI-MS) method was developed and validated for the simultaneous determination of ten major active components in Ginkgo biloba extract (bilobalide, ginkgolides A, B, C, quercetin, kaempferol, isorhamnetin, rutin hydrate, quercetin-3-beta-D-glucoside and quercitrin hydrate) which have not been previously reported to be quantified in a single analysis. The ten components exhibit baseline separation in 50 min by C18 chromatography using a water/1:1 (v/v) methanol/acetonitrile gradient. Quantitation was performed using negative ESI-MS in selected ion monitoring (SIM) mode. Good reproducibility and recovery were obtained by this method. The sensitivity of both UV and different mass spectrometry modes (full scan, selected ion monitoring (SIM), and selected reaction monitoring (SRM)) were compared and both quantitation with and without internal standard were evaluated. The analysis of Ginkgo biloba commercial products showed remarkable variations in the rutin and quercetin content as well as the terpene lactone contents although all the products satisfy the conventional quality control method. Copyright 2006 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Stolc, Viktor (Inventor); Brock, Mathew W. (Inventor)
2011-01-01
Method and system for rapid and accurate determination of each of a sequence of unknown polymer components, such as nucleic acid components. A self-assembling monolayer of a selected substance is optionally provided on an interior surface of a pipette tip, and the interior surface is immersed in a selected liquid. A selected electrical field is impressed in a longitudinal or transverse direction at the tip, a polymer sequence is passed through the tip, and a change in an electrical current signal is measured as each polymer component passes through the tip. Each measured change in electrical current signals is compared with a database of reference signals, with each reference signal identified with a polymer component, to identify the unknown polymer component. The tip preferably has a pore inner diameter of no more than about 40 nm and is prepared by heating and pulling a very small section of a glass tubing.
Adhesive bonding using variable frequency microwave energy
Lauf, Robert J.; McMillan, April D.; Paulauskas, Felix L.; Fathi, Zakaryae; Wei, Jianghua
1998-01-01
Methods of facilitating the adhesive bonding of various components with variable frequency microwave energy are disclosed. The time required to cure a polymeric adhesive is decreased by placing components to be bonded via the adhesive in a microwave heating apparatus having a multimode cavity and irradiated with microwaves of varying frequencies. Methods of uniformly heating various articles having conductive fibers disposed therein are provided. Microwave energy may be selectively oriented to enter an edge portion of an article having conductive fibers therein. An edge portion of an article having conductive fibers therein may be selectively shielded from microwave energy.
Adhesive bonding using variable frequency microwave energy
Lauf, R.J.; McMillan, A.D.; Paulauskas, F.L.; Fathi, Z.; Wei, J.
1998-08-25
Methods of facilitating the adhesive bonding of various components with variable frequency microwave energy are disclosed. The time required to cure a polymeric adhesive is decreased by placing components to be bonded via the adhesive in a microwave heating apparatus having a multimode cavity and irradiated with microwaves of varying frequencies. Methods of uniformly heating various articles having conductive fibers disposed therein are provided. Microwave energy may be selectively oriented to enter an edge portion of an article having conductive fibers therein. An edge portion of an article having conductive fibers therein may be selectively shielded from microwave energy. 26 figs.
Jensen, Jacob S; Egebo, Max; Meyer, Anne S
2008-05-28
Accomplishment of fast tannin measurements is receiving increased interest as tannins are important for the mouthfeel and color properties of red wines. Fourier transform mid-infrared spectroscopy allows fast measurement of different wine components, but quantification of tannins is difficult due to interferences from spectral responses of other wine components. Four different variable selection tools were investigated for the identification of the most important spectral regions which would allow quantification of tannins from the spectra using partial least-squares regression. The study included the development of a new variable selection tool, iterative backward elimination of changeable size intervals PLS. The spectral regions identified by the different variable selection methods were not identical, but all included two regions (1485-1425 and 1060-995 cm(-1)), which therefore were concluded to be particularly important for tannin quantification. The spectral regions identified from the variable selection methods were used to develop calibration models. All four variable selection methods identified regions that allowed an improved quantitative prediction of tannins (RMSEP = 69-79 mg of CE/L; r = 0.93-0.94) as compared to a calibration model developed using all variables (RMSEP = 115 mg of CE/L; r = 0.87). Only minor differences in the performance of the variable selection methods were observed.
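To illustrate how selected intervals can be compared against the full spectrum (a sketch with assumed variable names; the two wavenumber regions are the ones identified above), a partial least-squares model can be cross-validated on each variable subset:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_squared_error

def rmsecv(X, y, n_components=5):
    """Root-mean-square error of 10-fold cross-validated PLS predictions."""
    y_hat = cross_val_predict(PLSRegression(n_components=n_components), X, y, cv=10)
    return np.sqrt(mean_squared_error(y, y_hat))

def region_mask(wavenumbers, regions=((1485, 1425), (1060, 995))):
    """Boolean mask keeping only the selected spectral intervals (cm-1)."""
    mask = np.zeros_like(wavenumbers, dtype=bool)
    for hi, lo in regions:
        mask |= (wavenumbers >= lo) & (wavenumbers <= hi)
    return mask

# Usage (X: absorbance matrix, wavenumbers: column axis, tannin: reference values):
# rmse_full = rmsecv(X, tannin)
# rmse_sel  = rmsecv(X[:, region_mask(wavenumbers)], tannin)
```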
Detection of Hydroxyapatite in Calcified Cardiovascular Tissues
Lee, Jae Sam; Morrisett, Joel D.; Tung, Ching-Hsuan
2012-01-01
Objective The objective of this study is to develop a method for selective detection of the calcific (hydroxyapatite) component in human aortic smooth muscle cells in vitro and in calcified cardiovascular tissues ex vivo. This method uses a novel optical molecular imaging contrast dye, Cy-HABP-19, to target calcified cells and tissues. Methods A peptide that mimics the binding affinity of osteocalcin was used to label hydroxyapatite in vitro and ex vivo. Morphological changes in vascular smooth muscle cells were evaluated at an early stage of the mineralization process induced by extrinsic stimuli, osteogenic factors and a magnetic suspension cell culture. Hydroxyapatite components were detected in monolayers of these cells in the presence of osteogenic factors and a magnetic suspension environment. Results Atherosclerotic plaque contains multiple components including lipidic, fibrotic, thrombotic, and calcific materials. Using optical imaging and the Cy-HABP-19 molecular imaging probe, we demonstrated that hydroxyapatite components could be selectively distinguished from various calcium salts in human aortic smooth muscle cells in vitro and in calcified cardiovascular tissues, carotid endarterectomy samples and aortic valves, ex vivo. Conclusion Hydroxyapatite deposits in cardiovascular tissues were selectively detected in the early stage of the calcification process using our Cy-HABP-19 probe. This new probe makes it possible to study the earliest events associated with vascular hydroxyapatite deposition at the cellular and molecular levels. This target-selective molecular imaging probe approach holds high potential for revealing early pathophysiological changes, leading to progression, regression, or stabilization of cardiovascular diseases. PMID:22877867
A method to estimate weight and dimensions of large and small gas turbine engines
NASA Technical Reports Server (NTRS)
Onat, E.; Klees, G. W.
1979-01-01
A computerized method was developed to estimate weight and envelope dimensions of large and small gas turbine engines within ±5% to 10%. The method is based on correlations of component weight and design features of 29 data base engines. Rotating components were estimated by a preliminary design procedure which is sensitive to blade geometry, operating conditions, material properties, shaft speed, hub tip ratio, etc. The development and justification of the method selected, and the various methods of analysis are discussed.
System comprising interchangeable electronic controllers and corresponding methods
NASA Technical Reports Server (NTRS)
Steele, Glen F. (Inventor); Salazar, George A. (Inventor)
2009-01-01
A system comprising an interchangeable electronic controller is provided with programming that allows the controller to adapt a behavior that is dependent upon the particular type of function performed by a system or subsystem component. The system reconfigures the controller when the controller is moved from one group of subsystem components to another. A plurality of application programs are provided by a server from which the application program for a particular electronic controller is selected. The selection is based on criteria such as a subsystem component group identifier that identifies the particular type of function associated with the system or subsystem group of components.
NASA Technical Reports Server (NTRS)
Stolc, Viktor (Inventor); Brock, Matthew W (Inventor)
2013-01-01
Method and system for rapid and accurate determination of each of a sequence of unknown polymer components, such as nucleic acid components. A self-assembling monolayer of a selected substance is optionally provided on an interior surface of a pipette tip, and the interior surface is immersed in a selected liquid. A selected electrical field is impressed in a longitudinal direction, or in a transverse direction, in the tip region, a polymer sequence is passed through the tip region, and a change in an electrical current signal is measured as each polymer component passes through the tip region. Each of the measured changes in electrical current signals is compared with a database of reference electrical change signals, with each reference signal corresponding to an identified polymer component, to identify the unknown polymer component with a reference polymer component. The nanopore preferably has a pore inner diameter of no more than about 40 nm and is prepared by heating and pulling a very small section of a glass tubing.
Classification of independent components of EEG into multiple artifact classes.
Frølich, Laura; Andersen, Tobias S; Mørup, Morten
2015-01-01
In this study, we aim to automatically identify multiple artifact types in EEG. We used multinomial regression to classify independent components of EEG data, selecting from 65 spatial, spectral, and temporal features of independent components using forward selection. The classifier identified neural and five nonneural types of components. Between subjects within studies, high classification performances were obtained. Between studies, however, classification was more difficult. For neural versus nonneural classifications, performance was on par with previous results obtained by others. We found that automatic separation of multiple artifact classes is possible with a small feature set. Our method can reduce manual workload and allow for the selective removal of artifact classes. Identifying artifacts during EEG recording may be used to instruct subjects to refrain from activity causing them. Copyright © 2014 Society for Psychophysiological Research.
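A compact sketch of that classification scheme (multinomial regression with forward feature selection) using scikit-learn; the feature matrix, class labels and number of retained features below are placeholders, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one row per independent component (spatial, spectral, temporal features)
# y: component class label (neural plus several artifact classes)
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 65))        # placeholder feature matrix
y = rng.integers(0, 6, size=300)      # placeholder class labels

clf = LogisticRegression(max_iter=2000)   # multinomial for multi-class with the lbfgs solver
selector = SequentialFeatureSelector(clf, n_features_to_select=10, direction="forward", cv=5)
model = make_pipeline(StandardScaler(), selector, clf)
model.fit(X, y)
chosen = model.named_steps["sequentialfeatureselector"].get_support()
print("selected feature indices:", np.where(chosen)[0])
```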
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Hu, Gang; Hu, Kai
2018-01-01
The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either by their number or their frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and residue component. Second, for the IMF component, a selection and weighted average strategy based on local area energy is used to obtain a high-frequency fusion component. Third, for the residue component, a selection and weighted average strategy based on local average gray difference is used to obtain a low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on wavelet transform, line and column-based EMD, and complex empirical mode decomposition, both in terms of visual quality and objective evaluation criteria.
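One way to realize the "selection and weighted average" rule for the IMF (high-frequency) components, sketched here with local energy computed by a moving-average filter (the window size and dominance ratio are illustrative assumptions, not the paper's parameters):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_imf(imf_a, imf_b, win=3, ratio=1.5):
    """Select-or-weighted-average fusion of two high-frequency IMF images."""
    e_a = uniform_filter(imf_a ** 2, size=win) + 1e-12   # local region energy, image A
    e_b = uniform_filter(imf_b ** 2, size=win) + 1e-12   # local region energy, image B
    w_a = e_a / (e_a + e_b)                              # energy-based weights
    fused = w_a * imf_a + (1 - w_a) * imf_b              # weighted-average branch
    fused = np.where(e_a > ratio * e_b, imf_a, fused)    # select A where it clearly dominates
    fused = np.where(e_b > ratio * e_a, imf_b, fused)    # select B where it clearly dominates
    return fused
```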
A new method of hybrid frequency hopping signals selection and blind parameter estimation
NASA Astrophysics Data System (ADS)
Zeng, Xiaoyu; Jiao, Wencheng; Sun, Huixian
2018-04-01
Frequency hopping communication is widely used in military communications at home and abroad. In the case of single-channel reception, few methods can process multiple frequency hopping signals both effectively and simultaneously. A method of hybrid FH signal selection and blind parameter estimation is proposed. The method makes use of spectral transformation, spectral entropy calculation and basic PRI transformation theory to realize the sorting and parameter estimation of the components in the hybrid frequency hopping signal. The simulation results show that this method can correctly classify the frequency hopping component signals, with an estimation error of about 5% for the frequency hopping period and less than 1% for the frequency hopping frequency when the SNR is 10 dB. However, the performance of this method deteriorates seriously at low SNR.
Embedding Dimension Selection for Adaptive Singular Spectrum Analysis of EEG Signal.
Xu, Shanzhi; Hu, Hai; Ji, Linhong; Wang, Peng
2018-02-26
The recorded electroencephalography (EEG) signal is often contaminated with different kinds of artifacts and noise. Singular spectrum analysis (SSA) is a powerful tool for extracting the brain rhythm from a noisy EEG signal. By analyzing the frequency characteristics of the reconstructed component (RC) and the change rate in the trace of the Toeplitz matrix, it is demonstrated that the embedding dimension is related to the frequency bandwidth of each reconstructed component, in consistence with the component mixing in the singular value decomposition step. A method for selecting the embedding dimension is thereby proposed and verified by simulated EEG signal based on the Markov Process Amplitude (MPA) EEG Model. Real EEG signal is also collected from the experimental subjects under both eyes-open and eyes-closed conditions. The experimental results show that based on the embedding dimension selection method, the alpha rhythm can be extracted from the real EEG signal by the adaptive SSA, which can be effectively utilized to distinguish between the eyes-open and eyes-closed states.
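For reference, a bare-bones SSA decomposition in Python (not the adaptive scheme of the paper): embed the signal with window length L, decompose the trajectory matrix, and reconstruct grouped components by anti-diagonal averaging. The embedding dimension L controls the frequency bandwidth of each reconstructed component, which is the quantity the proposed selection method tunes.

```python
import numpy as np

def ssa_reconstruct(x, L, groups):
    """Basic singular spectrum analysis with grouped reconstruction."""
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # L x K trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    out = []
    for idx in groups:                                     # e.g. groups = [[0, 1], [2, 3]]
        Xg = (U[:, idx] * s[idx]) @ Vt[idx, :]             # low-rank component of X
        rec = np.zeros(N)
        cnt = np.zeros(N)
        for j in range(K):                                 # anti-diagonal averaging back to a series
            rec[j:j + L] += Xg[:, j]
            cnt[j:j + L] += 1
        out.append(rec / cnt)
    return out
```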
Halder, Sebastian; Bensch, Michael; Mellinger, Jürgen; Bogdan, Martin; Kübler, Andrea; Birbaumer, Niels; Rosenstiel, Wolfgang
2007-01-01
We propose a combination of blind source separation (BSS) and independent component analysis (ICA) (signal decomposition into artifacts and nonartifacts) with support vector machines (SVMs) (automatic classification) that are designed for online usage. In order to select a suitable BSS/ICA method, three ICA algorithms (JADE, Infomax, and FastICA) and one BSS algorithm (AMUSE) are evaluated to determine their ability to isolate electromyographic (EMG) and electrooculographic (EOG) artifacts into individual components. An implementation of the selected BSS/ICA method with SVMs trained to classify EMG and EOG artifacts, which enables the usage of the method as a filter in measurements with online feedback, is described. This filter is evaluated on three BCI datasets as a proof-of-concept of the method. PMID:18288259
Biochemical and nutritional components of selected honey samples.
Chua, Lee Suan; Adnan, Nur Ardawati
2014-01-01
The purpose of this study was to investigate the relationship of biochemical (enzymes) and nutritional components in selected honey samples from Malaysia. The relationship is important for estimating the quality of honey based on the concentration of these nutritious components. Such studies are limited for honey samples from tropical countries with heavy rainfall throughout the year. Six honey samples commonly consumed by local people were collected for the study. Both the biochemical and nutritional components were analysed using standard methods from the Association of Official Analytical Chemists (AOAC). Individual monosaccharides, disaccharides and 17 amino acids in honey were determined using a liquid chromatographic method. The results showed that the peroxide activity was positively correlated with moisture content (r = 0.8264), but negatively correlated with carbohydrate content (r = 0.7755) in honey. The chromatographic sugar and free amino acid profiles showed that the honey samples could be clustered based on the type and maturity of honey. Proline explained 64.9% of the total variance in principal component analysis (PCA). The correlation between honey components and honey quality was established for the selected honey samples based on their biochemical and nutritional concentrations. PCA results revealed that the ratio of sucrose to maltose could be used to measure honey maturity, whereas proline was the marker compound used to distinguish honey as either floral or honeydew.
Hydrological predictions at a watershed scale are commonly based on extrapolation and upscaling of hydrological behavior at plot and hillslope scales. Yet, dominant hydrological drivers at a hillslope may not be as dominant at the watershed scale because of the heterogeneity of w...
Li, Hailiang; Cui, Xiaoli; Tong, Yan; Gong, Muxin
2012-04-01
To compare the inclusion effects and process conditions of two preparation methods, colloid mill and saturated solution, for beta-CD inclusion compounds of four traditional Chinese medicine volatile oils, and to study the relationship between each process condition and the physical properties of the volatile oils as well as the regularity of selective inclusion of volatile oil components. Volatile oils from Nardostachyos Radix et Rhizoma, Amomi Fructus, Zingiberis Rhizoma and Angelicae Sinensis Radix were prepared using the two methods in an orthogonal test. The inclusion compounds obtained by the optimized processes were assessed and compared by methods such as TLC, IR and scanning electron microscopy. Included oils were extracted by steam distillation, and the components found before and after inclusion were analyzed by GC-MS. Analysis showed that new inclusion compounds were formed, but the inclusion compounds prepared by the two processes differed to some extent. The colloid mill method showed a better inclusion effect than the saturated solution method, and the process conditions were related to the physical properties of the volatile oils. There were differences in the inclusion selectivity of components between the two methods. The colloid mill method for inclusion preparation is more suitable for industrial requirements. To prepare inclusion compounds of volatile oils with high specific gravity and high refractive index, the colloid mill method needs a longer time and more water, whereas the saturated solution method requires a higher temperature and more beta-cyclodextrin. The inclusion complex prepared with the colloid mill method covers a wider molecular weight range of chemical components, but the number of component types is reduced.
An odorant congruent with a colour cue is selectively perceived in an odour mixture.
Arao, Mari; Suzuki, Maya; Katayama, Jun'ich; Akihiro, Yagi
2012-01-01
Odour identification can be influenced by colour cues. This study examined the mechanism underlying this colour context effect. We hypothesised that a specific odour component congruent with a colour would be selectively perceived in preference to another odour component in a binary odour mixture. We used a ratio estimation method under two colour conditions, a binary odour mixture (experiment 1) and single chemicals presented individually (experiment 2). Each colour was congruent with one of the odour components. Participants judged the perceived mixture ratio in each odour container on which a colour patch was pasted. An influence of colour was not observed when the odour stimulus did not contain the odour component congruent with the colour (experiment 2); however, the odour component congruent with the colour was perceived as more dominant when the odour stimulus did contain the colour-congruent odorant (experiment 1). This pattern indicates that a colour-congruent odour component is selectively perceived in an odour mixture. This finding suggests that colours can enhance the perceptual representation of the colour-associated component in an odour mixture.
Simulating the component counts of combinatorial structures.
Arratia, Richard; Barbour, A D; Ewens, W J; Tavaré, Simon
2018-02-09
This article describes and compares methods for simulating the component counts of random logarithmic combinatorial structures such as permutations and mappings. We exploit the Feller coupling for simulating permutations to provide a very fast method for simulating logarithmic assemblies more generally. For logarithmic multisets and selections, this approach is replaced by an acceptance/rejection method based on a particular conditioning relationship that represents the distribution of the combinatorial structure as that of independent random variables conditioned on a weighted sum. We show how to improve its acceptance rate. We illustrate the method by estimating the probability that a random mapping has no repeated component sizes, and establish the asymptotic distribution of the difference between the number of components and the number of distinct component sizes for a very general class of logarithmic structures. Copyright © 2018. Published by Elsevier Inc.
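For the permutation case, the Feller coupling mentioned above admits a very short simulation: independent Bernoulli(1/i) indicators are drawn, a final 1 is appended, and the spacings between successive 1s give the cycle counts. The sketch below follows the standard statement of the coupling and is offered as an illustration rather than the authors' code.

```python
import numpy as np

def cycle_counts_feller(n, rng=None):
    """Cycle counts of a uniform random permutation of n via the Feller coupling."""
    rng = np.random.default_rng() if rng is None else rng
    # independent Bernoulli(1/i) indicators for i = 1..n, followed by a forced 1
    xi = np.concatenate([rng.random(n) < 1.0 / np.arange(1, n + 1), [True]])
    ones = np.flatnonzero(xi)
    spacings = np.diff(ones)               # a spacing of length j corresponds to a j-cycle
    counts = np.zeros(n + 1, dtype=int)
    for j in spacings:
        counts[j] += 1
    return counts[1:]                      # element j-1 of the result = number of j-cycles
```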
Calibration method and apparatus for measuring the concentration of components in a fluid
Durham, M.D.; Sagan, F.J.; Burkhardt, M.R.
1993-12-21
A calibration method and apparatus for use in measuring the concentrations of components of a fluid is provided. The measurements are determined from the intensity of radiation over a selected range of radiation wavelengths using peak-to-trough calculations. The peak-to-trough calculations are simplified by compensating for radiation absorption by the apparatus. The invention also allows absorption characteristics of an interfering fluid component to be accurately determined and negated thereby facilitating analysis of the fluid. 7 figures.
Calibration method and apparatus for measuring the concentration of components in a fluid
Durham, Michael D.; Sagan, Francis J.; Burkhardt, Mark R.
1993-01-01
A calibration method and apparatus for use in measuring the concentrations of components of a fluid is provided. The measurements are determined from the intensity of radiation over a selected range of radiation wavelengths using peak-to-trough calculations. The peak-to-trough calculations are simplified by compensating for radiation absorption by the apparatus. The invention also allows absorption characteristics of an interfering fluid component to be accurately determined and negated thereby facilitating analysis of the fluid.
Spectrophotometric Determination of Phenolic Antioxidants in the Presence of Thiols and Proteins
Avan, Aslı Neslihan; Demirci Çekiç, Sema; Uzunboy, Seda; Apak, Reşat
2016-01-01
Development of easy, practical, and low-cost spectrophotometric methods is required for the selective determination of phenolic antioxidants in the presence of other similar substances. As electron transfer (ET)-based total antioxidant capacity (TAC) assays generally measure the reducing ability of antioxidant compounds, thiols and phenols cannot be differentiated since they are both responsive to the probe reagent. In this study, three of the most common TAC determination methods, namely cupric ion reducing antioxidant capacity (CUPRAC), 2,2′-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) diammonium salt/trolox equivalent antioxidant capacity (ABTS/TEAC), and ferric reducing antioxidant power (FRAP), were tested for the assay of phenolics in the presence of selected thiol and protein compounds. Although the FRAP method is almost non-responsive to thiol compounds individually, surprising overoxidations with large positive deviations from additivity were observed when using this method for (phenols + thiols) mixtures. Among the tested TAC methods, CUPRAC gave the most additive results for all studied (phenol + thiol) and (phenol + protein) mixtures with minimal relative error. As ABTS/TEAC and FRAP methods gave small and large deviations, respectively, from additivity of absorbances arising from these components in mixtures, mercury(II) compounds were added to stabilize the thiol components in the form of Hg(II)-thiol complexes so as to enable selective spectrophotometric determination of phenolic components. This error compensation was most efficient for the FRAP method in testing (thiols + phenols) mixtures. PMID:27529232
Method for directional hydraulic fracturing
Swanson, David E.; Daly, Daniel W.
1994-01-01
A method for directional hydraulic fracturing using borehole seals to confine pressurized fluid in planar permeable regions, comprising: placing a sealant in the hole of a structure selected from geologic or cemented formations to fill the space between a permeable planar component and the geologic or cemented formation in the vicinity of the permeable planar component; making a hydraulic connection between the permeable planar component and a pump; permitting the sealant to cure and thereby provide both mechanical and hydraulic confinement to the permeable planar component; and pumping a fluid from the pump into the permeable planar component to internally pressurize the permeable planar component to initiate a fracture in the formation, the fracture being disposed in the same orientation as the permeable planar component.
Selective inhibition of a multicomponent response can be achieved without cost
Westrick, Zachary; Ivry, Richard B.
2014-01-01
Behavioral flexibility frequently requires the ability to modify an on-going action. In some situations, optimal performance requires modifying some components of an on-going action without interrupting other components of that action. This form of control has been studied with the selective stop-signal task, in which participants are instructed to abort only one movement of a multicomponent response. Previous studies have shown a transient disruption of the nonaborted component, suggesting limitations in our ability to use selective inhibition. This cost has been attributed to a structural limitation associated with the recruitment of a cortico-basal ganglia pathway that allows for the rapid inhibition of action but operates in a relatively generic manner. Using a model-based approach, we demonstrate that, with a modest amount of training and highly compatible stimulus-response mappings, people can perform a selective-stop task without any cost on the nonaborted component. Prior reports of behavioral costs in selective-stop tasks reflect, at least in part, a sampling bias in the method commonly used to estimate such costs. These results suggest that inhibition can be selectively controlled and present a challenge for models of inhibitory control that posit the operation of generic processes. PMID:25339712
[Study of Determination of Oil Mixture Components Content Based on Quasi-Monte Carlo Method].
Wang, Yu-tian; Xu, Jing; Liu, Xiao-fei; Chen, Meng-han; Wang, Shi-tao
2015-05-01
Gasoline, kerosene, and diesel are processed from crude oil over different distillation ranges. The boiling range of gasoline is 35~205 °C, that of kerosene is 140~250 °C, and that of diesel is 180~370 °C. At the same time, the carbon chain length of each mineral oil is different: gasoline lies within the range C7 to C11, kerosene within C12 to C15, and diesel within C15 to C18. The recognition and quantitative measurement of the three kinds of mineral oil are based on the different fluorescence spectra formed by their different carbon number distribution characteristics. Mineral oil pollution occurs frequently, so monitoring mineral oil content in the ocean is very important. A new method for determining the component contents of a spectrally overlapping mineral oil mixture is proposed, based on the calculation of characteristic peak power integrals of the three-dimensional fluorescence spectrum using the Quasi-Monte Carlo method, combined with an optimization algorithm that solves for the optimum number of characteristic peaks and the range of the integration region, and with the resulting nonlinear equations solved by the BFGS method (a rank-two update method named after the first letters of its inventors' surnames: Broyden, Fletcher, Goldfarb and Shanno). The accumulated peak power over the determined points in the selected area is sensitive to small changes in the fluorescence spectral line, so the measurement is sensitive to small changes in component content. At the same time, compared with single-point measurement, measurement sensitivity is improved because the use of multiple points reduces the influence of random error. Three-dimensional fluorescence spectra and fluorescence contour spectra of the single mineral oils and of the mixture were measured, taking kerosene, diesel and gasoline as research objects, with each mineral oil regarded as a whole rather than considering its individual components. Six characteristic peaks were selected for characteristic peak power integration to determine the component contents of the gasoline, kerosene and diesel mixture by the optimization algorithm. Compared with the single-point peak method and the mean method, measurement sensitivity is improved by about 50 times. The implementation of high-precision measurement of the component contents of a gasoline, kerosene and diesel mixture provides a practical algorithm for the direct determination of component contents in spectrally overlapping mixtures without chemical separation.
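As an illustration of the quasi-Monte Carlo integration step (the interpolated spectrum, region bounds and sample size below are hypothetical placeholders), the power of one characteristic peak can be estimated with a Sobol sequence over the selected excitation-emission rectangle:

```python
import numpy as np
from scipy.stats import qmc

def peak_power_qmc(eem_interp, ex_range, em_range, m=12, seed=0):
    """Quasi-Monte Carlo estimate of integrated intensity over one peak region.

    eem_interp: callable (ex, em) -> intensity, e.g. an interpolant of the 3-D spectrum.
    ex_range, em_range: (low, high) bounds of the characteristic-peak region.
    """
    sampler = qmc.Sobol(d=2, scramble=True, seed=seed)
    u = sampler.random_base2(m=m)                          # 2**m low-discrepancy points in [0, 1)^2
    pts = qmc.scale(u, [ex_range[0], em_range[0]], [ex_range[1], em_range[1]])
    vals = np.array([eem_interp(ex, em) for ex, em in pts])
    area = (ex_range[1] - ex_range[0]) * (em_range[1] - em_range[0])
    return area * vals.mean()
```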
Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao
2015-01-01
Due to advances in sensor technology, the growing volume of large medical image data makes it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, however, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing the irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which in practice are difficult to determine. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383
Fast Principal-Component Analysis Reveals Convergent Evolution of ADH1B in Europe and East Asia
Galinsky, Kevin J.; Bhatia, Gaurav; Loh, Po-Ru; Georgiev, Stoyan; Mukherjee, Sayan; Patterson, Nick J.; Price, Alkes L.
2016-01-01
Searching for genetic variants with unusual differentiation between subpopulations is an established approach for identifying signals of natural selection. However, existing methods generally require discrete subpopulations. We introduce a method that infers selection using principal components (PCs) by identifying variants whose differentiation along top PCs is significantly greater than the null distribution of genetic drift. To enable the application of this method to large datasets, we developed the FastPCA software, which employs recent advances in random matrix theory to accurately approximate top PCs while reducing time and memory cost from quadratic to linear in the number of individuals, a computational improvement of many orders of magnitude. We apply FastPCA to a cohort of 54,734 European Americans, identifying 5 distinct subpopulations spanning the top 4 PCs. Using the PC-based test for natural selection, we replicate previously known selected loci and identify three new genome-wide significant signals of selection, including selection in Europeans at ADH1B. The coding variant rs1229984∗T has previously been associated to a decreased risk of alcoholism and shown to be under selection in East Asians; we show that it is a rare example of independent evolution on two continents. We also detect selection signals at IGFBP3 and IGH, which have also previously been associated to human disease. PMID:26924531
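The computational core (top PCs via a randomized solver, then a per-variant differentiation statistic along a PC) can be sketched as follows; this uses scikit-learn's randomized PCA rather than the FastPCA software, and the genotype matrix and statistic are simplified placeholders:

```python
import numpy as np
from sklearn.decomposition import PCA

# G: genotype matrix (individuals x variants), mean-centered per variant
rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(1000, 5000)).astype(float)   # placeholder genotypes (0/1/2)
G -= G.mean(axis=0)

# randomized solver: cost roughly linear in the number of individuals
pca = PCA(n_components=4, svd_solver="randomized", random_state=0)
pcs = pca.fit_transform(G)                                 # top 4 PCs per individual

# illustrative differentiation statistic: N * (squared correlation of each variant with PC1)
pc1 = (pcs[:, 0] - pcs[:, 0].mean()) / pcs[:, 0].std()
r = G.T @ pc1 / (np.linalg.norm(G, axis=0) * np.linalg.norm(pc1) + 1e-12)
stat = len(pc1) * r ** 2                                   # large values flag unusually differentiated variants
```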
Resonance ionization for analytical spectroscopy
Hurst, George S.; Payne, Marvin G.; Wagner, Edward B.
1976-01-01
This invention relates to a method for the sensitive and selective analysis of an atomic or molecular component of a gas. According to this method, the desired neutral component is ionized by one or more resonance photon absorptions, and the resultant ions are measured in a sensitive counter. Numerous energy pathways are described for accomplishing the ionization including the use of one or two tunable pulsed dye lasers.
Composite Load Spectra for Select Space Propulsion Structural Components
NASA Technical Reports Server (NTRS)
Ho, Hing W.; Newell, James F.
1994-01-01
Generic load models are described with multiple levels of progressive sophistication to simulate the composite (combined) load spectra (CLS) that are induced in space propulsion system components representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades and liquid oxygen (LOX) posts. These generic (coupled) models combine deterministic models for the simulation of composite loads (dynamic, acoustic, high-pressure, high rotational speed, etc.) with statistically varying coefficients. These coefficients are then determined using advanced probabilistic simulation methods, with and without strategically selected experimental data. The entire simulation process is included in a CLS computer code. Applications of the computer code to various components, in conjunction with the PSAM (Probabilistic Structural Analysis Method), to perform probabilistic load evaluation and life prediction evaluations are also described to illustrate the effectiveness of the coupled model approach.
Variable Selection through Correlation Sifting
NASA Astrophysics Data System (ADS)
Huang, Jim C.; Jojic, Nebojsa
Many applications of computational biology require a variable selection procedure to sift through a large number of input variables and select some smaller number that influence a target variable of interest. For example, in virology, only some small number of viral protein fragments influence the nature of the immune response during viral infection. Due to the large number of variables to be considered, a brute-force search for the subset of variables is in general intractable. To approximate this, methods based on ℓ1-regularized linear regression have been proposed and have been found to be particularly successful. It is well understood however that such methods fail to choose the correct subset of variables if these are highly correlated with other "decoy" variables. We present a method for sifting through sets of highly correlated variables which leads to higher accuracy in selecting the correct variables. The main innovation is a filtering step that reduces correlations among variables to be selected, making the ℓ1-regularization effective for datasets on which many methods for variable selection fail. The filtering step changes both the values of the predictor variables and output values by projections onto components obtained through a computationally-inexpensive principal components analysis. In this paper we demonstrate the usefulness of our method on synthetic datasets and on novel applications in virology. These include HIV viral load analysis based on patients' HIV sequences and immune types, as well as the analysis of seasonal variation in influenza death rates based on the regions of the influenza genome that undergo diversifying selection in the previous season.
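A minimal sketch of the sift-then-select idea, assuming the filtering step simply projects out a few top principal components from both the predictors and the response before an L1-regularized fit (the parameter choices are illustrative, not the paper's):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV

def sift_then_lasso(X, y, n_remove=2):
    """Project out dominant shared components, then run lasso variable selection."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    P = PCA(n_components=n_remove).fit(Xc).components_      # (n_remove, p) top directions
    scores = Xc @ P.T                                       # per-sample projections
    X_filt = Xc - scores @ P                                # predictors with shared structure removed
    beta = np.linalg.lstsq(scores, yc, rcond=None)[0]
    y_filt = yc - scores @ beta                             # response with the same structure removed
    lasso = LassoCV(cv=5).fit(X_filt, y_filt)
    return lasso.coef_                                      # nonzero entries = selected variables
```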
Autonomous learning in gesture recognition by using lobe component analysis
NASA Astrophysics Data System (ADS)
Lu, Jian; Weng, Juyang
2007-02-01
Gesture recognition is a new human-machine interface method implemented by pattern recognition (PR). In order to assure robot safety when gestures are used in robot control, the interface must be implemented reliably and accurately. As in other PR applications, 1) feature selection (or model establishment) and 2) training from samples largely determine the performance of gesture recognition. For 1), a simple model with 6 feature points at the shoulders, elbows, and hands is established. The gestures to be recognized are restricted to still arm gestures, and the movement of the arms is not considered. These restrictions reduce misrecognition and are not unreasonable. For 2), a new biological network method, called lobe component analysis (LCA), is used in unsupervised learning. Lobe components, corresponding to high concentrations in the probability of the neuronal input, are orientation-selective cells that follow the Hebbian rule and lateral inhibition. Due to the advantage of the LCA method for balanced learning between global and local features, a large number of samples can be used efficiently in learning.
Selective production of chemicals from biomass pyrolysis over metal chlorides supported on zeolite.
Leng, Shuai; Wang, Xinde; Cai, Qiuxia; Ma, Fengyun; Liu, Yue'e; Wang, Jianguo
2013-12-01
Direct biomass conversion into chemicals remains a great challenge because of the complexity of the compounds; hence, this process has attracted less attention than conversion into fuel. In this study, we propose a simple one-step method for converting bagasse into furfural (FF) and acetic acid (AC). In this method, bagasse pyrolysis over ZnCl2/HZSM-5 achieved a high FF and AC yield (58.10%) and an FF/AC ratio of 1.01, but a very low yield of medium-boiling point components. In contrast, bagasse pyrolysis over HZSM-5 alone or ZnCl2 alone still left large amounts of medium-boiling point or high-boiling point components. The synergistic effect of HZSM-5 and ZnCl2, which combines pyrolysis, zeolite cracking, and Lewis acid-selective catalysis, results in highly efficient bagasse conversion into FF and AC. Therefore, our study provides a novel, simple method for directly converting biomass into useful chemicals in high yield. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Peng, Cong; Lu, Yiming; Wang, Hao; Zhu, Kaiguang
2018-04-01
A novel technique is developed to level airborne geophysical data using principal component analysis based on flight line differences. In this paper, the flight line difference is introduced to enhance the features of the levelling error in airborne electromagnetic (AEM) data and to improve the correlation between pseudo tie lines. Levelling is therefore applied to the flight line difference data instead of directly to the original AEM data. Pseudo tie lines are selected so that they are distributed across the profile direction, avoiding anomalous regions. Since the levelling errors of the selected pseudo tie lines show high correlations, principal component analysis is applied to extract the local levelling errors by low-order principal component reconstruction. Furthermore, the levelling errors of the original AEM data are obtained through an inverse difference after spatial interpolation. This levelling method requires neither flying tie lines nor designing a levelling fitting function. The effectiveness of the method is demonstrated on survey data, in comparison with the results from tie-line levelling and flight-line correlation levelling.
A solid-state NMR method to determine domain sizes in multi-component polymer formulations
NASA Astrophysics Data System (ADS)
Schlagnitweit, Judith; Tang, Mingxue; Baias, Maria; Richardson, Sara; Schantz, Staffan; Emsley, Lyndon
2015-12-01
Polymer domain sizes are related to many of the physical properties of polymers. Here we present a solid-state NMR experiment that is capable of measuring domain sizes in multi-component mixtures. The method combines selective excitation of carbon magnetization to isolate a specific component with proton spin diffusion to report on domain size. We demonstrate the method in the context of controlled release formulations, which represents one of today's challenges in pharmaceutical science. We show that we can measure domain sizes of interest in the different components of industrial pharmaceutical formulations at natural isotopic abundance containing various (modified) cellulose derivatives, such as microcrystalline cellulose matrixes that are film-coated with a mixture of ethyl cellulose (EC) and hydroxypropyl cellulose (HPC).
Composite load spectra for select space propulsion structural components
NASA Technical Reports Server (NTRS)
Newell, J. F.; Ho, H. W.; Kurth, R. E.
1991-01-01
The work performed to develop composite load spectra (CLS) for the Space Shuttle Main Engine (SSME) using probabilistic methods is described. Three methods were implemented for the engine system influence model; RASCAL was chosen as the principal method because most component load models were implemented with it. Validation of RASCAL was performed, and accuracy comparable to the Monte Carlo method can be obtained if a sufficiently large bin size is used. Generic probabilistic models were developed and implemented for load calculations using the probabilistic methods discussed above. Each engine mission, whether a real flight or a test, has three mission phases: the engine start transient phase, the steady-state phase, and the engine cutoff transient phase. Power level and engine operating inlet conditions change during a mission. The load calculation module provides steady-state and quasi-steady-state calculation procedures with a duty-cycle-data option; the quasi-steady-state procedure is used for the engine transient phases. In addition, a few generic probabilistic load models were developed for specific conditions, including the fixed transient spike model, the Poisson arrival transient spike model, and the rare event model. These generic probabilistic load models provide sufficient latitude for simulating loads under specific conditions. For the SSME, turbine blades, transfer ducts, the LOX post, and the high pressure oxidizer turbopump (HPOTP) discharge duct were selected for application of the CLS program. The loads include static and dynamic pressure loads for all four components, centrifugal force for the turbine blades, thermal loads for all four components, and structural vibration loads for the ducts and LOX posts.
Selective adsorption of flavor-active components on hydrophobic resins.
Saffarionpour, Shima; Sevillano, David Mendez; Van der Wielen, Luuk A M; Noordman, T Reinoud; Brouwer, Eric; Ottens, Marcel
2016-12-09
This work aims to propose an optimum resin that can be used in an industrial adsorption process for tuning flavor-active components or removing ethanol to produce alcohol-free beer. A procedure is reported for selective adsorption of volatile aroma components from water/ethanol mixtures on synthetic hydrophobic resins. High-throughput 96-well microtiter-plate batch uptake experimentation is applied to screen resins for adsorption of esters (i.e. isoamyl acetate and ethyl acetate), higher alcohols (i.e. isoamyl alcohol and isobutyl alcohol), a diketone (diacetyl) and ethanol. The miniaturized batch uptake method is adapted for adsorption of volatile components and validated with column breakthrough analysis. The results of single-component adsorption tests on Sepabeads SP20-SS are expressed in single-component Langmuir, Freundlich, and Sips isotherm models, and multi-component versions of the Langmuir and Sips models are applied to express multi-component adsorption results obtained on several tested resins. The adsorption parameters are regressed and the selectivity over ethanol is calculated for each tested component and resin. Resin scores for four different scenarios of selective adsorption of esters, higher alcohols, diacetyl, and ethanol are obtained. The optimal resin for adsorption of esters is Sepabeads SP20-SS with a resin score of 87%; for selective removal of higher alcohols, XAD16N and XAD4 from the Amberlite resin series are proposed with scores of 80% and 74%, respectively. For adsorption of diacetyl, the XAD16N and XAD4 resins with a score of 86% are the optimum choice, and the Sepabeads SP2MGS and XAD761 resins showed the highest affinity towards ethanol. Copyright © 2016 Elsevier B.V. All rights reserved.
Nuclear fuel alloys or mixtures and method of making thereof
Mariani, Robert Dominick; Porter, Douglas Lloyd
2016-04-05
Nuclear fuel alloys or mixtures and methods of making nuclear fuel mixtures are provided. Pseudo-binary actinide-M fuel mixtures form alloys and exhibit: body-centered cubic solid phases at low temperatures; high solidus temperatures; and/or minimal or no reaction or inter-diffusion with steel and other cladding materials. Methods described herein through metallurgical and thermodynamics advancements guide the selection of amounts of fuel mixture components by use of phase diagrams. Weight percentages for components of a metallic additive to an actinide fuel are selected in a solid phase region of an isothermal phase diagram taken at a temperature below an upper temperature limit for the resulting fuel mixture in reactor use. Fuel mixtures include uranium-molybdenum-tungsten, uranium-molybdenum-tantalum, molybdenum-titanium-zirconium, and uranium-molybdenum-titanium systems.
Computational composite mechanics for aerospace propulsion structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1986-01-01
Specialty methods are presented for the computational simulation of specific composite behavior. These methods encompass all aspects of composite mechanics, impact, progressive fracture and component specific simulation. Some of these methods are structured to computationally simulate, in parallel, the composite behavior and history from the initial fabrication through several missions and even to fracture. Select methods and typical results obtained from such simulations are described in detail in order to demonstrate the effectiveness of computationally simulating (1) complex composite structural behavior in general and (2) specific aerospace propulsion structural components in particular.
Computational composite mechanics for aerospace propulsion structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1987-01-01
Specialty methods are presented for the computational simulation of specific composite behavior. These methods encompass all aspects of composite mechanics, impact, progressive fracture and component specific simulation. Some of these methods are structured to computationally simulate, in parallel, the composite behavior and history from the initial fabrication through several missions and even to fracture. Select methods and typical results obtained from such simulations are described in detail in order to demonstrate the effectiveness of computationally simulating: (1) complex composite structural behavior in general, and (2) specific aerospace propulsion structural components in particular.
NASA Astrophysics Data System (ADS)
Ishizawa, Y.; Abe, K.; Shirako, G.; Takai, T.; Kato, H.
The electromagnetic compatibility (EMC) control method, system EMC analysis method, and system test method applied to the components of the MOS-1 satellite are described. The merits and demerits of the problem-solving, specification, and system approaches to EMC control are summarized, as are the data requirements of the SEMCAP (specification and electromagnetic compatibility analysis program) computer program for verifying the EMI safety margin of the components. Examples of EMC design are mentioned, and the EMC design process and the selection method for EMC critical points are shown along with sample EMC test results.
Overview of SDCM - The Spacecraft Design and Cost Model
NASA Technical Reports Server (NTRS)
Ferebee, Melvin J.; Farmer, Jeffery T.; Andersen, Gregory C.; Flamm, Jeffery D.; Badi, Deborah M.
1988-01-01
The Spacecraft Design and Cost Model (SDCM) is a computer-aided design and analysis tool for synthesizing spacecraft configurations, integrating their subsystems, and generating information concerning on-orbit servicing and costs. SDCM uses a bottom-up method in which the cost and performance parameters for subsystem components are first calculated; the model then sums the contributions from individual components in order to obtain an estimate of sizes and costs for each candidate configuration within a selected spacecraft system. An optimum spacecraft configuration can then be selected.
Azevedo, C F; Nascimento, M; Silva, F F; Resende, M D V; Lopes, P S; Guimarães, S E F; Glória, L S
2015-10-09
A significant contribution of molecular genetics is the direct use of DNA information to identify genetically superior individuals, and genome-wide selection (GWS) can be used for this purpose. GWS consists of analyzing a large number of single nucleotide polymorphism markers widely distributed in the genome; however, because the number of markers is much larger than the number of genotyped individuals, and such markers are highly correlated, special statistical methods are required. Among these methods, independent component regression, principal component regression, partial least squares, and partial principal components stand out. Thus, the aim of this study was to apply these dimensionality reduction methods to GWS for carcass traits in an F2 (Piau x commercial line) pig population. The results show similarities between the principal component and independent component methods, which provided the most accurate genomic breeding estimates for most carcass traits in pigs.
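Principal component regression, one of the dimension-reduction methods named above, can be sketched in a few lines for the many-markers, few-individuals setting. The synthetic genotypes, component count, and scoring below are illustrative assumptions rather than the study's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic example: far more markers than genotyped individuals (p >> n).
rng = np.random.default_rng(1)
n, p = 300, 5000
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)      # SNP genotypes coded 0/1/2
beta = np.zeros(p)
beta[rng.choice(p, 50, replace=False)] = rng.normal(size=50)
y = X @ beta + rng.normal(scale=2.0, size=n)              # carcass trait phenotype

# Principal component regression: project markers onto a few PCs, then regress.
pcr = make_pipeline(PCA(n_components=20), LinearRegression())
scores = cross_val_score(pcr, X, y, cv=5, scoring="r2")
print("mean cross-validated R^2:", scores.mean().round(3))
```

Independent component regression and partial least squares follow the same pattern, swapping the PCA step for the corresponding decomposition.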
System and method for detecting cells or components thereof
Porter, Marc D [Ames, IA; Lipert, Robert J [Ames, IA; Doyle, Robert T [Ames, IA; Grubisha, Desiree S [Corona, CA; Rahman, Salma [Ames, IA
2009-01-06
A system and method for detecting a detectably labeled cell or component thereof in a sample comprising one or more cells or components thereof, at least one cell or component thereof of which is detectably labeled with at least two detectable labels. In one embodiment, the method comprises: (i) introducing the sample into one or more flow cells of a flow cytometer, (ii) irradiating the sample with one or more light sources that are absorbed by the at least two detectable labels, the absorption of which is to be detected, and (iii) detecting simultaneously the absorption of light by the at least two detectable labels on the detectably labeled cell or component thereof with an array of photomultiplier tubes, which are operably linked to two or more filters that selectively transmit detectable emissions from the at least two detectable labels.
Haglund, Jr., Richard F.; Ermer, David R.; Baltz-Knorr, Michelle Lee
2004-11-30
A system and method for desorption and ionization of analytes in an ablation medium. In one embodiment, the method includes the steps of preparing a sample having analytes in a medium including at least one component, freezing the sample at a sufficiently low temperature so that at least part of the sample has a phase transition, and irradiating the frozen sample with short-pulse radiation to cause medium ablation and desorption and ionization of the analytes. The method further includes the steps of selecting a resonant vibrational mode of at least one component of the medium and selecting an energy source tuned to emit radiation substantially at the wavelength of the selected resonant vibrational mode. The medium is an electrophoresis medium having polyacrylamide. In one embodiment, the energy source is a laser, where the laser can be a free electron laser tunable to generate short-pulse radiation. Alternatively, the laser can be a solid state laser tunable to generate short-pulse radiation. The laser can emit light at various ranges of wavelength.
A further component analysis for illicit drugs mixtures with THz-TDS
NASA Astrophysics Data System (ADS)
Xiong, Wei; Shen, Jingling; He, Ting; Pan, Rui
2009-07-01
A new method for the quantitative analysis of mixtures of illicit drugs with THz time-domain spectroscopy was proposed and verified experimentally. The traditional method requires fingerprint spectra of all the pure chemical components. In practice, only the target components in a mixture and their absorption features are known, so a more practical technique for detection and identification is needed. Our new method for quantitative inspection of mixtures of illicit drugs uses the derivative spectrum. With this method, the ratio of the target components in a mixture can be obtained provided that the target components and their absorption features are known; the unknown components are not needed. Methamphetamine and flour, an illicit drug and a common adulterant, were selected for the experiment. The experimental results verified the effectiveness of the method, suggesting that it could be an effective approach for quantitative identification of illicit drugs. This THz spectroscopy technique is of great significance for real-world quantitative analysis of illicit drugs and could be an effective method in the fields of security and pharmaceutical inspection.
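The idea of fitting mixture ratios from derivative spectra when only the target components' fingerprints are known can be sketched as a simple least-squares problem. The spectra, band shapes, and baseline below are synthetic stand-ins, not measured THz data, and the fitting rule is only one plausible reading of the derivative-spectrum approach.

```python
import numpy as np

# Synthetic absorption spectra of two known target components on a common grid.
freq = np.linspace(0.2, 2.5, 500)                        # THz
comp_a = np.exp(-((freq - 1.2) / 0.05) ** 2)             # sharp drug-like feature
comp_b = 0.3 * freq + np.exp(-((freq - 1.8) / 0.10) ** 2)

true_ratio = (0.7, 0.3)
unknown_baseline = 0.2 * np.sin(freq)                     # slowly varying unknown component
mixture = true_ratio[0] * comp_a + true_ratio[1] * comp_b + unknown_baseline

# Differentiation suppresses slowly varying contributions from unknown components
# relative to the sharp target features, so the ratio is fit from derivatives only.
d = lambda s: np.gradient(s, freq)
A = np.column_stack([d(comp_a), d(comp_b)])
coeffs, *_ = np.linalg.lstsq(A, d(mixture), rcond=None)
print("estimated ratio:", (coeffs / coeffs.sum()).round(3))
```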
Probabilistic Structural Analysis Methods (PSAM) for Select Space Propulsion System Components
NASA Technical Reports Server (NTRS)
1999-01-01
Probabilistic Structural Analysis Methods (PSAM) are described for the probabilistic structural analysis of engine components for current and future space propulsion systems. Components for these systems are subjected to stochastic thermomechanical launch loads. Uncertainties or randomness also occurs in material properties, structural geometry, and boundary conditions. Material property stochasticity, such as in modulus of elasticity or yield strength, exists in every structure and is a consequence of variations in material composition and manufacturing processes. Procedures are outlined for computing the probabilistic structural response or reliability of the structural components. The response variables include static or dynamic deflections, strains, and stresses at one or several locations, natural frequencies, fatigue or creep life, etc. Sample cases illustrate how the PSAM methods and codes simulate input uncertainties and compute probabilistic response or reliability using a finite element model with probabilistic methods.
Ahmadi, Mehdi; Shahlaei, Mohsen
2015-01-01
P2X7 antagonist activity for a set of 49 molecules of the P2X7 receptor antagonists, derivatives of purine, was modeled with the aid of chemometric and artificial intelligence techniques. The activity of these compounds was estimated by means of combination of principal component analysis (PCA), as a well-known data reduction method, genetic algorithm (GA), as a variable selection technique, and artificial neural network (ANN), as a non-linear modeling method. First, a linear regression, combined with PCA, (principal component regression) was operated to model the structure-activity relationships, and afterwards a combination of PCA and ANN algorithm was employed to accurately predict the biological activity of the P2X7 antagonist. PCA preserves as much of the information as possible contained in the original data set. Seven most important PC's to the studied activity were selected as the inputs of ANN box by an efficient variable selection method, GA. The best computational neural network model was a fully-connected, feed-forward model with 7-7-1 architecture. The developed ANN model was fully evaluated by different validation techniques, including internal and external validation, and chemical applicability domain. All validations showed that the constructed quantitative structure-activity relationship model suggested is robust and satisfactory.
NASA Technical Reports Server (NTRS)
Pera, R. J.; Onat, E.; Klees, G. W.; Tjonneland, E.
1977-01-01
Weight and envelope dimensions of aircraft gas turbine engines are estimated within plus or minus 5% to 10% using a computer method based on correlations of component weight and design features of 29 data base engines. Rotating components are estimated by a preliminary design procedure where blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc., are the primary independent variables used. The development and justification of the method selected, the various methods of analysis, the use of the program, and a description of the input/output data are discussed.
USDA-ARS?s Scientific Manuscript database
This study investigated the effects of different home food preparation methods on the availability of the total phenolic contents (TPC) and radical scavenging components, as well as the selected health beneficial compounds from fresh blueberries and carrots. High performance liquid chromatography (...
Synthesis of triazole-based unnatural amino acids, triazole bisaminoacids and β-amino triazole has been described via stereo and regioselective one-pot multi-component reaction of sulfamidates, sodium azide, and alkynes under MW irradiation conditions. The developed method is app...
Determination of the optimal number of components in independent components analysis.
Kassouf, Amine; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N
2018-03-01
Independent components analysis (ICA) may be considered as one of the most established blind source separation techniques for the treatment of complex data sets in analytical chemistry. Like other similar methods, the determination of the optimal number of latent variables, in this case, independent components (ICs), is a crucial step before any modeling. Therefore, validation methods are required in order to decide about the optimal number of ICs to be used in the computation of the final model. In this paper, three new validation methods are formally presented. The first one, called Random_ICA, is a generalization of the ICA_by_blocks method. Its specificity resides in the random way of splitting the initial data matrix into two blocks, and then repeating this procedure several times, giving a broader perspective for the selection of the optimal number of ICs. The second method, called KMO_ICA_Residuals is based on the computation of the Kaiser-Meyer-Olkin (KMO) index of the transposed residual matrices obtained after progressive extraction of ICs. The third method, called ICA_corr_y, helps to select the optimal number of ICs by computing the correlations between calculated proportions and known physico-chemical information about samples, generally concentrations, or between a source signal known to be present in the mixture and the signals extracted by ICA. These three methods were tested using varied simulated and experimental data sets and compared, when necessary, to ICA_by_blocks. Results were relevant and in line with expected ones, proving the reliability of the three proposed methods. Copyright © 2017 Elsevier B.V. All rights reserved.
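The splitting-and-correlating logic behind ICA_by_blocks and its Random_ICA generalization can be sketched as follows. The synthetic data, the use of FastICA, and the matching criterion (maximum absolute correlation between component signals extracted from the two blocks) are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.decomposition import FastICA

def random_ica_score(X, n_components, rng):
    """Split samples into two random blocks, fit ICA on each block, and return
    the mean absolute correlation between best-matched pairs of components."""
    idx = rng.permutation(X.shape[0])
    half = X.shape[0] // 2
    S1 = FastICA(n_components=n_components, random_state=0).fit(X[idx[:half]]).components_
    S2 = FastICA(n_components=n_components, random_state=0).fit(X[idx[half:]]).components_
    corr = np.abs(np.corrcoef(S1, S2)[:n_components, n_components:])
    return corr.max(axis=1).mean()   # reproducible ICs correlate highly across blocks

# Illustrative data: 3 true sources mixed into 40 "samples" of 300 variables.
rng = np.random.default_rng(2)
sources = rng.normal(size=(3, 300))
X = rng.normal(size=(40, 3)) @ sources + 0.05 * rng.normal(size=(40, 300))

for k in range(1, 7):
    print(k, round(random_ica_score(X, k, rng), 3))   # score drops past the true rank
```

Repeating the random split several times, as Random_ICA does, gives a distribution of scores per candidate number of ICs rather than a single value.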
Durham, Michael D.; Stedman, Donald H.; Ebner, Timothy G.; Burkhardt, Mark R.
1991-01-01
A device and method for measuring the concentrations of components of a fluid stream. Preferably, the fluid stream is an in situ gas stream, such as a fossil fuel fired flue gas in a smoke stack. The measurements are determined from the intensity of radiation over a selected range of radiation wavelengths using peak-to-trough calculations. The need for a reference intensity is eliminated.
Detection of hydroxyapatite in calcified cardiovascular tissues.
Lee, Jae Sam; Morrisett, Joel D; Tung, Ching-Hsuan
2012-10-01
The objective of this study is to develop a method for selective detection of the calcific (hydroxyapatite) component in human aortic smooth muscle cells in vitro and in calcified cardiovascular tissues ex vivo. This method uses a novel optical molecular imaging contrast dye, Cy-HABP-19, to target calcified cells and tissues. A peptide that mimics the binding affinity of osteocalcin was used to label hydroxyapatite in vitro and ex vivo. Morphological changes in vascular smooth muscle cells were evaluated at an early stage of the mineralization process induced by extrinsic stimuli, osteogenic factors and a magnetic suspension cell culture. Hydroxyapatite components were detected in monolayers of these cells in the presence of osteogenic factors and a magnetic suspension environment. Atherosclerotic plaque contains multiple components including lipidic, fibrotic, thrombotic, and calcific materials. Using optical imaging and the Cy-HABP-19 molecular imaging probe, we demonstrated that hydroxyapatite components could be selectively distinguished from various calcium salts in human aortic smooth muscle cells in vitro and in calcified cardiovascular tissues, carotid endarterectomy samples and aortic valves, ex vivo. Hydroxyapatite deposits in cardiovascular tissues were selectively detected in the early stage of the calcification process using our Cy-HABP-19 probe. This new probe makes it possible to study the earliest events associated with vascular hydroxyapatite deposition at the cellular and molecular levels. This target-selective molecular imaging probe approach holds high potential for revealing early pathophysiological changes, leading to progression, regression, or stabilization of cardiovascular diseases. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Jin, Mingwu; Deng, Weishu
2018-05-15
There is a spectrum of progression from healthy control (HC) to mild cognitive impairment (MCI) without conversion to Alzheimer's disease (AD), to MCI with conversion to AD (cMCI), and to AD. This study aims to predict these disease stages using brain structural information provided by magnetic resonance imaging (MRI) data. Neighborhood component analysis (NCA) is applied to select the most powerful features for prediction, and an ensemble decision tree classifier is built to predict which group a subject belongs to. The best features and model parameters are determined by cross validation of the training data. Our results show that 16 out of a total of 429 features were selected by NCA using 240 training subjects, including MMSE score and structural measures in memory-related regions. The boosting tree model with NCA features achieves a prediction accuracy of 56.25% on 160 test subjects. For comparison, principal component analysis (PCA) and sequential feature selection (SFS) are also used for feature selection, and a support vector machine (SVM) is used for classification. The boosting tree model with NCA features outperforms all other combinations of feature selection and classification methods. The results suggest that NCA is a better feature selection strategy than PCA and SFS for the data used in this study, and that an ensemble tree classifier with boosting is more powerful than SVM for predicting the subject group. However, more advanced feature selection and classification methods, or additional measures besides structural MRI, may be needed to improve the prediction performance. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Hegazy, Maha Abdel Monem; Fayez, Yasmin Mohammed
2015-04-01
Two different methods manipulating spectrophotometric data have been developed, validated and compared. One is capable of removing the signal of any interfering components at the selected wavelength of the component of interest (univariate). The other includes more variables and extracts maximum information to determine the component of interest in the presence of other components (multivariate). The applied methods are smart, simple, accurate, sensitive, precise and capable of determining the spectrally overlapped antihypertensives hydrochlorothiazide (HCT), irbesartan (IRB) and candesartan (CAN). Mean centering of ratio spectra (MCR) and the concentration residual augmented classical least-squares method (CRACLS) were developed and their efficiencies compared. CRACLS is a simple method that is capable of extracting the pure spectral profiles of each component in a mixture. The correlation between the estimated and pure spectra was found to be 0.9998, 0.9987 and 0.9992 for HCT, IRB and CAN, respectively. The methods successfully determined the three components in bulk powder, laboratory-prepared mixtures, and combined dosage forms. The results obtained were compared statistically with each other and with those of the official methods.
Development of a Rubber-Based Product Using a Mixture Experiment: A Challenging Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaya, Yahya; Piepel, Gregory F.; Caniyilmaz, Erdal
2013-07-01
Many products used in daily life are made by blending two or more components. The properties of such products typically depend on the relative proportions of the components. Experimental design, modeling, and data analysis methods for mixture experiments provide for efficiently determining the component proportions that will yield a product with desired properties. This article presents a case study of the work performed to develop a new rubber formulation for an o-ring (a circular gasket) with requirements specified on 10 product properties. Each step of the study is discussed, including: 1) identifying the objective of the study and requirements for properties of the o-ring, 2) selecting the components to vary and specifying the component constraints, 3) constructing a mixture experiment design, 4) measuring the responses and assessing the data, 5) developing property-composition models, 6) selecting the new product formulation, and 7) confirming the selected formulation in manufacturing. The case study includes some challenging and new aspects, which are discussed in the article.
A Hybrid Method for Accelerated Simulation of Coulomb Collisions in a Plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caflisch, R; Wang, C; Dimarco, G
2007-10-09
If the collisional time scale for Coulomb collisions is comparable to the characteristic time scales for a plasma, then simulation of Coulomb collisions may be important for computation of kinetic plasma dynamics. This can be a computational bottleneck because of the large number of simulated particles and collisions (or phase-space resolution requirements in continuum algorithms), as well as the wide range of collision rates over the velocity distribution function. This paper considers Monte Carlo simulation of Coulomb collisions using the binary collision models of Takizuka & Abe and Nanbu. It presents a hybrid method for accelerating the computation of Coulomb collisions. The hybrid method represents the velocity distribution function as a combination of a thermal component (a Maxwellian distribution) and a kinetic component (a set of discrete particles). Collisions between particles from the thermal component preserve the Maxwellian; collisions between particles from the kinetic component are performed using the method of Takizuka & Abe or Nanbu. Collisions between the kinetic and thermal components are performed by sampling a particle from the thermal component and selecting a particle from the kinetic component. Particles are also transferred between the two components according to thermalization and dethermalization probabilities, which are functions of phase space.
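A heavily simplified, one-dimensional sketch of the hybrid representation is given below. The relaxation step stands in for a real Takizuka-Abe or Nanbu binary collision, the Maxwellian is treated as fixed, and the thermalization/dethermalization transfers are omitted; all names and parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hybrid 1-D velocity distribution (toy): a Maxwellian "thermal" component plus
# discrete "kinetic" particles carrying the non-Maxwellian tail.
thermal_T = 1.0                                # temperature of the Maxwellian component
kinetic_v = rng.normal(3.0, 0.5, size=2000)    # discrete particles in the tail

def thermal_partner():
    """Sample a collision partner from the thermal (Maxwellian) component."""
    return rng.normal(0.0, np.sqrt(thermal_T))

def relax(v1, v2, nu_dt=0.05):
    """Toy relaxation step standing in for a binary Coulomb collision
    (the real method uses Takizuka-Abe or Nanbu scattering angles)."""
    dv = nu_dt * (v2 - v1)
    return v1 + dv, v2 - dv

# Kinetic-thermal collisions: each kinetic particle collides with a partner
# sampled from the Maxwellian; kinetic-kinetic collisions would pair particles.
for i in range(kinetic_v.size):
    kinetic_v[i], _ = relax(kinetic_v[i], thermal_partner())

print("kinetic-component mean velocity after one step:", kinetic_v.mean().round(3))
```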
NASA Technical Reports Server (NTRS)
Stanley, A. G.; Gauthier, M. K.
1977-01-01
A successful diagnostic technique was developed using a scanning electron microscope (SEM) as a precision tool to determine ionization effects in integrated circuits. Previous SEM methods irradiated the entire semiconductor chip or major areas of it; such large-area exposure methods do not reveal the exact components that are sensitive to radiation. To locate these sensitive components, a new method was developed that consists of successively irradiating selected components on the device chip with equal doses of electrons (10^6 rad(Si)) while the whole device is subjected to representative bias conditions. A suitable device parameter was measured in situ after each successive irradiation with the beam off.
Method of forming components for a high-temperature secondary electrochemical cell
Mrazek, Franklin C.; Battles, James E.
1983-01-01
A method of forming a component for a high-temperature secondary electrochemical cell having a positive electrode including a sulfide selected from the group consisting of iron sulfides, nickel sulfides, copper sulfides and cobalt sulfides, a negative electrode including an alloy of aluminum and an electrically insulating porous separator between said electrodes. The improvement comprises forming a slurry of solid particles dispersed in a liquid electrolyte such as the lithium chloride-potassium chloride eutectic, casting the slurry into a form having the shape of one of the components and smoothing the exposed surface of the slurry, cooling the cast slurry to form the solid component, and removing same. Electrodes and separators can be thus formed.
Dimensionality and noise in energy selective x-ray imaging
Alvarez, Robert E.
2013-01-01
Purpose: To develop and test a method to quantify the effect of dimensionality on the noise in energy selective x-ray imaging. Methods: The Cramér-Rao lower bound (CRLB), a universal lower limit of the covariance of any unbiased estimator, is used to quantify the noise. It is shown that increasing dimensionality always increases, or at best leaves the same, the variance. An analytic formula for the increase in variance in an energy selective x-ray system is derived. The formula is used to gain insight into the dependence of the increase in variance on the properties of the additional basis functions, the measurement noise covariance, and the source spectrum. The formula is also used with computer simulations to quantify the dependence of the additional variance on these factors. Simulated images of an object with three materials are used to demonstrate the trade-off of increased information with dimensionality and noise. The images are computed from energy selective data with a maximum likelihood estimator. Results: The increase in variance depends most importantly on the dimension and on the properties of the additional basis functions. With the attenuation coefficients of cortical bone, soft tissue, and adipose tissue as the basis functions, the increase in variance of the bone component from two to three dimensions is 1.4 × 10^3. With the soft tissue component, it is 2.7 × 10^4. If the attenuation coefficient of a high atomic number contrast agent is used as the third basis function, there is only a slight increase in the variance from two to three basis functions, 1.03 and 7.4 for the bone and soft tissue components, respectively. The changes in spectrum shape with beam hardening also have a substantial effect. They increase the variance by a factor of approximately 200 for the bone component and 220 for the soft tissue component as the soft tissue object thickness increases from 1 to 30 cm. Decreasing the energy resolution of the detectors increases the variance of the bone component markedly with three dimension processing, approximately a factor of 25 as the resolution decreases from 100 to 3 bins. The increase with two dimension processing for adipose tissue is a factor of two and with the contrast agent as the third material for two or three dimensions is also a factor of two for both components. The simulated images show that a maximum likelihood estimator can be used to process energy selective x-ray data to produce images with noise close to the CRLB. Conclusions: The method presented can be used to compute the effects of the object attenuation coefficients and the x-ray system properties on the relationship of dimensionality and noise in energy selective x-ray imaging systems. PMID:24320442
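For a linearized model with Gaussian measurement noise, the CRLB comparison between two and three basis functions reduces to inverting Fisher information matrices. The sketch below uses arbitrary basis and noise matrices, not the paper's spectra or attenuation coefficients, purely to illustrate the computation.

```python
import numpy as np

# Illustrative linear-Gaussian model: measurements m = M a + noise, where a are
# basis-set line integrals and M holds effective basis attenuation values per bin.
rng = np.random.default_rng(4)
n_energy_bins = 5
M3 = rng.uniform(0.1, 1.0, size=(n_energy_bins, 3))     # 3 basis functions (e.g. bone, soft, adipose)
M2 = M3[:, :2]                                           # restrict to 2 basis functions
R = np.diag(rng.uniform(0.5, 1.5, size=n_energy_bins))   # measurement noise covariance

def crlb(M, R):
    """Cramer-Rao lower bound on the covariance of unbiased basis-coefficient
    estimates for a linear model with Gaussian noise: (M^T R^-1 M)^-1."""
    fisher = M.T @ np.linalg.inv(R) @ M
    return np.linalg.inv(fisher)

var2 = np.diag(crlb(M2, R))
var3 = np.diag(crlb(M3, R))
# Adding a third, similar basis function can only increase (or leave unchanged)
# the variance of the first two components, as the abstract states.
print("variance increase for components 1 and 2:", (var3[:2] / var2).round(2))
```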
Ewing, Alexander C.; Kottke, Melissa J.; Kraft, Joan Marie; Sales, Jessica M.; Brown, Jennifer L.; Goedken, Peggy; Wiener, Jeffrey; Kourtis, Athena P.
2018-01-01
Background: African American adolescent females are at elevated risk for unintended pregnancy and sexually transmitted infections (STIs). Dual protection (DP) is defined as concurrent prevention of pregnancy and STIs. This can be achieved by abstinence, consistent condom use, or the dual methods of condoms plus an effective non-barrier contraceptive. Previous clinic-based interventions showed short-term effects on increasing dual method use, but evidence of sustained effects on dual method use and decreased incident pregnancies and STIs are lacking. Methods/Design: This manuscript describes the 2GETHER Project. 2GETHER is a randomized controlled trial of a multi-component intervention to increase dual protection use among sexually active African American females aged 14–19 years not desiring pregnancy at a Title X clinic in Atlanta, GA. The intervention is clinic-based and includes a culturally tailored interactive multimedia component and counseling sessions, both to assist in selection of a DP method and to reinforce use of the DP method. The participants are randomized to the study intervention or the standard of care, and followed for 12 months to evaluate how the intervention influences DP method selection and adherence, pregnancy and STI incidence, and participants’ DP knowledge, intentions, and self-efficacy. Discussion: The 2GETHER Project is a novel trial to reduce unintended pregnancies and STIs among African American adolescents. The intervention is unique in the comprehensive and complementary nature of its components and its individual tailoring of provider-patient interaction. If the trial interventions are shown to be effective, then it will be reasonable to assess their scalability and applicability in other populations. PMID:28007634
Methods Used in a Recent Computer Selection Study.
ERIC Educational Resources Information Center
Botten, LeRoy H.
A study was conducted at Andrews University, Berrien Springs, Michigan to determine selection of a computer for both academic and administrative purposes. The university has a total enrollment of 2,100 students and includes a college, graduate school and seminary. An initial feasibility study delineated criteria and desirable components of the…
Non-negative matrix factorization in texture feature for classification of dementia with MRI data
NASA Astrophysics Data System (ADS)
Sarwinda, D.; Bustamam, A.; Ardaneswari, G.
2017-07-01
This paper investigates the application of non-negative matrix factorization as a feature selection method to select features from the gray level co-occurrence matrix. The proposed approach is used to classify dementia using MRI data. In this study, texture analysis using the gray level co-occurrence matrix is performed for feature extraction. In the feature extraction process of the MRI data, seven features are obtained from the gray level co-occurrence matrix. Non-negative matrix factorization then selects the three features that are most influential among all features produced by the feature extraction. A Naïve Bayes classifier is adapted to classify dementia, i.e. Alzheimer's disease, Mild Cognitive Impairment (MCI) and normal control. The experimental results show that non-negative matrix factorization as a feature selection method is able to achieve an accuracy of 96.4% for classification of Alzheimer's disease versus normal control. The proposed method is also compared with another feature selection method, Principal Component Analysis (PCA).
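One plausible reading of the NMF-based selection step, ranking the original GLCM features by their loadings on the NMF basis vectors and keeping the top three before Naïve Bayes classification, is sketched below. The feature matrix, labels, and ranking criterion are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Assumed inputs: a non-negative matrix of 7 GLCM texture features (contrast,
# energy, homogeneity, ...) per MRI slice, and a class label per slice.
rng = np.random.default_rng(5)
n_samples = 150
features = np.abs(rng.normal(size=(n_samples, 7)))
labels = rng.integers(0, 3, size=n_samples)              # AD / MCI / normal control

# Rank the original features by their total loading across NMF basis vectors
# and keep the three most influential ones.
nmf = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0).fit(features)
importance = nmf.components_.sum(axis=0)
top3 = np.argsort(importance)[::-1][:3]

score = cross_val_score(GaussianNB(), features[:, top3], labels, cv=5).mean()
print("selected feature indices:", top3, "CV accuracy:", round(score, 3))
```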
Crack detection using resonant ultrasound spectroscopy
Migliori, A.; Bell, T.M.; Rhodes, G.W.
1994-10-04
Method and apparatus are provided for detecting crack-like flaws in components. A plurality of exciting frequencies are generated and applied to a component in a dry condition to obtain a first ultrasonic spectrum of the component. The component is then wet with a selected liquid to penetrate any crack-like flaws in the component. The plurality of exciting frequencies are again applied to the component and a second ultrasonic spectrum of the component is obtained. The wet and dry ultrasonic spectra are then analyzed to determine the second harmonic components in each of the ultrasonic resonance spectra and the second harmonic components are compared to ascertain the presence of crack-like flaws in the component. 5 figs.
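A toy illustration of comparing second-harmonic content between dry and wet resonance spectra is given below; the peak shapes, frequencies, and the specific harmonic metric are assumptions made for the sketch, not the patented signal processing.

```python
import numpy as np

def second_harmonic_ratio(freqs, spectrum):
    """Amplitude at twice the dominant resonance frequency, relative to the
    fundamental (one simple way to quantify the 'second harmonic component')."""
    i0 = np.argmax(spectrum)
    i2 = np.argmin(np.abs(freqs - 2.0 * freqs[i0]))
    return spectrum[i2] / spectrum[i0]

# Illustrative dry and wet ultrasonic resonance spectra of the same component.
freqs = np.linspace(10e3, 200e3, 4000)                            # Hz
peak = lambda f0, a, w: a * np.exp(-((freqs - f0) / w) ** 2)
dry = peak(50e3, 1.0, 500) + peak(100e3, 0.08, 500)               # crack-driven harmonic
wet = peak(50e3, 1.0, 500) + peak(100e3, 0.02, 500)               # liquid alters the harmonic

print("dry:", round(second_harmonic_ratio(freqs, dry), 3),
      "wet:", round(second_harmonic_ratio(freqs, wet), 3))        # a change flags a crack-like flaw
```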
Crack detection using resonant ultrasound spectroscopy
Migliori, Albert; Bell, Thomas M.; Rhodes, George W.
1994-01-01
Method and apparatus are provided for detecting crack-like flaws in components. A plurality of exciting frequencies are generated and applied to a component in a dry condition to obtain a first ultrasonic spectrum of the component. The component is then wet with a selected liquid to penetrate any crack-like flaws in the component. The plurality of exciting frequencies are again applied to the component and a second ultrasonic spectrum of the component is obtained. The wet and dry ultrasonic spectra are then analyzed to determine the second harmonic components in each of the ultrasonic resonance spectra and the second harmonic components are compared to ascertain the presence of crack-like flaws in the component.
Distinct Cortical Pathways for Music and Speech Revealed by Hypothesis-Free Voxel Decomposition.
Norman-Haignere, Sam; Kanwisher, Nancy G; McDermott, Josh H
2015-12-16
The organization of human auditory cortex remains unresolved, due in part to the small stimulus sets common to fMRI studies and the overlap of neural populations within voxels. To address these challenges, we measured fMRI responses to 165 natural sounds and inferred canonical response profiles ("components") whose weighted combinations explained voxel responses throughout auditory cortex. This analysis revealed six components, each with interpretable response characteristics despite being unconstrained by prior functional hypotheses. Four components embodied selectivity for particular acoustic features (frequency, spectrotemporal modulation, pitch). Two others exhibited pronounced selectivity for music and speech, respectively, and were not explainable by standard acoustic features. Anatomically, music and speech selectivity concentrated in distinct regions of non-primary auditory cortex. However, music selectivity was weak in raw voxel responses, and its detection required a decomposition method. Voxel decomposition identifies primary dimensions of response variation across natural sounds, revealing distinct cortical pathways for music and speech. Copyright © 2015 Elsevier Inc. All rights reserved.
Implementing elements of The Physics Suite at a large metropolitan research university
NASA Astrophysics Data System (ADS)
Efthimiou, Costas; Maronde, Dan; McGreevy, Tim; del Barco, Enrique; McCole, Stefanie
2011-07-01
A key question in physics education is the effectiveness of the teaching methods. A curriculum that has been investigated at the University of Central Florida (UCF) over the last two years is the use of particular elements of The Physics Suite. Select sections of the introductory physics classes at UCF have made use of Interactive Lecture Demonstrations as part of the lecture component of the class. The laboratory component of the class has implemented the RealTime Physics curriculum, again in select sections. The remaining sections have continued with the teaching methods traditionally used. Using pre- and post-semester concept inventory tests, a student survey, student interviews, and a standard for successful completion of the course, the preliminary data indicate improved student learning.
ERIC Educational Resources Information Center
Boice, John R., Ed.
BSIC has selected data for inclusion, and a method of presentation that-- (1) provides preliminary data, in comparable form, about all relevant systems building products, (2) surveys within the limits imposed, the problems of compatibility between subsystem components and to identify components which are compatible with one another, (3) identifies…
Jia, Youmei; Cai, Jianfeng; Xin, Huaxia; Feng, Jiatao; Fu, Yanhui; Fu, Qing; Jin, Yu
2017-06-08
A preparative two-dimensional hydrophilic interaction liquid chromatography/reversed-phase liquid chromatography (Pre-2D-HILIC/RPLC) method was established to separate and purify the components of Trachelospermum jasminoides. Pigments and strongly polar components were removed from the crude extract by activated carbon decolorization and solid phase extraction. A Click XIon column (250 mm×20 mm, 10 μm) was selected as the stationary phase with water-acetonitrile as the mobile phase in the first-dimension HILIC, and 15 fractions were collected in UV-triggered mode. In the second-dimension RPLC, a C18 column (250 mm×20 mm, 5 μm) was used with water-acetonitrile as the mobile phase. As a result, 14 compounds of high purity were obtained and further identified by mass spectrometry (MS) and nuclear magnetic resonance (NMR): 11 lignan compounds and three flavonoid compounds. The method has good orthogonality and improves resolution and peak capacity, making it valuable for separating the complex components of Trachelospermum jasminoides.
Discriminant analysis of resting-state functional connectivity patterns on the Grassmann manifold
NASA Astrophysics Data System (ADS)
Fan, Yong; Liu, Yong; Jiang, Tianzi; Liu, Zhening; Hao, Yihui; Liu, Haihong
2010-03-01
The functional networks, extracted from fMRI images using independent component analysis, have been demonstrated to be informative for distinguishing brain states of cognitive functions and neurological diseases. In this paper, we propose a novel algorithm for discriminant analysis of functional networks encoded by spatial independent components. The functional networks of each individual are used as bases for a linear subspace, referred to as a functional connectivity pattern, which facilitates a comprehensive characterization of temporal signals of fMRI data. The functional connectivity patterns of different individuals are analyzed on the Grassmann manifold by adopting a principal angle based subspace distance. In conjunction with a support vector machine classifier, a forward component selection technique is proposed to select independent components for constructing the most discriminative functional connectivity pattern. The discriminant analysis method has been applied to an fMRI based schizophrenia study with 31 schizophrenia patients and 31 healthy individuals. The experimental results demonstrate that the proposed method not only achieves a promising classification performance for distinguishing schizophrenia patients from healthy controls, but also identifies discriminative functional networks that are informative for schizophrenia diagnosis.
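The principal-angle-based subspace distance at the heart of this analysis is straightforward to compute. The sketch below (synthetic subspaces, a Gaussian kernel over distances, and a precomputed-kernel SVM) illustrates the workflow rather than reproducing the authors' classifier or forward component selection.

```python
import numpy as np
from scipy.linalg import subspace_angles
from sklearn.svm import SVC

rng = np.random.default_rng(6)

def random_pattern(shift=0.0, n_voxels=500, n_ics=5):
    """Orthonormal basis for a subject's 'functional connectivity pattern':
    the subspace spanned by its spatial independent components."""
    q, _ = np.linalg.qr(rng.normal(size=(n_voxels, n_ics)) + shift)
    return q

subjects = [random_pattern(0.0) for _ in range(20)] + [random_pattern(0.05) for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)

def grassmann_dist(a, b):
    """Principal-angle-based subspace distance on the Grassmann manifold."""
    return np.linalg.norm(subspace_angles(a, b))

# Precomputed-kernel SVM built from pairwise Grassmann distances (one common
# choice; the paper's exact kernel and classifier details may differ).
D = np.array([[grassmann_dist(a, b) for b in subjects] for a in subjects])
K = np.exp(-D ** 2)
clf = SVC(kernel="precomputed").fit(K, labels)
print("training accuracy:", clf.score(K, labels))
```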
Methods for providing ceramic matrix composite components with increased thermal capacity
NASA Technical Reports Server (NTRS)
Steibel, James Dale (Inventor); Utah, David Alan (Inventor)
2001-01-01
A method for enhancing the cooling capability of a turbine component made from a ceramic matrix composite. The method improves the thermal performance of the component by producing a surface having increased cooling capacity, thereby allowing the component to operate at a higher temperature. The method tailors the available surface area on the cooling surface of the composite component by depositing a particulate layer of coarse grained ceramic powders of preselected size onto the surface of the ceramic matrix composite component. The size of the particulate is selectively tailored to match the desired surface finish or surface roughness of the article. The article may be designed to have different surface finishes for different locations, so that the application of different sized powders can provide different cooling capabilities at different locations, if desired. The compositions of the particulates are chemically compatible with the ceramic material comprising the outer surface or portion of the ceramic matrix composite. The particulates are applied using a slurry and incorporated into the article by heating to an elevated temperature without melting the matrix, the particulates or the fiber reinforcement.
Durham, M.D.; Stedman, D.H.; Ebner, T.G.; Burkhardt, M.R.
1991-12-03
A device and method are described for measuring the concentrations of components of a fluid stream. Preferably, the fluid stream is an in-situ gas stream, such as a fossil fuel fired flue gas in a smoke stack. The measurements are determined from the intensity of radiation over a selected range of radiation wavelengths using peak-to-trough calculations. The need for a reference intensity is eliminated. 15 figures.
Li, Yong; Ruan, Qiang; Li, Yanli; Ye, Guozhu; Lu, Xin; Lin, Xiaohui; Xu, Guowang
2012-09-14
Non-targeted metabolic profiling is the most widely used method for metabolomics. In this paper, a novel approach was established to transform a non-targeted metabolic profiling method into a pseudo-targeted method using retention time locking gas chromatography/mass spectrometry-selected ion monitoring (RTL-GC/MS-SIM). To achieve this transformation, an algorithm based on the automated mass spectral deconvolution and identification system (AMDIS), GC/MS raw data and a bi-Gaussian chromatographic peak model was developed. The established GC/MS-SIM method was compared with GC/MS-full scan (total ion current and extracted ion current, TIC and EIC) methods. For a typical tobacco leaf extract, 93% of components had relative standard deviations (RSDs) of relative peak areas below 20% with the SIM method, compared with 88% by the EIC method and 81% by the TIC method. In addition, 47.3% of components had linear correlation coefficients higher than 0.99, compared with 5.0% by the EIC and 6.2% by the TIC methods. Multivariate analysis showed that the pooled quality control samples clustered more tightly using the developed method than using the GC/MS-full scan methods, indicating better data quality. In the analysis of variance of the tobacco samples from three different planting regions, 167 differential components (p<0.05) were screened out using the RTL-GC/MS-SIM method, compared with 151 and 131 by the EIC and TIC methods, respectively. The results show that the developed method not only has higher sensitivity, better linearity and better data quality, but also does not need complicated peak alignment among different samples. It is especially suitable for screening differential components in metabolic profiling investigations. Copyright © 2012 Elsevier B.V. All rights reserved.
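The bi-Gaussian peak model used when building the SIM acquisition windows can be written down directly; the parameters and window rule below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def bi_gaussian(t, height, t_peak, sigma_left, sigma_right):
    """Bi-Gaussian chromatographic peak: different widths on the leading and
    tailing sides of the apex (used here only to illustrate the peak model)."""
    sigma = np.where(t < t_peak, sigma_left, sigma_right)
    return height * np.exp(-0.5 * ((t - t_peak) / sigma) ** 2)

# Example: an asymmetric (tailing) peak centred at 12.3 min.
t = np.linspace(11.5, 13.5, 400)
peak = bi_gaussian(t, height=1.0, t_peak=12.3, sigma_left=0.05, sigma_right=0.12)

# From the fitted model one can pick the apex retention time and a narrow
# monitoring window around it for building the SIM acquisition table.
apex = t[np.argmax(peak)]
window = (apex - 3 * 0.05, apex + 3 * 0.12)
print("apex:", round(apex, 2), "SIM window:", tuple(round(w, 2) for w in window))
```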
Phenomenology and treatment of selective mutism.
Kumpulainen, Kirsti
2002-01-01
Selective mutism is a multidimensional childhood disorder in which, according to the most recent studies, biologically mediated temperament and anxiety components seem to play a major role. Several psychotherapy methods have been reported in case studies to be useful, but the disorder is commonly seen to be resistant to change, particularly in cases of long duration. Currently, behaviour modification and other cognitive methods, together with cooperation with the family and the school personnel, are recommended in the treatment of selective mutism. Selective serotonin reuptake inhibitors and selective monoamine oxidase inhibitors have also been reported to be helpful when treating children with selective mutism. At the moment, pharmacotherapy cannot be recommended as the treatment of first choice but if other methods of treatment are not helpful, medication can be included in the treatment scheme. Comprehensive evaluation and treatment of possible primary and comorbid problems that require treatment are also essential.
Morales-Ramos, J A; Rojas, M G; Shapiro-Ilan, D I; Tedders, W L
2011-10-01
We studied the ability of Tenebrio molitor L. (Coleoptera: Tenebrionidae) to self-select optimal ratios of two dietary components to approach nutritional balance and maximum fitness. Relative consumption of wheat bran and dry potato flakes was determined among larvae feeding on four different ratios of these components (10, 20, 30, and 40% potato). Groups of early instars were provided with a measured amount of food and the consumption of each diet component was measured at the end of 4 wk and again 3 wk later. Consumption of diet components by T. molitor larvae deviated significantly from expected ratios indicating nonrandom self-selection. Mean percentages of dry potato consumed were 11.98, 19.16, 19.02, and 19.27% and 11.89, 20.48, 24.67, and 25.97% during the first and second experimental periods for diets with 10, 20, 30, and 40% potato, respectively. Life table analysis was used to determine the fitness of T. molitor developing in the four diet mixtures in a no-choice experiment. The diets were compared among each other and a control diet of wheat bran only. Doubling time was significantly shorter in groups consuming 10 and 20% potato than the control and longer in groups feeding on 30 and 40% potato. The self-selected ratios of the two diet components approached 20% potato, which was the best ratio for development and second best for population growth. Our findings show dietary self-selection behavior in T. molitor larvae, and these findings may lead to new methods for optimizing dietary supplements for T. molitor.
Improving Cluster Analysis with Automatic Variable Selection Based on Trees
2014-12-01
Abbreviations (partial list): ...regression trees; DAISY, DISsimilAritY; PAM, partitioning around medoids; PMA, penalized multivariate analysis; SPC, sparse principal components; UPGMA, unweighted pair-group average method. The UPGMA method measures dissimilarities between all objects in two clusters and takes the average value.
Local T1-T2 distribution measurements in porous media
NASA Astrophysics Data System (ADS)
Vashaee, S.; Li, M.; Newling, B.; MacMillan, B.; Marica, F.; Kwak, H. T.; Gao, J.; Al-harbi, A. M.; Balcom, B. J.
2018-02-01
A novel slice-selective T1-T2 measurement is proposed to measure spatially resolved T1-T2 distributions. An adiabatic inversion pulse is employed for slice selection. The slice-selective pulse is able to select a quasi-rectangular slice, on the order of 1 mm, at an arbitrary position within the sample. The method does not employ conventional selective excitation, in which the longitudinal magnetization in the slice of interest is rotated into the transverse plane; instead, it relies on a subtraction of CPMG data acquired with and without adiabatic inversion slice selection. T1 weighting is introduced during recovery from the inversion associated with slice selection. The local T1-T2 distributions measured are of similar quality to bulk T1-T2 measurements. The new method can be employed to characterize oil-water mixtures and other fluids in porous media, and is beneficial when a coarse spatial distribution of the components is of interest.
Ewing, Alexander C; Kottke, Melissa J; Kraft, Joan Marie; Sales, Jessica M; Brown, Jennifer L; Goedken, Peggy; Wiener, Jeffrey; Kourtis, Athena P
2017-03-01
African American adolescent females are at elevated risk for unintended pregnancy and sexually transmitted infections (STIs). Dual protection (DP) is defined as concurrent prevention of pregnancy and STIs. This can be achieved by abstinence, consistent condom use, or the dual methods of condoms plus an effective non-barrier contraceptive. Previous clinic-based interventions showed short-term effects on increasing dual method use, but evidence of sustained effects on dual method use and decreased incident pregnancies and STIs are lacking. This manuscript describes the 2GETHER Project. 2GETHER is a randomized controlled trial of a multi-component intervention to increase dual protection use among sexually active African American females aged 14-19 years not desiring pregnancy at a Title X clinic in Atlanta, GA. The intervention is clinic-based and includes a culturally tailored interactive multimedia component and counseling sessions, both to assist in selection of a DP method and to reinforce use of the DP method. The participants are randomized to the study intervention or the standard of care, and followed for 12 months to evaluate how the intervention influences DP method selection and adherence, pregnancy and STI incidence, and participants' DP knowledge, intentions, and self-efficacy. The 2GETHER Project is a novel trial to reduce unintended pregnancies and STIs among African American adolescents. The intervention is unique in the comprehensive and complementary nature of its components and its individual tailoring of provider-patient interaction. If the trial interventions are shown to be effective, then it will be reasonable to assess their scalability and applicability in other populations. Published by Elsevier Inc.
Thermal signature identification system (TheSIS): a spread spectrum temperature cycling method
NASA Astrophysics Data System (ADS)
Merritt, Scott
2015-03-01
NASA GSFC's Thermal Signature Identification System (TheSIS) 1) measures the high order dynamic responses of optoelectronic components to direct sequence spread-spectrum temperature cycling, 2) estimates the parameters of multiple autoregressive moving average (ARMA) or other models of the responses, and 3) selects the most appropriate model using the Akaike Information Criterion (AIC). Using the AIC-tested model and parameter vectors from TheSIS, one can 1) select high-performing components on a multivariate basis, i.e., with multivariate Figures of Merit (FOMs), 2) detect subtle reversible shifts in performance, and 3) investigate irreversible changes in component or subsystem performance, e.g. aging. We show examples of the TheSIS methodology for passive and active components and systems, e.g. fiber Bragg gratings (FBGs) and DFB lasers with coupled temperature control loops, respectively.
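Fitting several ARMA candidates and selecting by AIC, the modeling step described above, can be sketched with statsmodels. The synthetic response, candidate orders, and figure-of-merit usage are assumptions for illustration, not the TheSIS implementation.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Illustrative response record: component output (e.g. an FBG wavelength shift)
# sampled while the temperature follows a spread-spectrum cycling sequence.
rng = np.random.default_rng(7)
n = 500
y = np.zeros(n)
e = rng.normal(scale=0.1, size=n)
for t in range(2, n):                         # synthetic ARMA(2,1)-like dynamics
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + e[t] + 0.4 * e[t - 1]

# Fit several candidate ARMA(p, q) models and keep the one with the lowest AIC.
candidates = [(p, 0, q) for p in range(1, 4) for q in range(0, 3)]
fits = {order: ARIMA(y, order=order).fit() for order in candidates}
best = min(fits, key=lambda order: fits[order].aic)
print("selected order:", best, "AIC:", round(fits[best].aic, 1))
print("parameter vector:", fits[best].params.round(3))   # basis for a multivariate FOM
```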
Continuum radiation from active galactic nuclei: A statistical study
NASA Technical Reports Server (NTRS)
Isobe, T.; Feigelson, E. D.; Singh, K. P.; Kembhavi, A.
1986-01-01
The physics of the continuum spectrum of active galactic nuclei (AGNs) was examined using a large data set and rigorous statistical methods. A database was constructed for 469 objects, including radio selected quasars, optically selected quasars, X-ray selected AGNs, BL Lac objects, and optically unidentified compact radio sources. Each object has measurements of its radio, optical, and X-ray core continuum luminosities, although many of these are upper limits. Since many radio sources have extended components, the core component was carefully separated from the total radio luminosity. With survival analysis statistical methods, which treat upper limits correctly, these data can yield better statistical results than those previously obtained. A variety of statistical tests are performed, such as comparison of the luminosity functions in different subsamples and linear regressions of luminosities in different bands. Interpretation of the results leads to the following tentative conclusions: the main emission mechanism of optically selected quasars and X-ray selected AGNs is thermal, while that of BL Lac objects is synchrotron; radio selected quasars may have two different emission mechanisms in the X-ray band; BL Lac objects appear to be special cases of the radio selected quasars; some compact radio sources show the possibility of synchrotron self-Compton (SSC) emission in the optical band; and the spectral index between the optical and X-ray bands depends on the optical luminosity.
Contact-free heart rate measurement using multiple video data
NASA Astrophysics Data System (ADS)
Hung, Pang-Chan; Lee, Kual-Zheng; Tsai, Luo-Wei
2013-10-01
In this paper, we propose a contact-free heart rate (HR) measurement method based on analyzing sequential images from multiple video sources. In the proposed method, skin-like pixels are first detected in each video stream to extract color features. These color features are synchronized and analyzed by independent component analysis. A representative component is then selected from the independent component candidates to measure the HR, achieving under 2% deviation on average compared with a pulse oximeter in a controlled environment. The advantages of the proposed method are: 1) it uses low-cost, widely accessible camera devices; 2) it reduces user discomfort through contact-free measurement; and 3) it achieves a low error rate and high stability by integrating multiple video data.
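A minimal sketch of the pipeline, color features from multiple synchronized videos, ICA separation, and selection of the representative component by its spectral peak in the heart-rate band, is shown below. The synthetic traces, band limits, and selection rule are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA

fps = 30.0
t = np.arange(0, 30, 1 / fps)                       # 30 s of video at 30 frames/s
rng = np.random.default_rng(8)

# Assumed input: mean R, G, B values of skin-like pixels per frame, two cameras.
pulse = 0.05 * np.sin(2 * np.pi * 1.2 * t)          # 1.2 Hz corresponds to 72 bpm
def camera_trace():
    mix = rng.uniform(0.5, 1.5, size=(3, 1))
    return (mix * pulse + 0.02 * rng.normal(size=(3, t.size))).T   # frames x RGB

X = np.hstack([camera_trace(), camera_trace()])     # synchronized multi-video features

# Separate sources and pick the component with the strongest spectral peak in
# a plausible heart-rate band (0.7-4 Hz) as the representative component.
S = FastICA(n_components=3, random_state=0).fit_transform(X)
freqs = np.fft.rfftfreq(t.size, 1 / fps)
band = (freqs > 0.7) & (freqs < 4.0)
power = np.abs(np.fft.rfft(S, axis=0)) ** 2
best = np.argmax(power[band].max(axis=0))
hr = 60.0 * freqs[band][np.argmax(power[band][:, best])]
print("estimated heart rate:", round(hr, 1), "bpm")
```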
ERIC Educational Resources Information Center
Trautner, Hanns Martin
Following an outline of the theoretical approach and method of a study of components of sex role development among German children, results which concern ontogenetic changes in sex role stereotypes preferences are presented. In addition, interrelations of different components of sex role development and cognitive variables are analyzed. The…
NASA Astrophysics Data System (ADS)
Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.
2016-01-01
Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
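A heavily simplified fusion sketch of the low-/high-frequency split described above, assuming a plain discrete wavelet transform in place of the lifting scheme, a max-absolute rule in place of regional variance estimation, and omitting robust PCA entirely.

```python
# Simplified wavelet-domain fusion of two corrupted views (illustrative only).
import numpy as np
import pywt

def fuse(img_a, img_b, wavelet="haar"):
    ca, (ha, va, da) = pywt.dwt2(img_a, wavelet)
    cb, (hb, vb, db) = pywt.dwt2(img_b, wavelet)
    low = 0.5 * (ca + cb)                    # low-frequency: simple average (stand-in for RPCA)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)  # high-frequency: keep stronger detail
    return pywt.idwt2((low, (pick(ha, hb), pick(va, vb), pick(da, db))), wavelet)

a = np.random.default_rng(0).random((64, 64))  # stand-ins for two corrupted images
b = np.random.default_rng(1).random((64, 64))
fused = fuse(a, b)
print(fused.shape)
```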
Method of forming components for a high-temperature secondary electrochemical cell
Mrazek, F.C.; Battles, J.E.
1981-05-22
A method of forming a component for a high-temperature secondary electrochemical cell having a positive electrode including a sulfide selected from the group consisting of iron sulfides, nickel sulfides, copper sulfides and cobalt sulfides, a negative electrode including an alloy of aluminum and an electrically insulating porous separator between said electrodes is described. The improvement comprises forming a slurry of solid particles dispersed in a liquid electrolyte such as the lithium chloride-potassium chloride eutectic, casting the slurry into a form having the shape of one of the components and smoothing the exposed surface of the slurry, cooling the cast slurry to form the solid component, and removing same. Electrodes and separators can be thus formed.
NASA Technical Reports Server (NTRS)
Benzie, M. A.
1998-01-01
The objective of this research project was to examine processing and design parameters in the fabrication of composite components to obtain a better understanding of, and attempt to minimize, springback associated with composite materials. To accomplish this, both processing and design parameters were included in a Taguchi-designed experiment. Composite angled panels were fabricated by hand layup techniques, and the fabricated panels were inspected for springback effects. This experiment yielded several significant results. The confirmation experiment validated the reproducibility of the factorial effects, accounted for the recognized error, and established the experiment as reliable. The material used in the design of tooling needs to be a major consideration when fabricating composite components, as expected. The factors dealing with resin flow, however, raise several potentially serious material and design questions. These questions must be dealt with up front in order to minimize springback: viscosity of the resin, vacuum bagging of the part for cure, and the curing method selected. These factors directly affect design, material selection, and processing methods.
Fabrication of metal/semiconductor nanocomposites by selective laser nano-welding.
Yu, Huiwu; Li, Xiangyou; Hao, Zhongqi; Xiong, Wei; Guo, Lianbo; Lu, Yongfeng; Yi, Rongxing; Li, Jiaming; Yang, Xinyan; Zeng, Xiaoyan
2017-06-01
A green and simple method to prepare metal/semiconductor nanocomposites by selective laser nano-welding of metal and semiconductor nanoparticles was presented, in which the sizes, phases, and morphologies of the components can be maintained. Many types of nanocomposites (such as Ag/TiO2, Ag/SnO2, Ag/ZnO2, Pt/TiO2, Pt/SnO2, and Pt/ZnO) can be prepared by this method, and their corresponding performances were enhanced.
Selection methods regulate evolution of cooperation in digital evolution
Lichocki, Paweł; Floreano, Dario; Keller, Laurent
2014-01-01
A key, yet often neglected, component of digital evolution and evolutionary models is the ‘selection method’ which assigns fitness (number of offspring) to individuals based on their performance scores (efficiency in performing tasks). Here, we study with formal analysis and numerical experiments the evolution of cooperation under the five most common selection methods (proportionate, rank, truncation-proportionate, truncation-uniform and tournament). We consider related individuals engaging in a Prisoner's Dilemma game where individuals can either cooperate or defect. A cooperator pays a cost, whereas its partner receives a benefit, which affect their performance scores. These performance scores are translated into fitness by one of the five selection methods. We show that cooperation is positively associated with the relatedness between individuals under all selection methods. By contrast, the change in the performance benefit of cooperation affects the populations’ average level of cooperation only under the proportionate methods. We also demonstrate that the truncation and tournament methods may introduce negative frequency-dependence and lead to the evolution of polymorphic populations. Using the example of the evolution of cooperation, we show that the choice of selection method, though it is often marginalized, can considerably affect the evolutionary dynamics. PMID:24152811
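An illustrative sketch of two of the five selection methods discussed (proportionate and tournament); the population size, scores, and tournament size are made-up values, not the paper's experimental setup.

```python
# Two common selection methods: fitness-proportionate and k-tournament.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random(10)                      # performance scores of 10 individuals

def proportionate(scores, n_offspring, rng):
    p = scores / scores.sum()                # reproduction probability proportional to score
    return rng.choice(scores.size, size=n_offspring, p=p)

def tournament(scores, n_offspring, k, rng):
    picks = rng.integers(0, scores.size, size=(n_offspring, k))
    return picks[np.arange(n_offspring), scores[picks].argmax(axis=1)]

print("proportionate parents:", proportionate(scores, 10, rng))
print("tournament (k=3) parents:", tournament(scores, 10, 3, rng))
```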
NASA Astrophysics Data System (ADS)
Iwasaki, Ryosuke; Nagaoka, Ryo; Yoshizawa, Shin; Umemura, Shin-ichiro
2018-07-01
Acoustic cavitation bubbles are known to enhance the heating effect in high-intensity focused ultrasound (HIFU) treatment. The detection of cavitation bubbles with high sensitivity and selectivity is required to predict the therapeutic and side effects of cavitation, and ensure the efficacy and safety of the treatment. A pulse inversion (PI) technique has been widely used for imaging microbubbles through enhancing the second-harmonic component of echo signals. However, it has difficulty in separating the nonlinear response of microbubbles from that due to nonlinear propagation. In this study, a triplet pulse (3P) method was investigated to specifically image cavitation bubbles by extracting the 1.5th fractional harmonic component. The proposed 3P method depicted cavitation bubbles with a contrast ratio significantly higher than those in conventional imaging methods with and without PI. The results suggest that the 3P method is effective for specifically detecting microbubbles in cavitation-enhanced HIFU treatment.
NASA Astrophysics Data System (ADS)
Ascari, A.; Fortunato, A.; Liverani, E.; Gamberoni, A.; Tomesani, L.
The application of laser technology to welding of dissimilar AISI316 stainless steel components manufactured with selective laser melting (SLM) and traditional methods has been investigated. The role of laser parameters on weld bead formation has been studied experimentally, with particular attention placed on effects occurring at the interface between the two parts. In order to assess weld bead characteristics, standardised tensile tests were carried out on suitable specimens and the fracture zone was analysed. The results highlighted the possibility of exploiting suitable process parameters to appropriately shape the heat affected and fusion zones in order to maximise the mechanical performance of the component and minimise interactions between the two parent metals in the weld bead.
Method and apparatus for adapting steady flow with cyclic thermodynamics
Swift, Gregory W.; Reid, Robert S.; Ward, William C.
2000-01-01
Energy transfer apparatus has a resonator for supporting standing acoustic waves at a selected frequency with a steady flow process fluid thermodynamic medium and a solid medium having heat capacity. The fluid medium and the solid medium are disposed within the resonator for thermal contact therebetween and for relative motion therebetween. The relative motion is produced by a first means for producing a steady velocity component and second means for producing an oscillating velocity component at the selected frequency and concomitant wavelength of the standing acoustic wave. The oscillating velocity and associated oscillating pressure component provide energy transfer between the steady flow process fluid and the solid medium as the steady flow process fluid moves through the resonator.
The biometric-based module of smart grid system
NASA Astrophysics Data System (ADS)
Engel, E.; Kovalev, I. V.; Ermoshkina, A.
2015-10-01
Within the Smart Grid concept, a flexible biometric-based module based on Principal Component Analysis (PCA) and a selective Neural Network is developed. To form the selective Neural Network, the biometric-based module uses a method that includes three main stages: preliminary processing of the image, face localization, and face recognition. Experiments on the Yale face database show that (i) the selective Neural Network exhibits promising classification capability for face detection and recognition problems; and (ii) the proposed biometric-based module achieves near real-time face detection and recognition speed and competitive performance, as compared to some existing subspace-based methods.
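A hedged sketch of the PCA stage only, on synthetic stand-ins for localized face crops; the selective neural network and the Yale-database pipeline are not reproduced, and a nearest-neighbour classifier is used purely for illustration.

```python
# PCA ("eigenface"-style) feature extraction followed by a simple classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
faces = rng.random((40, 32 * 32))            # stand-in for localized face crops
labels = np.repeat(np.arange(10), 4)         # 10 subjects, 4 images each

pca = PCA(n_components=20).fit(faces)        # project faces onto principal components
features = pca.transform(faces)

clf = KNeighborsClassifier(n_neighbors=1).fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```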
A Framework for Usability Evaluation in EHR Procurement.
Tyllinen, Mari; Kaipio, Johanna; Lääveri, Tinja
2018-01-01
Usability should be considered already by the procuring organizations when selecting future systems. In this paper, we present a framework for usability evaluation during electronic health record (EHR) system procurement. We describe the objectives of the evaluation, the procedure, selected usability attributes and the evaluation methods to measure them. We also present the emphasis usability had in the selection process. We do not elaborate on the details of the results, the application of methods or gathering of data. Instead we focus on the components of the framework to inform and give an example to other similar procurement projects.
Method and product for phosphosilicate slurry for use in dentistry and related bone cements
Wagh, Arun S.; Primus, Carolyn
2006-08-01
The present invention is directed to magnesium phosphate ceramics and their methods of manufacture. The composition of the invention is produced by combining a mixture of a substantially dry powder component with a liquid component. The substantially dry powder component comprises a sparsely soluble oxide powder, an alkali metal phosphate powder, a sparsely soluble silicate powder, with the balance of the substantially dry powder component comprising at least one powder selected from the group consisting of bioactive powders, biocompatible powders, fluorescent powders, fluoride releasing powders, and radiopaque powders. The liquid component comprises a pH modifying agent, a monovalent alkali metal phosphate in aqueous solution, the balance of the liquid component being water. The use of calcined magnesium oxide as the oxide powder and hydroxylapatite as the bioactive powder produces a self-setting ceramic that is particularly suited for use in dental and orthopedic applications.
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2001-01-01
A computer implemented method of processing two-dimensional physical signals includes five basic components and the associated presentation techniques of the results. The first component decomposes the two-dimensional signal into one-dimensional profiles. The second component is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF's) from each profile based on local extrema and/or curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the profiles. In the third component, the IMF's of each profile are then subjected to a Hilbert Transform. The fourth component collates the Hilbert transformed IMF's of the profiles to form a two-dimensional Hilbert Spectrum. A fifth component manipulates the IMF's by, for example, filtering the two-dimensional signal by reconstructing the two-dimensional signal from selected IMF(s).
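A one-profile sketch of the decomposition idea (profile, then EMD, then Hilbert transform), assuming the third-party PyEMD package; it is not the patented two-dimensional implementation.

```python
# EMD of a single image profile followed by the Hilbert transform of each IMF.
import numpy as np
from PyEMD import EMD
from scipy.signal import hilbert

t = np.linspace(0, 1, 512)
profile = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)  # one image row

imfs = EMD()(profile)                         # intrinsic mode functions of the profile
analytic = hilbert(imfs, axis=1)              # Hilbert transform of each IMF
amplitude = np.abs(analytic)
inst_freq = np.diff(np.unwrap(np.angle(analytic)), axis=1) / (2 * np.pi * (t[1] - t[0]))

print("number of IMFs:", imfs.shape[0])
reconstructed = imfs[:2].sum(axis=0)          # e.g., filter by reconstructing from selected IMFs
```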
Superhydrophobic films and methods for making superhydrophobic films
Aytug, Tolga; Paranthaman, Mariappan Parans; Simpson, John T.; Bogorin, Daniela Florentina
2017-09-26
This disclosure relates to methods that include depositing a first component and a second component to form a film including a plurality of nanostructures, and coating the nanostructures with a hydrophobic layer to render the film superhydrophobic. The first component and the second component can be immiscible and phase-separated during the depositing step. The first component and the second component can be independently selected from the group consisting of a metal oxide, a metal nitride, a metal oxynitride, a metal, and combinations thereof. The films can have a thickness greater than or equal to 5 nm; an average surface roughness (Ra) of from 90 to 120 nm, as measured on a 5 μm × 5 μm area; a surface area of at least 20 m²/g; a contact angle with a drop of water of at least 120 degrees; and can maintain the contact angle when exposed to harsh conditions.
Kesharaju, Manasa; Nagarajah, Romesh
2015-09-01
The motivation for this research stems from a need for providing a non-destructive testing method capable of detecting and locating any defects and microstructural variations within armour ceramic components before issuing them to the soldiers who rely on them for their survival. The development of an automated ultrasonic inspection based classification system would make possible the checking of each ceramic component and immediately alert the operator about the presence of defects. Generally, in many classification problems a choice of features or dimensionality reduction is significant and simultaneously very difficult, as a substantial computational effort is required to evaluate possible feature subsets. In this research, a combination of artificial neural networks and genetic algorithms is used to optimize the feature subset used in classification of various defects in reaction-sintered silicon carbide ceramic components. Initially, wavelet based feature extraction is implemented from the region of interest. An Artificial Neural Network classifier is employed to evaluate the performance of these features. Genetic Algorithm based feature selection is performed. Principal Component Analysis is a popular technique used for feature selection and is compared with the genetic algorithm based technique in terms of classification accuracy and selection of the optimal number of features. The experimental results confirm that features identified by Principal Component Analysis lead to improved performance, with a classification accuracy of 96% compared with 94% for the Genetic Algorithm based selection. Copyright © 2015 Elsevier B.V. All rights reserved.
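A sketch of the PCA-plus-neural-network comparison path only (the genetic-algorithm selection is omitted); the wavelet features and defect labels below are synthetic stand-ins, not the ultrasonic data used in the study.

```python
# PCA feature reduction feeding a small neural-network classifier,
# evaluated by cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))            # stand-in for wavelet-based features per region
y = rng.integers(0, 3, size=200)              # stand-in defect classes

model = make_pipeline(PCA(n_components=10), MLPClassifier(max_iter=2000, random_state=0))
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```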
Rapid thermal processing by stamping
Stradins, Pauls; Wang, Qi
2013-03-05
A rapid thermal processing device and methods are provided for thermal processing of samples such as semiconductor wafers. The device has components including a stamp (35) having a stamping surface and a heater or cooler (40) to bring it to a selected processing temperature, a sample holder (20) for holding a sample (10) in position for intimate contact with the stamping surface; and positioning components (25) for moving the stamping surface and the stamp (35) in and away from intimate, substantially non-pressured contact. Methods for using and making such devices are also provided. These devices and methods allow inexpensive, efficient, easily controllable thermal processing.
System and process for aluminization of metal-containing substrates
Chou, Yeong-Shyung; Stevenson, Jeffry W.
2017-12-12
A system and method are detailed for aluminizing surfaces of metallic substrates, parts, and components with a protective alumina layer in-situ. Aluminum (Al) foil sandwiched between the metallic components and a refractory material when heated in an oxidizing gas under a compression load at a selected temperature forms the protective alumina coating on the surface of the metallic components. The alumina coating minimizes evaporation of volatile metals from the metallic substrates, parts, and components in assembled devices that can degrade performance during operation at high temperature.
System and process for aluminization of metal-containing substrates
Chou, Yeong-Shyung; Stevenson, Jeffry W
2015-11-03
A system and method are detailed for aluminizing surfaces of metallic substrates, parts, and components with a protective alumina layer in-situ. Aluminum (Al) foil sandwiched between the metallic components and a refractory material when heated in an oxidizing gas under a compression load at a selected temperature forms the protective alumina coating on the surface of the metallic components. The alumina coating minimizes evaporation of volatile metals from the metallic substrates, parts, and components in assembled devices during operation at high temperature that can degrade performance.
Comprehensive GMO detection using real-time PCR array: single-laboratory validation.
Mano, Junichi; Harada, Mioko; Takabatake, Reona; Furui, Satoshi; Kitta, Kazumi; Nakamura, Kosuke; Akiyama, Hiroshi; Teshima, Reiko; Noritake, Hiromichi; Hatano, Shuko; Futo, Satoshi; Minegishi, Yasutaka; Iizuka, Tayoshi
2012-01-01
We have developed a real-time PCR array method to comprehensively detect genetically modified (GM) organisms. In the method, genomic DNA extracted from an agricultural product is analyzed using various qualitative real-time PCR assays on a 96-well PCR plate, targeting individual GM events, recombinant DNA (r-DNA) segments, taxon-specific DNAs, and donor organisms of the respective r-DNAs. In this article, we report the single-laboratory validation of both the DNA extraction methods and the component PCR assays constituting the real-time PCR array. We selected DNA extraction methods for specified plant matrixes, i.e., maize flour, soybean flour, and ground canola seeds, then evaluated the DNA quantity, DNA fragmentation, and PCR inhibition of the resultant DNA extracts. For the component PCR assays, we evaluated the specificity and LOD. All DNA extraction methods and component PCR assays satisfied the criteria set on the basis of previous reports.
Zhang, Jinming; Cavallari, Jennifer M; Fang, Shona C; Weisskopf, Marc G; Lin, Xihong; Mittleman, Murray A; Christiani, David C
2017-01-01
Background: Environmental and occupational exposure to metals is ubiquitous worldwide, and understanding the hazardous metal components in this complex mixture is essential for environmental and occupational regulations. Objective: To identify hazardous components from metal mixtures that are associated with alterations in cardiac autonomic responses. Methods: Urinary concentrations of 16 types of metals were examined and ‘acceleration capacity’ (AC) and ‘deceleration capacity’ (DC), indicators of cardiac autonomic effects, were quantified from ECG recordings among 54 welders. We fitted linear mixed-effects models with least absolute shrinkage and selection operator (LASSO) to identify metal components that are associated with AC and DC. The Bayesian Information Criterion was used as the criterion for model selection procedures. Results: Mercury and chromium were selected for DC analysis, whereas mercury, chromium and manganese were selected for AC analysis through the LASSO approach. When we fitted the linear mixed-effects models with ‘selected’ metal components only, the effect of mercury remained significant. Every 1 µg/L increase in urinary mercury was associated with −0.58 ms (−1.03, −0.13) changes in DC and 0.67 ms (0.25, 1.10) changes in AC. Conclusion: Our study suggests that exposure to several metals is associated with impaired cardiac autonomic functions. Our findings should be replicated in future studies with larger sample sizes. PMID:28663305
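A simplified sketch of LASSO selection with an information-criterion stopping rule; the mixed-effects structure for repeated ECG measures is not reproduced, and the concentrations and outcome below are simulated assumptions.

```python
# LASSO with BIC-based model selection on standardized (log) metal concentrations.
import numpy as np
from sklearn.linear_model import LassoLarsIC
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
metals = rng.lognormal(size=(54, 16))          # stand-in urinary concentrations of 16 metals
dc = -0.6 * np.log(metals[:, 0]) + rng.standard_normal(54)  # stand-in outcome (e.g., DC)

X = StandardScaler().fit_transform(np.log(metals))
lasso = LassoLarsIC(criterion="bic").fit(X, dc)
selected = np.flatnonzero(lasso.coef_)
print("metal indices retained by BIC-LASSO:", selected)
```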
Earth Observing System (EOS) Advanced Microwave Sounding Unit-A (AMSU-A) Spares Program Plan
NASA Technical Reports Server (NTRS)
Chapman, Weldon
1994-01-01
This plan specifies the spare components to be provided for the EOS/AMSU-A instrument and the general spares philosophy for their procurement. It also addresses key components not recommended for spares, as well as the schedule and method for obtaining the spares. The selected spares list was generated based on component criticality, reliability, repairability, and availability. An alternative spares list is also proposed based on more stringent fiscal constraints.
Statistical Feature Extraction for Artifact Removal from Concurrent fMRI-EEG Recordings
Liu, Zhongming; de Zwart, Jacco A.; van Gelderen, Peter; Kuo, Li-Wei; Duyn, Jeff H.
2011-01-01
We propose a set of algorithms for sequentially removing artifacts related to MRI gradient switching and cardiac pulsations from electroencephalography (EEG) data recorded during functional magnetic resonance imaging (fMRI). Special emphases are directed upon the use of statistical metrics and methods for the extraction and selection of features that characterize gradient and pulse artifacts. To remove gradient artifacts, we use a channel-wise filtering based on singular value decomposition (SVD). To remove pulse artifacts, we first decompose data into temporally independent components and then select a compact cluster of components that possess sustained high mutual information with the electrocardiogram (ECG). After the removal of these components, the time courses of remaining components are filtered by SVD to remove the temporal patterns phase-locked to the cardiac markers derived from the ECG. The filtered component time courses are then inversely transformed into multi-channel EEG time series free of pulse artifacts. Evaluation based on a large set of simultaneous EEG-fMRI data obtained during a variety of behavioral tasks, sensory stimulations and resting conditions showed excellent data quality and robust performance attainable by the proposed methods. These algorithms have been implemented as a Matlab-based toolbox made freely available for public access and research use. PMID:22036675
Statistical feature extraction for artifact removal from concurrent fMRI-EEG recordings.
Liu, Zhongming; de Zwart, Jacco A; van Gelderen, Peter; Kuo, Li-Wei; Duyn, Jeff H
2012-02-01
We propose a set of algorithms for sequentially removing artifacts related to MRI gradient switching and cardiac pulsations from electroencephalography (EEG) data recorded during functional magnetic resonance imaging (fMRI). Special emphasis is directed upon the use of statistical metrics and methods for the extraction and selection of features that characterize gradient and pulse artifacts. To remove gradient artifacts, we use channel-wise filtering based on singular value decomposition (SVD). To remove pulse artifacts, we first decompose data into temporally independent components and then select a compact cluster of components that possess sustained high mutual information with the electrocardiogram (ECG). After the removal of these components, the time courses of remaining components are filtered by SVD to remove the temporal patterns phase-locked to the cardiac timing markers derived from the ECG. The filtered component time courses are then inversely transformed into multi-channel EEG time series free of pulse artifacts. Evaluation based on a large set of simultaneous EEG-fMRI data obtained during a variety of behavioral tasks, sensory stimulations and resting conditions showed excellent data quality and robust performance attainable with the proposed methods. These algorithms have been implemented as a Matlab-based toolbox made freely available for public access and research use. Published by Elsevier Inc.
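A rough sketch of the channel-wise SVD filtering step for repeated gradient-artifact epochs; the epoching, artifact shape, and number of removed components are assumptions rather than the authors' toolbox settings.

```python
# Stack artifact epochs for one EEG channel and project out the leading
# singular components, which capture the repeated gradient waveform.
import numpy as np

rng = np.random.default_rng(0)
n_epochs, n_samples = 100, 250
artifact = np.sin(np.linspace(0, 20 * np.pi, n_samples)) * 50   # repeated gradient-like waveform
epochs = artifact + rng.standard_normal((n_epochs, n_samples))  # one channel, epoched

U, s, Vt = np.linalg.svd(epochs, full_matrices=False)
k = 1                                             # leading component(s) assumed to be artifact
cleaned = epochs - U[:, :k] * s[:k] @ Vt[:k]      # subtract the artifact subspace

print("residual RMS after filtering:", np.sqrt((cleaned ** 2).mean()))
```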
Color group selection for computer interfaces
NASA Astrophysics Data System (ADS)
Lyons, Paul; Moretti, Giovanni; Wilson, Mark
2000-06-01
We describe a low-impact method for coloring interfaces harmoniously. The method uses a model that characterizes the overall image including the need for distinguishability between interface components. The degree of visual distinction between one component and other components, and its color strength (which increases with its importance and decreases with its size and longevity), are used in generating a rigid ball-and-stick 'color molecule,' which represents the color relationships between the interface components. The shape of the color molecule is chosen to conform to standard principles of color harmony (like colors harmonize, complementary colors harmonize, cycles in the color space harmonize, and so on). The color molecule's shape is fixed, but its position and orientation within the perceptually uniform color solid are not. The end user of the application chooses a new color scheme for the complete interface by repositioning the molecule within the color space. The molecule's shape and rigidity, and the space's perceptual uniformity, ensure that the distinguishability and color harmony of the components are maintained. The system produces a selection of color schemes which often include subtle 'nameless' colors that people rarely choose using conventional color controls, but which blend smoothly into a harmonious color scheme. A new set of equally harmonious color schemes only requires repositioning the color molecule within the space.
Microbially-mediated method for synthesis of non-oxide semiconductor nanoparticles
Phelps, Tommy J.; Lauf, Robert J.; Moon, Ji Won; Rondinone, Adam J.; Love, Lonnie J.; Duty, Chad Edward; Madden, Andrew Stephen; Li, Yiliang; Ivanov, Ilia N.; Rawn, Claudia Jeanette
2014-06-24
The invention is directed to a method for producing non-oxide semiconductor nanoparticles, the method comprising: (a) subjecting a combination of reaction components to conditions conducive to microbially-mediated formation of non-oxide semiconductor nanoparticles, wherein said combination of reaction components comprises i) anaerobic microbes, ii) a culture medium suitable for sustaining said anaerobic microbes, iii) a metal component comprising at least one type of metal ion, iv) a non-metal component containing at least one non-metal selected from the group consisting of S, Se, Te, and As, and v) one or more electron donors that provide donatable electrons to said anaerobic microbes during consumption of the electron donor by said anaerobic microbes; and (b) isolating said non-oxide semiconductor nanoparticles, which contain at least one of said metal ions and at least one of said non-metals. The invention is also directed to non-oxide semiconductor nanoparticle compositions produced as above and having distinctive properties.
NASA Technical Reports Server (NTRS)
Lee, Allan Y.; Tsuha, Walter S.
1993-01-01
A two-stage model reduction methodology, combining the classical Component Mode Synthesis (CMS) method and the newly developed Enhanced Projection and Assembly (EP&A) method, is proposed in this research. The first stage of this methodology, called the COmponent Modes Projection and Assembly model REduction (COMPARE) method, involves the generation of CMS mode sets, such as the MacNeal-Rubin mode sets. These mode sets are then used to reduce the order of each component model in the Rayleigh-Ritz sense. The resultant component models are then combined to generate reduced-order system models at various system configurations. A composite mode set which retains important system modes at all system configurations is then selected from these reduced-order system models. In the second stage, the EP&A model reduction method is employed to reduce further the order of the system model generated in the first stage. The effectiveness of the COMPARE methodology has been successfully demonstrated on a high-order, finite-element model of the cruise-configured Galileo spacecraft.
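A minimal Rayleigh-Ritz reduction sketch with a toy stiffness/mass pair, showing the generic mode-projection step that such methods build on; it is not the COMPARE or EP&A code, and the mode count and matrices are assumptions.

```python
# Keep a component's lowest modes and project its stiffness and mass matrices.
import numpy as np
from scipy.linalg import eigh

n = 50
K = np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1) + np.diag(np.full(n - 1, -1.0), -1)
M = np.eye(n)                                   # toy component stiffness and mass matrices

vals, vecs = eigh(K, M)                         # generalized eigenproblem K phi = lambda M phi
Phi = vecs[:, :6]                               # retained component mode set
K_red = Phi.T @ K @ Phi                         # reduced-order component matrices
M_red = Phi.T @ M @ Phi
print("reduced model size:", K_red.shape)
```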
Damping characterization in large structures
NASA Technical Reports Server (NTRS)
Eke, Fidelis O.; Eke, Estelle M.
1991-01-01
This research project has as its main goal the development of methods for selecting the damping characteristics of components of a large structure or multibody system, in such a way as to produce some desired system damping characteristics. The main need for such an analytical device is in the simulation of the dynamics of multibody systems consisting, at least partially, of flexible components. The reason for this need is that all existing simulation codes for multibody systems require component-by-component characterization of complex systems, whereas requirements (including damping) often appear at the overall system level. The main goal was met in large part by the development of a method that will in fact synthesize component damping matrices from a given system damping matrix. The restrictions to the method are that the desired system damping matrix must be diagonal (which is almost always the case) and that interbody connections must be by simple hinges. In addition to the technical outcome, this project contributed positively to the educational and research infrastructure of Tuskegee University - a Historically Black Institution.
Zhao, Li-Ting; Xiang, Yu-Hong; Dai, Yin-Mei; Zhang, Zhuo-Yong
2010-04-01
Near infrared spectroscopy was applied to measure tissue slices of endometrial tissues to collect the spectra. A total of 154 spectra were obtained from 154 samples. The numbers of normal, hyperplasia, and malignant samples were 36, 60, and 58, respectively. Original near infrared spectra are composed of many variables, including interference information such as instrument errors and physical effects such as particle size and light scatter. In order to reduce these influences, the original spectra should be processed with different spectral preprocessing methods to compress variables and extract useful information. Thus the methods of spectral preprocessing and wavelength selection play an important role in the near infrared spectroscopy technique. In the present paper the raw spectra were processed using various preprocessing methods including first derivative, multiplicative scatter correction, the Savitzky-Golay first derivative algorithm, standard normal variate, smoothing, and moving-window median. The standard deviation was used to select the optimal spectral region of 4000-6000 cm⁻¹. Then principal component analysis was used for classification. The principal component analysis results showed that the three types of samples could be discriminated completely and the accuracy almost reached 100%. This study demonstrated that near infrared spectroscopy technology combined with chemometric methods could be a fast, efficient, and novel means to diagnose cancer. The proposed methods would be a promising and significant diagnosis technique for early stage cancer.
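A preprocessing sketch mirroring the SNV, Savitzky-Golay derivative, and PCA steps under assumed parameters (window length, polynomial order, wavenumber grid); the spectra below are random stand-ins, not the tissue data.

```python
# Standard normal variate, Savitzky-Golay first derivative, then PCA scores.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
spectra = rng.random((154, 600))                       # 154 samples, assumed 600 wavenumber points

snv = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)
deriv = savgol_filter(snv, window_length=11, polyorder=2, deriv=1, axis=1)

scores = PCA(n_components=3).fit_transform(deriv)      # scores used to separate the classes
print(scores.shape)
```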
Radiography by selective detection of scatter field velocity components
NASA Technical Reports Server (NTRS)
Dugan, Edward T. (Inventor); Jacobs, Alan M. (Inventor); Shedlock, Daniel (Inventor)
2007-01-01
A reconfigurable collimated radiation detector, system and related method includes at least one collimated radiation detector. The detector has an adjustable collimator assembly including at least one feature, such as a fin, optically coupled thereto. Adjustments to the adjustable collimator select particular directions of travel of scattered radiation emitted from an irradiated object which reach the detector. The collimated detector is preferably a collimated detector array, where the collimators are independently adjustable. The independent motion capability provides the capability to focus the image by selection of the desired scatter field components. When an array of reconfigurable collimated detectors is provided, separate image data can be obtained from each of the detectors and the respective images cross-correlated and combined to form an enhanced image.
Dimensionality and noise in energy selective x-ray imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alvarez, Robert E.
Purpose: To develop and test a method to quantify the effect of dimensionality on the noise in energy selective x-ray imaging. Methods: The Cramér-Rao lower bound (CRLB), a universal lower limit of the covariance of any unbiased estimator, is used to quantify the noise. It is shown that increasing dimensionality always increases, or at best leaves the same, the variance. An analytic formula for the increase in variance in an energy selective x-ray system is derived. The formula is used to gain insight into the dependence of the increase in variance on the properties of the additional basis functions, the measurement noise covariance, and the source spectrum. The formula is also used with computer simulations to quantify the dependence of the additional variance on these factors. Simulated images of an object with three materials are used to demonstrate the trade-off of increased information with dimensionality and noise. The images are computed from energy selective data with a maximum likelihood estimator. Results: The increase in variance depends most importantly on the dimension and on the properties of the additional basis functions. With the attenuation coefficients of cortical bone, soft tissue, and adipose tissue as the basis functions, the increase in variance of the bone component from two to three dimensions is 1.4 × 10³. With the soft tissue component, it is 2.7 × 10⁴. If the attenuation coefficient of a high atomic number contrast agent is used as the third basis function, there is only a slight increase in the variance from two to three basis functions, 1.03 and 7.4 for the bone and soft tissue components, respectively. The changes in spectrum shape with beam hardening also have a substantial effect. They increase the variance by a factor of approximately 200 for the bone component and 220 for the soft tissue component as the soft tissue object thickness increases from 1 to 30 cm. Decreasing the energy resolution of the detectors increases the variance of the bone component markedly with three dimension processing, approximately a factor of 25 as the resolution decreases from 100 to 3 bins. The increase with two dimension processing for adipose tissue is a factor of two and with the contrast agent as the third material for two or three dimensions is also a factor of two for both components. The simulated images show that a maximum likelihood estimator can be used to process energy selective x-ray data to produce images with noise close to the CRLB. Conclusions: The method presented can be used to compute the effects of the object attenuation coefficients and the x-ray system properties on the relationship of dimensionality and noise in energy selective x-ray imaging systems.
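A toy numerical check of the dimensionality-noise relationship, assuming a Gaussian noise model and made-up sensitivity matrices rather than the paper's spectra: the CRLB is the inverse of the Fisher information F = Mᵀ C⁻¹ M, and restricting the model to two basis functions can only reduce (or keep) the variance of the remaining components.

```python
# Compare CRLB variances with two versus three basis functions (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_bins = 5
M3 = rng.random((n_bins, 3))            # sensitivity of each energy bin to 3 basis coefficients
C = np.diag(rng.random(n_bins) + 0.1)   # measurement noise covariance

crlb3 = np.linalg.inv(M3.T @ np.linalg.inv(C) @ M3)
crlb2 = np.linalg.inv(M3[:, :2].T @ np.linalg.inv(C) @ M3[:, :2])

print("variance of component 1, two basis functions:", crlb2[0, 0])
print("variance of component 1, three basis functions:", crlb3[0, 0])
```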
Wang, Li-Li; Zhang, Yun-Bin; Sun, Xiao-Ya; Chen, Sui-Qing
2016-05-08
A quantitative analysis of multi-components by single marker (QAMS) method was established for quality evaluation, and its feasibility was validated by the simultaneous quantitative assay of four main components in Linderae Reflexae Radix. Four main components, pinostrobin, pinosylvin, pinocembrin, and 3,5-dihydroxy-2-(1-p-mentheneyl)-trans-stilbene, were selected as analytes to evaluate the quality by RP-HPLC coupled with a UV detector. The method was evaluated by comparing the quantitative results between the external standard method and QAMS on different HPLC systems. The results showed that no significant differences were found in the quantitative results of the four contents of Linderae Reflexae Radix determined by the external standard method and QAMS (RSD <3%). The contents of the four analytes (pinosylvin, pinocembrin, pinostrobin, and Reflexanbene I) in Linderae Reflexae Radix were determined using the single marker pinosylvin. The fingerprints were determined on Shimadzu LC-20AT and Waters e2695 HPLC systems equipped with three different columns.
Qin, Zifei; Lin, Pei; Dai, Yi; Yao, Zhihong; Wang, Li; Yao, Xinsheng; Liu, Liyin; Chen, Haifeng
2016-05-01
Allii Macrostemonis Bulbus (named Xiebai in China) is a folk medicine with medicinal value for the treatment of thoracic obstruction and cardialgia, as well as a food additive. However, there is no quantitative standard for Allii Macrostemonis Bulbus recorded in the current edition of the Chinese Pharmacopeia. Hence, the simultaneous assay of multiple components is urgently needed. In this study, chemometric methods were first applied to discover the components with significant fluctuation among multiple Allii Macrostemonis Bulbus samples based on optimized fingerprints. Meanwhile, the major components and the main absorbed components in rats were all selected as its representative components. Subsequently, a sensitive method was established for the simultaneous determination of 54 components (15 components for quantification and 39 components for semiquantification) by ultra high performance liquid chromatography coupled with quadrupole time-of-flight tandem mass spectrometry. Moreover, the validated method was successfully applied to evaluate the quality of multiple samples on the market. It became clear that the Allii Macrostemonis Bulbus samples varied significantly and showed poor consistency. This work illustrated that the proposed approach could improve the quality of Allii Macrostemonis Bulbus, and it also provided a feasible method for the quality evaluation of other traditional Chinese medicines. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
ERIC Educational Resources Information Center
Legerstee, Jeroen S.; Tulen, Joke H. M.; Dierckx, Bram; Treffers, Philip D. A.; Verhulst, Frank C.; Utens, Elisabeth M. W. J.
2010-01-01
Background: This study examined whether treatment response to stepped-care cognitive-behavioural treatment (CBT) is associated with changes in threat-related selective attention and its specific components in a large clinical sample of anxiety-disordered children. Methods: Ninety-one children with an anxiety disorder were included in the present…
Apollo experience report environmental acceptance testing
NASA Technical Reports Server (NTRS)
Laubach, C. H. M.
1976-01-01
Environmental acceptance testing was used extensively to screen selected spacecraft hardware for workmanship defects and manufacturing flaws. The minimum acceptance levels and durations and methods for their establishment are described. Component selection and test monitoring, as well as test implementation requirements, are included. Apollo spacecraft environmental acceptance test results are summarized, and recommendations for future programs are presented.
Selective preservation and origin of petroleum-forming aquatic kerogen
Hatcher, P.G.; Spiker, E. C.; Szeverenyi, N.M.; Maciel, G.E.
1983-01-01
Studies of a marine algal sapropel from Mangrove Lake, Bermuda, by 13C NMR and stable carbon isotopic methods show that precursors of aquatic kerogen (insoluble, macromolecular, paraffinic humic substances) are primary components of algae and possibly associated bacteria and that these substances survive microbial decomposition and are selectively preserved during early diagenesis. ?? 1983 Nature Publishing Group.
Infrared face recognition based on LBP histogram and KW feature selection
NASA Astrophysics Data System (ADS)
Xie, Zhihua
2014-07-01
The conventional LBP-based feature, as represented by the local binary pattern (LBP) histogram, still has room for performance improvement. This paper focuses on the dimension reduction of LBP micro-patterns and proposes an improved infrared face recognition method based on LBP histogram representation. To extract robust local features from infrared face images, LBP is chosen to obtain the composition of micro-patterns in sub-blocks. Based on statistical test theory, a Kruskal-Wallis (KW) feature selection method is proposed to retain the LBP patterns that are suitable for infrared face recognition. The experimental results show that combining LBP with KW feature selection improves the performance of infrared face recognition; the proposed method outperforms the traditional methods based on the LBP histogram, discrete cosine transform (DCT), or principal component analysis (PCA).
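A sketch of the LBP-histogram plus Kruskal-Wallis selection idea on synthetic "infrared faces"; the LBP parameters, histogram binning, and number of retained bins are assumptions, not the paper's settings.

```python
# Compute uniform-LBP histograms per image, then rank histogram bins by the
# Kruskal-Wallis statistic across subject classes and keep the top bins.
import numpy as np
from skimage.feature import local_binary_pattern
from scipy.stats import kruskal

rng = np.random.default_rng(0)
images = rng.random((30, 64, 64))               # stand-ins for infrared face images
labels = np.repeat(np.arange(3), 10)            # 3 subjects, 10 images each

def lbp_hist(img, P=8, R=1):
    codes = local_binary_pattern(img, P, R, method="uniform")
    return np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)[0]

feats = np.array([lbp_hist(im) for im in images])

stats = [kruskal(*[feats[labels == c, j] for c in range(3)]).statistic
         for j in range(feats.shape[1])]
selected = np.argsort(stats)[-5:]
print("selected LBP histogram bins:", selected)
```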
[Study on ecological suitability regionalization of Eucommia ulmoides in Guizhou].
Kang, Chuan-Zhi; Wang, Qing-Qing; Zhou, Tao; Jiang, Wei-Ke; Xiao, Cheng-Hong; Xie, Yu
2014-05-01
The aim was to study the ecological suitability regionalization of Eucommia ulmoides in order to select artificial planting bases and high-quality industrial raw material purchase areas for the herb in Guizhou. Based on an investigation of 14 Eucommia ulmoides producing areas, pinoresinol diglucoside content and ecological factors were obtained. Spatial analysis methods were used to carry out the ecological suitability regionalization. Meanwhile, combining the pinoresinol diglucoside content, the correlations between the major active components and environmental factors were analyzed by statistical analysis. The most suitable planting area of Eucommia ulmoides was the northwest of Guizhou. The distribution of Eucommia ulmoides was mainly affected by the type and pH value of the soil, and by monthly precipitation. The spatial structure of the major active components in Eucommia ulmoides was randomly distributed in global space, but had only one aggregation point with a high positive correlation in local space. The major active components of Eucommia ulmoides had no correlation with altitude, longitude, or latitude. Using spatial analysis and statistical analysis methods, based on environmental factors and pinoresinol diglucoside content, the ecological suitability regionalization of Eucommia ulmoides can provide a reference for the selection of suitable planting areas and artificial planting bases, and for directing production layout.
Automatic and Direct Identification of Blink Components from Scalp EEG
Kong, Wanzeng; Zhou, Zhanpeng; Hu, Sanqing; Zhang, Jianhai; Babiloni, Fabio; Dai, Guojun
2013-01-01
Eye blink is an important and inevitable artifact during scalp electroencephalogram (EEG) recording. The main problem in EEG signal processing is how to identify eye blink components automatically with independent component analysis (ICA). Taking into account the fact that the eye blink as an external source has a higher sum of correlation with frontal EEG channels than all other sources due to both its location and significant amplitude, in this paper, we proposed a method based on a correlation index and the feature of power distribution to automatically detect eye blink components. Furthermore, we prove mathematically that the correlation between independent components and scalp EEG channels can be obtained directly from the mixing matrix of ICA. This helps to simplify calculations and understand the implications of the correlation. The proposed method doesn't need to select a template or thresholds in advance, and it works without simultaneously recording an electrooculography (EOG) reference. The experimental results demonstrate that the proposed method can automatically recognize eye blink components with a high accuracy on entire datasets from 15 subjects. PMID:23959240
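A simplified sketch of the selection rule: rank ICA components by how heavily they load on frontal channels, reading the loadings straight from the mixing matrix. The channel layout, synthetic blink waveform, and single-score criterion are assumptions, not the authors' full method.

```python
# Pick the candidate blink component from the ICA mixing matrix loadings.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_samples, n_channels = 2000, 8
blink = (rng.random(n_samples) < 0.01).astype(float)
blink = np.convolve(blink, np.hanning(50), mode="same") * 80      # blink-like transients
eeg = rng.standard_normal((n_samples, n_channels))
eeg[:, :2] += blink[:, None]                                      # channels 0-1 play the frontal role

ica = FastICA(n_components=n_channels, random_state=0).fit(eeg)
A = ica.mixing_                                                    # columns = component topographies
frontal_score = np.abs(A[:2]).sum(axis=0)                          # loading on the frontal channels
print("likely blink component:", int(np.argmax(frontal_score)))
```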
Modifications of Hinge Mechanisms for the Mobile Launcher
NASA Technical Reports Server (NTRS)
Ganzak, Jacob D.
2018-01-01
The further development and modifications made towards the integration of the upper and lower hinge assemblies for the Exploration Upper Stage umbilical are presented. Investigative work is included to show the process of applying updated NASA Standards within component and assembly drawings for selected manufacturers. Component modifications with the addition of drawings are created to precisely display part geometries and geometric tolerances, along with proper methods of fabrication. Comparison of newly updated components with original Apollo era components is essential to correctly model the part characteristics and parameters, i.e. mass properties, material selection, weldments, and tolerances. 3-Dimensional modeling software is used to demonstrate the necessary improvements. In order to share and corroborate these changes, a document management system is used to store the various components and associated drawings. These efforts will contribute towards the Mobile Launcher for Exploration Mission 2 to provide proper rotation of the Exploration Upper Stage umbilical, necessary for providing cryogenic fill and drain capabilities.
Simplified methods of evaluating colonies for levels of Varroa Sensitive Hygiene (VSH)
USDA-ARS?s Scientific Manuscript database
Varroa sensitive hygiene (VSH) is a trait of honey bees, Apis mellifera, that supports resistance to varroa mites, Varroa destructor. Components of VSH were evaluated to identify simple methods for selection of the trait. Varroa mite population growth was measured in colonies with variable levels of...
Atkinson, David A.
2002-01-01
Methods and apparatus for ion mobility spectrometry and an analyte detection and identification verification system are disclosed. The apparatus is configured to be used in an ion mobility spectrometer and includes a plurality of reactant reservoirs configured to contain a plurality of reactants which can be reacted with the sample to form adducts having varying ion mobilities. A carrier fluid, such as air or nitrogen, is used to carry the sample into the spectrometer. The plurality of reactants are configured to be selectively added to the carrier stream by use of inlet and outlet manifolds in communication with the reagent reservoirs, the reservoirs being selectively isolatable by valves. The invention further includes a spectrometer having the reagent system described. In the method, a first reactant is used with the sample. Following a positive result, a second reactant is used to determine whether a predicted response occurs. The occurrence of the second predicted response tends to verify the existence of a component of interest within the sample. A third reactant can also be used to provide further verification of the existence of a component of interest. A library can be established of known responses of compounds of interest with various reactants, and the results of a specific multi-reactant survey of a sample can be compared against the library to determine whether a component detected in the sample is likely to be a specific component of interest.
Content Analysis of the Concept of Addiction in High School Textbooks of Iran.
Mirzamohammadi, Mohammad Hasan; Mousavi, Sayedeh Zainab; Massah, Omid; Farhoudian, Ali
2017-01-01
This research sought to determine how well the causes of addiction, addiction harms, and prevention of addiction have been addressed in high school textbooks. We used a descriptive method to select the main components related to the concept of addiction and a content analysis method for analyzing the content of textbooks. The study population comprised 61 secondary school curriculum textbooks, and the study sample consisted of 14 secondary school textbooks selected by a purposeful sampling method. The tool for collecting data was a "content analysis inventory" whose validity was confirmed by educational and social sciences experts and whose reliability was found to be 91%. About 67 components were prepared for content analysis and were divided into 3 categories: causes, harms, and prevention of addiction. The analysis units in this study comprised phrases, topics, examples, course topics, words, poems, images, questions, tables, and exercises. The results of the study showed that the components of the addiction concept were presented in 212 remarks in the textbooks. The degree of attention given to each of the 3 main components of the addiction concept was as follows: causes with 52 (24.52%) remarks, harms with 89 (41.98%) remarks, and prevention with 71 (33.49%) remarks. In high school textbooks, little attention has been paid to the concept of addiction, and mostly its biological dimension was addressed, while the social, personal, familial, and religious dimensions of addiction have been neglected.
Gravity Field of Venus and Comparison with Earth
NASA Technical Reports Server (NTRS)
Bowin, C.
1985-01-01
The acceleration (gravity) anomaly estimates by spacecraft tracking, determined from Doppler residuals, are components of the gravity field directed along the spacecraft Earth line of sight (LOS). These data constitute a set of vector components of a planet's gravity field, the specific component depending upon where the Earth happened to be at the time of each measurement, and they are at varying altitudes above the planet surface. From this data set the gravity field vector components were derived using the method of harmonic splines which imposes a smoothness criterion to select a gravity model compatible with the LOS data. Given the piecewise model it is now possible to upward and downward continue the field quantities desired with a few parameters unlike some other methods which must return to the full dataset for each desired calculation.
Zhu, Jingbo; Liu, Baoyue; Shan, Shibo; Ding, Yanl; Kou, Zinong; Xiao, Wei
2015-08-01
In order to meet the needs of efficient purification of products from natural resources, this paper developed an automatic vacuum liquid chromatographic device (AUTO-VLC) and applied it to the component separation of the petroleum ether extract of Schisandra chinensis (Turcz.) Baill. The device comprised a solvent system, a 10-position distribution valve, a 3-position change valve, dynamic axial compression chromatographic columns with three diameters, and a 10-position fraction valve. A programmable logic controller (PLC) S7-200 was adopted to realize automatic control and monitoring of mobile phase changing, column selection, separation time setting, and fraction collection. The separation results showed that six fractions (S1-S6) of different chemical components were obtained from 100 g of the Schisandra chinensis (Turcz.) Baill. petroleum ether phase by the AUTO-VLC with a 150 mm diameter dynamic axial compression column. A new method for screening the VLC separation parameters by multiple-development TLC was developed and confirmed. The initial mobile phase of the AUTO-VLC was selected by requiring the Rf values of all target compounds to range from 0 to 0.45 in the first development on TLC; the gradient elution ratio was selected according to the k value (the slope of the linear function of Rf value versus the number of developments on TLC) and the resolution of the target compounds; the number of elutions (n) was calculated by the formula n ≈ ΔRf/k. A total of four compounds with purity greater than 85% and 13 other components were separated from S5 under the selected conditions in only 17 h. Therefore, the development of the automatic VLC and its method are significant for the automatic and systematic separation of traditional Chinese medicines.
Cong, Fengyu; Puoliväli, Tuomas; Alluri, Vinoo; Sipola, Tuomo; Burunat, Iballa; Toiviainen, Petri; Nandi, Asoke K; Brattico, Elvira; Ristaniemi, Tapani
2014-02-15
Independent component analysis (ICA) has often been used to decompose fMRI data, mostly for resting-state, block and event-related designs, due to its outstanding advantages. For fMRI data acquired during free-listening experiences, only a few exploratory studies have applied ICA. For processing the fMRI data elicited by a 512-s modern tango, an FFT-based band-pass filter was used to further pre-process the fMRI data to remove sources of no interest and noise. Then, a fast model order selection method was applied to estimate the number of sources. Next, both individual ICA and group ICA were performed. Subsequently, ICA components whose temporal courses were significantly correlated with musical features were selected. Finally, for individual ICA, common components across the majority of participants were found by diffusion map and spectral clustering. The extracted spatial maps (by the new ICA approach) common across most participants evidenced slightly right-lateralized activity within and surrounding the auditory cortices. Meanwhile, they were found to be associated with the musical features. Compared with the conventional ICA approach, more participants were found to have the common spatial maps extracted by the new ICA approach. Conventional model order selection methods underestimated the true number of sources in the conventionally pre-processed fMRI data for the individual ICA. Pre-processing the fMRI data using a reasonable band-pass digital filter can greatly benefit the following model order selection and ICA with fMRI data from naturalistic paradigms. Diffusion map and spectral clustering are straightforward tools to find common ICA spatial maps. Copyright © 2013 Elsevier B.V. All rights reserved.
Procedures for estimating confidence intervals for selected method performance parameters.
McClure, F D; Lee, J K
2001-01-01
Procedures for estimating confidence intervals (CIs) for the repeatability variance (σr²), reproducibility variance (σR² = σL² + σr²), laboratory component (σL²), and their corresponding standard deviations σr, σR, and σL, respectively, are presented. In addition, CIs for the ratio of the repeatability component to the reproducibility variance (σr²/σR²) and the ratio of the laboratory component to the reproducibility variance (σL²/σR²) are also presented.
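A worked sketch, assuming a balanced labs-by-replicates design, of the variance components behind these intervals: a one-way random-effects ANOVA yields estimates of σr² and σL², and a chi-square interval for σr² follows from the within-laboratory degrees of freedom. The numbers of laboratories and replicates below are arbitrary.

```python
# Variance-component estimates and a simple 95% CI for the repeatability variance.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
p, n = 8, 3                                     # 8 laboratories, 3 replicates each
data = 10 + rng.normal(0, 0.5, size=p)[:, None] + rng.normal(0, 0.3, size=(p, n))

msw = data.var(axis=1, ddof=1).mean()           # within-laboratory mean square
msb = n * data.mean(axis=1).var(ddof=1)         # between-laboratory mean square
s_r2 = msw                                      # repeatability variance estimate
s_L2 = max((msb - msw) / n, 0.0)                # laboratory component estimate
s_R2 = s_r2 + s_L2                              # reproducibility variance estimate

df = p * (n - 1)
ci_r2 = (df * s_r2 / chi2.ppf(0.975, df), df * s_r2 / chi2.ppf(0.025, df))
print(f"s_r^2={s_r2:.3f}, s_L^2={s_L2:.3f}, s_R^2={s_R2:.3f}, 95% CI for sigma_r^2: {ci_r2}")
```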
Controlled placement and orientation of nanostructures
Zettl, Alex K; Yuzvinsky, Thomas D; Fennimore, Adam M
2014-04-08
A method for controlled deposition and orientation of molecular sized nanoelectromechanical systems (NEMS) on substrates is disclosed. The method comprised: forming a thin layer of polymer coating on a substrate; exposing a selected portion of the thin layer of polymer to alter a selected portion of the thin layer of polymer; forming a suspension of nanostructures in a solvent, wherein the solvent suspends the nanostructures and activates the nanostructures in the solvent for deposition; and flowing a suspension of nanostructures across the layer of polymer in a flow direction; thereby: depositing a nanostructure in the suspension of nanostructures only to the selected portion of the thin layer of polymer coating on the substrate to form a deposited nanostructure oriented in the flow direction. By selectively employing portions of the method above, complex NEMS may be built of simpler NEMSs components.
Wang, Liqun; Cardenas, Roberto Bravo; Watson, Clifford
2017-09-08
CDC's Division of Laboratory Sciences developed and validated a new method for the simultaneous detection and measurement of 11 sugars, alditols and humectants in tobacco products. The method uses isotope dilution ultra high performance liquid chromatography coupled with tandem mass spectrometry (UHPLC-MS/MS) and has demonstrated high sensitivity, selectivity, throughput and accuracy, with recoveries ranging from 90% to 113%, limits of detection ranging from 0.0002 to 0.0045 μg/mL and coefficients of variation (CV%) ranging from 1.4 to 14%. Calibration curves for all analytes were linear, with R² values greater than 0.995. Quantification of tobacco components is necessary to characterize tobacco product components and their potential effects on consumer appeal, smoke chemistry and toxicology, and to potentially help distinguish tobacco product categories. The researchers analyzed a variety of tobacco products (e.g., cigarettes, little cigars, cigarillos) using the new method and documented differences in the abundance of selected analytes among product categories. Specifically, differences were detected in levels of selected sugars found in little cigars and cigarettes, which could help address appeal potential and have utility when product category is unknown, unclear, or miscategorized. Copyright © 2017. Published by Elsevier B.V.
Ordered nanoscale domains by infiltration of block copolymers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darling, Seth B.; Elam, Jeffrey; Tseng, Yu-Chih
A method of preparing tunable inorganic patterned nanofeatures by infiltration of a block copolymer scaffold having a plurality of self-assembled periodic polymer microdomains. The method may use sequential infiltration synthesis (SIS), which is related to atomic layer deposition (ALD). The method includes selecting a metal precursor that is configured to selectively react with the copolymer unit defining the microdomain but is substantially non-reactive with another polymer unit of the copolymer. A tunable inorganic feature is selectively formed on the microdomain to form a hybrid organic/inorganic composite material of the metal precursor and a co-reactant. The organic component may optionally be removed to obtain inorganic features with patterned nanostructures defined by the configuration of the microdomain.
Coupled parametric design of flow control and duct shape
NASA Technical Reports Server (NTRS)
Florea, Razvan (Inventor); Bertuccioli, Luca (Inventor)
2009-01-01
A method for designing gas turbine engine components using a coupled parametric analysis of part geometry and flow control is disclosed. Included are the steps of parametrically defining the geometry of the duct wall shape, parametrically defining one or more flow control actuators in the duct wall, measuring a plurality of performance parameters or metrics (e.g., flow characteristics) of the duct and comparing the results of the measurement with desired or target parameters, and selecting the optimal duct geometry and flow control for at least a portion of the duct, the selection process including evaluating the plurality of performance metrics in a pareto analysis. The use of this method in the design of inter-turbine transition ducts, serpentine ducts, inlets, diffusers, and similar components provides a design which reduces pressure losses and flow profile distortions.
Composite load spectra for select space propulsion structural components
NASA Technical Reports Server (NTRS)
Newell, J. F.; Kurth, R. E.; Ho, H.
1991-01-01
The objective of this program is to develop generic load models with multiple levels of progressive sophistication to simulate the composite (combined) load spectra that are induced in space propulsion system components, representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades, and liquid oxygen posts and system ducting. The first approach will consist of using state of the art probabilistic methods to describe the individual loading conditions and combinations of these loading conditions to synthesize the composite load spectra simulation. The second approach will consist of developing coupled models for composite load spectra simulation which combine the deterministic models for composite load dynamic, acoustic, high pressure, and high rotational speed, etc., load simulation using statistically varying coefficients. These coefficients will then be determined using advanced probabilistic simulation methods with and without strategically selected experimental data.
Defects diagnosis in laser brazing using near-infrared signals based on empirical mode decomposition
NASA Astrophysics Data System (ADS)
Cheng, Liyong; Mi, Gaoyang; Li, Shuo; Wang, Chunming; Hu, Xiyuan
2018-03-01
Real-time monitoring of laser welding plays a very important role in the modern automated production and online defects diagnosis is necessary to be implemented. In this study, the status of laser brazing was monitored in real time using an infrared photoelectric sensor. Four kinds of braze seams (including healthy weld, unfilled weld, hole weld and rough surface weld) along with corresponding near-infrared signals were obtained. Further, a new method called Empirical Mode Decomposition (EMD) was proposed to analyze the near-infrared signals. The results showed that the EMD method had a good performance in eliminating the noise on the near-infrared signals. And then, the correlation coefficient was developed for selecting the Intrinsic Mode Function (IMF) more sensitive to the weld defects. A more accurate signal was reconstructed with the selected IMF components. Simultaneously, the spectrum of selected IMF components was solved using fast Fourier transform, and the frequency characteristics were clearly revealed. The frequency energy of different frequency bands was computed to diagnose the defects. There was a significant difference in four types of weld defects. This approach has been proved to be an effective and efficient method for monitoring laser brazing defects.
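The processing chain described above (EMD, correlation-based IMF selection, reconstruction, FFT band energies) can be sketched roughly as follows; the PyEMD package, the 0.3 correlation threshold, and the frequency bands are illustrative assumptions rather than the authors' settings.

```python
# Hedged sketch: decompose a 1-D sensor signal with EMD, keep IMFs that
# correlate with the raw signal, reconstruct, and compute FFT band energies.
# Uses the third-party PyEMD package (pip install EMD-signal).
import numpy as np
from PyEMD import EMD

fs = 2000.0
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(t.size)

imfs = EMD()(signal)                                    # rows = IMFs
corr = np.array([np.corrcoef(imf, signal)[0, 1] for imf in imfs])
selected = imfs[np.abs(corr) > 0.3]                     # keep defect-sensitive IMFs
reconstructed = selected.sum(axis=0)

spectrum = np.abs(np.fft.rfft(reconstructed)) ** 2
freqs = np.fft.rfftfreq(reconstructed.size, 1.0 / fs)
bands = [(0, 100), (100, 300), (300, 1000)]             # example frequency bands
band_energy = [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]
print(band_energy)
```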
Investigations of medium wavelength magnetic anomalies in the eastern Pacific using MAGSAT data
NASA Technical Reports Server (NTRS)
Harrison, C. G. A. (Principal Investigator)
1981-01-01
The suitability of using magnetic field measurements obtained by MAGSAT is discussed with regard to resolving the medium wavelength anomaly problem. A procedure for removing the external field component from the measured field is outlined. Various methods of determining crustal magnetizations are examined in light of satellite orbital parameters resulting in the selection of the equivalent source technique for evaluating scalar measurements. A matrix inversion of the vector components is suggested as a method for arriving at a scalar potential representation of the field.
Method for enhancing signals transmitted over optical fibers
Ogle, James W.; Lyons, Peter B.
1983-01-01
A method for spectral equalization of high frequency spectrally broadband signals transmitted through an optical fiber. The broadband signal input is first dispersed by a grating. Narrow spectral components are collected into an array of equalizing fibers. The fibers serve as optical delay lines compensating for material dispersion of each spectral component during transmission. The relative lengths of the individual equalizing fibers are selected to compensate for such prior dispersion. The output of the equalizing fibers couple the spectrally equalized light onto a suitable detector for subsequent electronic processing of the enhanced broadband signal.
Information Gain Based Dimensionality Selection for Classifying Text Documents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumidu Wijayasekara; Milos Manic; Miles McQueen
2013-06-01
Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel genetic algorithm based methodology for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses the information gain of each dimension to change the mutation probability of chromosomes dynamically. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic algorithm based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.
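A hedged sketch of the core idea, using sklearn's mutual information estimate as a stand-in for information gain; the mapping from gain to per-dimension mutation probability is an assumption for illustration, not the paper's exact scheme.

```python
# Illustrative sketch: bias the mutation step of a GA chromosome (a 0/1
# feature mask) using a per-dimension information-gain-like score.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                           random_state=0)

gain = mutual_info_classif(X, y, random_state=0)        # information-gain proxy
# Low-gain dimensions get a higher chance of being flipped; the scaling below
# is a made-up choice for illustration.
p_mut = 0.05 + 0.20 * (1.0 - gain / gain.max())

rng = np.random.default_rng(0)
chromosome = rng.integers(0, 2, size=X.shape[1])        # current feature mask
flip = rng.random(X.shape[1]) < p_mut
chromosome[flip] ^= 1                                    # biased mutation
print(chromosome)
```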
Linear and nonlinear variable selection in competing risks data.
Ren, Xiaowei; Li, Shanshan; Shen, Changyu; Yu, Zhangsheng
2018-06-15
The subdistribution hazard model for competing risks data has been applied extensively in clinical research. Variable selection methods for linear effects in competing risks data have been studied in the past decade, but there is no existing work on the selection of potential nonlinear effects for the subdistribution hazard model. We propose a two-stage procedure to select the linear and nonlinear covariate(s) simultaneously and estimate the selected covariate effect(s). We use a spectral decomposition approach to distinguish the linear and nonlinear parts of each covariate and adaptive LASSO to select each of the two components. Extensive numerical studies are conducted to demonstrate that the proposed procedure can achieve good selection accuracy in the first stage and small estimation biases in the second stage. The proposed method is applied to analyze a cardiovascular disease data set with competing death causes. Copyright © 2018 John Wiley & Sons, Ltd.
Lotfy, Hayam M; Saleh, Sarah S; Hassan, Nagiba Y; Salem, Hesham
2015-01-01
Novel spectrophotometric methods were applied for the determination of the minor component tetryzoline HCl (TZH) in its ternary mixture with ofloxacin (OFX) and prednisolone acetate (PA) in the ratio of (1:5:7.5), and in its binary mixture with sodium cromoglicate (SCG) in the ratio of (1:80). The novel spectrophotometric methods determined the minor component (TZH) successfully in the two selected mixtures by computing the geometrical relationship of either standard addition or subtraction. The novel spectrophotometric methods are: geometrical amplitude modulation (GAM), geometrical induced amplitude modulation (GIAM), ratio H-point standard addition method (RHPSAM) and compensated area under the curve (CAUC). The proposed methods were successfully applied for the determination of the minor component TZH below its concentration range. The methods were validated as per ICH guidelines, where accuracy, repeatability, inter-day precision and robustness were found to be within the acceptable limits. The results obtained from the proposed methods were statistically compared with official ones, where no significant difference was observed. No difference was observed between the obtained results and those of the reported HPLC method, which proved that the developed methods could be an alternative to HPLC techniques in quality control laboratories. Copyright © 2015 Elsevier B.V. All rights reserved.
Transient analysis mode participation for modal survey target mode selection using MSC/NASTRAN DMAP
NASA Technical Reports Server (NTRS)
Barnett, Alan R.; Ibrahim, Omar M.; Sullivan, Timothy L.; Goodnight, Thomas W.
1994-01-01
Many methods have been developed to aid analysts in identifying component modes which contribute significantly to component responses. These modes, typically targeted for dynamic model correlation via a modal survey, are known as target modes. Most methods used to identify target modes are based on component global dynamic behavior. It is sometimes unclear if these methods identify all modes contributing to responses important to the analyst. These responses are usually those in areas of hardware design concerns. One method used to check the completeness of target mode sets and identify modes contributing significantly to important component responses is mode participation. With this method, the participation of component modes in dynamic responses is quantified. Those modes which have high participation are likely modal survey target modes. Mode participation is most beneficial when it is used with responses from analyses simulating actual flight events. For spacecraft, these responses are generated via a structural dynamic coupled loads analysis. Using MSC/NASTRAN DMAP, a method has been developed for calculating mode participation based on transient coupled loads analysis results. The algorithm has been implemented to be compatible with an existing coupled loads methodology and has been used successfully to develop a set of modal survey target modes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumta, Prashant N.; Kadakia, Karan Sandeep; Datta, Moni Kanchan
The invention provides electro-catalyst compositions for an anode electrode of a proton exchange membrane-based water electrolysis system. The compositions include a noble metal component selected from the group consisting of iridium oxide, ruthenium oxide, rhenium oxide and mixtures thereof, and a non-noble metal component selected from the group consisting of tantalum oxide, tin oxide, niobium oxide, titanium oxide, tungsten oxide, molybdenum oxide, yttrium oxide, scandium oxide, copper oxide, zirconium oxide, nickel oxide and mixtures thereof. Further, the non-noble metal component can include a dopant. The dopant can be at least one element selected from Groups III, V, VI and VII of the Periodic Table. The compositions can be prepared using a surfactant approach or a sol gel approach. Further, the compositions are prepared using noble metal and non-noble metal precursors. Furthermore, a thin film containing the compositions can be deposited onto a substrate to form the anode electrode.
Advanced Stirling Duplex Materials Assessment for Potential Venus Mission Heater Head Application
NASA Technical Reports Server (NTRS)
Ritzert, Frank; Nathal, Michael V.; Salem, Jonathan; Jacobson, Nathan; Nesbitt, James
2011-01-01
This report will address materials selection for components in a proposed Venus lander system. The lander would use active refrigeration to allow Space Science instrumentation to survive the extreme environment that exists on the surface of Venus. The refrigeration system would be powered by a Stirling engine-based system and is termed the Advanced Stirling Duplex (ASD) concept. Stirling engine power conversion in its simplest definition converts heat from radioactive decay into electricity. Detailed design decisions will require iterations between component geometries, materials selection, system output, and tolerable risk. This study reviews potential component requirements against known materials performance. A lower risk, evolutionary advance in heater head materials could be offered by nickel-base superalloy single crystals, with expected capability of approximately 1100 °C. However, the high temperature requirements of the Venus mission may force the selection of ceramics or refractory metals, which are more developmental in nature and may not have a well-developed database or a mature supporting technology base such as fabrication and joining methods.
A study of finite mixture model: Bayesian approach on financial time series data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-07-01
Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model represents a statistical distribution as a mixture of component distributions, and the Bayesian method is used to fit such models. The Bayesian approach is widely used because its asymptotic properties provide remarkable results and because of its consistency, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is studied using the Bayesian Information Criterion; identifying the number of components is important because a misspecified number may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results showed a negative effect between rubber prices and stock market prices for all selected countries.
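A rough sketch of the component-selection step: fit candidate mixture models over a range of k and pick the one minimizing BIC. sklearn's EM-based GaussianMixture stands in here for the Bayesian fitting used in the paper, and the simulated returns are illustrative.

```python
# Choose the number of mixture components with BIC, then fit the selected
# k-component model to (synthetic) return data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
returns = np.concatenate([rng.normal(0.00, 0.01, 500),
                          rng.normal(0.03, 0.02, 200)]).reshape(-1, 1)

bics = {k: GaussianMixture(k, random_state=0).fit(returns).bic(returns)
        for k in range(1, 6)}
best_k = min(bics, key=bics.get)                 # smallest BIC wins
model = GaussianMixture(best_k, random_state=0).fit(returns)
print(best_k, model.means_.ravel())
```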
[Study on Application of NIR Spectral Information Screening in Identification of Maca Origin].
Wang, Yuan-zhong; Zhao, Yan-li; Zhang, Ji; Jin, Hang
2016-02-01
Medicinal and edible plant Maca is rich in various nutrients and has great medicinal value. Based on near infrared diffuse reflectance spectra, 139 Maca samples collected from Peru and Yunnan were used to identify their geographical origins. Multiplicative signal correction (MSC) coupled with second derivative (SD) and Norris derivative filter (ND) was employed in spectral pretreatment. The spectral range (7,500-4,061 cm⁻¹) was chosen based on the spectral standard deviation. Combined with principal component analysis-Mahalanobis distance (PCA-MD), the appropriate number of principal components was selected as 5. Based on the spectral range and the number of principal components selected, two abnormal samples were eliminated by the modular group iterative singular sample diagnosis method. Then, four methods were used to filter spectral variable information: competitive adaptive reweighted sampling (CARS), Monte Carlo-uninformative variable elimination (MC-UVE), genetic algorithm (GA) and subwindow permutation analysis (SPA). The filtered spectral variable information was evaluated by model population analysis (MPA). The results showed that RMSECV(SPA) > RMSECV(CARS) > RMSECV(MC-UVE) > RMSECV(GA), at 2.14, 2.05, 2.02, and 1.98, with 250, 240, 250 and 70 spectral variables, respectively. Based on the filtered spectral variables, partial least squares discriminant analysis (PLS-DA) was used to build the model, with a random selection of 97 samples as the training set and the other 40 samples as the validation set. The results showed that, for R²: GA > MC-UVE > CARS > SPA, and for RMSEC and RMSEP: GA < MC-UVE < CARS
Metal-organic framework catalysts for selective cleavage of aryl-ether bonds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allendorf, Mark D.; Stavila, Vitalie
The present invention relates to methods of employing a metal-organic framework (MOF) as a catalyst for cleaving chemical bonds. In particular instances, the MOF effects selective bond cleavage via hydrogenolysis. Furthermore, the MOF catalyst can be reused in multiple cycles. Such MOF-based catalysts can be useful, e.g., to convert biomass components.
Manufacturing Methods and Technology Project Summary Reports
1981-06-01
a tough urethane film. The basic principle is to pump two components to a spinning disc, mixing the components just prior to depositing in a well...and check out an electronic target scoring device using developed scientific principles without drastically modifying existing commercial...equipment. The scoring device selected and installed was an Accubar Model ATS-16D using the underlying physics principle of acoustic shock wave propagation
NASA Technical Reports Server (NTRS)
1981-01-01
The Space Shuttle LWT is divided into zones and subzones. Zones are designated primarily to assist in determining the applicable specifications. A subzone (general Specification) is available for use when the location of the component is known but component design and weight are not well defined. When the location, weight, and mounting configuration of the component are known, specifications for appropriate subzone weight ranges are available. Along with the specifications are vibration, acoustic, shock, transportation, handling, and acceptance test requirements and procedures. A method of selecting applicable vibration, acoustic, and shock specifications is presented.
Amali, Arlin Jose; Sharma, Bikash; Rana, Rohit Kumar
2014-09-15
In analogy to the role of long-chain polyamines in biosilicification, poly-L-lysine facilitates the assembly of nanocomponents to design multifunctional microcapsule structures. The method is demonstrated by the fabrication of a magnetically separable catalyst that accommodates Pd nanoparticles (NPs) as active catalyst, Fe3O4 NPs as magnetic component for easy recovery of the catalyst, and silica NPs to impart stability and selectivity to the catalyst. In addition, polyamines embedded inside the microcapsule prevent the agglomeration of Pd NPs and thus result in efficient catalytic activity in hydrogenation reactions, and the hydrophilic silica surface results in selectivity in reactions depending on the polarity of substrates. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sui, Jing; Adali, Tülay; Pearlson, Godfrey D.; Calhoun, Vince D.
2013-01-01
Extraction of relevant features from multitask functional MRI (fMRI) data in order to identify potential biomarkers for disease, is an attractive goal. In this paper, we introduce a novel feature-based framework, which is sensitive and accurate in detecting group differences (e.g. controls vs. patients) by proposing three key ideas. First, we integrate two goal-directed techniques: coefficient-constrained independent component analysis (CC-ICA) and principal component analysis with reference (PCA-R), both of which improve sensitivity to group differences. Secondly, an automated artifact-removal method is developed for selecting components of interest derived from CC-ICA, with an average accuracy of 91%. Finally, we propose a strategy for optimal feature/component selection, aiming to identify optimal group-discriminative brain networks as well as the tasks within which these circuits are engaged. The group-discriminating performance is evaluated on 15 fMRI feature combinations (5 single features and 10 joint features) collected from 28 healthy control subjects and 25 schizophrenia patients. Results show that a feature from a sensorimotor task and a joint feature from a Sternberg working memory (probe) task and an auditory oddball (target) task are the top two feature combinations distinguishing groups. We identified three optimal features that best separate patients from controls, including brain networks consisting of temporal lobe, default mode and occipital lobe circuits, which when grouped together provide improved capability in classifying group membership. The proposed framework provides a general approach for selecting optimal brain networks which may serve as potential biomarkers of several brain diseases and thus has wide applicability in the neuroimaging research community. PMID:19457398
Program for the development of high temperature electrical materials and components
NASA Technical Reports Server (NTRS)
Neff, W. S.; Lowry, L. R.
1972-01-01
Evaluation of high temperature, space-vacuum performance of selected electrical materials and components, high temperature capacitor development, and evaluation, construction, and endurance testing of compression sealed pyrolytic boron nitride slot insulation are described. The first subject above covered the aging evaluation of electrical devices constructed from selected electrical materials. Individual materials performances were also evaluated and reported. The second subject included study of methods of improving electrical performance of pyrolytic boron nitride capacitors. The third portion was conducted to evaluate the thermal and electrical performance of pyrolytic boron nitride as stator slot liner material under varied temperature and compressive loading. Conclusions and recommendations are presented.
Component Selection for Sterile Compounding.
Dilzer, Richard H
2017-01-01
This article describes the factors to consider, as well as the process of proper component selection, for use in preparing compounded sterile preparations. Special emphasis is placed on individual chemical factors that may impact a preparation's accuracy and potency. Values reported in a typical certificate of analysis are discussed, including methods of identifying any required adjustments to a master formulation or compounding record during the compounding of sterile preparations. Proper screening of the certificate of analysis, the Safety Data Sheet, procedural documentation, and the filing of all certificates of conformance are crucial to the operation of a sterile compounding facility. Copyright© by International Journal of Pharmaceutical Compounding, Inc.
Standard plane localization in ultrasound by radial component model and selective search.
Ni, Dong; Yang, Xin; Chen, Xin; Chin, Chien-Ting; Chen, Siping; Heng, Pheng Ann; Li, Shengli; Qin, Jing; Wang, Tianfu
2014-11-01
Acquisition of the standard plane is crucial for medical ultrasound diagnosis. However, this process requires substantial experience and a thorough knowledge of human anatomy. Therefore it is very challenging for novices and even time consuming for experienced examiners. We proposed a hierarchical, supervised learning framework for automatically detecting the standard plane from consecutive 2-D ultrasound images. We tested this technique by developing a system that localizes the fetal abdominal standard plane from ultrasound video by detecting three key anatomical structures: the stomach bubble, umbilical vein and spine. We first proposed a novel radial component-based model to describe the geometric constraints of these key anatomical structures. We then introduced a novel selective search method which exploits the vessel probability algorithm to produce probable locations for the spine and umbilical vein. Next, using component classifiers trained by random forests, we detected the key anatomical structures at their probable locations within the regions constrained by the radial component-based model. Finally, a second-level classifier combined the results from the component detection to identify an ultrasound image as either a "fetal abdominal standard plane" or a "non-fetal abdominal standard plane." Experimental results on 223 fetal abdomen videos showed that the detection accuracy of our method was as high as 85.6% and significantly outperformed both the full abdomen and the separate anatomy detection methods without geometric constraints. The experimental results demonstrated that our system shows great promise for application to clinical practice. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Singh, Karan; Kochar, Ekta; Prasad, N. G.
2015-01-01
Background Ability to resist temperature shock is an important component of fitness in insects and other ectotherms. Increased resistance to temperature shock is known to affect life-history traits. Temperature shock is also known to affect reproductive traits such as mating ability and viability of gametes. Therefore, selection for increased temperature shock resistance can affect the evolution of reproductive traits. Methods We selected replicate populations of Drosophila melanogaster for resistance to cold shock. We then investigated the evolution of reproductive behavior along with other components of fitness (larval survivorship, adult mortality, fecundity, egg viability) in these populations. Results We found that larval survivorship, adult mortality and fecundity post cold shock were not significantly different between selected and control populations. However, compared to the control populations, the selected populations laid a significantly higher percentage of fertile eggs (egg viability) 24 hours post cold shock. The selected populations had a higher mating frequency both with and without cold shock. After being subjected to cold shock, males from the selected populations successfully mated with significantly more non-virgin females and sired significantly more progeny compared to control males. Conclusions A number of studies have reported the evolution of survivorship in response to selection for temperature shock resistance. Our results clearly indicate that adaptation to cold shock can involve changes in components of reproductive fitness. Our results have important implications for our understanding of how reproductive behavior can evolve in response to thermal stress. PMID:26065704
Environmental test planning, selection and standardization aids available
NASA Technical Reports Server (NTRS)
Copeland, E. H.; Foley, J. T.
1968-01-01
Requirements for the instrumentation, equipment, and methods to be used in conducting environmental tests on components, intended for use by a wide variety of technical personnel of different educational backgrounds, experience, and interests, are announced.
Multiprocessor switch with selective pairing
Gara, Alan; Gschwind, Michael K; Salapura, Valentina
2014-03-11
System, method and computer program product for a multiprocessing system to offer selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). Each pair of microprocessor or processor cores providing one highly reliable thread connects with system components such as a memory "nest" (or memory hierarchy), an optional system controller, an optional interrupt controller, and optional I/O or peripheral devices. The memory nest is attached to the selective pairing facility via a switch or a bus.
A parallel optimization method for product configuration and supplier selection based on interval
NASA Astrophysics Data System (ADS)
Zheng, Jian; Zhang, Meng; Li, Guoxi
2017-06-01
In the process of design and manufacturing, product configuration is an important way of product development, and supplier selection is an essential component of supply chain management. To reduce the risk of procurement and maximize the profits of enterprises, this study proposes to combine the product configuration and supplier selection, and express the multiple uncertainties as interval numbers. An integrated optimization model of interval product configuration and supplier selection was established, and NSGA-II was put forward to locate the Pareto-optimal solutions to the interval multiobjective optimization model.
Composite patterning devices for soft lithography
Rogers, John A.; Menard, Etienne
2007-03-27
The present invention provides methods, devices and device components for fabricating patterns on substrate surfaces, particularly patterns comprising structures having microsized and/or nanosized features of selected lengths in one, two or three dimensions. The present invention provides composite patterning devices comprising a plurality of polymer layers each having selected mechanical properties, such as Young's Modulus and flexural rigidity, selected physical dimensions, such as thickness, surface area and relief pattern dimensions, and selected thermal properties, such as coefficients of thermal expansion, to provide high resolution patterning on a variety of substrate surfaces and surface morphologies.
Recommendations for the treatment of aging in standard technical specifications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orton, R.D.; Allen, R.P.
1995-09-01
As part of the US Nuclear Regulatory Commission's Nuclear Plant Aging Research Program, Pacific Northwest Laboratory (PNL) evaluated the standard technical specifications for nuclear power plants to determine whether the current surveillance requirements (SRs) were effective in detecting age-related degradation. Nuclear Plant Aging Research findings for selected systems and components were reviewed to identify the stressors and operative aging mechanisms and to evaluate the methods available to detect, differentiate, and trend the resulting aging degradation. Current surveillance and testing requirements for these systems and components were reviewed for their effectiveness in detecting degraded conditions and for potential contributions to premature degradation. When the current surveillance and testing requirements appeared ineffective in detecting aging degradation or potentially could contribute to premature degradation, a possible deficiency in the SRs was identified that could result in undetected degradation. Based on this evaluation, PNL developed recommendations for inspection, surveillance, trending, and condition monitoring methods to be incorporated in the SRs to better detect age-related degradation of these selected systems and components.
Determination of molecular weight distributions in native and pretreated wood.
Leskinen, Timo; Kelley, Stephen S; Argyropoulos, Dimitris S
2015-03-30
The analysis of native wood components by size-exclusion chromatography (SEC) is challenging. Isolation, derivatization and solubilization of wood polymers is required prior to the analysis. The present approach allowed the determination of molecular weight distributions of the carbohydrates and of lignin in native and processed woods, without preparative component isolation steps. For the first time a component selective SEC analysis of sawdust preparations was made possible by the combination of two selective derivatization methods, namely; ionic liquid assisted benzoylation of the carbohydrate fraction and acetobromination of the lignin in acetic acid media. These were optimized for wood samples. The developed method was thus used to examine changes in softwood samples after degradative mechanical and/or chemical treatments, such as ball milling, steam explosion, green liquor pulping, and chemical oxidation with 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ). The methodology can also be applied to examine changes in molecular weight and lignin-carbohydrate linkages that occur during wood-based biorefinery operations, such as pretreatments, and enzymatic saccharification. Copyright © 2014 Elsevier Ltd. All rights reserved.
Andrzejewska, Anna; Kaczmarski, Krzysztof; Guiochon, Georges
2009-02-13
The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N=500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; (2) the errors made in selecting an incorrect isotherm model and fitting to it the experimental data. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.
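As one concrete example of the isotherm-fitting step whose errors the study quantifies, a Langmuir model can be fitted to single-component data by nonlinear least squares; the data points and starting values below are synthetic.

```python
# Fit single-component isotherm data to the Langmuir model with nonlinear
# least squares; q = qs*b*c / (1 + b*c).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, qs, b):
    return qs * b * c / (1.0 + b * c)

c = np.linspace(0.1, 10, 20)                         # mobile-phase concentration
q_true = langmuir(c, qs=50.0, b=0.4)
q_obs = q_true + np.random.default_rng(0).normal(0, 0.3, c.size)

(qs_hat, b_hat), _ = curve_fit(langmuir, c, q_obs, p0=[30.0, 0.1])
print(qs_hat, b_hat)
```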
Computer Based Porosity Design by Multi Phase Topology Optimization
NASA Astrophysics Data System (ADS)
Burblies, Andreas; Busse, Matthias
2008-02-01
A numerical simulation technique called Multi Phase Topology Optimization (MPTO) based on the finite element method has been developed and refined by Fraunhofer IFAM during the last five years. MPTO is able to determine the optimum distribution of two or more different materials in components under thermal and mechanical loads. The objective of optimization is to minimize the component's elastic energy. Conventional topology optimization methods which simulate adaptive bone mineralization have the disadvantage that mass changes continuously through the growth process. MPTO keeps all initial material concentrations and uses methods adapted from molecular dynamics to find the energy minimum. Applying MPTO to mechanically loaded components with a high number of different material densities, the optimization results show graded and sometimes anisotropic porosity distributions which are very similar to natural bone structures. It is now possible to design the macro- and microstructure of a mechanical component in one step. Computer based porosity design structures can be manufactured by new rapid prototyping technologies. Fraunhofer IFAM has successfully applied 3D printing and selective laser sintering methods in order to produce very stiff lightweight components with graded porosities calculated by MPTO.
A robust sparse-modeling framework for estimating schizophrenia biomarkers from fMRI.
Dillon, Keith; Calhoun, Vince; Wang, Yu-Ping
2017-01-30
Our goal is to identify the brain regions most relevant to mental illness using neuroimaging. State of the art machine learning methods commonly suffer from repeatability difficulties in this application, particularly when using large and heterogeneous populations for samples. We revisit both dimensionality reduction and sparse modeling, and recast them in a common optimization-based framework. This allows us to combine the benefits of both types of methods in an approach which we call unambiguous components. We use this to estimate the image component with a constrained variability, which is best correlated with the unknown disease mechanism. We apply the method to the estimation of neuroimaging biomarkers for schizophrenia, using task fMRI data from a large multi-site study. The proposed approach yields an improvement in both robustness of the estimate and classification accuracy. We find that unambiguous components incorporate roughly two thirds of the same brain regions as sparsity-based methods LASSO and elastic net, while roughly one third of the selected regions differ. Further, unambiguous components achieve superior classification accuracy in differentiating cases from controls. Unambiguous components provide a robust way to estimate important regions of imaging data. Copyright © 2016 Elsevier B.V. All rights reserved.
Caprihan, A; Pearlson, G D; Calhoun, V D
2008-08-15
Principal component analysis (PCA) is often used to reduce the dimension of data before applying more sophisticated data analysis methods such as non-linear classification algorithms or independent component analysis. This practice is based on selecting components corresponding to the largest eigenvalues. If the ultimate goal is separation of data into two groups, then this set of components need not have the most discriminatory power. We measured the distance between two such populations using the Mahalanobis distance and chose the eigenvectors to maximize it, a modified PCA method which we call the discriminant PCA (DPCA). DPCA was applied to diffusion tensor-based fractional anisotropy images to distinguish age-matched schizophrenia subjects from healthy controls. The performance of the proposed method was evaluated by the leave-one-out method. We show that for this fractional anisotropy data set, the classification error with 60 components was close to the minimum error and that the Mahalanobis distance was twice as large with DPCA as with PCA. Finally, by masking the discriminant function with the white matter tracts of the Johns Hopkins University atlas, we identified the left superior longitudinal fasciculus as the tract which gave the least classification error. In addition, with six optimally chosen tracts the classification error was zero.
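A simplified sketch of the DPCA idea: rank principal components by the standardized between-group mean difference they carry (a Mahalanobis-style criterion) rather than by eigenvalue. This illustrates the selection principle only, not the authors' exact algorithm; the data are synthetic.

```python
# Rank PCA components by group separation instead of explained variance.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
controls = rng.normal(0, 1, size=(40, 100))
patients = rng.normal(0, 1, size=(40, 100)) + 0.5 * rng.normal(size=100)
X = np.vstack([controls, patients])
labels = np.array([0] * 40 + [1] * 40)

scores = PCA(n_components=20).fit_transform(X)
m0, m1 = scores[labels == 0].mean(0), scores[labels == 1].mean(0)
pooled_var = 0.5 * (scores[labels == 0].var(0) + scores[labels == 1].var(0))
separation = (m0 - m1) ** 2 / pooled_var           # per-component discriminability

top = np.argsort(separation)[::-1][:5]             # most discriminative components
print(top, separation[top])
```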
[Study on sustained release preparations of Epimedium component].
Yan, Hong-mei; Ding, Dong-mei; Zhang, Zhen-hai; Sun, E; Song, Jie; Jia, Xiao-bin
2015-04-01
The formulation for a sustained release tablet of Epimedium component was selected and the evaluation equation for in vitro release was established. The flowability of the component was improved with the help of colloidal silica and spray drying; the resulting material served as the main drug in the sustained release tablets. Dissolution was selected as the evaluation index to investigate the impact of skeletal material type, fillers, porogen, lubricants and other materials on the quality of the sustained release tablet. The sustained release tablets were prepared by dry compression. The formulation of the sustained release preparation was: main drug 35%, HPMC K(4M) 20% and HPMC K(15M) 10% as skeleton materials, MCC 31% as filler, PEG6000 2% as porogen and magnesium stearate 2% as lubricant. The sustained release tablets released up to 80% of the drug in 8 h. The zero-order, first-order and Higuchi equations could all describe the in vitro release characteristics of the sustained release tablets, with correlation coefficients r larger than 0.96; the first-order equation described the in vitro release characteristics best, with r = 0.9950. The preparation method is simple and the results of the formulation selection are reliable. It can be used to guide the production of Epimedium component sustained release preparations.
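The release-model screening described above can be sketched as fitting cumulative release data to the zero-order, first-order and Higuchi equations and comparing correlation coefficients; the data points and starting parameters below are made up for illustration.

```python
# Fit cumulative-release data to three common kinetic models and compare fits.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

t = np.array([0.5, 1, 2, 3, 4, 6, 8])                # hours
q = np.array([12, 21, 35, 46, 55, 70, 81])           # % released (example data)

fits = {
    "zero-order":  (lambda t, k: k * t, [10.0]),
    "first-order": (lambda t, qmax, k: qmax * (1 - np.exp(-k * t)), [100.0, 0.2]),
    "higuchi":     (lambda t, k: k * np.sqrt(t), [30.0]),
}
for name, (f, p0) in fits.items():
    popt, _ = curve_fit(f, t, q, p0=p0)
    r, _ = pearsonr(q, f(t, *popt))                  # correlation of fit vs data
    print(f"{name}: r = {r:.4f}, params = {np.round(popt, 3)}")
```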
NASA Technical Reports Server (NTRS)
Hidalgo, Homero, Jr.
2000-01-01
An innovative methodology for structural target mode selection based on a specific criterion is presented. An effective approach to single out modes which interact with specific locations on a structure has been developed for the X-33 Launch Vehicle Finite Element Model (FEM). The Root-Sum-Square (RSS) displacement method presented here computes the resultant modal displacement for each mode at selected degrees of freedom (DOF) and sorts the results to locate the modes with the highest values. This method was used to determine the modes which most influence specific locations/points on the X-33 flight vehicle, such as avionics control components, aero-surface control actuators, propellant valve and engine points, for use in flight control stability analysis and for flight POGO stability analysis. Additionally, the modal RSS method allows primary or global target vehicle modes to be identified in an accurate and efficient manner.
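A minimal numpy sketch of the RSS displacement criterion: for each mode, take the root-sum-square of the mode-shape displacements at the DOFs of interest and rank modes by that value. The mode-shape matrix here is random, standing in for FEM eigenvector output.

```python
# Rank modes by RSS of mode-shape displacements at selected DOFs.
import numpy as np

rng = np.random.default_rng(0)
phi = rng.normal(size=(300, 50))         # rows = DOFs, columns = modes
selected_dofs = [10, 11, 12, 45, 46]     # e.g. avionics or actuator attach points

rss = np.sqrt((phi[selected_dofs, :] ** 2).sum(axis=0))   # one value per mode
ranking = np.argsort(rss)[::-1]
print("candidate target modes:", ranking[:10])
```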
Ion beam figuring of small optical components
NASA Astrophysics Data System (ADS)
Drueding, Thomas W.; Fawcett, Steven C.; Wilson, Scott R.; Bifano, Thomas G.
1995-12-01
Ion beam figuring provides a highly deterministic method for the final precision figuring of optical components with advantages over conventional methods. The process involves bombarding a component with a stable beam of accelerated particles that selectively removes material from the surface. Figure corrections are achieved by rastering the fixed-current beam across the workpiece at appropriate, time-varying velocities. Unlike conventional methods, ion figuring is a noncontact technique and thus avoids such problems as edge rolloff effects, tool wear, and force loading of the workpiece. This work is directed toward the development of the precision ion machining system at NASA's Marshall Space Flight Center. This system is designed for processing small (approximately 10-cm diameter) optical components. Initial experiments were successful in figuring 8-cm-diam fused silica and chemical-vapor-deposited SiC samples. The experiments, procedures, and results of figuring the sample workpieces to shallow spherical, parabolic (concave and convex), and non-axially-symmetric shapes are discussed. Several difficulties and limitations encountered with the current system are discussed. The use of a 1-cm aperture for making finer corrections on optical components is also reported.
Digital halftoning methods for selectively partitioning error into achromatic and chromatic channels
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.
1990-01-01
A method is described for reducing the visibility of artifacts arising in the display of quantized color images on CRT displays. The method is based on the differential spatial sensitivity of the human visual system to chromatic and achromatic modulations. Because the visual system has the highest spatial and temporal acuity for the luminance component of an image, a technique which will reduce luminance artifacts at the expense of introducing high-frequency chromatic errors is sought. A method based on controlling the correlations between the quantization errors in the individual phosphor images is explored. The luminance component is greatest when the phosphor errors are positively correlated, and is minimized when the phosphor errors are negatively correlated. The greatest effect of the correlation is obtained when the intensity quantization step sizes of the individual phosphors have equal luminances. For the ordered dither algorithm, a version of the method can be implemented by simply inverting the matrix of thresholds for one of the color components.
Probabilistic structural analysis methods for improving Space Shuttle engine reliability
NASA Technical Reports Server (NTRS)
Boyce, L.
1989-01-01
Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.
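For illustration only, a plain Monte Carlo stand-in for this kind of probabilistic response study (NESSUS itself uses fast probability integration, and the closed-form tip-displacement expression below is hypothetical, not a blade model).

```python
# Propagate input variability (thickness, modulus, Poisson's ratio, density)
# through a made-up response surface and summarize the output statistics.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
thickness = rng.normal(3.0e-3, 0.1e-3, n)       # m
modulus   = rng.normal(200e9, 10e9, n)          # Pa
poisson   = rng.normal(0.30, 0.01, n)
density   = rng.normal(8200.0, 150.0, n)        # kg/m^3

# Hypothetical response surface for blade tip displacement under a fixed load.
tip = 1.0e-2 * density / (modulus * thickness) * (1.0 + 0.1 * poisson)

print("mean =", tip.mean(), "std =", tip.std())
for var, name in [(modulus, "E"), (poisson, "nu")]:
    print(name, "correlation with tip:", np.corrcoef(var, tip)[0, 1].round(3))
```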
Microbially-mediated method for synthesis of non-oxide semiconductor nanoparticles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phelps, Tommy J.; Lauf, Robert J.; Moon, Ji-Won
The invention is directed to a method for producing non-oxide semiconductor nanoparticles, the method comprising: (a) subjecting a combination of reaction components to conditions conducive to microbially-mediated formation of non-oxide semiconductor nanoparticles, wherein said combination of reaction components comprises i) anaerobic microbes, ii) a culture medium suitable for sustaining said anaerobic microbes, iii) a metal component comprising at least one type of metal ion, iv) a non-metal component comprising at least one non-metal selected from the group consisting of S, Se, Te, and As, and v) one or more electron donors that provide donatable electrons to said anaerobic microbes during consumption of the electron donor by said anaerobic microbes; and (b) isolating said non-oxide semiconductor nanoparticles, which contain at least one of said metal ions and at least one of said non-metals. The invention is also directed to non-oxide semiconductor nanoparticle compositions produced as above and having distinctive properties.
Abdelrahman, Maha M; Naguib, Ibrahim A; El Ghobashy, Mohamed R; Ali, Nesma A
2015-02-25
Four simple, sensitive and selective spectrophotometric methods are presented for the determination of Zopiclone (ZPC) and its impurity, one of its degradation products, namely 2-amino-5-chloropyridine (ACP). Method A is dual wavelength spectrophotometry, where two wavelengths (252 and 301 nm for ZPC, and 238 and 261 nm for ACP) were selected for each component in such a way that the difference in absorbance is zero for the other component. Method B is an isoabsorptive ratio method, combining the isoabsorptive point (259.8 nm) in the ratio spectrum using ACP as a divisor with the ratio difference for a single-step determination of both components. Method C is a third derivative (D(3)) spectrophotometric method which allows determination of both ZPC at 283.6 nm and ACP at 251.6 nm without interference from each other. Method D is based on measuring the peak amplitude of the first derivative of the ratio spectra (DD(1)) at 263.2 nm for ZPC and 252 nm for ACP. The suggested methods were validated according to ICH guidelines and can be applied for routine analysis in quality control laboratories. Statistical analysis of the results obtained from the proposed methods and those obtained from the reported method has been carried out, revealing high accuracy and good precision. Copyright © 2014 Elsevier B.V. All rights reserved.
Thermal Signature Identification System (TheSIS)
NASA Technical Reports Server (NTRS)
Merritt, Scott; Bean, Brian
2015-01-01
We characterize both nonlinear and high order linear responses of fiber-optic and optoelectronic components using spread spectrum temperature cycling methods. This Thermal Signature Identification System (TheSIS) provides much more detail than conventional narrowband or quasi-static temperature profiling methods. This detail allows us to match components more thoroughly, detect subtle reversible shifts in performance, and investigate the cause of instabilities or irreversible changes. In particular, we create parameterized models of athermal fiber Bragg gratings (FBGs), delay line interferometers (DLIs), and distributed feedback (DFB) lasers, then subject the alternative models to selection via the Akaike Information Criterion (AIC). Detailed pairing of components, e.g. FBGs, is accomplished by means of weighted distance metrics or norms, rather than on the basis of a single parameter, such as center wavelength.
An adaptive data-driven method for accurate prediction of remaining useful life of rolling bearings
NASA Astrophysics Data System (ADS)
Peng, Yanfeng; Cheng, Junsheng; Liu, Yanfei; Li, Xuejun; Peng, Zhihua
2018-06-01
A novel data-driven method based on Gaussian mixture model (GMM) and distance evaluation technique (DET) is proposed to predict the remaining useful life (RUL) of rolling bearings. The data sets are clustered by GMM to divide all data sets into several health states adaptively and reasonably. The number of clusters is determined by the minimum description length principle. Thus, either the health state of the data sets or the number of the states is obtained automatically. Meanwhile, the abnormal data sets can be recognized during the clustering process and removed from the training data sets. After obtaining the health states, appropriate features are selected by DET for increasing the classification and prediction accuracy. In the prediction process, each vibration signal is decomposed into several components by empirical mode decomposition. Some common statistical parameters of the components are calculated first and then the features are clustered using GMM to divide the data sets into several health states and remove the abnormal data sets. Thereafter, appropriate statistical parameters of the generated components are selected using DET. Finally, least squares support vector machine is utilized to predict the RUL of rolling bearings. Experimental results indicate that the proposed method reliably predicts the RUL of rolling bearings.
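A hedged sketch of two ingredients named above, GMM clustering of health states followed by a distance-evaluation-style feature score (between-state distance over within-state scatter); the synthetic features and the exact score formula are assumptions for illustration.

```python
# Cluster condition-monitoring features into health states with a GMM, then
# score each feature by a DET-like between/within distance ratio.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
features = np.column_stack([rng.normal(m, 1.0, 300) for m in (0.0, 0.2, 2.0)])
states = GaussianMixture(n_components=3, random_state=0).fit_predict(features)

def det_score(x, labels):
    classes = np.unique(labels)
    within = np.mean([x[labels == c].std() for c in classes])
    centers = np.array([x[labels == c].mean() for c in classes])
    between = np.mean(np.abs(centers[:, None] - centers[None, :]))
    return between / (within + 1e-12)

scores = np.array([det_score(features[:, j], states)
                   for j in range(features.shape[1])])
print("feature ranking (best first):", np.argsort(scores)[::-1])
```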
Network immunization under limited budget using graph spectra
NASA Astrophysics Data System (ADS)
Zahedi, R.; Khansari, M.
2016-03-01
In this paper, we propose a new algorithm that minimizes the worst expected growth of an epidemic by reducing the size of the largest connected component (LCC) of the underlying contact network. The proposed algorithm is applicable to any level of available resources and, despite the greedy approaches of most immunization strategies, selects nodes simultaneously. In each iteration, the proposed method partitions the LCC into two groups. These are the best candidates for communities in that component, and the available resources are sufficient to separate them. Using Laplacian spectral partitioning, the proposed method performs community detection inference with a time complexity that rivals that of the best previous methods. Experiments show that our method outperforms targeted immunization approaches in both real and synthetic networks.
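One iteration of the described strategy can be sketched as: take the largest connected component, split it with the Fiedler vector of its Laplacian, and spend the budget on nodes bridging the two parts. The boundary-node choice below is a simplification of the paper's candidate selection; networkx and the synthetic graph are assumptions.

```python
# Split the LCC with Laplacian spectral partitioning and immunize (remove)
# budget-many high-degree boundary nodes between the two parts.
import networkx as nx
import numpy as np

G = nx.barabasi_albert_graph(200, 2, seed=0)
lcc = G.subgraph(max(nx.connected_components(G), key=len)).copy()

L = nx.laplacian_matrix(lcc).toarray().astype(float)
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]                              # second-smallest eigenvector
nodes = list(lcc.nodes())
side = {n: fiedler[i] >= 0 for i, n in enumerate(nodes)}

budget = 10
boundary = [n for n in nodes if any(side[m] != side[n] for m in lcc.neighbors(n))]
boundary.sort(key=lcc.degree, reverse=True)       # prefer high-degree bridges
G.remove_nodes_from(boundary[:budget])
print("new LCC size:", len(max(nx.connected_components(G), key=len)))
```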
Yi, YaXiong; Zhang, Yong; Ding, Yue; Lu, Lu; Zhang, Tong; Zhao, Yuan; Xu, XiaoJun; Zhang, YuXin
2016-11-01
We developed a novel quantitative analysis method based on ultra high performance liquid chromatography coupled with diode array detection for the simultaneous determination of the 14 main active components in Yinchenhao decoction. All components were separated on an Agilent SB-C18 column by using a gradient solvent system of acetonitrile/0.1% phosphoric acid solution at a flow rate of 0.4 mL/min for 35 min. Subsequently, linearity, precision, repeatability, and accuracy tests were implemented to validate the method. Furthermore, the method was applied to a compositional difference analysis of the 14 components in eight normal-extraction Yinchenhao decoction samples, accompanied by hierarchical clustering analysis and similarity analysis. All samples were divided into three groups based on differences in component content, demonstrating that the extraction method (decocting, refluxing, or ultrasonication) and the extraction solvent (water or ethanol) affected component composition, which should be related to its clinical applications. The results also indicated that a sample prepared at home by patients using water extraction in a casserole was almost the same as one prepared using a stainless-steel kettle, which is mostly used in pharmaceutical factories. This research would help patients to select the best and most convenient method for preparing Yinchenhao decoction. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Susceptor heating device for electron beam brazing
Antieau, Susan M.; Johnson, Robert G. R.
1999-01-01
A brazing device and method are provided which locally apply a controlled amount of heat to a selected area, within a vacuum. The device brazes two components together with a brazing metal. A susceptor plate is placed in thermal contact with one of the components. A serrated pedestal supports the susceptor plate. When the pedestal and susceptor plate are in place, an electron gun irradiates an electron beam at the susceptor plate such that the susceptor plate is sufficiently heated to transfer heat through the one component and melt the brazing metal.
Assessment and Selection of Competing Models for Zero-Inflated Microbiome Data
Xu, Lizhen; Paterson, Andrew D.; Turpin, Williams; Xu, Wei
2015-01-01
Typical data in a microbiome study consist of the operational taxonomic unit (OTU) counts that have the characteristic of excess zeros, which are often ignored by investigators. In this paper, we compare the performance of different competing methods to model data with zero inflated features through extensive simulations and application to a microbiome study. These methods include standard parametric and non-parametric models, hurdle models, and zero inflated models. We examine varying degrees of zero inflation, with or without dispersion in the count component, as well as different magnitude and direction of the covariate effect on structural zeros and the count components. We focus on the assessment of type I error, power to detect the overall covariate effect, measures of model fit, and bias and effectiveness of parameter estimations. We also evaluate the abilities of model selection strategies using Akaike information criterion (AIC) or Vuong test to identify the correct model. The simulation studies show that hurdle and zero inflated models have well controlled type I errors, higher power, better goodness of fit measures, and are more accurate and efficient in the parameter estimation. Besides that, the hurdle models have similar goodness of fit and parameter estimation for the count component as their corresponding zero inflated models. However, the estimation and interpretation of the parameters for the zero components differs, and hurdle models are more stable when structural zeros are absent. We then discuss the model selection strategy for zero inflated data and implement it in a gut microbiome study of > 400 independent subjects. PMID:26148172
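A hedged sketch of the model-comparison workflow on simulated OTU-like counts with excess zeros: fit Poisson, negative binomial, and zero-inflated Poisson models and compare AIC (statsmodels; the Vuong test and hurdle variants are omitted here).

```python
# Compare standard and zero-inflated count models by AIC on simulated counts.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
X = sm.add_constant(x)
lam = np.exp(0.5 + 0.6 * x)
structural_zero = rng.random(n) < 0.4                # excess (structural) zeros
counts = np.where(structural_zero, 0, rng.poisson(lam))

models = {
    "poisson": sm.Poisson(counts, X).fit(disp=False),
    "negbin":  sm.NegativeBinomial(counts, X).fit(disp=False),
    "zip":     ZeroInflatedPoisson(counts, X, exog_infl=X).fit(disp=False),
}
for name, res in models.items():
    print(name, round(res.aic, 1))                   # smaller AIC is preferred
```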
Method for enhancing signals transmitted over optical fibers
Ogle, J.W.; Lyons, P.B.
1981-02-11
A method for spectral equalization of high frequency spectrally broadband signals transmitted through an optical fiber is disclosed. The broadband signal input is first dispersed by a grating. Narrow spectral components are collected into an array of equalizing fibers. The fibers serve as optical delay lines compensating for the material dispersion of each spectral component during transmission. The relative lengths of the individual equalizing fibers are selected to compensate for such prior dispersion. The output of the equalizing fibers couples the spectrally equalized light onto a suitable detector for subsequent electronic processing of the enhanced broadband signal.
Antithrombogenic and antibiotic composition and methods of preparation thereof
Hermes, R.E.
1990-04-17
Antithrombogenic and antibiotic composition of matter and method of preparation are disclosed. A random copolymer of a component of garlic and a biocompatible polymer has been prepared and found to exhibit antithrombogenic and antibiotic properties. Polymerization occurs selectively at the vinyl moiety in 2-vinyl-4H-1,3-dithiin when copolymerized with N-vinyl pyrrolidone. 4 figs.
Antithrombogenic and antibiotic composition and methods of preparation thereof
Hermes, Robert E.
1990-01-01
Antithrombogenic and antibiotic composition of matter and method of preparation thereof. A random copolymer of a component of garlic and a biocompatible polymer has been prepared and found to exhibit antithrombogenic and antibiotic properties. Polymerization occurs selectively at the vinyl moiety in 2-vinyl-4H-1,3-dithiin when copolymerized with N-vinyl pyrrolidone.
NASA Technical Reports Server (NTRS)
Hall, A. Daniel (Inventor); Davies, Francis J. (Inventor)
2007-01-01
Method and system are disclosed for determining individual string resistance in a network of strings when the current through a parallel connected string is unknown and when the voltage across a series connected string is unknown. The method/system of the invention involves connecting one or more frequency-varying impedance components with known electrical characteristics to each string and applying a frequency-varying input signal to the network of strings. The frequency-varying impedance components may be one or more capacitors, inductors, or both, and are selected so that each string is uniquely identifiable in the output signal resulting from the frequency-varying input signal. Numerical methods, such as non-linear regression, may then be used to resolve the resistance associated with each string.
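The idea lends itself to a small numerical sketch: give each parallel string a unique, known series capacitor, sweep frequency, and recover the string resistances by non-linear regression on the measured network admittance. The component values, noise level, and measurement model below are assumptions for illustration, not details taken from the patent.

```python
# Sketch: recover per-string resistances from a swept-frequency admittance
# measurement, assuming each string carries a unique, known series capacitor.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
R_true = np.array([10.0, 22.0, 47.0])        # ohms -- unknown in practice
C = np.array([1.0e-6, 2.2e-6, 4.7e-6])       # farads -- the known string "identifiers"
omega = 2 * np.pi * np.logspace(2, 5, 60)    # 100 Hz .. 100 kHz sweep

def network_admittance(R):
    # Parallel combination of series R-C branches.
    z = R[None, :] + 1.0 / (1j * omega[:, None] * C[None, :])
    return (1.0 / z).sum(axis=1)

y_meas = network_admittance(R_true)
y_meas = y_meas + 1e-4 * (rng.standard_normal(omega.size) + 1j * rng.standard_normal(omega.size))

def residuals(R):
    d = network_admittance(R) - y_meas
    return np.concatenate([d.real, d.imag])

fit = least_squares(residuals, x0=np.full(3, 30.0), bounds=(0.0, np.inf))
print("estimated string resistances [ohm]:", np.round(fit.x, 2))
```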
Zhao, Ying-Yong; Zhao, Ye; Zhang, Yong-Min; Lin, Rui-Chao; Sun, Wen-Ji
2009-06-01
Polyporus umbellatus is a widely used anti-aldosteronic diuretic in Traditional Chinese medicine (TCM). A new, sensitive and selective high-performance liquid chromatography-fluorescence detector (HPLC-FLD) and high-performance liquid chromatography-atmospheric pressure chemical ionization-mass spectrometry (HPLC-APCI-MS/MS) method for quantitative and qualitative determination of ergosta-4,6,8(14),22-tetraen-3-one (ergone), the main diuretic component, was provided for quality control of P. umbellatus crude drug. The ergone in the ethanolic extract of P. umbellatus was unambiguously characterized by HPLC-APCI, and further confirmed by comparison with a standard compound. The trace ergone was detected by the sensitive and selective HPLC-FLD method. Linearity (r2 > 0.9998) and recoveries at low, medium and high concentrations (100.5%, 100.2% and 100.4%) were consistent with the experimental criteria. The limit of detection (LOD) of ergone was around 0.2 microg/mL. Our results indicated that the content of ergone in P. umbellatus varied significantly from habitat to habitat, with contents ranging from 2.13 +/- 0.02 to 59.17 +/- 0.05 microg/g. Comparison among HPLC-FLD, HPLC-UV and HPLC-APCI-MS/MS demonstrated that the HPLC-FLD and HPLC-APCI-MS/MS methods gave similar quantitative results for the selected herb samples, whereas the HPLC-UV method gave lower quantitative results than the HPLC-FLD and HPLC-APCI-MS/MS methods. The newly established HPLC-FLD method has the advantages of being rapid, simple, selective and sensitive, and could be used for the routine analysis of P. umbellatus crude drug.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szweda, A.
2001-01-01
The Department of Energy's Continuous Fiber Ceramic Composites (CFCC) Initiative, which began in 1992, has led the way for Industry, Academia, and Government to carry out a 10-year R&D plan to develop CFCCs for industrial applications. In Phase II of this program, Dow Corning has led a team of OEMs, composite fabricators, and Government Laboratories to develop polymer-derived CFCC materials and processes for selected industrial applications. During this phase, Dow Corning carried out extensive process development and representative component demonstration activities on gas turbine components, chemical pump components and heat treatment furnace components.
Controlling Cargo Trafficking in Multicomponent Membranes.
Curk, Tine; Wirnsberger, Peter; Dobnikar, Jure; Frenkel, Daan; Šarić, Anđela
2018-04-27
Biological membranes typically contain a large number of different components dispersed in small concentrations in the main membrane phase, including proteins, sugars, and lipids of varying geometrical properties. Most of these components do not bind the cargo. Here, we show that such "inert" components can be crucial for the precise control of cross-membrane trafficking. Using a statistical mechanics model and molecular dynamics simulations, we demonstrate that the presence of inert membrane components of small isotropic curvatures dramatically influences cargo endocytosis, even if the total spontaneous curvature of such a membrane remains unchanged. Curved lipids, such as cholesterol, as well as asymmetrically included proteins and tethered sugars can, therefore, actively participate in the control of the membrane trafficking of nanoscopic cargo. We find that even a low-level expression of curved inert membrane components can determine the membrane selectivity toward the cargo size and can be used to selectively target membranes of certain compositions. Our results suggest a robust and general method of controlling cargo trafficking by adjusting the membrane composition without needing to alter the concentration of receptors or the average membrane curvature. This study indicates that cells can prepare for any trafficking event by incorporating curved inert components in either of the membrane leaflets.
Gao, Jie; Xue, Jun-Fa; Xu, Meng; Gui, Bao-Song; Wang, Feng-Xin; Ouyang, Jian-Ming
2014-01-01
Purpose: This study aimed to accurately analyze the relationship between calcium oxalate (CaOx) stone formation and the components of urinary nanocrystallites. Method: High-resolution transmission electron microscopy (HRTEM), selected area electron diffraction, fast Fourier transformation of HRTEM, and energy dispersive X-ray spectroscopy were performed to analyze the components of these nanocrystallites. Results: The main components of CaOx stones are calcium oxalate monohydrate and a small amount of the dihydrate, while those of urinary nanocrystallites are calcium oxalate monohydrate, uric acid, and calcium phosphate. The mechanism of formation of CaOx stones was discussed based on the components of urinary nanocrystallites. Conclusion: The formation of CaOx stones is closely related both to the properties of urinary nanocrystallites and to the urinary components. The combination of HRTEM, fast Fourier transformation, selected area electron diffraction, and energy dispersive X-ray spectroscopy can be used to accurately analyze the components of single urinary nanocrystallites. This result provides evidence for nanouric acid and/or nanocalcium phosphate crystallites as the central nidus that induces CaOx stone formation. PMID:25258530
Togashi, K; Hagiya, K; Osawa, T; Nakanishi, T; Yamazaki, T; Nagamine, Y; Lin, C Y; Matsumoto, S; Aihara, M; Hayasaka, K
2012-08-01
We first sought to clarify the effects of discount rate, survival rate, and lactation persistency as a component trait of the selection index on net merit, defined as the first five lactation milk yields and herd life (HL) weighted by 1 and 0.389 (currently used in Japan), respectively, in units of genetic standard deviation. Survival rate increased the relative economic importance of later lactation traits and the first five lactation milk yields during the first 120 months from the start of the breeding scheme. In contrast, reliabilities of the estimated breeding value (EBV) in later lactation traits are lower than those of earlier lactation traits. We then sought to clarify the effects of applying single nucleotide polymorphism (SNP) information on net merit to improve the reliability of EBV of later lactation traits and thereby maximize their increased economic importance due to the increase in survival rate. Net merit, selection accuracy, and HL increased by adding lactation persistency to the selection index whose component traits were only milk yields. Lactation persistency of the second and (especially) third parities contributed to increasing HL while maintaining the first five lactation milk yields compared with the selection index whose only component traits were milk yields. A selection index comprising the first three lactation milk yields and persistency accounted for 99.4% of net merit derived from a selection index whose components were identical to those for net merit. We consider that the selection index comprising the first three lactation milk yields and persistency is a practical method for increasing lifetime milk yield in the absence of data regarding HL. Applying SNP information to the second- and third-lactation traits and HL increased net merit and HL by maximizing the increased economic importance of later lactation traits, reducing the effect of first-lactation milk yield on HL (genetic correlation (rG) = -0.006), and by augmenting the effects of the second- and third-lactation milk yields on HL (rG = 0.118 and 0.257, respectively).
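As background, the classical selection-index calculation underlying indices like those compared here is b = P⁻¹Ga, with P the phenotypic (co)variance matrix of the recorded traits, G the genetic covariances between the recorded traits and the aggregate-genotype traits, and a the economic weights (e.g., 1 and 0.389). The matrices below are invented placeholders, not estimates from this study.

```python
# Sketch: classical selection-index weights b = P^{-1} G a (illustrative values only).
import numpy as np

P = np.array([[1.00, 0.45, 0.30],      # phenotypic (co)variances of the index traits
              [0.45, 1.10, 0.50],
              [0.30, 0.50, 1.20]])
G = np.array([[0.35, 0.10],            # genetic covariances: index traits x aggregate traits
              [0.30, 0.15],
              [0.25, 0.20]])
a = np.array([1.0, 0.389])             # economic weights of the aggregate-genotype traits

b = np.linalg.solve(P, G @ a)          # index weights
print("index weights b:", np.round(b, 3))
print("index variance b'Pb:", round(float(b @ P @ b), 3))
```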
Pei, Yan-Ling; Wu, Zhi-Sheng; Shi, Xin-Yuan; Zhou, Lu-Wei; Qiao, Yan-Jiang
2014-09-01
This paper first reviews the research progress and main methods of NIR spectral assignment, together with our own research results. Principal component analysis focuses on extracting characteristic signals that reflect spectral differences. The partial least squares method is concerned with variable selection to discover characteristic absorption bands. Two-dimensional correlation spectroscopy is mainly adopted for spectral assignment: autocorrelation peaks are obtained from spectral changes induced by external factors such as concentration, temperature and pressure. Density functional theory is used to calculate energies from the substance's structure in order to establish the relationship between molecular energy and spectral change. Based on the reviewed methods, and taking the NIR spectral assignment of chlorogenic acid as an example, a reliable spectral assignment for critical quality attributes of Chinese materia medica (CMM) was established using deuterium technology and spectral variable selection. The result demonstrated the consistency of the assignment according to the spectral features of different concentrations of chlorogenic acid and the variable-selection region of the online NIR model in the extraction process. Although the spectral assignment was initially made for a single active pharmaceutical ingredient, it is a meaningful step toward the assignment of the complex components in CMM. Therefore, it provides a methodology for NIR spectral assignment of critical quality attributes in CMM.
LEPORE, MICHAEL J.; SHIELD, RENÉE R.; LOOZE, JESSICA; TYLER, DENISE; MOR, VINCENT; MILLER, SUSAN C.
2016-01-01
Components of nursing home (NH) culture change include resident-centeredness, empowerment, and home likeness, but practices reflective of these components may be found in both traditional and “culture change” NHs. We use mixed methods to examine the presence of culture change practices in the context of an NH’s payer sources. Qualitative data show how higher pay from Medicare versus Medicaid influences implementation of select culture change practices, and quantitative data show NHs with higher proportions of Medicare residents have significantly higher (measured) environmental culture change implementation. Findings indicate that heightened coordination of Medicare and Medicaid could influence NH implementation of reform practices. PMID:25941947
Lepore, Michael J; Shield, Renée R; Looze, Jessica; Tyler, Denise; Mor, Vincent; Miller, Susan C
2015-01-01
Components of nursing home (NH) culture change include resident-centeredness, empowerment, and home likeness, but practices reflective of these components may be found in both traditional and "culture change" NHs. We use mixed methods to examine the presence of culture change practices in the context of an NH's payer sources. Qualitative data show how higher pay from Medicare versus Medicaid influences implementation of select culture change practices, and quantitative data show NHs with higher proportions of Medicare residents have significantly higher (measured) environmental culture change implementation. Findings indicate that heightened coordination of Medicare and Medicaid could influence NH implementation of reform practices.
Shen, Chong; Li, Jie; Zhang, Xiaoming; Shi, Yunbo; Tang, Jun; Cao, Huiliang; Liu, Jun
2016-01-01
The different noise components in a dual-mass micro-electromechanical system (MEMS) gyroscope structure are analyzed in this paper, including mechanical-thermal noise (MTN), electronic-thermal noise (ETN), flicker noise (FN) and Coriolis signal in-phase noise (IPN). The structure's equivalent electronic model is established, and an improved white Gaussian noise reduction method for dual-mass MEMS gyroscopes is proposed which is based on sample entropy empirical mode decomposition (SEEMD) and time-frequency peak filtering (TFPF). There is a contradiction in TFPF: selecting a short window length may lead to good preservation of signal amplitude but poor random noise reduction, whereas selecting a long window length may lead to serious attenuation of the signal amplitude but effective random noise reduction. In order to achieve a good tradeoff between valid signal amplitude preservation and random noise reduction, SEEMD is adopted to improve TFPF. First, the original signal is decomposed into intrinsic mode functions (IMFs) by EMD, and the SE of each IMF is calculated in order to classify the numerous IMFs into three different components; then short-window TFPF is employed for the low-frequency IMF components, long-window TFPF is employed for the high-frequency IMF components, and the noise IMF components are removed directly; finally the denoised signal is obtained after reconstruction. Rotation and temperature experiments were carried out to verify the proposed SEEMD-TFPF algorithm; the verification and comparison results show that the de-noising performance of SEEMD-TFPF is better than that achievable with traditional wavelet, Kalman filter and fixed-window-length TFPF methods. PMID:27258276
Shen, Chong; Li, Jie; Zhang, Xiaoming; Shi, Yunbo; Tang, Jun; Cao, Huiliang; Liu, Jun
2016-05-31
The different noise components in a dual-mass micro-electromechanical system (MEMS) gyroscope structure are analyzed in this paper, including mechanical-thermal noise (MTN), electronic-thermal noise (ETN), flicker noise (FN) and Coriolis signal in-phase noise (IPN). The structure's equivalent electronic model is established, and an improved white Gaussian noise reduction method for dual-mass MEMS gyroscopes is proposed which is based on sample entropy empirical mode decomposition (SEEMD) and time-frequency peak filtering (TFPF). There is a contradiction in TFPF: selecting a short window length may lead to good preservation of signal amplitude but poor random noise reduction, whereas selecting a long window length may lead to serious attenuation of the signal amplitude but effective random noise reduction. In order to achieve a good tradeoff between valid signal amplitude preservation and random noise reduction, SEEMD is adopted to improve TFPF. First, the original signal is decomposed into intrinsic mode functions (IMFs) by EMD, and the SE of each IMF is calculated in order to classify the numerous IMFs into three different components; then short-window TFPF is employed for the low-frequency IMF components, long-window TFPF is employed for the high-frequency IMF components, and the noise IMF components are removed directly; finally the denoised signal is obtained after reconstruction. Rotation and temperature experiments were carried out to verify the proposed SEEMD-TFPF algorithm; the verification and comparison results show that the de-noising performance of SEEMD-TFPF is better than that achievable with traditional wavelet, Kalman filter and fixed-window-length TFPF methods.
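A rough sketch of the pipeline described above follows; it assumes the third-party PyEMD package for EMD, uses a simple O(N²) sample-entropy routine, and substitutes Savitzky-Golay smoothing with short/long windows for the TFPF stage, so it illustrates the structure of an SEEMD-TFPF-style scheme rather than reproducing the published algorithm.

```python
# Sketch of an SEEMD-style pipeline: EMD -> sample entropy per IMF -> group IMFs ->
# filter each group with a different window -> reconstruct.  Assumes the PyEMD
# package; Savitzky-Golay smoothing stands in for the TFPF stage.
import numpy as np
from scipy.signal import savgol_filter
from PyEMD import EMD

def sample_entropy(x, m=2, r_frac=0.2):
    """Plain O(N^2) sample-entropy estimate, adequate for a short illustrative signal."""
    x = np.asarray(x, float)
    r = r_frac * x.std()
    def pair_count(mm):
        w = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.max(np.abs(w[:, None, :] - w[None, :, :]), axis=-1)
        return (np.sum(d <= r) - len(w)) / 2.0        # matching pairs, self-matches excluded
    B, A = pair_count(m), pair_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
clean = np.sin(2 * np.pi * 3 * t) + 0.4 * np.sin(2 * np.pi * 40 * t)
noisy = clean + 0.5 * np.random.default_rng(3).standard_normal(t.size)

imfs = EMD()(noisy)                                   # intrinsic mode functions
denoised = np.zeros_like(noisy)
for imf in imfs:
    se = sample_entropy(imf[::4])                     # subsample to keep O(N^2) cheap
    if se > 1.5:                                      # noise-dominated IMF: discard
        continue
    elif se > 0.5:                                    # high-frequency mixed IMF: long window
        denoised += savgol_filter(imf, 51, 3)
    else:                                             # low-frequency signal IMF: short window
        denoised += savgol_filter(imf, 11, 3)
print("residual RMS after de-noising:", round(float(np.sqrt(np.mean((denoised - clean) ** 2))), 3))
```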
Combining Mixture Components for Clustering*
Baudry, Jean-Patrick; Raftery, Adrian E.; Celeux, Gilles; Lo, Kenneth; Gottardo, Raphaël
2010-01-01
Model-based clustering consists of fitting a mixture model to data and identifying each cluster with one of its components. Multivariate normal distributions are typically used. The number of clusters is usually determined from the data, often using BIC. In practice, however, individual clusters can be poorly fitted by Gaussian distributions, and in that case model-based clustering tends to represent one non-Gaussian cluster by a mixture of two or more Gaussian distributions. If the number of mixture components is interpreted as the number of clusters, this can lead to overestimation of the number of clusters. This is because BIC selects the number of mixture components needed to provide a good approximation to the density, rather than the number of clusters as such. We propose first selecting the total number of Gaussian mixture components, K, using BIC and then combining them hierarchically according to an entropy criterion. This yields a unique soft clustering for each number of clusters less than or equal to K. These clusterings can be compared on substantive grounds, and we also describe an automatic way of selecting the number of clusters via a piecewise linear regression fit to the rescaled entropy plot. We illustrate the method with simulated data and a flow cytometry dataset. Supplemental Materials are available on the journal Web site and described at the end of the paper. PMID:20953302
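The two-stage procedure can be sketched with scikit-learn: choose the number of Gaussian components by BIC, then greedily merge the pair of components whose combined posterior probabilities give the lowest assignment entropy. The simulated data, the greedy merging loop, and the stopping rule below are simplifications of the approach described, not the authors' implementation.

```python
# Sketch: pick K by BIC, then merge mixture components hierarchically by an
# entropy criterion (greedy pairwise merging of posterior probabilities).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# One non-Gaussian (banana-shaped) cluster plus one Gaussian cluster
t = rng.uniform(0, np.pi, 300)
banana = np.c_[np.cos(t), np.sin(t)] * 3 + rng.normal(scale=0.25, size=(300, 2))
blob = rng.normal(loc=[4, -2], scale=0.5, size=(200, 2))
X = np.vstack([banana, blob])

# Stage 1: choose the number of Gaussian components by BIC
fits = [GaussianMixture(k, random_state=0).fit(X) for k in range(1, 8)]
best = min(fits, key=lambda g: g.bic(X))
post = best.predict_proba(X)                      # soft assignments, shape (n, K)

def entropy(p):
    q = np.clip(p, 1e-12, 1.0)
    return -np.sum(q * np.log(q))

# Stage 2: greedily merge the pair of columns that lowers total entropy most
merged = [post]
while merged[-1].shape[1] > 1:
    p = merged[-1]
    best_pair, best_ent = None, np.inf
    for i in range(p.shape[1]):
        for j in range(i + 1, p.shape[1]):
            q = np.delete(p, j, axis=1)
            q[:, i] = p[:, i] + p[:, j]
            if entropy(q) < best_ent:
                best_pair, best_ent = (i, j), entropy(q)
    i, j = best_pair
    q = np.delete(p, j, axis=1)
    q[:, i] = p[:, i] + p[:, j]
    merged.append(q)

for p in merged:
    print(p.shape[1], "clusters, assignment entropy:", round(entropy(p), 1))
```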
NASA Astrophysics Data System (ADS)
Fayez, Yasmin Mohammed; Tawakkol, Shereen Mostafa; Fahmy, Nesma Mahmoud; Lotfy, Hayam Mahmoud; Shehata, Mostafa Abdel-Aty
2018-04-01
Three methods of analysis are conducted that need computational procedures by the Matlab® software. The first is the univariate mean centering method which eliminates the interfering signal of the one component at a selected wave length leaving the amplitude measured to represent the component of interest only. The other two multivariate methods named PLS and PCR depend on a large number of variables that lead to extraction of the maximum amount of information required to determine the component of interest in the presence of the other. Good accurate and precise results are obtained from the three methods for determining clotrimazole in the linearity range 1-12 μg/mL and 75-550 μg/mL with dexamethasone acetate 2-20 μg/mL in synthetic mixtures and pharmaceutical formulation using two different spectral regions 205-240 nm and 233-278 nm. The results obtained are compared statistically to each other and to the official methods.
Dimensionality Reduction Through Classifier Ensembles
NASA Technical Reports Server (NTRS)
Oza, Nikunj C.; Tumer, Kagan; Norwig, Peter (Technical Monitor)
1999-01-01
In data mining, one often needs to analyze datasets with a very large number of attributes. Performing machine learning directly on such data sets is often impractical because of extensive run times, excessive complexity of the fitted model (often leading to overfitting), and the well-known "curse of dimensionality." In practice, to avoid such problems, feature selection and/or extraction are often used to reduce data dimensionality prior to the learning step. However, existing feature selection/extraction algorithms either evaluate features by their effectiveness across the entire data set or simply disregard class information altogether (e.g., principal component analysis). Furthermore, feature extraction algorithms such as principal components analysis create new features that are often meaningless to human users. In this article, we present input decimation, a method that provides "feature subsets" that are selected for their ability to discriminate among the classes. These features are subsequently used in ensembles of classifiers, yielding results superior to single classifiers, ensembles that use the full set of features, and ensembles based on principal component analysis on both real and synthetic datasets.
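The following hedged sketch illustrates the input-decimation idea as described: rank features per class by the magnitude of their correlation with that class's indicator, train one classifier per class-specific subset, and average the ensemble's predicted probabilities. The classifier choice, subset size, and synthetic data are arbitrary illustrations.

```python
# Sketch: input decimation -- per-class feature subsets chosen by correlation
# with the class indicator, one classifier per subset, soft-vote ensemble.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=600, n_features=40, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

n_keep = 10
members = []
for c in np.unique(y):
    indicator = (ytr == c).astype(float)
    corr = np.abs([np.corrcoef(Xtr[:, f], indicator)[0, 1] for f in range(X.shape[1])])
    subset = np.argsort(corr)[::-1][:n_keep]              # most class-discriminative features
    clf = LogisticRegression(max_iter=1000).fit(Xtr[:, subset], ytr)
    members.append((subset, clf))

proba = np.mean([clf.predict_proba(Xte[:, subset]) for subset, clf in members], axis=0)
print("ensemble accuracy:", round(accuracy_score(yte, proba.argmax(axis=1)), 3))
```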
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szoka de Valladares, M.R.; Mack, S.
The DOE Hydrogen Program needs to develop criteria as part of a systematic evaluation process for proposal identification, evaluation and selection. The H Scan component of this process provides a framework in which a project proposer can fully describe their candidate technology system and its components. The H Scan complements traditional methods of capturing cost and technical information. It consists of a special set of survey forms designed to elicit information so expert reviewers can assess the proposal relative to DOE-specified selection criteria. The Analytic Hierarchy Process (AHP) component of the decision process assembles the management-defined evaluation and selection criteria into a coherent multi-level decision construct by which projects can be evaluated in pair-wise comparisons. The AHP model will reflect management's objectives and it will assist in the ranking of individual projects based on the extent to which each contributes to management's objectives. This paper contains a detailed description of the products and activities associated with the planning and evaluation process: the objectives or criteria; the H Scan; and the Analytic Hierarchy Process (AHP).
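The AHP step reduces to eigen-analysis of a reciprocal pairwise-comparison matrix; the sketch below, with an invented three-criterion matrix on Saaty's 1-9 scale, derives priority weights from the principal eigenvector and checks consistency via the consistency ratio.

```python
# Sketch: Analytic Hierarchy Process weights from a pairwise comparison matrix.
import numpy as np

# Illustrative reciprocal matrix: criterion 1 vs 2 vs 3 (Saaty 1-9 scale)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                  # priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # Saaty's random index
print("weights:", np.round(w, 3), " consistency ratio:", round(ci / ri, 3))
```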
Zhou, Bangyan; Wu, Xiaopei; Lv, Zhao; Zhang, Lei; Guo, Xiaojin
2016-01-01
Independent component analysis (ICA) as a promising spatial filtering method can separate motor-related independent components (MRICs) from the multichannel electroencephalogram (EEG) signals. However, the unpredictable burst interferences may significantly degrade the performance of ICA-based brain-computer interface (BCI) system. In this study, we proposed a new algorithm frame to address this issue by combining the single-trial-based ICA filter with zero-training classifier. We developed a two-round data selection method to identify automatically the badly corrupted EEG trials in the training set. The "high quality" training trials were utilized to optimize the ICA filter. In addition, we proposed an accuracy-matrix method to locate the artifact data segments within a single trial and investigated which types of artifacts can influence the performance of the ICA-based MIBCIs. Twenty-six EEG datasets of three-class motor imagery were used to validate the proposed methods, and the classification accuracies were compared with that obtained by frequently used common spatial pattern (CSP) spatial filtering algorithm. The experimental results demonstrated that the proposed optimizing strategy could effectively improve the stability, practicality and classification performance of ICA-based MIBCI. The study revealed that rational use of ICA method may be crucial in building a practical ICA-based MIBCI system.
Li, Yang; Cui, Weigang; Luo, Meilin; Li, Ke; Wang, Lina
2018-01-25
The electroencephalogram (EEG) signal analysis is a valuable tool in the evaluation of neurological disorders, which is commonly used for the diagnosis of epileptic seizures. This paper presents a novel automatic EEG signal classification method for epileptic seizure detection. The proposed method first employs a continuous wavelet transform (CWT) method for obtaining the time-frequency images (TFI) of EEG signals. The processed EEG signals are then decomposed into five sub-band frequency components of clinical interest since these sub-band frequency components indicate much better discriminative characteristics. Both Gaussian Mixture Model (GMM) features and Gray Level Co-occurrence Matrix (GLCM) descriptors are then extracted from these sub-band TFI. Additionally, in order to improve classification accuracy, a compact feature selection method by combining the ReliefF and the support vector machine-based recursive feature elimination (RFE-SVM) algorithm is adopted to select the most discriminative feature subset, which is an input to the SVM with the radial basis function (RBF) for classifying epileptic seizure EEG signals. The experimental results from a publicly available benchmark database demonstrate that the proposed approach provides better classification accuracy than the recently proposed methods in the literature, indicating the effectiveness of the proposed method in the detection of epileptic seizures.
Igne, Benoît; de Juan, Anna; Jaumot, Joaquim; Lallemand, Jordane; Preys, Sébastien; Drennen, James K; Anderson, Carl A
2014-10-01
The implementation of a blend monitoring and control method based on a process analytical technology such as near infrared spectroscopy requires the selection and optimization of numerous criteria that will affect the monitoring outputs and expected blend end-point. Using a five-component formulation, the present article contrasts the modeling strategies and end-point determination of a traditional quantitative method based on the prediction of the blend parameters employing partial least-squares regression with a qualitative strategy based on principal component analysis and Hotelling's T² and residual distance to the model, called Prototype. The possibility to monitor and control blend homogeneity with multivariate curve resolution was also assessed. The implementation of the above methods in the presence of designed experiments (with variation of the amount of active ingredient and excipients) and with normal operating condition samples (nominal concentrations of the active ingredient and excipients) was tested. The impact of criteria used to stop the blends (related to precision and/or accuracy) was assessed. Results demonstrated that while all methods showed similarities in their outputs, some approaches were preferred for decision making. The selectivity of regression based methods was also contrasted with the capacity of qualitative methods to determine the homogeneity of the entire formulation. Copyright © 2014. Published by Elsevier B.V.
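For the qualitative route mentioned above (PCA with Hotelling's T² and a residual distance to the model), a minimal sketch is: calibrate a PCA model on spectra of blends known to be acceptable, then flag new spectra whose T² or Q statistic exceeds empirical limits. The simulated spectra, component count, and limits below are placeholders, not values from this study.

```python
# Sketch: PCA-based blend monitoring with Hotelling's T^2 and Q (residual) statistics.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
wavelengths = np.linspace(1100, 2200, 200)
base = np.exp(-((wavelengths - 1650) / 180.0) ** 2)                  # toy NIR-like band
calib = base + 0.02 * rng.standard_normal((50, wavelengths.size))    # acceptable blends

pca = PCA(n_components=3).fit(calib)
lam = pca.transform(calib).var(axis=0, ddof=1)                       # score variances

def t2_and_q(spectra):
    t = pca.transform(spectra)
    t2 = np.sum(t ** 2 / lam, axis=1)                                # Hotelling's T^2
    resid = spectra - pca.inverse_transform(t)
    q = np.sum(resid ** 2, axis=1)                                   # residual (Q) statistic
    return t2, q

# 95th-percentile limits from the calibration set (simple empirical limits)
t2_lim, q_lim = (np.percentile(v, 95) for v in t2_and_q(calib))

new = base * 1.15 + 0.02 * rng.standard_normal((5, wavelengths.size))  # off-target blends
t2_new, q_new = t2_and_q(new)
print("in control:", (t2_new < t2_lim) & (q_new < q_lim))
```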
Reusable Software Component Retrieval via Normalized Algebraic Specifications
1991-12-01
outputs. In fact, this method of query is simpler for matching since it relieves the system from the burden of generating a test set. Eichmann [Eich91] ... [Eich91] Eichmann, David A., "Selecting Reusable Components Using Algebraic Specifications", Proceedings of the Second International..., September 1991.
Cold weather hydrogen generation system and method of operation
Dreier, Ken Wayne; Kowalski, Michael Thomas; Porter, Stephen Charles; Chow, Oscar Ken; Borland, Nicholas Paul; Goyette, Stephen Arthur
2010-12-14
A system for providing hydrogen gas is provided. The system includes a hydrogen generator that produces gas from water. One or more heat generation devices are arranged to provide heating of the enclosure during different modes of operation to prevent freezing of components. A plurality of temperature sensors are arranged and coupled to a controller to selectively activate a heat source if the temperature of the component is less than a predetermined temperature.
Diversity pattern in Sesamum mutants selected for a semi-arid cropping system.
Murty, B R; Oropeza, F
1989-02-01
Due to the complex requirements of moisture stress, substantial genetic diversity with a wide array of character combinations and effective simultaneous selection for several variables are necessary for improving the productivity and adaptation of a component crop in order for it to fit into a cropping system under semi-arid tropical conditions. Sesamum indicum L. is grown in Venezuela after rice, sorghum, or maize under such conditions. A mutation breeding program was undertaken using six locally adapted varieties to develop genotypes suitable for the above system. The diversity pattern for nine variables was assessed by multivariate analysis in 301 M4 progenies. Analysis of the characteristic roots and principal components in three methods of selection, i.e., M2 bulks (A), individual plant selection throughout (B), and selection in M3 for a single variable (C), revealed differences in the pattern of variation between varieties, selection methods, and varieties x methods interactions. Method B was superior to the others and gave 17 of the 21 best M5 progenies. 'Piritu' and 'CF' varieties yielded the most productive progenies in M5 and M6. Diversity was large and selection was effective for such developmental traits as earliness and synchrony, combined with multiple disease resistance, which could be related to their importance by multivariate analyses. Considerable differences in the variety of character combinations among the high-yielding M5 progenies of 'CF' and 'Piritu' suggested possible further yield improvement. The superior response of 'Piritu' and 'CF' over other varieties in yield and adaptation was due to major changes in plant type and character associations. Multilocation testing of M5 generations revealed that the mutant progenies had a 40%-100% yield superiority over the parents; this was combined with earliness, synchrony, and multiple disease resistance, and was confirmed in the M6 generation grown on a commercial scale. This study showed that multivariate analysis is an effective tool for assessing diversity patterns, choice of appropriate variety, and selection methodology in order to make rapid progress in meeting the complex requirements of semi-arid cropping systems.
NASA Astrophysics Data System (ADS)
Chen, Huaiyu; Cao, Li
2017-06-01
To investigate multiple sound source localization in the presence of room reverberation and background noise, we analyze the shortcomings of the traditional broadband MUSIC method and of ordinary auditory-filtering-based broadband MUSIC, and then propose a new broadband MUSIC algorithm with gammatone auditory filtering that controls the selection of frequency components and detects the ascending segment of the direct sound component. The proposed algorithm restricts processing to frequency components within the frequency band of interest in the multichannel bandpass filtering stage. Detection of the direct-sound component of the source is also proposed to suppress room reverberation interference; its merits are fast computation and avoidance of more complex de-reverberation processing. In addition, the pseudo-spectra of the different frequency channels are weighted by their maximum amplitude for every speech frame. Simulations and experiments in a real reverberant room show that the proposed method performs well. Dynamic multiple sound source localization experiments indicate that the average absolute error of the azimuth estimated by the proposed algorithm is smaller and that the histogram result has higher angular resolution.
Dascălu, Cristina Gena; Antohe, Magda Ecaterina
2009-01-01
Based on eigenvalue and eigenvector analysis, principal component analysis aims to identify, from a set of parameters, the subspace of principal components that is sufficient to characterize the whole set. Interpreting the data as a cloud of points, we find through geometrical transformations the directions along which the cloud's dispersion is maximal: the lines that pass through the cloud's center of gravity and have a maximal density of points around them (by defining an appropriate criterion function and minimizing it). This method can be used successfully to simplify the statistical analysis of questionnaires, because it helps us select from a set of items only the most relevant ones, which cover the variation of the whole data set. For instance, in the presented sample we started from a questionnaire with 28 items and, by applying principal component analysis, we identified 7 principal components, or main items, a fact that simplifies the further statistical analysis of the data considerably.
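A small sketch of the item-reduction use case described above: run PCA on standardized responses, keep enough components to explain most of the variance, and retain the items that load most heavily on those components. The simulated questionnaire and the thresholds are invented for illustration.

```python
# Sketch: reduce questionnaire items by keeping those with the largest
# loadings on the retained principal components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_resp, n_items = 200, 28
latent = rng.standard_normal((n_resp, 4))                    # 4 underlying constructs
answers = latent @ rng.standard_normal((4, n_items)) + 0.5 * rng.standard_normal((n_resp, n_items))
Z = (answers - answers.mean(0)) / answers.std(0)             # standardize items

pca = PCA().fit(Z)
n_keep = np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.80) + 1
loadings = pca.components_[:n_keep]                          # component x item loadings

# Keep the two most heavily loading items on each retained component
selected = sorted({int(i) for row in loadings for i in np.argsort(np.abs(row))[-2:]})
print(f"{n_keep} components retained; representative items: {selected}")
```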
Seismic random noise attenuation method based on empirical mode decomposition of Hausdorff dimension
NASA Astrophysics Data System (ADS)
Yan, Z.; Luan, X.
2017-12-01
Introduction: Empirical mode decomposition (EMD) is a noise-suppression algorithm that separates the wave field on the basis of the scale differences between the effective signal and the noise. However, because the complexity of the real seismic wave field leads to serious mode aliasing, denoising with this method alone is neither ideal nor effective. Building on the multi-scale decomposition of the EMD algorithm and combining it with Hausdorff-dimension constraints, we propose a new method for seismic random noise attenuation. First, we apply the EMD algorithm to decompose seismic data adaptively and obtain a series of intrinsic mode functions (IMFs) of different scales. Based on the difference in Hausdorff dimension between effective signals and random noise, we identify the IMF components mixed with random noise. We then use a threshold correlation filtering process to separate the valid signal and the random noise effectively. Compared with the traditional EMD method, the results show that the new method achieves better suppression of seismic random noise. The implementation process: The EMD algorithm is used to decompose the seismic signal into a set of IMFs, whose spectra are then analyzed. Since most of the random noise is high-frequency noise, the IMFs can be divided into three categories: the first comprises the larger-scale effective-wave components; the second, the smaller-scale noise components; and the third, the IMF components containing a mixture of signal and random noise. The third kind of IMF component is then processed with the Hausdorff dimension algorithm, selecting an appropriate time-window size, initial step, and increment to calculate the instantaneous Hausdorff dimension of each component. The dimension of random noise lies between 1.0 and 1.05, whereas the dimension of the effective wave lies between 1.05 and 2.0. On this basis, and according to the dimension difference between random noise and effective signal, we extract for each IMF component the sample points whose fractal dimension is less than or equal to 1.05 in order to separate the residual noise. Using the IMF components after this dimension filtering, together with the effective-wave IMF components retained in the first selection, for reconstruction, we obtain the denoised result.
Jiulong Xie; Chung Hse; Todd F. Shupe; Hui Pan; Tingxing Hu
2016-01-01
Microwave-assisted selective liquefaction was proposed and used as a novel method for the isolation of holocellulose fibers. The results showed that the bamboo lignin component and extractives were almost completely removed by using a liquefaction process at 120 °C for 9 min, and the residual lignin and extractives in the solid residue were as low as 0.65% and 0.49%,...
High Bypass Turbofan Component Development. Phase II. Detailed Design.
1979-08-01
[Report front-matter excerpt: contents entries "Selecting Blade Thickness for Bird Strike" and "Method for Selecting Blade Airfoil Attachment"; Aircraft Engine Group, General Electric Company.] ...reserves, the replacement aircraft must have a fuel-efficient engine as the propulsion system, i.e., a modern turbofan engine. Technology in large turbofan engines has been well demonstrated, but little has been done in the size applicable to a twin-engine primary trainer aircraft. Today, there is
Optical data transmission technology for fixed and drag-on STS payload umbilicals, volume 2
NASA Technical Reports Server (NTRS)
St.denis, R. W.
1981-01-01
Optical data handling methods are studied as applicable to payload communications checkout and monitoring. Both payload umbilicals and interconnecting communication lines carrying payload data are examined for the following: (1) ground checkout requirements; (2) optical approach (technical survey of optical approaches, selection of optimum approach); (3) survey and select components; (4) compare with conventional approach; and (5) definition of follow on activity.
Liu, Shuqiang; Tan, Zhibin; Li, Pingting; Gao, Xiaoling; Zeng, Yuaner; Wang, Shuling
2016-03-20
A HepG2 cell biospecific extraction method combined with high-performance liquid chromatography-electrospray ionization-mass spectrometry (HPLC-ESI-MS) analysis was proposed for screening potential antiatherosclerotic active components in Bupeuri radix, a well-known Traditional Chinese Medicine (TCM). The hypothesis is that when cells are incubated together with the extracts of a TCM, the potential bioactive components in the TCM selectively bind to receptors or channels of the HepG2 cells; the eluate containing the biospecific components bound to HepG2 cells is then identified using HPLC-ESI-MS analysis. The potential bioactive components of Bupeuri radix were investigated using the proposed approach. Five compounds among the saikosaponins of Bupeuri radix were detected as components that selectively bound to HepG2 cells; among them, two potentially bioactive compounds, saikosaponin b1 and saikosaponin b2 (SSb2), were identified by comparison with the chromatograms of standard samples and analysis of their structural characterization by MS. SSb2 was then used to assess the uptake of DiI-labeled high-density lipoprotein (HDL) in HepG2 cells as a measure of antiatherosclerotic activity. The results showed that SSb2 at the indicated concentrations (5, 15, 25, and 40 μM) markedly increased the uptake of dioctadecylindocarbocyanine (DiI)-labeled HDL in HepG2 cells (vs. the control group, P<0.01). In conclusion, HepG2 biospecific extraction coupled with HPLC-ESI-MS analysis is a rapid, convenient, and reliable method for screening potential bioactive components in TCM, and SSb2 may be a valuable novel drug agent for the treatment of atherosclerosis. Copyright © 2016 Elsevier B.V. All rights reserved.
Ding, Shujing; Dudley, Ed; Chen, Lijuan; Plummer, Sue; Tang, Jiandong; Newton, Russell P; Brenton, A Gareth
2006-01-01
Ginkgo biloba is one of the most popular herbal nutritional supplements, with terpene lactones and flavonoids being the two major active components. An on-line purification high-performance liquid chromatography/mass spectrometry (HPLC/MS) method was successfully developed for the quantitative determination of flavonoids and terpene lactones excreted in human urine after ingesting the herbal supplement. Satisfactory separation was obtained using a C18 capillary column made in-house with sample clean-up and pre-concentration achieved using a C18 pre-column with column switching. High selectivity and limits of detection of 1-18 ng/mL were achieved using a selected ion monitoring (SIM) scan in negative ion mode; the on-line solid-phase extraction (SPE) recovery of the active components in Ginkgo biloba determined in this study was greater than 75%. Copyright 2006 John Wiley & Sons, Ltd.
Antipsychotics, Lithium, Benzodiazepines, Beta-Blockers.
ERIC Educational Resources Information Center
Karper, Laurence P.; And Others
1994-01-01
The psychopharmacologic treatment of aggression is a critical component of the treatment of psychiatric patients. The diagnostic assessment of aggressive patients is reviewed and relevant literature is presented to help clinicians select appropriate medication. Side-effects, dosages, and methods of administration are highlighted. (JPS)
Estimating Sampling Selection Bias in Human Genetics: A Phenomenological Approach
Risso, Davide; Taglioli, Luca; De Iasio, Sergio; Gueresi, Paola; Alfani, Guido; Nelli, Sergio; Rossi, Paolo; Paoli, Giorgio; Tofanelli, Sergio
2015-01-01
This research is the first empirical attempt to calculate the various components of the hidden bias associated with the sampling strategies routinely-used in human genetics, with special reference to surname-based strategies. We reconstructed surname distributions of 26 Italian communities with different demographic features across the last six centuries (years 1447–2001). The degree of overlapping between "reference founding core" distributions and the distributions obtained from sampling the present day communities by probabilistic and selective methods was quantified under different conditions and models. When taking into account only one individual per surname (low kinship model), the average discrepancy was 59.5%, with a peak of 84% by random sampling. When multiple individuals per surname were considered (high kinship model), the discrepancy decreased by 8–30% at the cost of a larger variance. Criteria aimed at maximizing locally-spread patrilineages and long-term residency appeared to be affected by recent gene flows much more than expected. Selection of the more frequent family names following low kinship criteria proved to be a suitable approach only for historically stable communities. In any other case true random sampling, despite its high variance, did not return more biased estimates than other selective methods. Our results indicate that the sampling of individuals bearing historically documented surnames (founders' method) should be applied, especially when studying the male-specific genome, to prevent an over-stratification of ancient and recent genetic components that heavily biases inferences and statistics. PMID:26452043
Estimating Sampling Selection Bias in Human Genetics: A Phenomenological Approach.
Risso, Davide; Taglioli, Luca; De Iasio, Sergio; Gueresi, Paola; Alfani, Guido; Nelli, Sergio; Rossi, Paolo; Paoli, Giorgio; Tofanelli, Sergio
2015-01-01
This research is the first empirical attempt to calculate the various components of the hidden bias associated with the sampling strategies routinely-used in human genetics, with special reference to surname-based strategies. We reconstructed surname distributions of 26 Italian communities with different demographic features across the last six centuries (years 1447-2001). The degree of overlapping between "reference founding core" distributions and the distributions obtained from sampling the present day communities by probabilistic and selective methods was quantified under different conditions and models. When taking into account only one individual per surname (low kinship model), the average discrepancy was 59.5%, with a peak of 84% by random sampling. When multiple individuals per surname were considered (high kinship model), the discrepancy decreased by 8-30% at the cost of a larger variance. Criteria aimed at maximizing locally-spread patrilineages and long-term residency appeared to be affected by recent gene flows much more than expected. Selection of the more frequent family names following low kinship criteria proved to be a suitable approach only for historically stable communities. In any other case true random sampling, despite its high variance, did not return more biased estimates than other selective methods. Our results indicate that the sampling of individuals bearing historically documented surnames (founders' method) should be applied, especially when studying the male-specific genome, to prevent an over-stratification of ancient and recent genetic components that heavily biases inferences and statistics.
Compressive strength of human openwedges: a selection method
NASA Astrophysics Data System (ADS)
Follet, H.; Gotteland, M.; Bardonnet, R.; Sfarghiu, A. M.; Peyrot, J.; Rumelhart, C.
2004-02-01
A series of 44 samples of bone wedges of human origin, intended for allograft openwedge osteotomy and obtained without particular precautions during hip arthroplasty, were re-examined. After viral-inactivation chemical treatment, lyophilisation and radio-sterilisation (intended to produce optimal health safety), the compressive strength, independent of age, sex and the height of the sample (or angle of cut), proved to be too widely dispersed [10-158 MPa] in the first study. We propose a method for selecting samples which takes into account their geometry (width, length, thicknesses, cortical surface area). Statistical methods (principal component analysis (PCA), hierarchical cluster analysis, multilinear regression) allowed final selection of 29 samples having a mean compressive strength σmax = 103 ± 26 MPa and with variation [61-158 MPa]. These results are equivalent to or greater than those of materials currently used in openwedge osteotomy.
Distinct Cortical Pathways for Music and Speech Revealed by Hypothesis-Free Voxel Decomposition
Norman-Haignere, Sam
2015-01-01
The organization of human auditory cortex remains unresolved, due in part to the small stimulus sets common to fMRI studies and the overlap of neural populations within voxels. To address these challenges, we measured fMRI responses to 165 natural sounds and inferred canonical response profiles (“components”) whose weighted combinations explained voxel responses throughout auditory cortex. This analysis revealed six components, each with interpretable response characteristics despite being unconstrained by prior functional hypotheses. Four components embodied selectivity for particular acoustic features (frequency, spectrotemporal modulation, pitch). Two others exhibited pronounced selectivity for music and speech, respectively, and were not explainable by standard acoustic features. Anatomically, music and speech selectivity concentrated in distinct regions of non-primary auditory cortex. However, music selectivity was weak in raw voxel responses, and its detection required a decomposition method. Voxel decomposition identifies primary dimensions of response variation across natural sounds, revealing distinct cortical pathways for music and speech. PMID:26687225
Discrimination of serum Raman spectroscopy between normal and colorectal cancer
NASA Astrophysics Data System (ADS)
Li, Xiaozhou; Yang, Tianyue; Yu, Ting; Li, Siqi
2011-07-01
Raman spectroscopy of tissues has been widely studied for the diagnosis of various cancers, but biofluids have seldom been used as the analyte because of their low concentrations. Here, Raman spectra of serum from 30 normal subjects, 46 colon cancer patients, and 44 rectal cancer patients were measured and analyzed. The information from the Raman peaks (intensity and width) and from the fluorescence background (baseline function coefficients) was selected as parameters for statistical analysis. Principal component regression (PCR) and partial least squares regression (PLSR) were applied separately to the selected parameters to compare their performance. PCR performed better than PLSR on our spectral data. Linear discriminant analysis (LDA) was then applied to the principal components (PCs) from the two regression methods on the selected parameters, and diagnostic accuracies of 88% and 83% were obtained. The conclusion is that the selected features retain the information of the original spectra well and that Raman spectroscopy of serum has potential for the diagnosis of colorectal cancer.
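A hedged sketch of the classification chain reported here (feature parameters, principal components, then LDA), evaluated with cross-validation on simulated feature vectors; the class structure, feature counts, and accuracy are invented stand-ins, not the study's data.

```python
# Sketch: principal components of spectral features fed to LDA, with
# cross-validated accuracy (simulated stand-in for the serum Raman features).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_feat = 40, 12                      # e.g. peak intensities/widths + baseline coefficients
means = rng.normal(scale=0.8, size=(3, n_feat))   # normal, colon cancer, rectal cancer
X = np.vstack([m + rng.standard_normal((n_per_class, n_feat)) for m in means])
y = np.repeat([0, 1, 2], n_per_class)

model = make_pipeline(StandardScaler(), PCA(n_components=5),
                      LinearDiscriminantAnalysis())
acc = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy:", round(acc.mean(), 3))
```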
Brenner, Stephan; Muula, Adamson S; Robyn, Paul Jacob; Bärnighausen, Till; Sarker, Malabika; Mathanga, Don P; Bossert, Thomas; De Allegri, Manuela
2014-04-22
In this article we present a study design to evaluate the causal impact of providing supply-side performance-based financing incentives in combination with a demand-side cash transfer component on equitable access to and quality of maternal and neonatal healthcare services. This intervention is introduced to selected emergency obstetric care facilities and catchment area populations in four districts in Malawi. We here describe and discuss our study protocol with regard to the research aims, the local implementation context, and our rationale for selecting a mixed methods explanatory design with a quasi-experimental quantitative component. The quantitative research component consists of a controlled pre- and post-test design with multiple post-test measurements. This allows us to quantitatively measure 'equitable access to healthcare services' at the community level and 'healthcare quality' at the health facility level. Guided by a theoretical framework of causal relationships, we determined a number of input, process, and output indicators to evaluate both intended and unintended effects of the intervention. Overall causal impact estimates will result from a difference-in-difference analysis comparing selected indicators across intervention and control facilities/catchment populations over time.To further explain heterogeneity of quantitatively observed effects and to understand the experiential dimensions of financial incentives on clients and providers, we designed a qualitative component in line with the overall explanatory mixed methods approach. This component consists of in-depth interviews and focus group discussions with providers, service user, non-users, and policy stakeholders. In this explanatory design comprehensive understanding of expected and unexpected effects of the intervention on both access and quality will emerge through careful triangulation at two levels: across multiple quantitative elements and across quantitative and qualitative elements. Combining a traditional quasi-experimental controlled pre- and post-test design with an explanatory mixed methods model permits an additional assessment of organizational and behavioral changes affecting complex processes. Through this impact evaluation approach, our design will not only create robust evidence measures for the outcome of interest, but also generate insights on how and why the investigated interventions produce certain intended and unintended effects and allows for a more in-depth evaluation approach.
NASA Astrophysics Data System (ADS)
Matuszak, Zbigniew; Bartosz, Michał; Barta, Dalibor
2016-09-01
The article characterizes two network methods: the critical path method (CPM) and the program evaluation and review technique (PERT). Using a product of an international furniture company as an example, it illustrates the application of these methods to the transport of cargo (furniture elements). Moreover, the study presents diagrams for the transportation of cargo from the individual component producers to the final destination, the showroom. Calculations were based on the transport of furniture elements by small commercial vehicles.
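The critical path calculation behind CPM is short enough to sketch directly: a forward pass gives earliest start and finish times, a backward pass gives latest times, and activities with zero slack form the critical path. The activity network below is an invented stand-in for the furniture-transport example, not data from the article.

```python
# Sketch: critical path method (CPM) on a small, invented activity network.
# Each activity: (duration, list of predecessors).
activities = {
    "A": (3, []),          # e.g. produce components at supplier 1
    "B": (5, []),          # produce components at supplier 2
    "C": (2, ["A"]),       # transport from supplier 1 to warehouse
    "D": (4, ["B"]),       # transport from supplier 2 to warehouse
    "E": (1, ["C", "D"]),  # consolidate and deliver to the showroom
}

# Forward pass: earliest start (ES) and finish (EF)
ES, EF = {}, {}
for name in activities:                      # dict order already respects precedence here
    dur, preds = activities[name]
    ES[name] = max((EF[p] for p in preds), default=0)
    EF[name] = ES[name] + dur
project_end = max(EF.values())

# Backward pass: latest finish (LF) and start (LS)
LF, LS = {}, {}
for name in reversed(list(activities)):
    dur, _ = activities[name]
    successors = [s for s, (_, ps) in activities.items() if name in ps]
    LF[name] = min((LS[s] for s in successors), default=project_end)
    LS[name] = LF[name] - dur

critical = [n for n in activities if ES[n] == LS[n]]   # zero-slack activities
print("project duration:", project_end, " critical path:", critical)
```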
Dawidowicz, Andrzej L; Czapczyńska, Natalia B; Wianowska, Dorota
2013-02-01
The Sea Sand Disruption Method (SSDM) is a simple and cheap sample-preparation procedure that reduces organic solvent consumption, avoids degradation of sample components, improves extraction efficiency and selectivity, and eliminates additional sample clean-up and pre-concentration steps before chromatographic analysis. This article deals with the possibility of applying SSDM to the differentiation of essential-oil components occurring in Scots pine (Pinus sylvestris L.) and cypress (Cupressus sempervirens L.) needles from Madrid (Spain), Laganas (Zakhyntos, Greece), Cala Morell (Menorca, Spain), Lublin (Poland), Helsinki (Finland), and Oradea (Romania). The SSDM results are compared with analogous results obtained with two other sample-preparation methods, steam distillation and pressurized liquid extraction (PLE). The results established that the total amount and the composition of essential-oil components revealed by SSDM are equivalent to or higher than those obtained by one of the most effective extraction techniques, PLE. Moreover, SSDM seems to provide the most representative profile of all essential-oil components, as no heat is applied. Thus, this environmentally friendly method is suggested as the main extraction procedure for the differentiation of essential-oil components in conifers for scientific and industrial purposes. Copyright © 2013 Verlag Helvetica Chimica Acta AG, Zürich.
Methods and apparatuses for reagent delivery, reactive barrier formation, and pest control
Gilmore, Tyler [Pasco, WA; Kaplan, Daniel I [Aiken, SC; Last, George [Richland, WA
2002-07-09
A reagent delivery method includes positioning reagent delivery tubes in contact with soil. The tubes can include a wall that is permeable to a soil-modifying reagent. The method further includes supplying the reagent in the tubes, diffusing the reagent through the permeable wall and into the soil, and chemically modifying a selected component of the soil using the reagent. The tubes can be in subsurface contact with soil, including groundwater, and can be placed with directional drilling equipment independent of groundwater well casings. The soil-modifying reagent includes a variety of gases, liquids, colloids, and adsorbents that may be reactive or non-reactive with soil components. The method may be used inter alia to form reactive barriers, control pests, and enhance soil nutrients for microbes and plants.
Antithrombogenic and antibiotic compositions and methods of preparation thereof
Hermes, R.E.
1988-04-19
Antithrombogenic and antibiotic composition of matter and method of preparation thereof. A random copolymer of a component of garlic and a biocompatible polymer has been prepared and found to exhibit antithrombogenic and antibiotic properties. Polymerization occurs selectively at the vinyl moiety in 2-vinyl-4H-1,3-dithiin when copolymerized with N-vinyl pyrrolidone. 4 figs., 2 tabs.
A variant selection model for predicting the transformation texture of deformed austenite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butron-Guillen, M.P.; Jonas, J.J.; Da Costa Viana, C.S.
1997-09-01
The occurrence of variant selection during the transformation of deformed austenite is examined, together with its effect on the product texture. A new prediction method is proposed based on the morphology of the austenite grains, on slip activity, and on the residual stresses remaining in the material after rolling. The aspect ratio of pancaked grains is demonstrated to play an important role in favoring selection of the transformed copper ({311}<011> and {211}<011>) components. The extent of shear on active slip planes during prior rolling is shown to promote the formation of the transformed brass ({332}<113> and {211}<113>) components. Finally, the residual stresses remaining in the material after rolling play an essential part by preventing growth of the {110}<110> and {100} orientations selected by the grain shape and slip activity rules. With the aid of these three variant selection criteria combined, it is possible to reproduce all the features of the transformation textures observed experimentally. The criteria also explain why the intensities of the transformed copper components are sensitive to the pancaking strain, while those of the transformed brass are a function of the cooling rate employed after hot rolling.
Towards automatic planning for manufacturing generative processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
CALTON,TERRI L.
2000-05-24
Generative process planning describes methods process engineers use to modify manufacturing/process plans after designs are complete. A completed design may be the result of the introduction of a new product based on an old design, an assembly upgrade, or modified product designs used for a family of similar products. An engineer designs an assembly and then creates plans capturing manufacturing processes, including assembly sequences, component joining methods, part costs, labor costs, etc. When new products originate as a result of an upgrade, component geometry may change, and/or additional components and subassemblies may be added to or omitted from the original design. As a result, process engineers are forced to create new plans. This is further complicated by the fact that the process engineer is forced to manually generate these plans for each product upgrade. To generate new assembly plans for product upgrades, engineers must manually re-specify the manufacturing plan selection criteria and re-run the planners. To remedy this problem, special-purpose assembly planning algorithms have been developed to automatically recognize design modifications and automatically apply previously defined manufacturing plan selection criteria and constraints.
Classification and treatment of periprosthetic supracondylar femur fractures.
Ricci, William
2013-02-01
Locked plating and retrograde nailing are two accepted methods for treatment of periprosthetic distal femur fractures. Each has relative benefits and potential pitfalls. Appropriate patient selection and knowledge of the specific femoral component geometry are required to optimally choose between these two methods. Locked plating may be applied to most periprosthetic distal femur fractures. The fracture pattern, simple or comminuted, will dictate the specific plating technique, compression plating or bridge plating. Nailing requires an open intercondylar box and a distal fragment of enough size to allow interlocking. With proper patient selection and proper techniques, good results can be obtained with either method. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
A semiparametric graphical modelling approach for large-scale equity selection.
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
NASA Astrophysics Data System (ADS)
Yuan, Ye; Ries, Ludwig; Petermeier, Hannes; Steinbacher, Martin; Gómez-Peláez, Angel J.; Leuenberger, Markus C.; Schumacher, Marcus; Trickl, Thomas; Couret, Cedric; Meinhardt, Frank; Menzel, Annette
2018-03-01
Critical data selection is essential for determining representative baseline levels of atmospheric trace gases even at remote measurement sites. Different data selection techniques have been used around the world, which could potentially lead to reduced compatibility when comparing data from different stations. This paper presents a novel statistical data selection method named adaptive diurnal minimum variation selection (ADVS) based on CO2 diurnal patterns typically occurring at elevated mountain stations. Its capability and applicability were studied on records of atmospheric CO2 observations at six Global Atmosphere Watch stations in Europe, namely, Zugspitze-Schneefernerhaus (Germany), Sonnblick (Austria), Jungfraujoch (Switzerland), Izaña (Spain), Schauinsland (Germany), and Hohenpeissenberg (Germany). Three other frequently applied statistical data selection methods were included for comparison. Among the studied methods, our ADVS method resulted in a lower fraction of data selected as a baseline with lower maxima during winter and higher minima during summer in the selected data. The measured time series were analyzed for long-term trends and seasonality by a seasonal-trend decomposition technique. In contrast to unselected data, mean annual growth rates of all selected datasets were not significantly different among the sites, except for the data recorded at Schauinsland. However, clear differences were found in the annual amplitudes as well as the seasonal time structure. Based on a pairwise analysis of correlations between stations on the seasonal-trend decomposed components by statistical data selection, we conclude that the baseline identified by the ADVS method is a better representation of lower free tropospheric (LFT) conditions than baselines identified by the other methods.
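To make the decomposition step above concrete, here is a minimal sketch of seasonal-trend decomposition of a monthly CO2 series using statsmodels' STL. It illustrates only the trend/seasonal/residual split mentioned in the abstract, not the ADVS baseline-selection algorithm itself, and the series used here is simulated.

```python
# Seasonal-trend decomposition of a (simulated) monthly-mean CO2 series.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

idx = pd.date_range("2000-01-01", periods=240, freq="MS")
co2 = pd.Series(370 + 0.18 * np.arange(240)                      # long-term trend
                + 3.0 * np.sin(2 * np.pi * np.arange(240) / 12)  # seasonal cycle
                + np.random.normal(0, 0.3, 240), index=idx)

res = STL(co2, period=12, robust=True).fit()
trend, seasonal, resid = res.trend, res.seasonal, res.resid

# Mean annual growth rate estimated from the trend component (ppm per year)
growth = trend.diff(12).mean()
print(f"mean annual growth: {growth:.2f} ppm/yr")
```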
Design, fabrication and test of graphite/epoxy metering truss structure components, phase 3
NASA Technical Reports Server (NTRS)
1974-01-01
The design, materials, tooling, manufacturing processes, quality control, test procedures, and results associated with the fabrication and test of graphite/epoxy metering truss structure components exhibiting a near zero coefficient of thermal expansion are described. Analytical methods were utilized, with the aid of a computer program, to define the most efficient laminate configurations in terms of thermal behavior and structural requirements. This was followed by an extensive material characterization and selection program, conducted for several graphite/graphite/hybrid laminate systems to obtain experimental data in support of the analytical predictions. Mechanical property tests as well as the coefficient of thermal expansion tests were run on each laminate under study, the results of which were used as the selection criteria for the single most promising laminate. Further coefficient of thermal expansion measurement was successfully performed on three subcomponent tubes utilizing the selected laminate.
Derivative component analysis for mass spectral serum proteomic profiles.
Han, Henry
2014-01-01
As a promising way to transform medicine, mass spectrometry based proteomics technologies have made great progress in identifying disease biomarkers for clinical diagnosis and prognosis. However, there is a lack of effective feature selection methods that are able to capture essential data behaviors to achieve clinical level disease diagnosis. Moreover, the field faces a challenge from data reproducibility, meaning that no two independent studies have been found to produce the same proteomic patterns. This reproducibility issue causes the identified biomarker patterns to lose repeatability and prevents them from real clinical usage. In this work, we propose a novel machine-learning algorithm, derivative component analysis (DCA), for high-dimensional mass spectral proteomic profiles. As an implicit feature selection algorithm, derivative component analysis examines input proteomics data in a multi-resolution manner by seeking its derivatives to capture latent data characteristics and conduct de-noising. We further demonstrate DCA's advantages in disease diagnosis by viewing input proteomics data as a profile biomarker and integrating it with support vector machines to tackle the reproducibility issue, in addition to comparing it with state-of-the-art peers. Our results show that high-dimensional proteomics data are actually linearly separable under the proposed derivative component analysis. As a novel multi-resolution feature selection algorithm, DCA not only overcomes the weakness of traditional methods in subtle data behavior discovery, but also offers an effective way to overcome the reproducibility problem of proteomics data, providing new techniques and insights in translational bioinformatics and machine learning. The DCA-based profile biomarker diagnosis makes clinical level diagnostic performance reproducible across different proteomic data, which is more robust and systematic than existing biomarker-discovery-based diagnosis. Our findings demonstrate the feasibility and power of the proposed DCA-based profile biomarker diagnosis in achieving high sensitivity and conquering the data reproducibility issue in serum proteomics. Furthermore, our proposed derivative component analysis suggests that gleaning subtle data characteristics and de-noising are essential in separating true signals from red herrings in high-dimensional proteomic profiles, which can be more important than conventional feature selection or dimension reduction. In particular, our profile biomarker diagnosis can be generalized to other omics data owing to DCA's generic nature as a data analysis method.
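The DCA algorithm itself is not specified in the abstract, so the sketch below only illustrates the general idea of multi-scale derivative features feeding a classifier; the smoothing scales, matrix shapes, and the use of a linear SVM are illustrative assumptions, not the authors' implementation.

```python
# Multi-scale derivative features from spectra feeding an SVM (illustrative only).
# X is assumed to be an (n_samples, n_mz_bins) matrix of intensities, y the labels.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def derivative_features(X, scales=(1, 2, 4, 8)):
    """Stack first derivatives of each spectrum smoothed at several scales."""
    feats = [np.gradient(gaussian_filter1d(X, s, axis=1), axis=1) for s in scales]
    return np.hstack(feats)

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))
y = rng.integers(0, 2, size=60)

score = cross_val_score(SVC(kernel="linear"), derivative_features(X), y, cv=5)
print("CV accuracy:", score.mean())
```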
Colquhoun, Heather L; Squires, Janet E; Kolehmainen, Niina; Fraser, Cynthia; Grimshaw, Jeremy M
2017-03-04
Systematic reviews consistently indicate that interventions to change healthcare professional (HCP) behaviour are haphazardly designed and poorly specified. Clarity about methods for designing and specifying interventions is needed. The objective of this review was to identify published methods for designing interventions to change HCP behaviour. A search of MEDLINE, Embase, and PsycINFO was conducted from 1996 to April 2015. Using inclusion/exclusion criteria, a broad screen of abstracts by one rater was followed by a strict screen of full text for all potentially relevant papers by three raters. An inductive approach was first applied to the included studies to identify commonalities and differences between the descriptions of methods across the papers. Based on this process and knowledge of related literatures, we developed a data extraction framework that included, e.g. level of change (e.g. individual versus organization); context of development; a brief description of the method; tasks included in the method (e.g. barrier identification, component selection, use of theory). 3966 titles and abstracts and 64 full-text papers were screened to yield 15 papers included in the review, each outlining one design method. All of the papers reported methods developed within a specific context. Thirteen papers included barrier identification and 13 included linking barriers to intervention components; although not the same 13 papers. Thirteen papers targeted individual HCPs with only one paper targeting change across individual, organization, and system levels. The use of theory and user engagement were included in 13/15 and 13/15 papers, respectively. There is an agreement across methods of four tasks that need to be completed when designing individual-level interventions: identifying barriers, selecting intervention components, using theory, and engaging end-users. Methods also consist of further additional tasks. Examples of methods for designing the organisation and system-level interventions were limited. Further analysis of design tasks could facilitate the development of detailed guidelines for designing interventions.
Comminuting irradiated ferritic steel
Bauer, Roger E.; Straalsund, Jerry L.; Chin, Bryan A.
1985-01-01
Disclosed is a method of comminuting irradiated ferritic steel by placing the steel in a solution of a compound selected from the group consisting of sulfamic acid, bisulfate, and mixtures thereof. The ferritic steel is used as cladding on nuclear fuel rods or other irradiated components.
Chen, Jing; Wang, Shu-Mei; Meng, Jiang; Sun, Fei; Liang, Sheng-Wang
2013-05-01
To establish a new method for quality evaluation and validate its feasibility by simultaneous quantitative assay of five alkaloids in Sophora flavescens. The new quality evaluation method, quantitative analysis of multi-components by single marker (QAMS), was established and validated with S. flavescens. Five main alkaloids, oxymatrine, sophocarpine, matrine, oxysophocarpine and sophoridine, were selected as analytes to evaluate the quality of the rhizome of S. flavescens, and the relative correction factors showed good repeatability. Their contents in 21 batches of samples, collected from different areas, were determined by both the external standard method and QAMS. The method was evaluated by comparing the quantitative results obtained with the external standard method and with QAMS. No significant differences were found in the quantitative results of the five alkaloids in the 21 batches of S. flavescens determined by the two methods. It is feasible and suitable to evaluate the quality of the rhizome of S. flavescens by QAMS.
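QAMS rests on a relative correction factor that links the response of each analyte to that of a single marker. The abstract does not give the exact formula, so the sketch below uses one common convention (f defined as the ratio of response factors); the peak areas and concentrations are made-up numbers.

```python
# Illustrative QAMS arithmetic under one convention; definitions of f vary
# between papers, so this is not necessarily the exact formulation used above.
def relative_correction_factor(A_i, C_i, A_s, C_s):
    """f_i = (A_i/C_i) / (A_s/C_s): response of analyte i relative to marker s."""
    return (A_i / C_i) / (A_s / C_s)

def quantify(A_i_sample, f_i, k_s):
    """Analyte concentration from its peak area, f_i and the marker's
    calibration slope k_s = A_s / C_s."""
    return A_i_sample / (f_i * k_s)

# From a mixed standard: marker (e.g. matrine) and analyte (e.g. sophoridine)
f = relative_correction_factor(A_i=1.8e5, C_i=0.05, A_s=2.0e5, C_s=0.05)
k_s = 2.0e5 / 0.05                                   # marker slope (area per mg/mL)
print(quantify(A_i_sample=9.0e4, f_i=f, k_s=k_s))    # analyte conc. in the sample
```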
NASA Technical Reports Server (NTRS)
Ohtani, S.; Kokubun, S.; Russell, C. T.
1992-01-01
A new method is used to examine the radial expansion of the tail current disruption and the substorm onset region. The expansion of the disruption region is specified by examining the time sequence (phase relationship) between the north-south component and the sun-earth component. This method is tested by applying it to the March 6, 1979, event. The phase relationship indicates that the current disruption started on the earthward side of the spacecraft, and expanded tailward past the spacecraft. The method was used for 13 events selected from the ISEE magnetometer data. The results indicate that the current disruption usually starts in the near-earth magnetotail and often within 15 RE from the earth.
Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.
Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen
2011-04-01
Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.
On 3-D inelastic analysis methods for hot section components (base program)
NASA Technical Reports Server (NTRS)
Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.
1986-01-01
A 3-D Inelastic Analysis Method program is described. This program consists of a series of new computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of: (1) combustor liners, (2) turbine blades, and (3) turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. Three computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (Marc-Hot Section Technology), and BEST (Boundary Element Stress Technology), have been developed and are briefly described in this report.
NASA Astrophysics Data System (ADS)
Kistenev, Yury V.; Karapuzikov, Alexander I.; Kostyukova, Nadezhda Yu.; Starikova, Marina K.; Boyko, Andrey A.; Bukreeva, Ekaterina B.; Bulanova, Anna A.; Kolker, Dmitry B.; Kuzmin, Dmitry A.; Zenov, Konstantin G.; Karapuzikov, Alexey A.
2015-06-01
A human exhaled air analysis by means of infrared (IR) laser photoacoustic spectroscopy is presented. Eleven healthy nonsmoking volunteers (control group) and seven patients with chronic obstructive pulmonary disease (COPD, target group) were involved in the study. The principal component analysis method was used to select the most informative ranges of the absorption spectra of patients' exhaled air in terms of the separation of the studied groups. It is shown that the data of the profiles of exhaled air absorption spectrum in the informative ranges allow identifying COPD patients in comparison to the control group.
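A minimal sketch of the PCA step described above, assuming each subject contributes one absorption spectrum; the wavelength grid, group sizes, and the simulated "discriminating band" are illustrative, and in practice the measured absorption spectra of the control and COPD groups would be used.

```python
# Project absorption spectra onto leading principal components and inspect
# which spectral points load most strongly (simulated data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
wl = np.linspace(2.5, 10.5, 400)                    # wavelength grid (illustrative)
X_control = rng.normal(1.0, 0.05, (11, wl.size))
X_copd = rng.normal(1.0, 0.05, (7, wl.size))
X_copd[:, 150:170] += 0.2                           # a band that differs between groups

X = np.vstack([X_control, X_copd])
pca = PCA(n_components=3).fit(X)
scores = pca.transform(X)                           # group separation visible in scores

# Spectral points with the largest |loading| on PC1 are the most informative
informative = wl[np.argsort(np.abs(pca.components_[0]))[-20:]]
print(sorted(informative))
```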
Method of synthesizing a low density material
Lorensen, L.E.; Monaco, S.B.
1987-02-27
A novel method of synthesizing a polymeric material of low density, of the order of 50 mg/cc or less. Such a low density material has applications in many areas including laser target fabrication. The method comprises preparing a polymer blend of two incompatible polymers as a major and a minor phase by mixing them and extruding the mixture, and then selectively extracting the major component, to yield a fine, low density structure.
Wang, Lin-Yan; Tang, Yu-Ping; Liu, Xin; Ge, Ya-Hui; Li, Shu-Jiao; Shang, Er-Xin; Duan, Jin-Ao
2014-04-01
To establish a method for studying the efficacious materials of traditional Chinese medicines from an overall perspective. Carthamus tinctorius was taken as the example. Its major components were depleted by preparative liquid chromatography. Afterwards, the samples with major components depleted were evaluated for their antioxidant effect, so as to compare and analyze the major efficacious materials of C. tinctorius with antioxidant activity and their contributions. Seven major components were depleted from C. tinctorius samples, and six of them were identified with MS data and comparison with controls. After all of the samples, including the depleted materials, were compared and evaluated for their antioxidant effect, the findings showed that hydroxysafflor yellow A, anhydrosafflor yellow B and 6-hydroxykaempferol-3,6-di-O-glucoside-7-O-glucuronide were the major efficacious materials. This study explored a novel and effective method for studying efficacious materials of traditional Chinese medicines. Through this method, we could explain the direct and indirect contributions of different components to the efficacy of traditional Chinese medicines, and make the efficacious material expression of traditional Chinese medicines clearer.
Sensors for ceramic components in advanced propulsion systems
NASA Technical Reports Server (NTRS)
Koller, A. C.; Bennethum, W. H.; Burkholder, S. D.; Brackett, R. R.; Harris, J. P.
1995-01-01
This report includes: (1) a survey of the current methods for the measurement of surface temperature of ceramic materials suitable for use as hot section flowpath components in aircraft gas turbine engines; (2) analysis and selection of three sensing techniques with potential to extend surface temperature measurement capability beyond current limits; and (3) design, manufacture, and evaluation of the three selected techniques which include the following: platinum rhodium thin film thermocouple on alumina and mullite substrates; doped silicon carbide thin film thermocouple on silicon carbide, silicon nitride, and aluminum nitride substrates; and long and short wavelength radiation pyrometry on the substrates listed above plus yttria stabilized zirconia. Measurement of surface emittance of these materials at elevated temperature was included as part of this effort.
Multidimensional Programming Methods for Energy Facility Siting: Alternative Approaches
NASA Technical Reports Server (NTRS)
Solomon, B. D.; Haynes, K. E.
1982-01-01
The use of multidimensional optimization methods in solving power plant siting problems, which are characterized by several conflicting, noncommensurable objectives is addressed. After a discussion of data requirements and exclusionary site screening methods for bounding the decision space, classes of multiobjective and goal programming models are discussed in the context of finite site selection. Advantages and limitations of these approaches are highlighted and the linkage of multidimensional methods with the subjective, behavioral components of the power plant siting process is emphasized.
Duggan, Brendan M; Rae, Anne M; Clements, Dylan N; Hocking, Paul M
2017-05-02
Genetic progress in selection for greater body mass and meat yield in poultry has been associated with an increase in gait problems which are detrimental to productivity and welfare. The incidence of suboptimal gait in breeding flocks is controlled through the use of a visual gait score, which is a subjective assessment of walking ability of each bird. The subjective nature of the visual gait score has led to concerns over its effectiveness in reducing the incidence of suboptimal gait in poultry through breeding. The aims of this study were to assess the reliability of the current visual gait scoring system in ducks and to develop a more objective method to select for better gait. Experienced gait scorers assessed short video clips of walking ducks to estimate the reliability of the current visual gait scoring system. Kendall's coefficients of concordance between and within observers were estimated at 0.49 and 0.75, respectively. In order to develop a more objective scoring system, gait components were visually scored on more than 4000 pedigreed Pekin ducks and genetic parameters were estimated for these components. Gait components, which are a more objective measure, had heritabilities that were as good as, or better than, those of the overall visual gait score. Measurement of gait components is simpler and therefore more objective than the standard visual gait score. The recording of gait components can potentially be automated, which may increase accuracy further and may improve heritability estimates. Genetic correlations were generally low, which suggests that it is possible to use gait components to select for an overall improvement in both economic traits and gait as part of a balanced breeding programme.
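The between- and within-observer reliabilities quoted above are Kendall's coefficients of concordance; a minimal computation (ignoring the tie correction) looks like the sketch below, where the ratings matrix is a made-up example of gait scores from three observers.

```python
# Kendall's coefficient of concordance (W) for inter-observer agreement.
# `ratings` is assumed to be an (n_items, n_raters) array of gait scores.
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings):
    n_items, n_raters = ratings.shape
    ranks = np.column_stack([rankdata(ratings[:, j]) for j in range(n_raters)])
    rank_sums = ranks.sum(axis=1)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (n_raters ** 2 * (n_items ** 3 - n_items))

ratings = np.array([[2, 3, 2], [1, 1, 1], [4, 4, 3], [3, 2, 4], [5, 5, 5]])
print(round(kendalls_w(ratings), 2))
```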
EMISSIONS FROM COATINGS USED IN THE AUTO REFINISHING INDUSTRY
The report presents results of EPA Methods 24 and 311 analyses of the volatile organic compound (VOC) content of selected auto refinishing coatings and their components that are sold by the five major auto coating manufacturers. These analyses were undertaken to determine the acc...
Plenis, Alina; Olędzka, Ilona; Bączek, Tomasz
2013-05-05
This paper focuses on a comparative study of the column classification system based on the quantitative structure-retention relationships (QSRR method) and column performance in real biomedical analysis. The assay was carried out for the LC separation of moclobemide and its metabolites in human plasma, using a set of 24 stationary phases. The QSRR models established for the studied stationary phases were compared with the column test performance results under two chemometric techniques - the principal component analysis (PCA) and the hierarchical clustering analysis (HCA). The study confirmed that the stationary phase classes found closely related by the QSRR approach yielded comparable separation for moclobemide and its metabolites. Therefore, the QSRR method could be considered supportive in the selection of a suitable column for the biomedical analysis offering the selection of similar or dissimilar columns with a relatively higher certainty. Copyright © 2013 Elsevier B.V. All rights reserved.
A time domain frequency-selective multivariate Granger causality approach.
Leistritz, Lutz; Witte, Herbert
2016-08-01
The investigation of effective connectivity is one of the major topics in computational neuroscience to understand the interaction between spatially distributed neuronal units of the brain. Thus, a wide variety of methods has been developed during the last decades to investigate functional and effective connectivity in multivariate systems. Their spectrum ranges from model-based to model-free approaches with a clear separation into time and frequency range methods. We present in this simulation study a novel time domain approach based on Granger's principle of predictability, which allows frequency-selective considerations of directed interactions. It is based on a comparison of prediction errors of multivariate autoregressive models fitted to systematically modified time series. These modifications are based on signal decompositions, which enable a targeted cancellation of specific signal components with specific spectral properties. Depending on the embedded signal decomposition method, a frequency-selective or data-driven signal-adaptive Granger Causality Index may be derived.
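The core of Granger's predictability principle referenced above can be illustrated with a bivariate sketch: a Granger causality index is the log ratio of residual variances of a restricted autoregressive model of y and a full model that also includes lags of x. The frequency-selective signal-decomposition step of the proposed method is not reproduced here; the model order and the simulated coupling are illustrative.

```python
# Bivariate Granger causality index from two least-squares AR fits.
import numpy as np

def ar_design(series_list, order):
    """Stack lagged copies of each series into a regression design matrix."""
    n = len(series_list[0])
    cols = [s[order - k - 1:n - k - 1] for s in series_list for k in range(order)]
    return np.column_stack(cols)

def granger_index(y, x, order=4):
    target = y[order:]
    X_restricted = ar_design([y], order)
    X_full = ar_design([y, x], order)
    res_r = target - X_restricted @ np.linalg.lstsq(X_restricted, target, rcond=None)[0]
    res_f = target - X_full @ np.linalg.lstsq(X_full, target, rcond=None)[0]
    return np.log(res_r.var() / res_f.var())   # > 0 suggests x helps predict y

rng = np.random.default_rng(2)
x = rng.normal(size=2000)
y = np.zeros_like(x)
y[2:] = 0.8 * x[1:-1] + 0.3 * x[:-2] + 0.5 * rng.normal(size=x.size - 2)
print(round(granger_index(y, x), 3))
```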
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jing, Yaqi; Meng, Qinghao, E-mail: qh-meng@tju.edu.cn; Qi, Peifeng
An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction which combined feature selection with feature extraction was proposed. The feature selection method used 8 feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method and the dimension of the feature space was reduced to 12. Classification of Chinese liquors was performed by using back propagation artificial neural network (BP-ANN), linear discrimination analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, which was higher than that of LDA and BP-ANN. Finally the classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier, and the classification rates were 98.75% and 100%, respectively.
Aytug, Tolga [Knoxville, TN; Paranthaman, Mariappan Parans [Knoxville, TN; Polat, Ozgur [Knoxville, TN
2012-07-17
An electronic component that includes a substrate and a phase-separated layer supported on the substrate and a method of forming the same are disclosed. The phase-separated layer includes a first phase comprising lanthanum manganate (LMO) and a second phase selected from a metal oxide (MO), metal nitride (MN), a metal (Me), and combinations thereof. The phase-separated material can be an epitaxial layer and an upper surface of the phase-separated layer can include interfaces between the first phase and the second phase. The phase-separated layer can be supported on a buffer layer comprising a composition selected from the group consisting of IBAD MgO, LMO/IBAD-MgO, homoepi-IBAD MgO and LMO/homoepi-MgO. The electronic component can also include an electronically active layer supported on the phase-separated layer. The electronically active layer can be a superconducting material, a ferroelectric material, a multiferroic material, a magnetic material, a photovoltaic material, an electrical storage material, and a semiconductor material.
Zhang, Hongkai; Torkamani, Ali; Jones, Teresa M; Ruiz, Diana I; Pons, Jaume; Lerner, Richard A
2011-08-16
Use of large combinatorial antibody libraries and next-generation sequencing of nucleic acids are two of the most powerful methods in modern molecular biology. The libraries are screened using the principles of evolutionary selection, albeit in real time, to enrich for members with a particular phenotype. This selective process necessarily results in the loss of information about less-fit molecules. On the other hand, sequencing of the library, by itself, gives information that is mostly unrelated to phenotype. If the two methods could be combined, the full potential of very large molecular libraries could be realized. Here we report the implementation of a phenotype-information-phenotype cycle that integrates information and gene recovery. After selection for phage-encoded antibodies that bind to targets expressed on the surface of Escherichia coli, the information content of the selected pool is obtained by pyrosequencing. Sequences that encode specific antibodies are identified by a bioinformatic analysis and recovered by a stringent affinity method that is uniquely suited for gene isolation from a highly degenerate collection of nucleic acids. This approach can be generalized for selection of antibodies against targets that are present as minor components of complex systems.
Bossi, Rossana; Rastogi, Suresh C; Bernard, Guillaume; Gimenez-Arnau, Elena; Johansen, Jeanne D; Lepoittevin, Jean-Pierre; Menné, Torkil
2004-05-01
This paper describes a validated liquid chromatographic-tandem mass spectrometric method for quantitative analysis of the potential oak moss allergens atranol and chloroatranol in perfumes and similar products. The method employs LC-MS-MS with electrospray ionization (ESI) in negative mode. The compounds are analysed by selective reaction monitoring (SRM) of 2 or 3 ions for each compound in order to obtain high selectivity and sensitivity. The method has been validated for the following parameters: linearity; repeatability; recovery; limit of detection; and limit of quantification. The limits of detection, 5.0 ng/mL and 2.4 ng/mL, respectively, for atranol and chloroatranol, achieved by this method allowed identification of these compounds at concentrations below those causing allergic skin reactions in oak-moss-sensitive patients. The recovery of chloroatranol from spiked perfumes was 96 ± 4%. Low recoveries (49 ± 5%) were observed for atranol in spiked perfumes, indicating ion suppression caused by matrix components. The method has been applied to the analysis of 10 randomly selected perfumes and similar products.
Enhancement Strategies for Frame-to-Frame UAS Stereo Visual Odometry
NASA Astrophysics Data System (ADS)
Kersten, J.; Rodehorst, V.
2016-06-01
Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimations usually obtained from indirect measurements. Navigation based on inertial measurement units (IMU) is known to be affected by high drift rates. The incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale ambiguity problem for monocular cameras is avoided when a light-weight stereo camera setup is used. However, frame-to-frame stereo visual odometry (VO) approaches are also known to accumulate pose estimation errors over time. Several valuable real-time capable techniques for outlier detection and drift reduction in frame-to-frame VO, for example robust relative orientation estimation using random sample consensus (RANSAC) and bundle adjustment, are available. This study addresses the problem of choosing appropriate VO components. We propose a frame-to-frame stereo VO method based on carefully selected components and parameters. This method is evaluated regarding the impact and value of different outlier detection and drift-reduction strategies, for example keyframe selection and sparse bundle adjustment (SBA), using reference benchmark data as well as our own real stereo data. The experimental results demonstrate that our VO method is able to estimate quite accurate trajectories. Feature bucketing and keyframe selection are simple but effective strategies which further improve the VO results. Furthermore, introducing the stereo baseline constraint in pose graph optimization (PGO) leads to significant improvements.
Rubalcaba, J G; Polo, V; Maia, R; Rubenstein, D R; Veiga, J P
2016-08-01
Although sexual selection is typically considered the predominant force driving the evolution of ritualized sexual behaviours, natural selection may also play an important and often underappreciated role. The use of green aromatic plants among nesting birds has been interpreted as a component of extended phenotype that evolved either via natural selection due to potential sanitary functions or via sexual selection as a signal of male attractiveness. Here, we compared both hypotheses using comparative methods in starlings, a group where this behaviour is widespread. We found that the use of green plants was positively related to male-biased size dimorphism and that it was most likely to occur among cavity-nesting species. These results suggest that this behaviour is likely favoured by sexual selection, but also related to its sanitary use in response to higher parasite loads in cavities. We speculate that the use of green plants in starlings may be facilitated by cavity nesting and was subsequently co-opted as a sexual signal by males. Our results represent an example of how an extended phenotypic component of males becomes sexually selected by females. Thus, both natural selection and sexual selection are necessary to fully understand the evolution of ritualized behaviours involved in courtship. © 2016 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2016 European Society For Evolutionary Biology.
Three dimensional empirical mode decomposition analysis apparatus, method and article manufacture
NASA Technical Reports Server (NTRS)
Gloersen, Per (Inventor)
2004-01-01
An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena such as time varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time based covariance matrix. The temporal parts of the principal components are produced by applying Singular Value Decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPC) are selected for Empirical Mode Decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPC. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.
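The first steps described above (Hilbert transform to complex form, time-based covariance, SVD to obtain complex principal components) can be sketched as follows; the field is simulated, the grid size and number of retained CPCs are illustrative, and the subsequent empirical mode decomposition and filtering stages are omitted.

```python
# Complex principal components of a (simulated) space-time field.
# `field` is assumed to be an (n_times, n_gridpoints) array.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(3)
t = np.arange(256)
field = (np.sin(2 * np.pi * t / 32)[:, None] * rng.normal(size=(1, 100))
         + 0.1 * rng.normal(size=(256, 100)))

analytic = hilbert(field, axis=0)                            # complex representation in time
cov_t = analytic @ analytic.conj().T / analytic.shape[1]     # time x time covariance
U, s, _ = np.linalg.svd(cov_t)

n_keep = 3                                                   # keep a few leading CPCs
cpcs = U[:, :n_keep]                                         # temporal parts of the CPCs
print((s[:n_keep] / s.sum()).round(3))                       # their share of the variance
```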
Bostanov, Vladimir; Kotchoubey, Boris
2006-12-01
This study was aimed at developing a method for extraction and assessment of event-related brain potentials (ERP) from single-trials. This method should be applicable in the assessment of single persons' ERPs and should be able to handle both single ERP components and whole waveforms. We adopted a recently developed ERP feature extraction method, the t-CWT, for the purposes of hypothesis testing in the statistical assessment of ERPs. The t-CWT is based on the continuous wavelet transform (CWT) and Student's t-statistics. The method was tested in two ERP paradigms, oddball and semantic priming, by assessing individual-participant data on a single-trial basis, and testing the significance of selected ERP components, P300 and N400, as well as of whole ERP waveforms. The t-CWT was also compared to other univariate and multivariate ERP assessment methods: peak picking, area computation, discrete wavelet transform (DWT) and principal component analysis (PCA). The t-CWT produced better results than all of the other assessment methods it was compared with. The t-CWT can be used as a reliable and powerful method for ERP-component detection and testing of statistical hypotheses concerning both single ERP components and whole waveforms extracted from either single persons' or group data. The t-CWT is the first such method based explicitly on the criteria of maximal statistical difference between two average ERPs in the time-frequency domain and is particularly suitable for ERP assessment of individual data (e.g. in clinical settings), but also for the investigation of small and/or novel ERP effects from group data.
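The core idea of the t-CWT, point-wise t-statistics between two conditions computed on wavelet coefficients of single trials, can be sketched as below. The wavelet, scales, and simulated "P300-like" deflection are illustrative assumptions, and the published method's further feature-selection and multivariate steps are not reproduced.

```python
# Point-wise t-statistics between two conditions in the time-scale plane.
import numpy as np
from scipy.stats import ttest_ind

def ricker(points, a):
    """Mexican-hat wavelet, defined here to keep the sketch self-contained."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt(x, widths):
    return np.array([np.convolve(x, ricker(min(10 * w, len(x)), w), mode="same")
                     for w in widths])

rng = np.random.default_rng(4)
widths = np.arange(2, 30, 2)
trials_a = [rng.normal(size=300) for _ in range(40)]            # e.g. standards
trials_b = [rng.normal(size=300) for _ in range(40)]            # e.g. targets
for tr in trials_b:
    tr[150:200] += 0.8                                          # a P300-like deflection

coef_a = np.stack([cwt(tr, widths) for tr in trials_a])         # (trials, scales, time)
coef_b = np.stack([cwt(tr, widths) for tr in trials_b])
t_map, _ = ttest_ind(coef_b, coef_a, axis=0)                    # time-scale t-statistics
print(np.unravel_index(np.abs(t_map).argmax(), t_map.shape))    # most discriminative point
```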
NASA Astrophysics Data System (ADS)
He, A.; Quan, C.
2018-04-01
The principal component analysis (PCA) and region matching combined method is effective for fringe direction estimation. However, its mask construction algorithm for region matching fails in some circumstances, and the algorithm for conversion of orientation to direction in mask areas is computationally-heavy and non-optimized. We propose an improved PCA based region matching method for the fringe direction estimation, which includes an improved and robust mask construction scheme, and a fast and optimized orientation-direction conversion algorithm for the mask areas. Along with the estimated fringe direction map, filtered fringe pattern by automatic selective reconstruction modification and enhanced fast empirical mode decomposition (ASRm-EFEMD) is used for Hilbert spiral transform (HST) to demodulate the phase. Subsequently, windowed Fourier ridge (WFR) method is used for the refinement of the phase. The robustness and effectiveness of proposed method are demonstrated by both simulated and experimental fringe patterns.
Pattern transfer printing by kinetic control of adhesion to an elastomeric stamp
Nuzzo, Ralph G [Champaign, IL; Rogers, John A [Champaign, IL; Menard, Etienne [Urbana, IL; Lee, Keon Jae [Tokyo, JP; Khang, Dahl-Young [Urbana, IL; Sun, Yugang [Champaign, IL; Meitl, Matthew [Champaign, IL; Zhu, Zhengtao [Urbana, IL
2011-05-17
The present invention provides methods, systems and system components for transferring, assembling and integrating features and arrays of features having selected nanosized and/or microsized physical dimensions, shapes and spatial orientations. Methods of the present invention utilize principles of `soft adhesion` to guide the transfer, assembly and/or integration of features, such as printable semiconductor elements or other components of electronic devices. Methods of the present invention are useful for transferring features from a donor substrate to the transfer surface of an elastomeric transfer device and, optionally, from the transfer surface of an elastomeric transfer device to the receiving surface of a receiving substrate. The present methods and systems provide highly efficient, registered transfer of features and arrays of features, such as printable semiconductor element, in a concerted manner that maintains the relative spatial orientations of transferred features.
Andreoli, Daria; Volpe, Giorgio; Popoff, Sébastien; Katz, Ori; Grésillon, Samuel; Gigan, Sylvain
2015-01-01
We present a method to measure the spectrally-resolved transmission matrix of a multiply scattering medium, thus allowing for the deterministic spatiospectral control of a broadband light source by means of wavefront shaping. As a demonstration, we show how the medium can be used to selectively focus one or many spectral components of a femtosecond pulse, and how it can be turned into a controllable dispersive optical element to spatially separate different spectral components to arbitrary positions. PMID:25965944
The software-cycle model for re-engineering and reuse
NASA Technical Reports Server (NTRS)
Bailey, John W.; Basili, Victor R.
1992-01-01
This paper reports on the progress of a study which will contribute to our ability to perform high-level, component-based programming by describing means to obtain useful components, methods for the configuration and integration of those components, and an underlying economic model of the costs and benefits associated with this approach to reuse. One goal of the study is to develop and demonstrate methods to recover reusable components from domain-specific software through a combination of tools, to perform the identification, extraction, and re-engineering of components, and domain experts, to direct the applications of those tools. A second goal of the study is to enable the reuse of those components by identifying techniques for configuring and recombining the re-engineered software. This component-recovery or software-cycle model addresses not only the selection and re-engineering of components, but also their recombination into new programs. Once a model of reuse activities has been developed, the quantification of the costs and benefits of various reuse options will enable the development of an adaptable economic model of reuse, which is the principal goal of the overall study. This paper reports on the conception of the software-cycle model and on several supporting techniques of software recovery, measurement, and reuse which will lead to the development of the desired economic model.
Fiber optic assembly and method of making same
Kramer, D.P.; Beckman, T.M.
1997-09-02
There is provided an assembly having a light guiding medium sealed to a holder. Preferably the holder is a metal shell and the light guiding medium is an optical fiber of glass or sapphire whisker. The assembly includes a sealing medium which sealingly engages the metal holder to the fiber. In the formation of the assembly, the seal is essentially hermetic, with a capability of minimizing leakage (a helium leak rate of less than 1×10⁻⁸ cubic centimeters per second) and high strength (a capability of withstanding pressures of 100,000 psi or greater). The features of the assembly are obtained by a specific preparation method and by selection of specific starting materials. The fiber is selected to have a sufficiently high coefficient of thermal expansion which minimizes strains in the component during fabrication, as a result of fabrication, and during use. The other components are selected to be of a material having compatible coefficients of thermal expansion (TEC) where the TEC of the holder is greater than or equal to the TEC of the sealing material. The TEC of the sealing material is in turn greater than or equal to the TEC of the fiber. It is preferred that the materials be selected so that their respective coefficients of thermal expansion are as close as possible to one another and they may all be equal. 4 figs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hajian, Amir; Alvarez, Marcelo A.; Bond, J. Richard, E-mail: ahajian@cita.utoronto.ca, E-mail: malvarez@cita.utoronto.ca, E-mail: bond@cita.utoronto.ca
Making mock simulated catalogs is an important component of astrophysical data analysis. Selection criteria for observed astronomical objects are often too complicated to be derived from first principles. However the existence of an observed group of objects is a well-suited problem for machine learning classification. In this paper we use one-class classifiers to learn the properties of an observed catalog of clusters of galaxies from ROSAT and to pick clusters from mock simulations that resemble the observed ROSAT catalog. We show how this method can be used to study the cross-correlations of thermal Sunyaev-Zel'dovich signals with number density maps of X-ray selected cluster catalogs. The method reduces the bias due to hand-tuning the selection function and is readily scalable to large catalogs with a high-dimensional space of astrophysical features.
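A hedged sketch of the one-class selection idea: train a one-class classifier on features of the observed clusters and keep only the simulated clusters it scores as similar. The feature columns, the use of a one-class SVM, and its parameters are illustrative assumptions; the abstract does not specify this particular classifier or feature set.

```python
# Select mock clusters that resemble an observed catalog with a one-class SVM.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(5)
# columns: log X-ray luminosity, redshift, log mass (illustrative features)
observed = rng.normal([44.0, 0.1, 14.5], [0.4, 0.05, 0.3], size=(400, 3))
simulated = rng.normal([43.5, 0.3, 14.2], [0.8, 0.20, 0.5], size=(5000, 3))

scaler = StandardScaler().fit(observed)
clf = OneClassSVM(nu=0.1, gamma="scale").fit(scaler.transform(observed))

selected = simulated[clf.predict(scaler.transform(simulated)) == 1]
print(f"{len(selected)} of {len(simulated)} mock clusters resemble the observed catalog")
```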
The generation of monoclonal antibodies and their use in rapid diagnostic tests
USDA-ARS?s Scientific Manuscript database
Antibodies are the most important component of an immunoassay. In these proceedings we outline novel methods used to generate and select monoclonal antibodies that meet performance criteria for use in rapid lateral flow and microfluidic immunoassay tests for the detection of agricultural pathogens ...
Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao
2012-01-01
Biologically-inspired models and algorithms are considered as promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model with the increase of the dimensions of the input feature vector (outer factor) as well as its parallel channels (inner factor). The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets of three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China were used for experiments. In the former case the results showed that the average correct classification rate increased as more principal components were put into the feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We concluded that 6~8 channels of the model, with a principal component feature vector capturing at least 90% cumulative variance, are adequate for a classification task of 3~5 pattern classes considering the trade-off between time consumption and classification rate.
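The 90% cumulative-variance rule mentioned in the conclusion can be expressed as a short selection step; the data matrix below is simulated, and in practice X would be the e-nose response features.

```python
# Keep the smallest number of principal components whose cumulative explained
# variance reaches 90%.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
X = rng.normal(size=(120, 40)) @ rng.normal(size=(40, 40))   # correlated features

pca = PCA().fit(X)
cumvar = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cumvar, 0.90) + 1)
print(f"{k} components reach {cumvar[k - 1]:.1%} cumulative variance")
scores = PCA(n_components=k).fit_transform(X)                # input vector for the model
```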
Roy, Venkat; Simonetto, Andrea; Leus, Geert
2018-06-01
We propose a sensor placement method for spatio-temporal field estimation based on a kriged Kalman filter (KKF) using a network of static or mobile sensors. The developed framework dynamically designs the optimal constellation to place the sensors. We combine the estimation error (for the stationary as well as non-stationary component of the field) minimization problem with a sparsity-enforcing penalty to design the optimal sensor constellation in an economic manner. The developed sensor placement method can be directly used for a general class of covariance matrices (ill-conditioned or well-conditioned) modelling the spatial variability of the stationary component of the field, which acts as a correlated observation noise, while estimating the non-stationary component of the field. Finally, a KKF estimator is used to estimate the field using the measurements from the selected sensing locations. Numerical results are provided to exhibit the feasibility of the proposed dynamic sensor placement followed by the KKF estimation method.
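The paper formulates sensor placement as a sparsity-penalized convex problem; as a simpler stand-in that conveys the error-minimizing selection step, the sketch below greedily picks measurement locations that most reduce the trace of the posterior error covariance under a linear-Gaussian model. The measurement matrix, prior, and noise variance are illustrative.

```python
# Greedy A-optimal sensor selection (simplified alternative, not the paper's method).
import numpy as np

def greedy_placement(H, prior_cov, noise_var, n_sensors):
    P = prior_cov.copy()
    chosen = []
    for _ in range(n_sensors):
        best, best_P = None, None
        for i in range(H.shape[0]):
            if i in chosen:
                continue
            h = H[i:i + 1].T                               # candidate measurement vector
            gain = P @ h / (h.T @ P @ h + noise_var)       # rank-one Kalman update
            P_new = P - gain @ (h.T @ P)
            if best is None or np.trace(P_new) < np.trace(best_P):
                best, best_P = i, P_new
        chosen.append(best)
        P = best_P
    return chosen, P

rng = np.random.default_rng(7)
H = rng.normal(size=(50, 10))                              # 50 candidate locations
locations, posterior = greedy_placement(H, np.eye(10), noise_var=0.1, n_sensors=5)
print(locations, np.trace(posterior).round(3))
```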
NASA Astrophysics Data System (ADS)
Kozakov, O. N.
2002-10-01
A method of calculating the partial characteristics of radiation absorption by the components of light-scattering disperse layers is proposed. This method is based on statistical modeling (the Monte Carlo method). The absorptivities of photographic gelatin and silver bromide microcrystals and the corresponding distributions of the absorbed energy over the layer thickness are calculated using the example of an interaction between actinic radiation and silver halide photographic layers in the wavelength range λ = 200–440 nm. The following structural parameters of the photographic layer are used in the calculation: the mean size of emulsion crystals d = 0.5 μm; the polydispersity C_V = 25%; the volume concentrations C_V = 10, 20, and 30%; and the thickness of the emulsion layer H = 10 μm.
Hyperspectral Image Denoising Using a Nonlocal Spectral Spatial Principal Component Analysis
NASA Astrophysics Data System (ADS)
Li, D.; Xu, L.; Peng, J.; Ma, J.
2018-04-01
Hyperspectral image (HSI) denoising is a critical research area in image processing due to its importance in improving the quality of HSIs, since noise has a negative impact on object detection, classification, and other tasks. In this paper, we develop a noise reduction method based on principal component analysis (PCA) for hyperspectral imagery, which is dependent on the assumption that the noise can be removed by selecting the leading principal components. The main contribution of this paper is to introduce the spectral spatial structure and nonlocal similarity of the HSIs into the PCA denoising model. PCA with spectral spatial structure can exploit the spectral correlation and spatial correlation of the HSI by using 3D blocks instead of 2D patches. Nonlocal similarity means the similarity between the referenced pixel and other pixels in a nonlocal area, where the Mahalanobis distance algorithm is used to estimate the spatial spectral similarity by calculating the distance in 3D blocks. The proposed method is tested on both simulated and real hyperspectral images, and the results demonstrate that the proposed method is superior to several other popular methods in HSI denoising.
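The baseline PCA assumption stated above (noise is removed by keeping the leading principal components) can be sketched on an unfolded cube as below; the spectral-spatial 3D blocks and nonlocal-similarity refinements of the proposed method are not reproduced, and the cube here is simulated.

```python
# PCA denoising of a (simulated) hyperspectral cube by keeping leading components.
# `cube` is assumed to be (rows, cols, bands).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
clean = np.outer(rng.normal(size=64 * 64), rng.normal(size=30)).reshape(64, 64, 30)
cube = clean + 0.3 * rng.normal(size=clean.shape)            # noisy hyperspectral cube

X = cube.reshape(-1, cube.shape[-1])                         # unfold to (pixels, bands)
pca = PCA(n_components=5).fit(X)
denoised = pca.inverse_transform(pca.transform(X)).reshape(cube.shape)

mse_before = np.mean((cube - clean) ** 2)
mse_after = np.mean((denoised - clean) ** 2)
print(f"MSE {mse_before:.3f} -> {mse_after:.3f}")
```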
[Discrimination of varieties of brake fluid using visual-near infrared spectra].
Jiang, Lu-lu; Tan, Li-hong; Qiu, Zheng-jun; Lu, Jiang-feng; He, Yong
2008-06-01
A new method was developed to rapidly discriminate brands of brake fluid by means of visual-near infrared spectroscopy. Five different brands of brake fluid were analyzed using a handheld near infrared spectrograph, manufactured by ASD Company, and 60 samples were obtained from each brand of brake fluid. The sample data were pretreated using average smoothing and the standard normal variate method, and then analyzed using principal component analysis (PCA). A 2-dimensional plot was drawn based on the first and the second principal components, and the plot indicated that the clustering of the different brake fluids is distinct. The first 6 principal components were taken as the input variables, and the brand of brake fluid as the output variable, to build the discriminant model by the stepwise discriminant analysis method. Two hundred twenty-five randomly selected samples were used to create the model, and the remaining 75 samples were used to verify it. The result showed that the distinguishing rate was 94.67%, indicating that the method proposed in this paper has good performance in classification and discrimination. It provides a new way to rapidly discriminate different brands of brake fluid.
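The modelling chain above (PCA scores as inputs to a discriminant model, 225 training and 75 verification samples) can be sketched as follows, with ordinary linear discriminant analysis standing in for the stepwise discriminant analysis used in the paper; the simulated spectra and class structure are illustrative.

```python
# PCA (6 components) followed by LDA on simulated Vis-NIR spectra of 5 brands.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
X = np.vstack([rng.normal(loc=b * 0.02, scale=0.05, size=(60, 700)) for b in range(5)])
y = np.repeat(np.arange(5), 60)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=225, random_state=0,
                                          stratify=y)
model = make_pipeline(PCA(n_components=6), LinearDiscriminantAnalysis()).fit(X_tr, y_tr)
print(f"discrimination rate on held-out samples: {model.score(X_te, y_te):.2%}")
```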
Principal components analysis in clinical studies.
Zhang, Zhongheng; Castelló, Adela
2017-09-01
In multivariate analysis, independent variables are usually correlated to each other which can introduce multicollinearity in the regression models. One approach to solve this problem is to apply principal components analysis (PCA) over these variables. This method uses orthogonal transformation to represent sets of potentially correlated variables with principal components (PC) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in R environment, the example is a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.
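The tutorial above works in the R environment; an analogous sketch in Python shows the same idea of principal component regression, replacing correlated predictors with a few uncorrelated components before fitting the regression. The simulated two-factor data set mirrors the abstract's example of two PCs explaining most of the variance.

```python
# Principal component regression on collinear predictors (simulated data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(10)
z = rng.normal(size=(200, 2))                                        # two latent factors
X = z @ rng.normal(size=(2, 10)) + 0.05 * rng.normal(size=(200, 10)) # 10 collinear predictors
y = z[:, 0] - 2 * z[:, 1] + 0.1 * rng.normal(size=200)

pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(X, y)
print(f"R^2 with two PCs: {pcr.score(X, y):.3f}")
```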
NASA Astrophysics Data System (ADS)
Iwamura, Koji; Kuwahara, Shinya; Tanimizu, Yoshitaka; Sugimura, Nobuhiro
Recently, new distributed architectures of manufacturing systems have been proposed, aiming at realizing more flexible control structures of the manufacturing systems. Much research has been carried out on distributed architectures for planning and control of manufacturing systems. However, human operators have not yet been considered as autonomous components of the distributed manufacturing systems. In this research, a real-time scheduling method is proposed to select suitable combinations of the human operators, the resources and the jobs for the manufacturing processes. The proposed scheduling method consists of the following three steps. In the first step, the human operators select their favorite manufacturing processes which they will carry out in the next time period, based on their preferences. In the second step, the machine tools and the jobs select suitable combinations for the next machining processes. In the third step, the automated guided vehicles and the jobs select suitable combinations for the next transportation processes. The second and third steps are carried out by using the utility value based method and the dispatching rule-based method proposed in previous research. Some case studies have been carried out to verify the effectiveness of the proposed method.
Wang, Zhi-Guo; Chen, Zeng-Ping; Gong, Fan; Wu, Hai-Long; Yu, Ru-Qin
2002-05-01
The chromatographic peak located inside another peak in the time direction is called an embedded or inner peak in contradistinction with the embedding peak, which is called an outer peak. The chemical components corresponding to inner and outer peaks are called inner and outer components, respectively. This special case of co-eluting chromatograms was investigated using chemometric approaches taking GC-MS as an example. A novel method, named inner chromatogram projection (ICP), for resolution of GC-MS data with embedded chromatographic peaks is derived. Orthogonal projection resolution is first utilized to obtain the chromatographic profile of the inner component. Projection of the two-way data matrix columnwise-normalized along the time direction to the normalized profile of the inner component found is subsequently performed to find the selective m/z points, if they exist, which represent the chromatogram of the outer component by itself. With the profiles obtained, the mass spectra can easily be found by means of a least-squares procedure. The results for both simulated data and real samples demonstrate that the proposed method is capable of achieving satisfactory resolution performance not affected by the shapes of chromatograms and the relative positions of the components involved.
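The final least-squares step described above can be sketched with synthetic data: once the elution profiles of the outer and embedded components are available, the mass spectra follow from a linear least-squares fit of the data matrix. The Gaussian profiles and spectra below are illustrative, not resolved from real GC-MS data.

```python
# Recover component mass spectra from known elution profiles: D ≈ C S.
import numpy as np

rng = np.random.default_rng(11)
t = np.arange(100)
C = np.column_stack([np.exp(-0.5 * ((t - 45) / 6) ** 2),    # outer component profile
                     np.exp(-0.5 * ((t - 50) / 3) ** 2)])   # embedded (inner) profile
S_true = np.abs(rng.normal(size=(2, 200)))                  # mass spectra (2 x m/z)
D = C @ S_true + 0.01 * rng.normal(size=(100, 200))         # measured GC-MS data matrix

S_est, *_ = np.linalg.lstsq(C, D, rcond=None)               # recovered spectra
print(np.corrcoef(S_est[1], S_true[1])[0, 1].round(4))      # inner-component spectrum match
```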
Quantitative analysis of the mixtures of illicit drugs using terahertz time-domain spectroscopy
NASA Astrophysics Data System (ADS)
Jiang, Dejun; Zhao, Shusen; Shen, Jingling
2008-03-01
A method was proposed to quantitatively inspect mixtures of illicit drugs with the terahertz time-domain spectroscopy technique. The mass percentages of all components in a mixture can be obtained by linear regression analysis, on the assumption that all components in the mixture and their absorption features are known. Because illicit drugs are scarce and expensive, we first used common chemicals, benzophenone, anthraquinone, pyridoxine hydrochloride and L-ascorbic acid, in the experiment. Then an illicit drug, methamphetamine, and a common adulterant, flour, were selected for our experiment. The experimental results were in good agreement with the actual contents, which suggested that this could be an effective method for quantitative identification of illicit drugs.
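The quantification step can be illustrated with a short regression sketch; non-negative least squares is used here as one reasonable choice of linear regression, and the reference spectra, frequency grid, and mass fractions are simulated rather than measured THz data.

```python
# Fit a mixture spectrum as a non-negative combination of reference spectra,
# then normalize the coefficients to mass percentages.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(12)
freqs = np.linspace(0.2, 2.6, 300)                           # THz frequency grid
ref = np.abs(rng.normal(size=(4, freqs.size)))               # pure-component spectra
true_w = np.array([0.5, 0.2, 0.2, 0.1])                      # true mass fractions
mixture = true_w @ ref + 0.01 * rng.normal(size=freqs.size)

coef, _ = nnls(ref.T, mixture)
mass_percent = 100 * coef / coef.sum()
print(mass_percent.round(1))
```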
UV recording with vinyl acetate and muicle dye film
NASA Astrophysics Data System (ADS)
Toxqui-Lopez, S.; Olivares-Pérez, A.; Santacruz-Vazquez, V.; Fuentes-Tapia, I.; Ordoñez-Padilla, J.
2015-03-01
Nowadays, there are many types of holographic recording media; some of them are photopolymer systems that generally consist of a polymeric host matrix, a photopolymerizable monomer, a photosensitizing dye, and a charge transfer agent, but some have an undesirable feature: the toxicity of their components. Therefore, in the recording material studied in the present research, vinyl acetate is selected as the polymeric matrix and a natural dye from the muicle plant is used as the photoinitiator; these components are not toxic. The films are fabricated using the gravity settling method at room temperature; with this method, uniform films with good optical quality are obtained. To characterize the medium, measurements were obtained when coherent red light (632.8 nm) was sent normally to the grating.
Ionic liquids: solvents and sorbents in sample preparation.
Clark, Kevin D; Emaus, Miranda N; Varona, Marcelino; Bowers, Ashley N; Anderson, Jared L
2018-01-01
The applications of ionic liquids (ILs) and IL-derived sorbents are rapidly expanding. By careful selection of the cation and anion components, the physicochemical properties of ILs can be altered to meet the requirements of specific applications. Reports of IL solvents possessing high selectivity for specific analytes are numerous and continue to motivate the development of new IL-based sample preparation methods that are faster, more selective, and environmentally benign compared to conventional organic solvents. The advantages of ILs have also been exploited in solid/polymer formats in which ordinarily nonspecific sorbents are functionalized with IL moieties in order to impart selectivity for an analyte or analyte class. Furthermore, new ILs that incorporate a paramagnetic component into the IL structure, known as magnetic ionic liquids (MILs), have emerged as useful solvents for bioanalytical applications. In this rapidly changing field, this Review focuses on the applications of ILs and IL-based sorbents in sample preparation with a special emphasis on liquid phase extraction techniques using ILs and MILs, IL-based solid-phase extraction, ILs in mass spectrometry, and biological applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Guimaraes, Rodrigo Soares; Delorme-Axford, Elizabeth; Klionsky, Daniel J; Reggiori, Fulvio
2015-03-01
Autophagy is a conserved intracellular catabolic pathway that degrades unnecessary or dysfunctional cellular components. Components destined for degradation are sequestered into double-membrane vesicles called autophagosomes, which subsequently fuse with the vacuole/lysosome delivering their cargo into the interior of this organelle for turnover. Autophagosomes are generated through the concerted action of the autophagy-related (Atg) proteins. The yeast Saccharomyces cerevisiae has been key in the identification of the corresponding genes and their characterization, and it remains one of the leading model systems for the investigation of the molecular mechanism and functions of autophagy. In particular, it is still pivotal for the study of selective types of autophagy. The objective of this review is to present detailed protocols of the methods available to monitor the progression of both nonselective and selective types of autophagy, and to discuss their advantages and disadvantages. The ultimate aim is to provide researchers with the information necessary to select the optimal approach to address their biological question. Copyright © 2014 Elsevier Inc. All rights reserved.
Blind column selection protocol for two-dimensional high performance liquid chromatography.
Burns, Niki K; Andrighetto, Luke M; Conlan, Xavier A; Purcell, Stuart D; Barnett, Neil W; Denning, Jacquie; Francis, Paul S; Stevenson, Paul G
2016-07-01
The selection of two orthogonal columns for two-dimensional high performance liquid chromatography (LC×LC) separation of natural product extracts can be a labour intensive and time consuming process and in many cases is an entirely trial-and-error approach. This paper introduces a blind optimisation method for column selection of a black box of constituent components. A data processing pipeline, created in the open source application OpenMS®, was developed to map the components within the mixture of equal mass across a library of HPLC columns; LC×LC separation space utilisation was compared by measuring the fractional surface coverage, fcoverage. It was found that for a test mixture from an opium poppy (Papaver somniferum) extract, the combination of diphenyl and C18 stationary phases provided a predicted fcoverage of 0.48 and was matched with an actual usage of 0.43. OpenMS®, in conjunction with algorithms designed in house, has allowed for a significantly quicker selection of two orthogonal columns, which have been optimised for a LC×LC separation of crude extractions of plant material. Copyright © 2016 Elsevier B.V. All rights reserved.
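A minimal sketch of one common way to compute a fractional surface coverage of the LC×LC separation space, assuming predicted first- and second-dimension retention times for each component on a candidate column pair; the grid-binning definition and all names are assumptions rather than the paper's exact pipeline.

```python
import numpy as np

def fractional_coverage(rt1, rt2, bins=10):
    """Fraction of grid bins in the normalized (t1, t2) retention plane that
    contain at least one component (a simple f_coverage surrogate)."""
    rt1 = (np.asarray(rt1, float) - np.min(rt1)) / np.ptp(rt1)
    rt2 = (np.asarray(rt2, float) - np.min(rt2)) / np.ptp(rt2)
    hist, _, _ = np.histogram2d(rt1, rt2, bins=bins, range=[[0, 1], [0, 1]])
    return np.count_nonzero(hist) / hist.size

# Hypothetical retention times for 80 components on a diphenyl x C18 pair
rng = np.random.default_rng(0)
print(fractional_coverage(rng.random(80), rng.random(80)))
```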
[Computer aided design and rapid manufacturing of removable partial denture frameworks].
Han, Jing; Lü, Pei-jun; Wang, Yong
2010-08-01
To introduce a method of digitally modeling and fabricating removable partial denture (RPD) frameworks using self-developed software for RPD design and a rapid manufacturing system. The three-dimensional data of two partially dentate dental casts were obtained using a three-dimensional cross-section scanner. A self-developed software package for RPD design was used to decide the path of insertion and to design the different components of the RPD frameworks. The components included occlusal rest, clasp, lingual bar, polymeric retention framework and maxillary major connector. The design procedure for the components was as follows: first, determine the outline of the component. Second, build the tissue surface of the component using the scanned data within the outline. Third, a preset cross section was used to produce the polished surface. Finally, the different RPD components were modeled respectively and connected by minor connectors to form an integrated RPD framework. The finished data were imported into a self-developed selective laser melting (SLM) machine and metal frameworks were fabricated directly. RPD frameworks for the two scanned dental casts were modeled with this self-developed program and metal RPD frameworks were successfully fabricated using the SLM method. The finished metal frameworks fit well on the plaster models. The self-developed computer aided design and computer aided manufacture (CAD-CAM) system for RPD design and fabrication has completely independent intellectual property rights. It provides a new method of manufacturing metal RPD frameworks.
Scheduled Peripheral Component Interconnect Arbiter
NASA Technical Reports Server (NTRS)
Nixon, Scott Alan (Inventor)
2015-01-01
Systems and methods are described for arbitrating access of a communication bus. In one embodiment, a method includes performing steps on one or more processors. The steps include: receiving an access request from a device of the communication bus; evaluating a bus schedule to determine an importance of the device based on the access request; and selectively granting access of the communication bus to the device based on the importance of the device.
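A minimal sketch of the grant logic described in the abstract, written in Python for illustration; the schedule contents, the tie-breaking rule, and all identifiers are assumptions.

```python
from typing import Dict, List, Optional

def grant_access(pending_requests: List[str],
                 bus_schedule: Dict[str, int]) -> Optional[str]:
    """Grant the bus to the pending device with the highest scheduled importance."""
    if not pending_requests:
        return None
    # Devices missing from the schedule get the lowest importance.
    return max(pending_requests, key=lambda dev: bus_schedule.get(dev, 0))

schedule = {"dma_engine": 3, "ethernet": 2, "uart": 1}   # hypothetical schedule
print(grant_access(["uart", "dma_engine"], schedule))    # -> "dma_engine"
```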
Probabilistic Structural Analysis Methods (PSAM) for select space propulsion systems components
NASA Technical Reports Server (NTRS)
1991-01-01
Summarized here is the technical effort and computer code developed during the five year duration of the program for probabilistic structural analysis methods. The summary includes a brief description of the computer code manuals and a detailed description of code validation demonstration cases for random vibrations of a discharge duct, probabilistic material nonlinearities of a liquid oxygen post, and probabilistic buckling of a transfer tube liner.
Detection of goal events in soccer videos
NASA Astrophysics Data System (ADS)
Kim, Hyoung-Gook; Roeber, Steffen; Samour, Amjad; Sikora, Thomas
2005-01-01
In this paper, we present an automatic extraction of goal events in soccer videos by using audio track features alone without relying on expensive-to-compute video track features. The extracted goal events can be used for high-level indexing and selective browsing of soccer videos. The detection of soccer video highlights using audio contents comprises three steps: 1) extraction of audio features from a video sequence, 2) candidate detection of highlight events based on the information provided by the feature extraction methods and the Hidden Markov Model (HMM), 3) goal event selection to finally determine the video intervals to be included in the summary. For this purpose we compared the performance of the well known Mel-scale Frequency Cepstral Coefficients (MFCC) feature extraction method vs. the MPEG-7 Audio Spectrum Projection (ASP) feature extraction method based on three different decomposition methods, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-Negative Matrix Factorization (NMF). To evaluate our system we collected five soccer game videos from various sources. In total we have seven hours of soccer games consisting of eight gigabytes of data. One of the five soccer games is used as the training data (e.g., announcers' excited speech, audience ambient speech noise, audience clapping, environmental sounds). Our goal event detection results are encouraging.
An adaptive band selection method for dimension reduction of hyper-spectral remote sensing image
NASA Astrophysics Data System (ADS)
Yu, Zhijie; Yu, Hui; Wang, Chen-sheng
2014-11-01
Hyper-spectral remote sensing data can be acquired by imaging the same area at multiple wavelengths, and it normally consists of hundreds of band images. Hyper-spectral images can not only provide spatial information but also high resolution spectral information, and they have been widely used in environment monitoring, mineral investigation and military reconnaissance. However, because of the correspondingly large data volume, it is very difficult to transmit and store hyper-spectral images, and dimension reduction techniques are desired to resolve this problem. Because of the high correlation and high redundancy among hyper-spectral bands, it is feasible to apply dimension reduction to compress the data volume. This paper proposes a novel band selection-based dimension reduction method which can adaptively select the bands that contain more information and details. The proposed method is based on principal component analysis (PCA) and computes an index for every band. The indexes obtained are then ranked in order of magnitude from large to small. Based on a threshold, the system can adaptively and reasonably select the bands. The proposed method can overcome the shortcomings of transform-based dimension reduction methods and prevent the original spectral information from being lost. The performance of the proposed method has been validated by several experiments. The experimental results show that the proposed algorithm can reduce the dimensions of a hyper-spectral image with little information loss by adaptively selecting the band images.
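A minimal sketch of a PCA-based per-band index, ranking, and thresholding, since the paper's exact index is not given here; the variance-weighted loading magnitude used as the index, the threshold rule, and all names are assumptions.

```python
import numpy as np

def select_bands(cube, n_pcs=3, threshold=0.8):
    """Adaptive band selection sketch for a (rows, cols, n_bands) hyperspectral cube.
    Keeps bands whose index exceeds a fraction of the maximum index."""
    rows, cols, n_bands = cube.shape
    X = cube.reshape(-1, n_bands).astype(float)
    X -= X.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))   # classical PCA
    order = np.argsort(eigvals)[::-1][:n_pcs]
    weights = eigvals[order] / eigvals[order].sum()
    index = np.abs(eigvecs[:, order]) @ weights   # per-band index (assumed definition)
    return np.where(index >= threshold * index.max())[0]

cube = np.random.rand(32, 32, 100)   # hypothetical 100-band image
print(select_bands(cube))
```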
Carvalho, Carolina Abreu de; Fonsêca, Poliana Cristina de Almeida; Nobre, Luciana Neri; Priore, Silvia Eloiza; Franceschini, Sylvia do Carmo Castro
2016-01-01
The objective of this study is to provide guidance for identifying dietary patterns using the a posteriori approach, and to analyze the methodological aspects of the studies conducted in Brazil that identified the dietary patterns of children. Articles were selected from the Latin American and Caribbean Literature on Health Sciences, Scientific Electronic Library Online and Pubmed databases. The key words were: Dietary pattern; Food pattern; Principal Components Analysis; Factor analysis; Cluster analysis; Reduced rank regression. We included studies that identified dietary patterns of children using the a posteriori approach. Seven studies published between 2007 and 2014 were selected, six of which were cross-sectional and one a cohort study. Five studies used the food frequency questionnaire for dietary assessment; one used a 24-hour dietary recall and the other a food list. The exploratory method used in most publications was principal components factor analysis, followed by cluster analysis. The sample size of the studies ranged from 232 to 4231, the values of the Kaiser-Meyer-Olkin test from 0.524 to 0.873, and Cronbach's alpha from 0.51 to 0.69. Few Brazilian studies identified dietary patterns of children using the a posteriori approach, and principal components factor analysis was the technique most used.
Reactive Derivatives of Nucleic Acids and Their Components as Affinity Reagents
NASA Astrophysics Data System (ADS)
Knorre, Dmitrii G.; Vlasov, Valentin V.
1985-09-01
The review is devoted to derivatives of nucleic acids and their components — nucleotides, nucleoside triphosphates, and oligonucleotides carrying reactive groups. Such derivatives are important tools for the investigation of protein-nucleic acid interactions and the functional topography of complex protein and nucleoprotein structures and can give rise to the prospect of being able to influence in a highly selective manner living organisms, including the nucleic acids and the nucleoproteins of the genetic apparatus. The review considers the principal groups of such reagents, the methods of their synthesis, and their properties which determine the possibility of their use for the selective (affinity) modification of biopolymers. The general principles of the construction of affinity reagents and their applications are analysed in relation to nucleotide affinity reagents. The bibliography includes 121 references.
Metal-organic frameworks for Xe/Kr separation
Ryan, Patrick J.; Farha, Omar K.; Broadbelt, Linda J.; Snurr, Randall Q.; Bae, Youn-Sang
2014-07-22
Metal-organic framework (MOF) materials are provided and are selectively adsorbent to xenon (Xe) over another noble gas such as krypton (Kr) and/or argon (Ar) as a result of having framework voids (pores) sized to this end. MOF materials having pores that are capable of accommodating a Xe atom but have a small enough pore size to receive no more than one Xe atom are desired to preferentially adsorb Xe over Kr in a multi-component (Xe--Kr mixture) adsorption method. The MOF material has 20% or more, preferably 40% or more, of the total pore volume in a pore size range of 0.45-0.75 nm which can selectively adsorb Xe over Kr in a multi-component Xe--Kr mixture over a pressure range of 0.01 to 1.0 MPa.
Metal-organic frameworks for Xe/Kr separation
Ryan, Patrick J.; Farha, Omar K.; Broadbelt, Linda J.; Snurr, Randall Q.; Bae, Youn-Sang
2013-08-27
Metal-organic framework (MOF) materials are provided and are selectively adsorbent to xenon (Xe) over another noble gas such as krypton (Kr) and/or argon (Ar) as a result of having framework voids (pores) sized to this end. MOF materials having pores that are capable of accommodating a Xe atom but have a small enough pore size to receive no more than one Xe atom are desired to preferentially adsorb Xe over Kr in a multi-component (Xe--Kr mixture) adsorption method. The MOF material has 20% or more, preferably 40% or more, of the total pore volume in a pore size range of 0.45-0.75 nm which can selectively adsorb Xe over Kr in a multi-component Xe--Kr mixture over a pressure range of 0.01 to 1.0 MPa.
USDA-ARS?s Scientific Manuscript database
Herbicide absorption and translocation in plants is a key component in the study of herbicide physiology, mode of action, selectivity, resistance mechanisms, and in the registration process. Radioactive herbicides have been in use for over half-a-century in the research and study of herbicide absorp...
Training in Structured Diagnostic Assessment Using DSM-IV Criteria
ERIC Educational Resources Information Center
Ponniah, Kathryn; Weissman, Myrna M.; Bledsoe, Sarah E.; Verdeli, Helen; Gameroff, Marc J.; Mufson, Laura; Fitterling, Heidi; Wickramaratne, Priya
2011-01-01
Objectives: Determining a patient's psychiatric diagnosis is an important first step for the selection of empirically supported treatments and a critical component of evidence-based practice. Structured diagnostic assessment covers the range of psychiatric diagnoses and is usually more complete and accurate than unstructured assessment. Method: We…
Gómez-Carracedo, M P; Andrade, J M; Rutledge, D N; Faber, N M
2007-03-07
Selecting the correct dimensionality is critical for obtaining partial least squares (PLS) regression models with good predictive ability. Although calibration and validation sets are best established using experimental designs, industrial laboratories cannot afford such an approach. Typically, samples are collected in a (formally) undesigned way, spread over time, and their measurements are included in routine measurement processes. This makes it hard to evaluate PLS model dimensionality. In this paper, classical criteria (leave-one-out cross-validation and adjusted Wold's criterion) are compared to recently proposed alternatives (smoothed PLS-PoLiSh and a randomization test) to seek out the optimum dimensionality of PLS models. Kerosene (jet fuel) samples were measured by attenuated total reflectance-mid-IR spectrometry and their spectra were used to predict eight important properties determined using reference methods that are time-consuming and prone to analytical errors. The alternative methods were shown to give reliable dimensionality predictions when compared to external validation. By contrast, the simpler methods seemed to be largely affected by the largest changes in the modeling capabilities of the first components.
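A minimal sketch of the leave-one-out cross-validation baseline against which the alternative criteria are compared: fit PLS models with an increasing number of latent variables and keep the dimensionality with the lowest cross-validated RMSE. The data, names, and use of scikit-learn are assumptions for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def select_pls_dimensionality(X, y, max_components=10):
    """Return the number of PLS components minimizing leave-one-out RMSE."""
    rmse = []
    for a in range(1, max_components + 1):
        y_hat = cross_val_predict(PLSRegression(n_components=a), X, y, cv=LeaveOneOut())
        rmse.append(np.sqrt(np.mean((y - y_hat.ravel()) ** 2)))
    return int(np.argmin(rmse)) + 1, rmse

# Hypothetical mid-IR spectra (rows) predicting one fuel property
X = np.random.rand(40, 200)
y = X[:, :20].sum(axis=1) + 0.05 * np.random.randn(40)
best_a, curve = select_pls_dimensionality(X, y)
print("selected dimensionality:", best_a)
```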
Jin, Hongli; Liu, Yanfang; Guo, Zhimou; Wang, Jixia; Zhang, Xiuli; Wang, Chaoran; Liang, Xinmiao
2016-10-25
Traditional Chinese Medicine (TCM) is an ancient medical practice which has been used to prevent and cure diseases for thousands of years. TCMs are frequently multi-component systems with mainly unidentified constituents. The study of the chemical compositions of TCMs remains a hotspot of research. Different strategies have been developed to manage the significant complexity of TCMs, in an attempt to determine their constituents. Reversed-phase liquid chromatography (RPLC) is still the method of choice for the separation of TCMs, but has many problems related to limited selectivity. Recently, enormous efforts have been concentrated on the development of efficient liquid chromatography (LC) methods for TCMs, based on selective stationary phases. This can improve the resolution and peak capacity considerably. In addition, high-efficiency stationary phases have been applied in the analysis of TCMs since the invention of ultra high-performance liquid chromatography (UHPLC). This review describes the advances in LC methods in TCM research from 2010 to date, and focuses on novel stationary phases. Their potential in the separation of TCMs using relevant applications is also demonstrated. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Yang, Xudong; Sun, Lingyu; Zhang, Cheng; Li, Lijun; Dai, Zongmiao; Xiong, Zhenkai
2018-03-01
The application of polymer composites as a substitute for metal is an effective approach to reduce vehicle weight. However, the final performance of composite structures is determined not only by the material types, structural designs and manufacturing process, but also by their mutual constraints. Hence, an integrated "material-structure-process-performance" method is proposed for the conceptual and detail design of composite components. The material selection is based on the principles of composite mechanics, such as the rule of mixtures for laminates. The design of component geometry, dimensions and stacking sequence is determined by parametric modeling and size optimization. The selection of process parameters is based on multi-physical field simulation. The stiffness and modal constraint conditions were obtained from the numerical analysis of the metal benchmark under typical load conditions. The optimal design was found by multi-disciplinary optimization. Finally, the proposed method was validated by an application case of an automotive hatchback using carbon fiber reinforced polymer. Compared with the metal benchmark, the weight of the composite component is reduced by 38.8%, while its torsion and bending stiffness increase by 3.75% and 33.23%, respectively, and the first natural frequency also increases by 44.78%.
NASA Astrophysics Data System (ADS)
Polat, Esra; Gunay, Suleyman
2013-10-01
One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes overestimation of the regression parameters and increases their variance. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables, and the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the use of the RPCR and RSIMPLS methods on an econometric data set, hence making a comparison of the two methods on an inflation model of Turkey. The considered methods have been compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.
Engen, Steinar; Saether, Bernt-Erik
2014-03-01
We analyze the stochastic components of the Robertson-Price equation for the evolution of quantitative characters that enables decomposition of the selection differential into components due to demographic and environmental stochasticity. We show how these two types of stochasticity affect the evolution of multivariate quantitative characters by defining demographic and environmental variances as components of individual fitness. The exact covariance formula for selection is decomposed into three components, the deterministic mean value, as well as stochastic demographic and environmental components. We show that demographic and environmental stochasticity generate random genetic drift and fluctuating selection, respectively. This provides a common theoretical framework for linking ecological and evolutionary processes. Demographic stochasticity can cause random variation in selection differentials independent of fluctuating selection caused by environmental variation. We use this model of selection to illustrate that the effect on the expected selection differential of random variation in individual fitness is dependent on population size, and that the strength of fluctuating selection is affected by how environmental variation affects the covariance in Malthusian fitness between individuals with different phenotypes. Thus, our approach enables us to partition out the effects of fluctuating selection from the effects of selection due to random variation in individual fitness caused by demographic stochasticity. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.
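A hedged sketch, in standard notation, of the covariance form of the selection differential that the abstract decomposes; the symbols for the three components are assumptions, not the authors' notation.

```latex
% S: selection differential on character z; w: individual fitness; \bar{w}: mean fitness.
\begin{align}
  S = \frac{\operatorname{cov}(w, z)}{\bar{w}}
    \;\approx\; \underbrace{S_{\mathrm{det}}}_{\text{deterministic mean}}
    + \underbrace{S_{d}}_{\text{demographic stochasticity}}
    + \underbrace{S_{e}}_{\text{environmental stochasticity}}
\end{align}
```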
Chen, Bor-Sen
2016-01-01
Bacteria navigate environments full of various chemicals to seek favorable places for survival by controlling the flagella’s rotation using a complicated signal transduction pathway. By influencing the pathway, bacteria can be engineered to search for specific molecules, which has great potential for application to biomedicine and bioremediation. In this study, genetic circuits were constructed to make bacteria search for a specific molecule at particular concentrations in their environment through a synthetic biology method. In addition, by replacing the “brake component” in the synthetic circuit with some specific sensitivities, the bacteria can be engineered to locate areas containing specific concentrations of the molecule. Measured by the swarm assay qualitatively and microfluidic techniques quantitatively, the characteristics of each “brake component” were identified and represented by a mathematical model. Furthermore, we established another mathematical model to anticipate the characteristics of the “brake component”. Based on this model, an abundant component library can be established to provide adequate component selection for different searching conditions without identifying all components individually. Finally, a systematic design procedure was proposed. Following this systematic procedure, one can design a genetic circuit for bacteria to rapidly search for and locate different concentrations of particular molecules by selecting the most adequate “brake component” in the library. Moreover, following simple procedures, one can also establish an exclusive component library suitable for other cultivated environments, promoter systems, or bacterial strains. PMID:27096615
Lee, Byeong-Ju; Kim, Hye-Youn; Lim, Sa Rang; Huang, Linfang; Choi, Hyung-Kyoon
2017-01-01
Panax ginseng C.A. Meyer is a herb used for medicinal purposes, and its discrimination according to cultivation age has been an important and practical issue. This study employed Fourier-transform infrared (FT-IR) spectroscopy with multivariate statistical analysis to obtain a prediction model for discriminating cultivation ages (5 and 6 years) and three different parts (rhizome, tap root, and lateral root) of P. ginseng. The optimal partial-least-squares regression (PLSR) models for discriminating ginseng samples were determined by selecting normalization methods, number of partial-least-squares (PLS) components, and variable influence on projection (VIP) cutoff values. The best prediction model for discriminating 5- and 6-year-old ginseng was developed using tap root, vector normalization applied after the second differentiation, one PLS component, and a VIP cutoff of 1.0 (based on the lowest root-mean-square error of prediction value). In addition, for discriminating among the three parts of P. ginseng, optimized PLSR models were established using data sets obtained from vector normalization, two PLS components, and VIP cutoff values of 1.5 (for 5-year-old ginseng) and 1.3 (for 6-year-old ginseng). To our knowledge, this is the first study to provide a novel strategy for rapidly discriminating the cultivation ages and parts of P. ginseng using FT-IR by selected normalization methods, number of PLS components, and VIP cutoff values.
Lim, Sa Rang; Huang, Linfang
2017-01-01
Panax ginseng C.A. Meyer is a herb used for medicinal purposes, and its discrimination according to cultivation age has been an important and practical issue. This study employed Fourier-transform infrared (FT-IR) spectroscopy with multivariate statistical analysis to obtain a prediction model for discriminating cultivation ages (5 and 6 years) and three different parts (rhizome, tap root, and lateral root) of P. ginseng. The optimal partial-least-squares regression (PLSR) models for discriminating ginseng samples were determined by selecting normalization methods, number of partial-least-squares (PLS) components, and variable influence on projection (VIP) cutoff values. The best prediction model for discriminating 5- and 6-year-old ginseng was developed using tap root, vector normalization applied after the second differentiation, one PLS component, and a VIP cutoff of 1.0 (based on the lowest root-mean-square error of prediction value). In addition, for discriminating among the three parts of P. ginseng, optimized PLSR models were established using data sets obtained from vector normalization, two PLS components, and VIP cutoff values of 1.5 (for 5-year-old ginseng) and 1.3 (for 6-year-old ginseng). To our knowledge, this is the first study to provide a novel strategy for rapidly discriminating the cultivation ages and parts of P. ginseng using FT-IR by selected normalization methods, number of PLS components, and VIP cutoff values. PMID:29049369
Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.
Gupta, Rajarshi
2016-05-01
Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality aware compression method of single lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely, bit rate control (BRC) or error control (EC) criteria were set to select optimal principal components, eigenvectors and their quantization level to achieve desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT Arrhythmia data and 60 normal and 30 sets of diagnostic ECG data from PTB Diagnostic ECG data ptbdb, all at 1 kHz sampling. For BRC with a CR threshold of 40, an average Compression Ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV respectively were obtained. For EC with an upper limit of 5 % PRDN and 0.1 mV MAE, the average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV respectively were obtained. For mitdb data 117, the reconstruction quality could be preserved up to CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality controlled ECG compression.
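A minimal sketch of the error-control idea described above: retain the smallest number of principal components for which the reconstructed beats meet a PRD target. Beat extraction, quantization, and entropy coding are omitted, and all names are assumptions.

```python
import numpy as np

def compress_beats_error_control(beats, prd_limit=5.0):
    """Choose the fewest principal components keeping PRD (%) below prd_limit.
    `beats` is an (n_beats, n_samples) matrix of aligned ECG beats."""
    mean = beats.mean(axis=0)
    U, s, Vt = np.linalg.svd(beats - mean, full_matrices=False)
    for k in range(1, len(s) + 1):
        recon = (U[:, :k] * s[:k]) @ Vt[:k] + mean
        prd = 100 * np.linalg.norm(beats - recon) / np.linalg.norm(beats - beats.mean())
        if prd <= prd_limit:
            return k, recon
    return len(s), beats

beats = np.cumsum(np.random.randn(100, 300), axis=1)   # hypothetical beat matrix
k, recon = compress_beats_error_control(beats)
print("components retained:", k)
```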
Islam, Md Rabiul; Tanaka, Toshihisa; Molla, Md Khademul Islam
2018-05-08
When designing a multiclass motor imagery-based brain-computer interface (MI-BCI), the so-called tangent space mapping (TSM) method, which utilizes the geometric structure of covariance matrices, is an effective technique. This paper aims to introduce a method using TSM for finding accurate operational frequency bands related to the brain activities associated with MI tasks. A multichannel electroencephalogram (EEG) signal is decomposed into multiple subbands, and tangent features are then estimated on each subband. A mutual information analysis-based algorithm is implemented to select subbands containing features capable of improving motor imagery classification accuracy. The features thus obtained from the selected subbands are combined to form the feature space. A principal component analysis-based approach is employed to reduce the feature dimension, and the classification is then accomplished by a support vector machine (SVM). Offline analysis demonstrates that the proposed multiband tangent space mapping with subband selection (MTSMS) approach outperforms state-of-the-art methods. It achieves the highest average classification accuracy for all datasets (BCI competition dataset 2a, IIIa, IIIb, and dataset JK-HH1). The increased classification accuracy of MI tasks with the proposed MTSMS approach can yield effective implementation of BCI. The mutual information-based subband selection method is implemented to tune the operational frequency bands to represent actual motor imagery tasks.
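A much simplified sketch of the subband-selection and classification stages (mutual-information ranking, PCA, SVM); the Riemannian tangent-space features themselves are not implemented here, generic features stand in for them, and all names and data are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def select_subbands(subband_features, y, n_keep=4):
    """Rank subbands by the mean mutual information between their features
    and the class labels, keeping the top n_keep."""
    scores = [mutual_info_classif(F, y, random_state=0).mean() for F in subband_features]
    return np.argsort(scores)[::-1][:n_keep]

rng = np.random.default_rng(1)
subbands = [rng.normal(size=(120, 20)) for _ in range(8)]   # 8 subbands x 120 trials
y = rng.integers(0, 4, size=120)                            # 4 motor imagery classes
keep = select_subbands(subbands, y)
X = np.hstack([subbands[i] for i in keep])
clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf")).fit(X, y)
print(keep, clf.score(X, y))
```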
Zeroth order regular approximation approach to electric dipole moment interactions of the electron.
Gaul, Konstantin; Berger, Robert
2017-07-07
A quasi-relativistic two-component approach for an efficient calculation of P,T-odd interactions caused by a permanent electric dipole moment of the electron (eEDM) is presented. The approach uses a (two-component) complex generalized Hartree-Fock and a complex generalized Kohn-Sham scheme within the zeroth order regular approximation. In applications to select heavy-elemental polar diatomic molecular radicals, which are promising candidates for an eEDM experiment, the method is compared to relativistic four-component electron-correlation calculations and confirms values for the effective electric field acting on the unpaired electron for RaF, BaF, YbF, and HgF. The calculations show that purely relativistic effects, involving only the lower component of the Dirac bi-spinor, are well described by treating only the upper component explicitly.
Zeroth order regular approximation approach to electric dipole moment interactions of the electron
NASA Astrophysics Data System (ADS)
Gaul, Konstantin; Berger, Robert
2017-07-01
A quasi-relativistic two-component approach for an efficient calculation of P ,T -odd interactions caused by a permanent electric dipole moment of the electron (eEDM) is presented. The approach uses a (two-component) complex generalized Hartree-Fock and a complex generalized Kohn-Sham scheme within the zeroth order regular approximation. In applications to select heavy-elemental polar diatomic molecular radicals, which are promising candidates for an eEDM experiment, the method is compared to relativistic four-component electron-correlation calculations and confirms values for the effective electric field acting on the unpaired electron for RaF, BaF, YbF, and HgF. The calculations show that purely relativistic effects, involving only the lower component of the Dirac bi-spinor, are well described by treating only the upper component explicitly.
Efficient feature selection using a hybrid algorithm for the task of epileptic seizure detection
NASA Astrophysics Data System (ADS)
Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline
2014-07-01
Feature selection is a very important aspect in the field of machine learning. It entails the search for an optimal subset from a very large data set with a high dimensional feature space. Apart from eliminating redundant features and reducing computational cost, a good selection of features also leads to higher prediction and classification accuracy. In this paper, an efficient feature selection technique is introduced for the task of epileptic seizure detection. The raw data are electroencephalography (EEG) signals. Using the discrete wavelet transform, the biomedical signals were decomposed into several sets of wavelet coefficients. To reduce the dimension of these wavelet coefficients, a feature selection method that combines the strengths of both filter and wrapper methods is proposed. Principal component analysis (PCA) is used as part of the filter method. As for the wrapper method, the evolutionary harmony search (HS) algorithm is employed. This metaheuristic method aims at finding the best discriminating set of features from the original data. The obtained features were then used as input for an automated classifier, namely wavelet neural networks (WNNs). The WNNs model was trained to perform a binary classification task, that is, to determine whether a given EEG signal was normal or epileptic. For comparison purposes, different sets of features were also used as input. Simulation results showed that the WNNs that used the features chosen by the hybrid algorithm achieved the highest overall classification accuracy.
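A minimal sketch of the filter stage only: discrete wavelet decomposition of EEG segments with PyWavelets followed by PCA; the harmony-search wrapper and the wavelet neural network are not reproduced, and the feature statistics, wavelet choice, and names are assumptions.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wavelet_features(signal, wavelet="db4", level=4):
    """Summary statistics of discrete wavelet coefficients for one EEG segment."""
    feats = []
    for c in pywt.wavedec(signal, wavelet, level=level):
        feats.extend([np.mean(c), np.std(c), np.mean(np.abs(c)), np.max(np.abs(c))])
    return np.array(feats)

rng = np.random.default_rng(0)
segments = rng.standard_normal((200, 4096))          # hypothetical EEG segments
X = np.vstack([wavelet_features(s) for s in segments])
X_reduced = PCA(n_components=5).fit_transform(X)      # filter step before the wrapper
print(X_reduced.shape)
```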
Morishige, Ken-ichi; Yoshioka, Taku; Kawawaki, Dai; Hiroe, Nobuo; Sato, Masa-aki; Kawato, Mitsuo
2014-11-01
One of the major obstacles in estimating cortical currents from MEG signals is the disturbance caused by magnetic artifacts derived from extra-cortical current sources such as heartbeats and eye movements. To remove the effect of such extra-brain sources, we improved the hybrid hierarchical variational Bayesian method (hyVBED) proposed by Fujiwara et al. (NeuroImage, 2009). hyVBED simultaneously estimates cortical and extra-brain source currents by placing dipoles on cortical surfaces as well as extra-brain sources. This method requires EOG data for an EOG forward model that describes the relationship between eye dipoles and electric potentials. In contrast, our improved approach requires no EOG and less a priori knowledge about the current variance of extra-brain sources. We propose a new method, "extra-dipole," that optimally selects hyper-parameter values regarding current variances of the cortical surface and extra-brain source dipoles. With the selected parameter values, the cortical and extra-brain dipole currents were accurately estimated from the simulated MEG data. The performance of this method was demonstrated to be better than conventional approaches, such as principal component analysis and independent component analysis, which use only statistical properties of MEG signals. Furthermore, we applied our proposed method to measured MEG data during covert pursuit of a smoothly moving target and confirmed its effectiveness. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Alfadhlani; Samadhi, T. M. A. Ari; Ma’ruf, Anas; Setiasyah Toha, Isa
2018-03-01
Assembly is a part of manufacturing processes that must be considered at the product design stage. Design for Assembly (DFA) is a method to evaluate product design in order to make it simpler, easier and quicker to assemble, so that assembly cost is reduced. This article discusses a framework for developing a computer-based DFA method. The method is expected to aid the product designer in extracting data, evaluating the assembly process, and providing recommendations for product design improvement. These three tasks should be performed without interactive processes or user intervention, so that the product design evaluation can be done automatically. Input for the proposed framework is a 3D solid engineering drawing. Product design evaluation is performed by: minimizing the number of components; generating assembly sequence alternatives; selecting the best assembly sequence based on the minimum number of assembly reorientations; and providing suggestions for design improvement.
Methods proposed to achieve air quality standards for mobile sources and technology surveillance.
Piver, W T
1975-01-01
The methods proposed to meet the 1975 Standards of the Clean Air Act for mobile sources are alternative antiknocks, exhaust emission control devices, and alternative engine designs. Technology surveillance analysis applied to this situation is an attempt to anticipate potential public and environmental health problems from these methods, before they happen. Components of this analysis are exhaust emission characterization, environmental transport and transformation, levels of public and environmental exposure, and the influence of economics on the selection of alternative methods. The purpose of this presentation is to show trends as a result of the interaction of these different components. In no manner can these trends be interpreted explicitly as to what will really happen. Such an analysis is necessary so that public and environmental health officials have the opportunity to act on potential problems before they become manifest. PMID:50944
[Selective removal of tannins from Polygonum cuspidatum extracts using collagen fiber adsorbent].
Li, Juan; Liao, Xuepin; Shu, Xingxu; Shi, Bi
2010-03-01
To investigate the selective removal of tannins from Polygonum cuspidatum extracts by using collagen fiber adsorbent, and to evaluate the adsorption and desorption performances of collagen fiber adsorbent to tannins. The adsorbent was prepared from bovine skin collagen fiber through crosslinking reaction of glutaraldehyde, and then used for the selective removal of tannins from P. cuspidatum extracts. Gelatin-turbidity method, gelatin-ultraviolet spectrometry method and HPLC were used for detection of tannins in the solutions. Ethanol-water solutions with varying concentration were used to test their desorption ability of tannins in order to choose proper desorption solution. On the basis of batch experimental results, the column adsorption and desorption tests were carried out, by using gelatin-turbidity method for detection of tannins. The collagen fiber adsorbent exhibited excellent adsorption selectivity to tannins. It was found that tannins of P. cuspidatum were completely removed, while nearly no adsorption of active components (resveratrol as representative) was found. Moreover, the collagen fiber adsorbent could be regenerated by using 30% ethanol-water solution and then reused. The collagen fiber adsorbent can be considered as a promising material for selective removal of tannins from P. cuspidatum extracts.
Posterior Predictive Bayesian Phylogenetic Model Selection
Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn
2014-01-01
We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892
NASA Astrophysics Data System (ADS)
Boda, Dezső; Giri, Janhavi; Henderson, Douglas; Eisenberg, Bob; Gillespie, Dirk
2011-02-01
The selectivity filter of the L-type calcium channel works as a Ca2+ binding site with a very large affinity for Ca2+ versus Na+. Ca2+ replaces half of the Na+ ions in the filter even when these ions are present in 1 μM and 30 mM concentrations in the bath, respectively. The energetics of this strong selectivity is analyzed in this paper. We use Widom's particle insertion method to compute the space-dependent profiles of excess chemical potential in our grand canonical Monte Carlo simulations. These profiles define the free-energy landscape for the various ions. Following Gillespie [Biophys. J. 94, 1169 (2008)], the difference of the excess chemical potentials for the two competing ions defines the advantage that one of the ions has over the other in the competition for space in the crowded selectivity filter. These advantages depend on ionic bath concentrations: the ion that is present in the bath in larger quantity (Na+) has the "number" advantage which is balanced by the free-energy advantage of the other ion (Ca2+). The excess chemical potentials are decomposed into hard sphere exclusion and electrostatic components. The electrostatic terms correspond to interactions with the mean electric field produced by ions and induced charges as well as to ionic correlations beyond the mean field description. Dielectrics are needed to produce micromolar Ca2+ versus Na+ selectivity in the L-type channel. We study the behavior of these terms with changes in bath concentrations of ions, charges, and diameters of ions, as well as geometrical parameters such as radius of the pore and the dielectric constant of the protein. Ion selectivity in calcium binding proteins probably has a similar mechanism.
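A hedged sketch of the quantities described above: the Widom test-particle estimate of the space-dependent excess chemical potential and the selectivity "advantage" as a difference of excess chemical potentials. The notation is assumed, not taken from the paper.

```latex
% mu_i^ex(r): excess chemical potential of species i at position r;
% Delta U_i(r): insertion energy of a test particle of species i.
\begin{align}
  \mu_i^{\mathrm{ex}}(\mathbf{r}) &= -k_{\mathrm{B}}T
      \ln \left\langle \exp\!\big(-\Delta U_i(\mathbf{r})/k_{\mathrm{B}}T\big) \right\rangle, \\
  \Delta\mu^{\mathrm{ex}}(\mathbf{r}) &= \mu_{\mathrm{Na}^{+}}^{\mathrm{ex}}(\mathbf{r})
      - \mu_{\mathrm{Ca}^{2+}}^{\mathrm{ex}}(\mathbf{r}).
\end{align}
```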
Boda, Dezso; Giri, Janhavi; Henderson, Douglas; Eisenberg, Bob; Gillespie, Dirk
2011-02-07
The selectivity filter of the L-type calcium channel works as a Ca(2+) binding site with a very large affinity for Ca(2+) versus Na(+). Ca(2+) replaces half of the Na(+) ions in the filter even when these ions are present in 1 μM and 30 mM concentrations in the bath, respectively. The energetics of this strong selectivity is analyzed in this paper. We use Widom's particle insertion method to compute the space-dependent profiles of excess chemical potential in our grand canonical Monte Carlo simulations. These profiles define the free-energy landscape for the various ions. Following Gillespie [Biophys. J. 94, 1169 (2008)], the difference of the excess chemical potentials for the two competing ions defines the advantage that one of the ions has over the other in the competition for space in the crowded selectivity filter. These advantages depend on ionic bath concentrations: the ion that is present in the bath in larger quantity (Na(+)) has the "number" advantage which is balanced by the free-energy advantage of the other ion (Ca(2+)). The excess chemical potentials are decomposed into hard sphere exclusion and electrostatic components. The electrostatic terms correspond to interactions with the mean electric field produced by ions and induced charges as well to ionic correlations beyond the mean field description. Dielectrics are needed to produce micromolar Ca(2+) versus Na(+) selectivity in the L-type channel. We study the behavior of these terms with changes in bath concentrations of ions, charges, and diameters of ions, as well as geometrical parameters such as radius of the pore and the dielectric constant of the protein. Ion selectivity in calcium binding proteins probably has a similar mechanism.
Gruen, Dieter M.; Young, Charles E.; Pellin, Michael J.
1989-01-01
A method and apparatus for extracting for quantitative analysis ions of selected atomic components of a sample. A lens system is configured to provide a slowly diminishing field region for a volume containing the selected atomic components, enabling accurate energy analysis of ions generated in the slowly diminishing field region. The lens system also enables focusing on a sample of a charged particle beam, such as an ion beam, along a path length perpendicular to the sample and extraction of the charged particles along a path length also perpendicular to the sample. Improvement of signal to noise ratio is achieved by laser excitation of ions to selected autoionization states before carrying out quantitative analysis. Accurate energy analysis of energetic charged particles is assured by using a preselected resistive thick film configuration disposed on an insulator substrate for generating predetermined electric field boundary conditions to achieve for analysis the required electric field potential. The spectrometer also is applicable in the fields of SIMS, ISS and electron spectroscopy.
Gruen, D.M.; Young, C.E.; Pellin, M.J.
1989-08-08
A method and apparatus are described for extracting for quantitative analysis ions of selected atomic components of a sample. A lens system is configured to provide a slowly diminishing field region for a volume containing the selected atomic components, enabling accurate energy analysis of ions generated in the slowly diminishing field region. The lens system also enables focusing on a sample of a charged particle beam, such as an ion beam, along a path length perpendicular to the sample and extraction of the charged particles along a path length also perpendicular to the sample. Improvement of signal to noise ratio is achieved by laser excitation of ions to selected auto-ionization states before carrying out quantitative analysis. Accurate energy analysis of energetic charged particles is assured by using a preselected resistive thick film configuration disposed on an insulator substrate for generating predetermined electric field boundary conditions to achieve for analysis the required electric field potential. The spectrometer also is applicable in the fields of SIMS, ISS and electron spectroscopy. 8 figs.
Correlation study between vibrational environmental and failure rates of civil helicopter components
NASA Technical Reports Server (NTRS)
Alaniz, O.
1979-01-01
An investigation of two selected helicopter types, namely, the Models 206A/B and 212, is reported. An analysis of the available vibration and reliability data for these two helicopter types resulted in the selection of ten components located in five different areas of the helicopter and consisting primarily of instruments, electrical components, and other noncritical flight hardware. The potential for advanced technology in suppressing vibration in helicopters was assessed. There are still several unknowns concerning both the vibration environment and the reliability of helicopter noncritical flight components. Vibration data for the selected components were either insufficient or inappropriate. The maintenance data examined for the selected components were inappropriate due to variations in failure mode identification, inconsistent reporting, or inaccurate information.
NASA Astrophysics Data System (ADS)
Lin, Z. D.; Wang, Y. B.; Wang, R. J.; Wang, L. S.; Lu, C. P.; Zhang, Z. Y.; Song, L. T.; Liu, Y.
2017-07-01
A total of 130 topsoil samples collected from Guoyang County, Anhui Province, China, were used to establish a Vis-NIR model for the prediction of organic matter content (OMC) in lime concretion black soils. Different spectral pretreatments were applied for minimizing the irrelevant and useless information of the spectra and increasing the spectra correlation with the measured values. Subsequently, the Kennard-Stone (KS) method and sample set partitioning based on joint x-y distances (SPXY) were used to select the training set. Successive projection algorithm (SPA) and genetic algorithm (GA) were then applied for wavelength optimization. Finally, the principal component regression (PCR) model was constructed, in which the optimal number of principal components was determined using the leave-one-out cross validation technique. The results show that the combination of the Savitzky-Golay (SG) filter for smoothing and multiplicative scatter correction (MSC) can eliminate the effect of noise and baseline drift; the SPXY method is preferable to KS in the sample selection; both the SPA and the GA can significantly reduce the number of wavelength variables and favorably increase the accuracy, especially GA, which greatly improved the prediction accuracy of soil OMC with Rcc, RMSEP, and RPD up to 0.9316, 0.2142, and 2.3195, respectively.
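A minimal sketch of the principal component regression step with the number of components chosen by leave-one-out cross-validation; the spectral pretreatments, sample-set partitioning, and SPA/GA wavelength selection described above are omitted, and the data and names are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline

def fit_pcr_loo(X, y, max_pcs=10):
    """PCR with the number of principal components chosen by leave-one-out RMSE."""
    best_k, best_rmse = 1, np.inf
    for k in range(1, max_pcs + 1):
        model = make_pipeline(PCA(n_components=k), LinearRegression())
        rmse = np.sqrt(np.mean((y - cross_val_predict(model, X, y, cv=LeaveOneOut())) ** 2))
        if rmse < best_rmse:
            best_k, best_rmse = k, rmse
    return make_pipeline(PCA(n_components=best_k), LinearRegression()).fit(X, y), best_k

X = np.random.rand(130, 400)                            # hypothetical pretreated spectra
y = X[:, 100:110].mean(axis=1) + 0.02 * np.random.randn(130)
model, k = fit_pcr_loo(X, y)
print("selected principal components:", k)
```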
Cross-validation to select Bayesian hierarchical models in phylogenetics.
Duchêne, Sebastián; Duchêne, David A; Di Giallonardo, Francesca; Eden, John-Sebastian; Geoghegan, Jemma L; Holt, Kathryn E; Ho, Simon Y W; Holmes, Edward C
2016-05-26
Recent developments in Bayesian phylogenetic models have increased the range of inferences that can be drawn from molecular sequence data. Accordingly, model selection has become an important component of phylogenetic analysis. Methods of model selection generally consider the likelihood of the data under the model in question. In the context of Bayesian phylogenetics, the most common approach involves estimating the marginal likelihood, which is typically done by integrating the likelihood across model parameters, weighted by the prior. Although this method is accurate, it is sensitive to the presence of improper priors. We explored an alternative approach based on cross-validation that is widely used in evolutionary analysis. This involves comparing models according to their predictive performance. We analysed simulated data and a range of viral and bacterial data sets using a cross-validation approach to compare a variety of molecular clock and demographic models. Our results show that cross-validation can be effective in distinguishing between strict- and relaxed-clock models and in identifying demographic models that allow growth in population size over time. In most of our empirical data analyses, the model selected using cross-validation was able to match that selected using marginal-likelihood estimation. The accuracy of cross-validation appears to improve with longer sequence data, particularly when distinguishing between relaxed-clock models. Cross-validation is a useful method for Bayesian phylogenetic model selection. This method can be readily implemented even when considering complex models where selecting an appropriate prior for all parameters may be difficult.
USDA-ARS?s Scientific Manuscript database
Selective principal component regression analysis (SPCR) uses a subset of the original image bands for principal component transformation and regression. For optimal band selection before the transformation, this paper used genetic algorithms (GA). In this case, the GA process used the regression co...
NASA Astrophysics Data System (ADS)
Wojciechowski, Adam
2017-04-01
In order to assess ecodiversity understood as a comprehensive natural landscape factor (Jedicke 2001), it is necessary to apply research methods which recognize the environment in a holistic way. Principal component analysis may be considered one such method, as it allows one to distinguish the main factors determining landscape diversity on the one hand, and enables one to discover regularities shaping the relationships between various elements of the environment under study on the other hand. The procedure adopted to assess ecodiversity with the use of principal component analysis involves: a) determining and selecting appropriate factors of the assessed environment qualities (hypsometric, geological, hydrographic, plant, and others); b) calculating the absolute value of individual qualities for the basic areas under analysis (e.g. river length, forest area, altitude differences, etc.); c) principal components analysis and obtaining factor maps (maps of selected components); d) generating a resultant, detailed map and isolating several classes of ecodiversity. An assessment of ecodiversity with the use of principal component analysis was conducted in a test area of 299.67 km2 in Debnica Kaszubska commune. The whole commune is situated in the Weichselian glaciation area of high hypsometric and morphological diversity as well as high geo- and biodiversity. The analysis was based on topographical maps of the commune area in scale 1:25000 and maps of forest habitats. Consequently, nine factors reflecting basic environment elements were calculated: maximum height (m), minimum height (m), average height (m), the length of watercourses (km), the area of water reservoirs (m2), total forest area (ha), coniferous forest habitats area (ha), deciduous forest habitats area (ha), alder habitats area (ha). The values for individual factors were analysed for 358 grid cells of 1 km2. Based on the principal components analysis, four major factors affecting commune ecodiversity were distinguished: hypsometric component (PC1), deciduous forest habitats component (PC2), river valleys and alder habitats component (PC3), and lakes component (PC4). The distinguished factors characterise natural qualities of the postglacial area and reflect well the role of the four most important groups of environment components in shaping ecodiversity of the area under study. The map of ecodiversity of Debnica Kaszubska commune was created on the basis of the first four principal component scores, and then five classes of diversity were isolated: very low, low, average, high and very high. As a result of the assessment, five commune regions of very high ecodiversity were separated. These regions are also very attractive for tourists and valuable in terms of their rich nature, which includes protected areas such as Slupia Valley Landscape Park. The suggested method of ecodiversity assessment with the use of principal component analysis may constitute an alternative methodological proposition to other research methods used so far. Literature: Jedicke E., 2001. Biodiversität, Geodiversität, Ökodiversität. Kriterien zur Analyse der Landschaftsstruktur - ein konzeptioneller Diskussionsbeitrag. Naturschutz und Landschaftsplanung, 33(2/3), 59-68.
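A minimal sketch of steps c) and d) of the procedure above: standardize the per-cell factors, extract principal components, and split a composite score into five diversity classes; the composite score (sum of leading component scores) and the quantile class boundaries are assumptions, not the study's exact rule.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def ecodiversity_classes(factors, n_components=4, n_classes=5):
    """factors: (n_cells, n_factors) matrix of environmental qualities per grid cell.
    Returns an integer class per cell (0 = very low ... 4 = very high)."""
    scores = PCA(n_components=n_components).fit_transform(
        StandardScaler().fit_transform(factors))
    composite = scores.sum(axis=1)                       # assumed composite score
    edges = np.quantile(composite, np.linspace(0, 1, n_classes + 1)[1:-1])
    return np.digitize(composite, edges)

grid = np.random.rand(358, 9)                            # hypothetical 358 cells x 9 factors
print(np.bincount(ecodiversity_classes(grid)))
```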
Random ensemble learning for EEG classification.
Hosseini, Mohammad-Parsa; Pompili, Dario; Elisevich, Kost; Soltanian-Zadeh, Hamid
2018-01-01
Real-time detection of seizure activity in epilepsy patients is critical in averting seizure activity and improving patients' quality of life. Accurate evaluation, presurgical assessment, seizure prevention, and emergency alerts all depend on the rapid detection of seizure onset. A new method of feature selection and classification for rapid and precise seizure detection is discussed wherein informative components of electroencephalogram (EEG)-derived data are extracted and an automatic method is presented using infinite independent component analysis (I-ICA) to select independent features. The feature space is divided into subspaces via random selection and multichannel support vector machines (SVMs) are used to classify these subspaces. The result of each classifier is then combined by majority voting to establish the final output. In addition, a random subspace ensemble using a combination of SVM, multilayer perceptron (MLP) neural network and an extended k-nearest neighbors (k-NN), called extended nearest neighbor (ENN), is developed for the EEG and electrocorticography (ECoG) big data problem. To evaluate the solution, a benchmark ECoG of eight patients with temporal and extratemporal epilepsy was implemented in a distributed computing framework as a multitier cloud-computing architecture. Using leave-one-out cross-validation, the accuracy, sensitivity, specificity, and both false positive and false negative ratios of the proposed method were found to be 0.97, 0.98, 0.96, 0.04, and 0.02, respectively. Application of the solution to cases under investigation with ECoG has also been effected to demonstrate its utility. Copyright © 2017 Elsevier B.V. All rights reserved.
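A minimal sketch of the random-subspace, majority-voting stage with SVM base classifiers; the I-ICA feature extraction, the MLP/ENN ensemble members, and the cloud architecture are not reproduced, and the data and names are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def random_subspace_svm(X, y, n_classifiers=10, subspace_size=20, seed=0):
    """Train SVMs on random feature subspaces; predict by majority vote."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_classifiers):
        idx = rng.choice(X.shape[1], size=subspace_size, replace=False)
        members.append((idx, SVC(kernel="rbf").fit(X[:, idx], y)))
    def predict(X_new):
        votes = np.stack([clf.predict(X_new[:, idx]) for idx, clf in members])
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
    return predict

rng = np.random.default_rng(1)
X, y = rng.normal(size=(300, 64)), rng.integers(0, 2, size=300)   # hypothetical features
predict = random_subspace_svm(X, y)
print("training accuracy:", (predict(X) == y).mean())
```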
Vanderhaeghe, F; Smolders, A J P; Roelofs, J G M; Hoffmann, M
2012-03-01
Selecting an appropriate variable subset in linear multivariate methods is an important methodological issue for ecologists. Interest often exists in obtaining general predictive capacity or in finding causal inferences from predictor variables. Because of a lack of solid knowledge on a studied phenomenon, scientists explore predictor variables in order to find the most meaningful (i.e. discriminating) ones. As an example, we modelled the response of the amphibious softwater plant Eleocharis multicaulis using canonical discriminant function analysis. We asked how variables can be selected through comparison of several methods: univariate Pearson chi-square screening, principal components analysis (PCA) and step-wise analysis, as well as combinations of some methods. We expected PCA to perform best. The selected methods were evaluated through fit and stability of the resulting discriminant functions and through correlations between these functions and the predictor variables. The chi-square subset, at P < 0.05, followed by a step-wise sub-selection, gave the best results. In contrast to expectations, PCA performed poorly, as did step-wise analysis. The different chi-square subset methods all yielded ecologically meaningful variables, while probable noise variables were also selected by PCA and step-wise analysis. We advise against the simple use of PCA or step-wise discriminant analysis to obtain an ecologically meaningful variable subset; the former because it does not take into account the response variable, the latter because noise variables are likely to be selected. We suggest that univariate screening techniques are a worthwhile alternative for variable selection in ecology. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.
ERIC Educational Resources Information Center
Piper, Martha K.
Thirty-six students enrolled in an elementary science methods course were randomly selected and given an instrument using Osgood's semantic differential approach the first week of class, the sixth week on campus prior to field experiences, and the thirteenth week following field experiences. The elementary teachers who had observed the university…
Solid lithium ion conducting electrolytes and methods of preparation
Narula, Chaitanya K; Daniel, Claus
2013-05-28
A composition comprised of nanoparticles of lithium ion conducting solid oxide material, wherein the solid oxide material is comprised of lithium ions, and at least one type of metal ion selected from pentavalent metal ions and trivalent lanthanide metal ions. Solution methods useful for synthesizing these solid oxide materials, as well as precursor solutions and components thereof, are also described. The solid oxide materials are incorporated as electrolytes into lithium ion batteries.
Solid lithium ion conducting electrolytes and methods of preparation
Narula, Chaitanya K.; Daniel, Claus
2015-11-19
A composition comprised of nanoparticles of lithium ion conducting solid oxide material, wherein the solid oxide material is comprised of lithium ions, and at least one type of metal ion selected from pentavalent metal ions and trivalent lanthanide metal ions. Solution methods useful for synthesizing these solid oxide materials, as well as precursor solutions and components thereof, are also described. The solid oxide materials are incorporated as electrolytes into lithium ion batteries.
Plugging micro-leaks in multi-component, ceramic tubesheets with material leached therefrom
Bieler, B.H.; Tsang, F.Y.
1985-03-19
Cracks, in ceramic wall members, on the order of 1 micron or less in width are plugged helium-tight by selectively leaching a component of the wall member with a solvent, letting the resultant leach form a liquid bridge within the crack, removing the solvent and sintering the resultant residue. This method is of particular value for remedying microcracks or channels in a cell member constituting a tubesheet in a hollow fiber type, high temperature battery cell, such as a sodium/sulfur cell, for example. 1 fig.
Plugging micro-leaks in multi-component, ceramic tubesheets with material leached therefrom
Bieler, Barrie H.; Tsang, Floris Y.
1985-03-19
Cracks, in ceramic wall members, on the order of 1 micron or less in width are plugged helium-tight by selectively leaching a component of the wall member with a solvent, letting the resultant leach form a liquid bridge within the crack, removing the solvent and sintering the resultant residue. This method is of particular value for remedying microcracks or channels in a cell member constituting a tubesheet in a hollow fiber type, high temperature battery cell, such as a sodium/sulfur cell, for example.
Newton, Paul; Chandler, Val; Morris-Thomson, Trish; Sayer, Jane; Burke, Linda
2015-01-01
To map current selection and recruitment processes for newly qualified nurses and to explore the advantages and limitations of current selection and recruitment processes. The need to improve current selection and recruitment practices for newly qualified nurses is highlighted in health policy internationally. A cross-sectional, sequential-explanatory mixed-method design with 4 components: (1) Literature review of selection and recruitment of newly qualified nurses; (2) Literature review of a public-sector profession's selection and recruitment processes; (3) Survey mapping existing selection and recruitment processes for newly qualified nurses; and (4) Qualitative study about recruiters' selection and recruitment processes. Literature searches on the selection and recruitment of newly qualified candidates in teaching and nursing (2005-2013) were conducted. Cross-sectional, mixed-method data were collected, using a survey instrument, from thirty-one (n = 31) individuals at health providers in London who had responsibility for the selection and recruitment of newly qualified nurses. Of these providers, six (n = 6) were purposively selected to be interviewed qualitatively. Issues of supply and demand in the workforce, rather than selection and recruitment tools, predominated in the literature reviews. Examples of tools to measure values, attitudes and skills were found in the nursing literature. The mapping exercise found that providers used many selection and recruitment tools; some providers combined tools to streamline the process and assure the quality of candidates. Most providers had processes which addressed the issue of quality in the selection and recruitment of newly qualified nurses. The 'assessment centre model', which providers were adopting, allowed for multiple levels of assessment and streamlined recruitment. There is a need to validate the efficacy of the selection tools. © 2014 John Wiley & Sons Ltd.
Ding, Jun; Xiao, Hua-Ming; Liu, Simin; Wang, Chang; Liu, Xin; Feng, Yu-Qi
2018-10-05
Although several methods have realized the analysis of low molecular weight (LMW) compounds using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) by overcoming the problem of interference with MS signals in the low mass region derived from conventional organic matrices, this emerging field still requires strategies to address the issue of analyzing complex samples containing LMW components in addition to the LMW compounds of interest, and solve the problem of lack of universality. The present study proposes an integrated strategy that combines chemical labeling with the supramolecular chemistry of cucurbit[n]uril (CB[n]) for the MALDI MS analysis of LMW compounds in complex samples. In this strategy, the target LMW compounds are first labeled by introducing a series of bifunctional reagents that selectively react with the target analytes and also form stable inclusion complexes with CB[n]. Then, the labeled products act as guest molecules that readily and selectively form stable inclusion complexes with CB[n]. This strategy relocates the MS signals of the LMW compounds of interest from the low mass region suffering high interference to the high mass region where interference with low mass components is absent. Experimental results demonstrate that a wide range of LMW compounds, including carboxylic acids, aldehydes, amines, thiols, and cis-diols, can be successfully detected using the proposed strategy, and the limits of detection were in the range of 0.01-1.76 nmol/mL. In addition, the high selectivity of the labeling reagents for the target analytes in conjunction with the high selectivity of the binding between the labeled products and CB[n] ensures an absence of signal interference with the non-targeted LMW components of complex samples. Finally, the feasibility of the proposed strategy for complex sample analysis is demonstrated by the accurate and rapid quantitative analysis of aldehydes in saliva and herbal medicines. As such, this work not only provides an alternative method for the detection of various LMW compounds using MALDI MS, but also can be applied to the selective and high-throughput analysis of LMW analytes in complex samples. Copyright © 2018 Elsevier B.V. All rights reserved.
Development of an alkaline fuel cell subsystem
NASA Technical Reports Server (NTRS)
1987-01-01
A two-task program was initiated to develop advanced fuel cell components which could be assembled into an alkaline power section for the Space Station Prototype (SSP) fuel cell subsystem. The first task was to establish a preliminary SSP power section design to be representative of the 200 cell Space Station power section. The second task was to conduct tooling and fabrication trials and fabrication of selected cell stack components. A lightweight, reliable cell stack design suitable for the SSP regenerative fuel cell power plant was completed. The design meets NASA's preliminary requirements for future multikilowatt Space Station missions. Cell stack component fabrication and tooling trials demonstrated that cell components of the SSP stack design, with a 1.0 sq ft area, can be manufactured using techniques and methods previously evaluated and developed.
Fitzpatrick, John L; Simmons, Leigh W; Evans, Jonathan P
2012-08-01
Assessing how selection operates on several, potentially interacting, components of the ejaculate is a challenging endeavor. Ejaculates can be subject to natural and/or sexual selection, which can impose both linear (directional) and nonlinear (stabilizing, disruptive, and correlational) selection on different ejaculate components. Most previous studies have examined linear selection of ejaculate components and, consequently, we know very little about patterns of nonlinear selection on the ejaculate. Even less is known about how selection acts on the ejaculate as a functionally integrated unit, despite evidence of covariance among ejaculate components. Here, we assess how selection acts on multiple ejaculate components simultaneously in the broadcast spawning sessile invertebrate Mytilus galloprovincialis using the statistical tools of multivariate selection analyses. Our analyses of relative fertilization rates revealed complex patterns of selection on sperm velocity, motility, and morphology. Interestingly, the most successful ejaculates were made up of slower swimming sperm with relatively low percentages of motile cells, and sperm with smaller head volumes that swam in highly pronounced curved swimming trajectories. These results are consistent with an emerging body of literature on fertilization kinetics in broadcast spawners, and shed light on the fundamental nature of selection acting on the ejaculate as a functionally integrated unit. © 2012 The Author(s). Evolution© 2012 The Society for the Study of Evolution.
Robust estimation for partially linear models with large-dimensional covariates
Zhu, LiPing; Li, RunZe; Cui, HengJian
2014-01-01
We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of linear component performs asymptotically as well as its oracle counterpart which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures. PMID:24955087
Robust estimation for partially linear models with large-dimensional covariates.
Zhu, LiPing; Li, RunZe; Cui, HengJian
2013-10-01
We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of linear component performs asymptotically as well as its oracle counterpart which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures.
A method for evaluating the funding of components of natural resource and conservation projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wellington, John F., E-mail: welling@ipfw.edu; Lewis, Stephen A., E-mail: lewis.sa07@gmail.com
Many public and private entities such as government agencies and private foundations have missions related to the improvement, protection, and sustainability of the environment. In pursuit of their missions, they fund projects with related outcomes. Typically, the funding scene consists of scarce funding dollars for the many project requests. In light of funding limitations and funder's search for innovative funding schemes, a method to support the allocation of scarce dollars among project components is presented. The proposed scheme has similarities to methods in the project selection literature but differs in its focus on project components and its connection to, and enumeration of, the universe of funding possibilities. The value of having access to the universe is demonstrated with illustrations. The presentation includes Excel implementations that should appeal to a broad spectrum of project evaluators and reviewers. Access to the space of funding possibilities facilitates a rich analysis of funding alternatives. - Highlights: • Method is given for allocating scarce funding dollars among competing projects. • Allocations are made to fund parts of projects. • Proposed method provides access to the universe of funding possibilities. • Proposed method facilitates a rich analysis of funding possibilities. • Excel spreadsheet implementations are provided.
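The enumeration of the "universe of funding possibilities" can be illustrated with a small Python sketch (the authors' implementation is in Excel; the component costs and budget below are hypothetical): every subset of project components whose total request fits within the available funds is listed.

```python
from itertools import combinations

# Hypothetical project components and their funding requests (thousands of dollars).
components = {"A": 40, "B": 25, "C": 60, "D": 15}
budget = 80

# Enumerate the universe of feasible funding combinations.
feasible = []
for r in range(1, len(components) + 1):
    for combo in combinations(components, r):
        cost = sum(components[c] for c in combo)
        if cost <= budget:
            feasible.append((combo, cost))

# List the feasible combinations, most expensive first.
for combo, cost in sorted(feasible, key=lambda item: -item[1]):
    print(combo, cost)
```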
High-Performance Composite Chocolate
ERIC Educational Resources Information Center
Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell
2013-01-01
The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with…
Fine phenotyping of pod and seed traits in Arachis germplasm accessions using digital image analysis
USDA-ARS?s Scientific Manuscript database
Reliable and objective phenotyping of peanut pod and seed traits is important for cultivar selection and genetic mapping of yield components. To develop useful and efficient methods to quantitatively define peanut pod and seed traits, a group of peanut germplasm with high levels of phenotypic varia...
Fluid technology (selected components, devices, and systems): A compilation
NASA Technical Reports Server (NTRS)
1974-01-01
Developments in fluid technology and hydraulic equipment are presented. The subjects considered are: (1) the use of fluids in the operation of switches, amplifiers, and servo devices, (2) devices and data for laboratory use in the study of fluid dynamics, and (3) the use of fluids as controls and certain methods of controlling fluids.
Health Teachers' Perceptions and Teaching Practices Regarding Hearing Loss Conservation
ERIC Educational Resources Information Center
Thompson, Amy; Pakulski, Lori; Price, James; Kleinfelder, Joanne
2013-01-01
Background: Limited research has examined the role of school health personnel in the prevention and early identification of hearing impairment. Purpose: This study assessed high school health teachers' perceptions and teaching practices regarding hearing loss conservation. Methods: A 26-item survey based on selected components of the health…
Comparative Effects of Seven Verbal-Visual Presentation Modes Upon Learning Tasks.
ERIC Educational Resources Information Center
Russell, Josiah Johnson, IV
A study was made of the comparative media effects upon teaching the component learning tasks of concept learning: classification, generalization, and application. The seven selected methods of presenting stimuli to the learners were: motion pictures with spoken verbal; motion pictures, silent; still pictures with spoken verbal; still pictures,…
ERIC Educational Resources Information Center
Rinehart, Steven D.; Ahern, Terence C.
2016-01-01
Computer applications related to reading instruction have become commonplace in schools and link with established components of the reading process, emergent skills, decoding, comprehension, vocabulary, and fluency. This article focuses on computer technology in conjunction with durable methods for building oral reading fluency when readers…
Method for inhibiting alkali metal corrosion of nickel-containing alloys
DeVan, Jackson H.; Selle, James E.
1983-01-01
Structural components of nickel-containing alloys within molten alkali metal systems are protected against corrosion during the course of service by dissolving therein sufficient aluminum, silicon, or manganese to cause the formation and maintenance of a corrosion-resistant intermetallic reaction layer created by the interaction of the molten metal, selected metal, and alloy.
Synthesis of triazole based unnatural amino acids and β-amino triazole has been described via stereo and regioselective one-pot multi-component reaction of sulfamidates, sodium azide, and alkynes under MW conditions. The developed method is applicable to a broad substrate scope a...
Apparatus For Tests Of Percussion Primers
NASA Technical Reports Server (NTRS)
Bement, Laurence J.; Bailey, James W.; Schimmel, Morry L.
1991-01-01
Test apparatus and method developed to measure ignition capability of percussion primers. Closely simulates actual conditions and interfaces encountered in such applications as in munitions and rocket motors. Ignitability-testing apparatus is small bomb instrumented with pressure transducers. Sizes, shapes, and positions of bomb components and materials under test selected to obtain quantitative data on ignition.
NASA Astrophysics Data System (ADS)
Díaz-Ayil, Gilberto; Amouroux, Marine; Clanché, Fabien; Granjon, Yves; Blondel, Walter C. P. M.
2009-07-01
Spatially-resolved bimodal spectroscopy (multiple AutoFluorescence AF excitation and Diffuse Reflectance DR) was used in vivo to discriminate various healthy and precancerous skin stages in a pre-clinical model (UV-irradiated mouse): Compensatory Hyperplasia CH, Atypical Hyperplasia AH and Dysplasia D. A specific data preprocessing scheme was applied to intensity spectra (filtering, spectral correction and intensity normalization), and several sets of spectral characteristics were automatically extracted and selected based on their discrimination power, statistically tested for every pair-wise comparison of histological classes. Data reduction with Principal Components Analysis (PCA) was performed and 3 classification methods were implemented (k-NN, LDA and SVM), in order to compare the diagnostic performance of each method. Diagnostic performance was assessed in terms of Sensitivity (Se) and Specificity (Sp) as a function of the selected features, of the combinations of 3 different inter-fibre distances and of the numbers of principal components, such that: Se and Sp ~ 100% when discriminating CH vs. others; Sp ~ 100% and Se > 95% when discriminating Healthy vs. AH or D; Sp ~ 74% and Se ~ 63% for AH vs. D.
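A minimal Python sketch of the classification stage described above (PCA reduction followed by k-NN, LDA, and SVM, scored by sensitivity and specificity) is given below; the synthetic two-class "spectra", the number of retained components, and the classifier settings are assumptions, not the study's actual data or parameters.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
# Synthetic stand-in for preprocessed AF/DR intensity spectra (two classes).
X = rng.normal(size=(300, 120))
y = rng.integers(0, 2, size=300)
X[y == 1, :20] += 0.8                        # class-dependent spectral shift

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
pca = PCA(n_components=10).fit(X_tr)         # assumed number of components
Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)

for name, clf in [("k-NN", KNeighborsClassifier(5)),
                  ("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf"))]:
    y_hat = clf.fit(Z_tr, y_tr).predict(Z_te)
    tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
    print(f"{name}: Se={tp / (tp + fn):.2f} Sp={tn / (tn + fp):.2f}")
```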
Kharroubi, Adel; Gargouri, Dorra; Baati, Houda; Azri, Chafai
2012-06-01
Concentrations of selected heavy metals (Cd, Pb, Zn, Cu, Mn, and Fe) in surface sediments from 66 sites in both the northern and eastern Mediterranean Sea-Boughrara lagoon exchange areas (southeastern Tunisia) were studied in order to understand current metal contamination due to the urbanization and economic development of several nearby coastal regions of the Gulf of Gabès. Multiple approaches were applied for the sediment quality assessment. These approaches were based on GIS coupled with chemometric methods (enrichment factors, geoaccumulation index, principal component analysis, and cluster analysis). Enrichment factors and principal component analysis revealed two distinct groups of metals. The first group corresponded to Fe and Mn derived from natural sources, and the second group contained Cd, Pb, Zn, and Cu, which originated from man-made sources. For these latter metals, cluster analysis showed two distinct distributions in the selected areas. They were attributed to temporal and spatial variations of contaminant source inputs. The geoaccumulation index (Igeo) values indicated that only Cd, Pb, and Cu can be considered moderate to extreme pollutants in the studied sediments.
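The two single-metal indices mentioned above are commonly computed as in the following sketch. The formulas used are the widely adopted ones, Igeo = log2(Cn / (1.5·Bn)) and an Fe-normalized enrichment factor; the concentrations are hypothetical and the choice of Fe as the normalizing element is an assumption, not necessarily the study's choice.

```python
import math

def igeo(c_sample, c_background):
    """Geoaccumulation index: Igeo = log2(Cn / (1.5 * Bn))."""
    return math.log2(c_sample / (1.5 * c_background))

def enrichment_factor(c_metal, c_fe, bg_metal, bg_fe):
    """EF = (Cmetal/CFe)_sample / (Cmetal/CFe)_background, with Fe as the normalizer."""
    return (c_metal / c_fe) / (bg_metal / bg_fe)

# Hypothetical Pb concentrations (mg/kg) in a sediment sample and the local background.
print(round(igeo(95.0, 20.0), 2))
print(round(enrichment_factor(95.0, 30000.0, 20.0, 35000.0), 2))
```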
NASA Astrophysics Data System (ADS)
Zhang, Xuefeng; Liu, Bo; Wang, Jieqiong; Zhang, Zhe; Shi, Kaibo; Wu, Shuanglin
2014-08-01
Commonly used petrological quantification methods are visual estimation, counting, and image analyses. However, in this article, an Adobe Photoshop-based analyzing method (PSQ) is recommended for quantifying rock textural data and porosities. The Adobe Photoshop system provides versatile abilities for selecting an area of interest, and the pixel number of a selection can be read and used to calculate its area percentage. Therefore, Adobe Photoshop could be used to rapidly quantify textural components, such as the content of grains, cements, and porosities, including total porosities and different genetic types of porosity. This method was named Adobe Photoshop Quantification (PSQ). The workflow of the PSQ method was introduced using oolitic dolomite samples from the Triassic Feixianguan Formation, Northeastern Sichuan Basin, China, as an example. The method was then tested by comparison with Folk's and Shvetsov's "standard" diagrams. In both cases, there is a close agreement between the "standard" percentages and those determined by the PSQ method, with very small counting and operator errors, small standard deviations, and high confidence levels. The porosities quantified by PSQ were evaluated against those determined by the whole-rock helium gas expansion method to test the specimen errors. Results have shown that the porosities quantified by the PSQ are well correlated to the porosities determined by the conventional helium gas expansion method. Generally small discrepancies (mostly ranging from -3% to 3%) are caused by microporosities, which would cause a systematic underestimation of 2%, and/or by macroporosities causing underestimation or overestimation in different cases. Adobe Photoshop could be used to quantify rock textural components and porosities. This method has been tested to be precise and accurate. It is time-saving compared with the usual methods.
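The core of the PSQ calculation, converting the pixel count of a selection into an area percentage, reduces to a simple ratio; the sketch below uses a random boolean mask as a stand-in for a Photoshop selection.

```python
import numpy as np

rng = np.random.default_rng(2)
# Boolean mask standing in for a Photoshop selection of, e.g., pore space
# in a thin-section photomicrograph (True = selected pixel).
selection = rng.random((1024, 1024)) < 0.08

area_percent = 100.0 * selection.sum() / selection.size
print(f"selected component: {area_percent:.2f}% of the image area")
```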
Diffusion barriers in modified air brazes
Weil, Kenneth Scott; Hardy, John S; Kim, Jin Yong; Choi, Jung-Pyung
2013-04-23
A method for joining two ceramic parts, or a ceramic part and a metal part, and the joint formed thereby. The method provides two or more parts, a braze consisting of a mixture of copper oxide and silver, a diffusion barrier, and then heats the braze for a time and at a temperature sufficient to form the braze into a bond holding the two or more parts together. The diffusion barrier is an oxidizable metal that forms either a homogeneous component of the braze, a heterogeneous component of the braze, a separate layer bordering the braze, or combinations thereof. The oxidizable metal is selected from the group Al, Mg, Cr, Si, Ni, Co, Mn, Ti, Zr, Hf, Pt, Pd, Au, lanthanides, and combinations thereof.
Diffusion barriers in modified air brazes
Weil, Kenneth Scott [Richland, WA; Hardy, John S [Richland, WA; Kim, Jin Yong [Richland, WA; Choi, Jung-Pyung [Richland, WA
2010-04-06
A method for joining two ceramic parts, or a ceramic part and a metal part, and the joint formed thereby. The method provides two or more parts, a braze consisting of a mixture of copper oxide and silver, a diffusion barrier, and then heats the braze for a time and at a temperature sufficient to form the braze into a bond holding the two or more parts together. The diffusion barrier is an oxidizable metal that forms either a homogeneous component of the braze, a heterogeneous component of the braze, a separate layer bordering the braze, or combinations thereof. The oxidizable metal is selected from the group Al, Mg, Cr, Si, Ni, Co, Mn, Ti, Zr, Hf, Pt, Pd, Au, lanthanides, and combinations thereof.
Improving human activity recognition and its application in early stroke diagnosis.
Villar, José R; González, Silvia; Sedano, Javier; Chira, Camelia; Trejo-Gabriel-Galan, Jose M
2015-06-01
The development of efficient stroke-detection methods is of significant importance in today's society due to the effects and impact of stroke on health and economy worldwide. This study focuses on Human Activity Recognition (HAR), which is a key component in developing an early stroke-diagnosis tool. An overview of the proposed global approach able to discriminate normal resting from stroke-related paralysis is detailed. The main contributions include an extension of the Genetic Fuzzy Finite State Machine (GFFSM) method and a new hybrid feature selection (FS) algorithm involving Principal Component Analysis (PCA) and a voting scheme putting the cross-validation results together. Experimental results show that the proposed approach is a well-performing HAR tool that can be successfully embedded in devices.
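One plausible reading of the hybrid feature-selection step (PCA combined with a voting scheme that puts cross-validation results together) is sketched below in Python; the data, the PCA-loading ranking criterion, and the number of retained features are assumptions rather than the paper's exact algorithm.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import KFold

rng = np.random.default_rng(3)
X = rng.normal(size=(240, 30))              # stand-in for accelerometry-derived features
n_keep = 8                                  # assumed number of features to retain

votes = np.zeros(X.shape[1], dtype=int)
for train_idx, _ in KFold(n_splits=5, shuffle=True, random_state=3).split(X):
    pca = PCA(n_components=3).fit(X[train_idx])
    # Rank features by their absolute loading on the leading components.
    scores = np.abs(pca.components_).sum(axis=0)
    votes[np.argsort(scores)[::-1][:n_keep]] += 1

selected = np.argsort(votes)[::-1][:n_keep]
print("features selected by cross-validation voting:", sorted(selected.tolist()))
```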
Modality-specificity of Selective Attention Networks
Stewart, Hannah J.; Amitay, Sygal
2015-01-01
Objective: To establish the modality specificity and generality of selective attention networks. Method: Forty-eight young adults completed a battery of four auditory and visual selective attention tests based upon the Attention Network framework: the visual and auditory Attention Network Tests (vANT, aANT), the Test of Everyday Attention (TEA), and the Test of Attention in Listening (TAiL). These provided independent measures for auditory and visual alerting, orienting, and conflict resolution networks. The measures were subjected to an exploratory factor analysis to assess underlying attention constructs. Results: The analysis yielded a four-component solution. The first component comprised a range of measures from the TEA and was labeled “general attention.” The third component was labeled “auditory attention,” as it only contained measures from the TAiL using pitch as the attended stimulus feature. The second and fourth components were labeled as “spatial orienting” and “spatial conflict,” respectively—they were comprised of orienting and conflict resolution measures from the vANT, aANT, and TAiL attend-location task—all tasks based upon spatial judgments (e.g., the direction of a target arrow or sound location). Conclusions: These results do not support our a priori hypothesis that attention networks are either modality specific or supramodal. Auditory attention separated into selectively attending to spatial and non-spatial features, with the auditory spatial attention loading onto the same factor as visual spatial attention, suggesting spatial attention is supramodal. However, since our study did not include a non-spatial measure of visual attention, further research will be required to ascertain whether non-spatial attention is modality-specific. PMID:26635709
Significant components of service brand equity in healthcare sector.
Chahal, Hardeep; Bala, Madhu
2012-01-01
The purpose of the study is to examine three significant components of service brand equity--i.e. perceived service quality, brand loyalty, and brand image--and analyze relationships among the components of brand equity and also their relationship with brand equity, which is still to be theorized and developed in the healthcare literature. Effective responses were received from 206 respondents, selected conveniently from the localities of Jammu city. After scale item analysis, the data were analyzed using factor analysis, correlations, t-tests, multiple regression analysis and path modeling using SEM. The findings of the study support that service brand equity in the healthcare sector is greatly influenced by brand loyalty and perceived quality. However, brand image has an indirect effect on service brand equity through brand loyalty (mediating variable). The research can be criticized on the ground that data were selected conveniently from respondents residing in the city of Jammu, India. But at the same time the respondents were appropriate for the study as they have adequate knowledge about the hospitals, and were associated with the selected hospital for more than four years. Furthermore, the validity and reliability of the data are strong enough to take care of the limitations of the convenience sampling selection method. The study has unique value addition to the service marketing vis-à-vis healthcare literature, from both theoretical and managerial perspectives. The study establishes a direct and significant relationship between service brand equity and its two components, i.e. perceived service quality and brand loyalty in the healthcare sector. It also provides directions to healthcare service providers in creating, enhancing, and maintaining service brand equity through service quality and brand loyalty, to sustain competitive advantage.
Campanelli, Sabina L.; Contuzzi, Nicola; Ludovico, Antonio D.; Caiazzo, Fabrizia; Cardaropoli, Francesco; Sergi, Vincenzo
2014-01-01
The paper investigates the fabrication of Selective Laser Melting (SLM) titanium alloy Ti6Al4V micro-lattice structures for the production of lightweight components. Specifically, the pillar textile unit cell is used as base lattice structure and alternative lattice topologies including reinforcing vertical bars are also considered. Detailed characterizations of dimensional accuracy, surface roughness, and micro-hardness are performed. In addition, compression tests are carried out in order to evaluate the mechanical strength and the energy absorbed per unit mass of the lattice truss specimens made by SLM. The built structures have a relative density ranging between 0.2234 and 0.5822. An optimization procedure is implemented via the method of Taguchi to identify the optimal geometric configuration which maximizes peak strength and energy absorbed per unit mass. PMID:28788707
High efficiency direct detection of ions from resonance ionization of sputtered atoms
Gruen, Dieter M.; Pellin, Michael J.; Young, Charles E.
1986-01-01
A method and apparatus are provided for trace and other quantitative analysis with high efficiency of a component in a sample, with the analysis involving the removal by ion or other bombardment of a small quantity of ion and neutral atom groups from the sample, the conversion of selected neutral atom groups to photoions by laser initiated resonance ionization spectroscopy, the selective deflection of the photoions for separation from original ion group emanating from the sample, and the detection of the photoions as a measure of the quantity of the component. In some embodiments, the original ion group is accelerated prior to the RIS step for separation purposes. Noise and other interference are reduced by shielding the detector from primary and secondary ions and deflecting the photoions sufficiently to avoid the primary and secondary ions.
High efficiency direct detection of ions from resonance ionization of sputtered atoms
Gruen, D.M.; Pellin, M.J.; Young, C.E.
1985-01-16
A method and apparatus are provided for trace and other quantitative analysis with high efficiency of a component in a sample, with the analysis involving the removal by ion or other bombardment of a small quantity of ion and neutral atom groups from the sample, the conversion of selected neutral atom groups to photoions by laser initiated resonance ionization spectroscopy, the selective deflection of the photoions for separation from original ion group emanating from the sample, and the detection of the photoions as a measure of the quantity of the component. In some embodiments, the original ion group is accelerated prior to the RIS step for separation purposes. Noise and other interference are reduced by shielding the detector from primary and secondary ions and deflecting the photoions sufficiently to avoid the primary and secondary ions.
Additive Manufacturing Technology for Biomedical Components: A review
NASA Astrophysics Data System (ADS)
Aimi Zaharin, Haizum; Rani, Ahmad Majdi Abdul; Lenggo Ginta, Turnad; Azam, Farooq I.
2018-03-01
Over the last decades, additive manufacturing has shown potential applications in a wide range of fields. No longer merely a prototyping technology, it is now being utilised as a manufacturing technology in major industries such as the automotive and aircraft industries and, more recently, in the medical industry. It is a very successful method that provides health-care solutions in the biomedical sector by producing patient-specific prosthetics, improving tissue engineering and facilitating pre-operative sessions. This paper thus presents a brief overview of the most commercially important additive manufacturing technologies currently available for fabricating biomedical components, namely Stereolithography (SLA), Selective Laser Sintering (SLS), Selective Laser Melting (SLM), Fused Deposition Modelling (FDM) and Electron Beam Melting (EBM). It introduces the basic principles of the main processes, highlights some of the beneficial applications in the medical industry, and notes the current limitations of the applied technologies.
Ultrashort Echo Time and Zero Echo Time MRI at 7T
Larson, Peder E. Z.; Han, Misung; Krug, Roland; Jakary, Angela; Nelson, Sarah J.; Vigneron, Daniel B.; Henry, Roland G.; McKinnon, Graeme; Kelley, Douglas A. C.
2016-01-01
Object Zero echo time (ZTE) and ultrashort echo time (UTE) pulse sequences for MRI offer unique advantages of being able to detect signal from rapidly decaying short-T2 tissue components. In this paper, we applied 3D zero echo time (ZTE) and ultrashort echo time (UTE) pulse sequences at 7T to assess differences between these methods. Materials and Methods We matched the ZTE and UTE pulse sequences closely in terms of readout trajectories and image contrast. Our ZTE used the water- and fat-suppressed solid-state proton projection imaging (WASPI) method to fill the center of k-space. Images from healthy volunteers obtained at 7T were compared qualitatively as well as with SNR and CNR measurements for various ultrashort, short, and long-T2 tissues. Results We measured nearly identical contrast-to-noise and signal-to-noise ratios (CNR/SNR) in similar scan times between the two approaches for ultrashort, short, and long-T2 components in the brain, knee and ankle. In our protocol, we observed gradient fidelity artifacts in UTE, while our chosen flip angle and readout resulted in shading artifacts in ZTE due to inadvertent spatial selectivity. These can be corrected by advanced reconstruction methods or with different chosen protocol parameters. Conclusion The applied ZTE and UTE pulse sequences achieved similar contrast and SNR efficiency for volumetric imaging of ultrashort-T2 components. Several key differences are that ZTE is limited to volumetric imaging but has substantially reduced acoustic noise levels during the scan. Meanwhile, UTE has higher acoustic noise levels and greater sensitivity to gradient fidelity, but offers more flexibility in image contrast and volume selection. PMID:26702940
Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao
2012-01-01
Biologically-inspired models and algorithms are considered as promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model with the increase of the dimensions of input feature vector (outer factor) as well as its parallel channels (inner factor). The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets of three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China were used for experiments. In the former case the results showed that the average correct classification rate increased as more principal components were put in to feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We concluded that 6∼8 channels of the model with principal component feature vector values of at least 90% cumulative variance is adequate for a classification task of 3∼5 pattern classes considering the trade-off between time consumption and classification rate. PMID:22736979
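The 90% cumulative-variance rule used above to size the principal-component feature vector can be implemented as follows; the sensor-array data here are a synthetic stand-in for the wine and tea measurements.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
# Stand-in for sensor-array responses (samples x sensors), made correlated on purpose.
X = rng.normal(size=(150, 16)) @ rng.normal(size=(16, 16))

pca = PCA().fit(X)
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_components = int(np.searchsorted(cum_var, 0.90) + 1)
print("components needed for >= 90% cumulative variance:", n_components)

# Build the reduced feature vectors fed to the olfactory model.
features = PCA(n_components=n_components).fit_transform(X)
print("reduced feature vector shape:", features.shape)
```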
Plante, Elena; Doubleday, Kevin
2017-01-01
Purpose The first goal of this research was to compare verbal and nonverbal executive function abilities between preschoolers with and without specific language impairment (SLI). The second goal was to assess the group differences on 4 executive function components in order to determine if the components may be hierarchically related as suggested within a developmental integrative framework of executive function. Method This study included 26 4- and 5-year-olds diagnosed with SLI and 26 typically developing age- and sex-matched peers. Participants were tested on verbal and nonverbal measures of sustained selective attention, working memory, inhibition, and shifting. Results The SLI group performed worse compared with typically developing children on both verbal and nonverbal measures of sustained selective attention and working memory, the verbal inhibition task, and the nonverbal shifting task. Comparisons of standardized group differences between executive function measures revealed a linear increase with the following order: working memory, inhibition, shifting, and sustained selective attention. Conclusion The pattern of results suggests that preschoolers with SLI have deficits in executive functioning compared with typical peers, and deficits are not limited to verbal tasks. A significant linear relationship between group differences across executive function components supports the possibility of a hierarchical relationship between executive function skills. PMID:28724132
Improved Small Baseline processing by means of CAESAR eigen-interferograms decomposition
NASA Astrophysics Data System (ADS)
Verde, Simona; Reale, Diego; Pauciullo, Antonio; Fornaro, Gianfranco
2018-05-01
The Component extrAction and sElection SAR (CAESAR) is a method for the selection and filtering of scattering mechanisms recently proposed in the multibaseline interferometric SAR framework. Its strength lies in the possibility of selecting and extracting multiple dominant scattering mechanisms, even when they interfere in the same pixel, from the stage of interferogram generation, and of carrying out decorrelation noise phase filtering. Up to now, the validation of CAESAR has been addressed in the framework of SAR Tomography for the model-based detection of Persistent Scatterers (PSs). In this paper we investigate the effectiveness of using CAESAR eigen-interferograms in classical multi-baseline DInSAR processing based on the Small BAseline Subset (SBAS) strategy, typically adopted to extract large-scale distributed deformation and the atmospheric phase screen. Such components are also exploited for the calibration of the full resolution data for PS or tomographic analysis. Using COSMO-SkyMed (CSK) SAR data, it is demonstrated that dominant scattering component filtering effectively improves the monitoring of distributed, spatially decorrelated areas (e.g. bare soil, rocks) and allows bringing to light man-made structures with dominant backscattering characteristics embedded in highly temporally decorrelated scenarios, such as isolated asphalt roads and blocks of buildings in non-urban areas. Moreover it is shown that, thanks to the CAESAR separation of multiple scattering components, the layover mitigation in low-topography eigen-interferograms relieves Phase Unwrapping (PhU) errors in urban areas due to abrupt height variations.
The Researches on Damage Detection Method for Truss Structures
NASA Astrophysics Data System (ADS)
Wang, Meng Hong; Cao, Xiao Nan
2018-06-01
This paper presents an effective method to detect damage in truss structures. Numerical simulation and experimental analysis were carried out on a damaged truss structure under instantaneous excitation. The ideal excitation point and appropriate hammering method were determined to extract time domain signals under two working conditions. The frequency response function and principal component analysis were used for data processing, and the angle between the frequency response function vectors was selected as a damage index to ascertain the location of a damaged bar in the truss structure. In the numerical simulation, the time domain signal of all nodes was extracted to determine the location of the damaged bar. In the experimental analysis, the time domain signal of a portion of the nodes was extracted on the basis of an optimal sensor placement method based on the node strain energy coefficient. The results of the numerical simulation and experimental analysis showed that the damage detection method based on the frequency response function and principal component analysis could locate the damaged bar accurately.
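The damage index described above, the angle between a reference and a current frequency response function (FRF) vector, can be computed as in this short sketch; the FRF vectors are synthetic and the use of magnitude spectra is an assumption.

```python
import numpy as np

def frf_angle(frf_ref, frf_cur):
    """Angle (radians) between two frequency response function vectors."""
    a = np.abs(np.asarray(frf_ref)).ravel()
    b = np.abs(np.asarray(frf_cur)).ravel()
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

rng = np.random.default_rng(5)
frf_intact = rng.random(512)                       # reference (undamaged) FRF magnitudes
frf_damaged = frf_intact + 0.3 * rng.random(512)   # perturbed FRF from a damaged bar

print(f"damage index (angle): {frf_angle(frf_intact, frf_damaged):.4f} rad")
```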
Li, Ke; Liu, Yi; Wang, Quanxin; Wu, Yalei; Song, Shimin; Sun, Yi; Liu, Tengchong; Wang, Jun; Li, Yang; Du, Shaoyi
2015-01-01
This paper proposes a novel multi-label classification method for resolving spacecraft electrical characteristics problems, which involve the processing of large amounts of unlabeled test data, high-dimensional features, long computing times and slow identification rates. Firstly, both the fuzzy c-means (FCM) offline clustering and the principal component feature extraction algorithms are applied for the feature selection process. Secondly, the approximate weighted proximal support vector machine (WPSVM) online classification algorithm is used to reduce the feature dimension and further improve the recognition rate for spacecraft electrical characteristics. Finally, a data capture contribution method using thresholds is proposed to guarantee the validity and consistency of the data selection. The experimental results indicate that the proposed method can obtain better data features of the spacecraft electrical characteristics, improve the accuracy of identification and effectively shorten the computing time. PMID:26544549
NASA Astrophysics Data System (ADS)
Yan, Hong; Song, Xiangzhong; Tian, Kuangda; Chen, Yilin; Xiong, Yanmei; Min, Shungeng
2018-02-01
A novel method, mid-infrared (MIR) spectroscopy, which enables the determination of Chlorantraniliprole in Abamectin within minutes, is proposed. We further evaluate the prediction ability of four wavelength selection methods: the bootstrapping soft shrinkage approach (BOSS), Monte Carlo uninformative variable elimination (MCUVE), genetic algorithm partial least squares (GA-PLS) and competitive adaptive reweighted sampling (CARS). The results showed that the BOSS method obtained the lowest root mean squared error of cross-validation (RMSECV) (0.0245) and root mean squared error of prediction (RMSEP) (0.0271), as well as the highest coefficient of determination of cross-validation (Q²cv) (0.9998) and coefficient of determination of the test set (Q²test) (0.9989), which demonstrated that mid-infrared spectroscopy can be used to detect Chlorantraniliprole in Abamectin conveniently. Meanwhile, a suitable wavelength selection method (BOSS) is essential to conducting a component spectral analysis.
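For reference, the RMSEP and test-set coefficient of determination (Q²test) reported above are computed from reference and predicted concentrations as in the following sketch; the numbers shown are hypothetical, not the study's data.

```python
import numpy as np

def rmsep(y_true, y_pred):
    """Root mean squared error of prediction."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def q2(y_true, y_pred):
    """Coefficient of determination of the test set (Q^2_test)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical reference vs. predicted Chlorantraniliprole contents.
y_ref = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
y_hat = np.array([0.12, 0.23, 0.41, 0.57, 0.68])
print(rmsep(y_ref, y_hat), q2(y_ref, y_hat))
```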
Method and apparatus for cutting and abrading with sublimable particles
Bingham, D.N.
1995-10-10
A gas delivery system provides a first gas as a liquid under extreme pressure and as a gas under intermediate pressure. Another gas delivery system provides a second gas under moderate pressure. The second gas is selected to solidify at a temperature at or above the temperature of the liquefied gas. A nozzle assembly connected to the gas delivery systems produces a stream containing a liquid component, a solid component, and a gas component. The liquid component of the stream consists of a high velocity jet of the liquefied first gas. The high velocity jet is surrounded by a particle sheath that consists of solid particles of the second gas which solidifies in the nozzle upon contact with the liquefied gas of the high velocity jet. The gas component of the stream is a high velocity flow of the first gas that encircles the particle sheath, forming an outer jacket. 6 figs.
Method and apparatus for cutting and abrading with sublimable particles
Bingham, Dennis N.
1995-01-01
A gas delivery system provides a first gas as a liquid under extreme pressure and as a gas under intermediate pressure. Another gas delivery system provides a second gas under moderate pressure. The second gas is selected to solidify at a temperature at or above the temperature of the liquefied gas. A nozzle assembly connected to the gas delivery systems produces a stream containing a liquid component, a solid component, and a gas component. The liquid component of the stream consists of a high velocity jet of the liquefied first gas. The high velocity jet is surrounded by a particle sheath that consists of solid particles of the second gas which solidifies in the nozzle upon contact with the liquefied gas of the high velocity jet. The gas component of the stream is a high velocity flow of the first gas that encircles the particle sheath, forming an outer jacket.
Raks, Victoria; Al-Suod, Hossam; Buszewski, Bogusław
2018-01-01
Development of efficient methods for isolation and separation of biologically active compounds remains an important challenge for researchers. Designing systems such as organomineral composite materials that allow extraction of a wide range of biologically active compounds, acting as broad-utility solid-phase extraction agents, remains an important and necessary task. Selective sorbents can be easily used for highly selective and reliable extraction of specific components present in complex matrices. Herein, state-of-the-art approaches for selective isolation, preconcentration, and separation of biologically active compounds from a range of matrices are discussed. Primary focus is given to novel extraction methods for some biologically active compounds including cyclic polyols, flavonoids, and oligosaccharides from plants. In addition, application of silica-, carbon-, and polymer-based solid-phase extraction adsorbents and membrane extraction for selective separation of these compounds is discussed. Potential separation process interactions are recommended; their understanding is of utmost importance for the creation of optimal conditions to extract biologically active compounds including those with estrogenic properties.
Variable Selection for Regression Models of Percentile Flows
NASA Astrophysics Data System (ADS)
Fouad, G.
2017-12-01
Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high degree of multicollinearity, possibly illustrating the co-evolution of climatic and physiographic conditions. Given the ineffectiveness of many variables used here, future work should develop new variables that target specific processes associated with percentile flows.
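Two of the rankings compared above, absolute correlation with the percentile flow and random-forest variable importance, can be reproduced on synthetic data with a few lines of Python; the basin characteristics and response below are stand-ins, not the study's 918-basin dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
n_basins, n_vars = 300, 12
X = rng.normal(size=(n_basins, n_vars))       # stand-in basin characteristics
# Stand-in percentile flow driven by two of the variables plus noise.
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=n_basins)

# Ranking by absolute Pearson correlation with the percentile flow.
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_vars)])
print("correlation ranking:", np.argsort(corr)[::-1][:5])

# Ranking by random-forest variable importance.
rf = RandomForestRegressor(n_estimators=200, random_state=6).fit(X, y)
print("random forest ranking:", np.argsort(rf.feature_importances_)[::-1][:5])
```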
Direct Visualization of Short Transverse Relaxation Time Component (ViSTa)
Oh, Se-Hong; Bilello, Michel; Schindler, Matthew; Markowitz, Clyde E.; Detre, John A.; Lee, Jongho
2013-01-01
White matter of the brain has been demonstrated to have multiple relaxation components. Among them, the short transverse relaxation time component (T2 < 40 ms; T2* < 25 ms at 3T) has been suggested to originate from myelin water whereas long transverse relaxation time components have been associated with axonal and/or interstitial water. In myelin water imaging, T2 or T2* signal decay is measured to estimate myelin water fraction based on T2 or T2* differences among the water components. This method has been demonstrated to be sensitive to demyelination in the brain but suffers from low SNR and image artifacts originating from ill-conditioned multi-exponential fitting. In this study, a novel approach that selectively acquires short transverse relaxation time signal is proposed. The method utilizes a double inversion RF pair to suppress a range of long T1 signal. This suppression leaves short T2* signal, which has been suggested to have short T1, as the primary source of the image. The experimental results confirm that after suppression of long T1 signals, the image is dominated by short T2* in the range of myelin water, allowing us to directly visualize the short transverse relaxation time component in the brain. Compared to conventional myelin water imaging, this new method of direct visualization of the short relaxation time component (ViSTa) provides high-quality images. When applied to multiple sclerosis patients, chronic lesions show significantly reduced signal intensity in ViSTa images, suggesting sensitivity to demyelination. PMID:23796545
Optical monitor for water vapor concentration
Kebabian, Paul
1998-01-01
A system for measuring and monitoring water vapor concentration in a sample uses as a light source an argon discharge lamp, which inherently emits light with a spectral line that is close to a water vapor absorption line. In a preferred embodiment, the argon line is split by a magnetic field parallel to the direction of light propagation from the lamp into sets of components of downshifted and upshifted frequencies of approximately 1575 Gauss. The downshifted components are centered on a water vapor absorption line and are thus readily absorbed by water vapor in the sample; the upshifted components are moved away from that absorption line and are minimally absorbed. A polarization modulator alternately selects the upshifted components or downshifted components and passes the selected components to the sample. After transmission through the sample, the transmitted intensity of a component of the argon line varies as a result of absorption by the water vapor. The system then determines the concentration of water vapor in the sample based on differences in the transmitted intensity between the two sets of components. In alternative embodiments alternate selection of sets of components is achieved by selectively reversing the polarity of the magnetic field or by selectively supplying the magnetic field to the emitting plasma.
Optical monitor for water vapor concentration
Kebabian, P.
1998-06-02
A system for measuring and monitoring water vapor concentration in a sample uses as a light source an argon discharge lamp, which inherently emits light with a spectral line that is close to a water vapor absorption line. In a preferred embodiment, the argon line is split by a magnetic field parallel to the direction of light propagation from the lamp into sets of components of downshifted and upshifted frequencies of approximately 1575 Gauss. The downshifted components are centered on a water vapor absorption line and are thus readily absorbed by water vapor in the sample; the upshifted components are moved away from that absorption line and are minimally absorbed. A polarization modulator alternately selects the upshifted components or downshifted components and passes the selected components to the sample. After transmission through the sample, the transmitted intensity of a component of the argon line varies as a result of absorption by the water vapor. The system then determines the concentration of water vapor in the sample based on differences in the transmitted intensity between the two sets of components. In alternative embodiments alternate selection of sets of components is achieved by selectively reversing the polarity of the magnetic field or by selectively supplying the magnetic field to the emitting plasma. 5 figs.
NASA Astrophysics Data System (ADS)
Wu, Yu; Zheng, Lijuan; Xie, Donghai; Zhong, Ruofei
2017-07-01
In this study, the extended morphological attribute profiles (EAPs) and independent component analysis (ICA) were combined for feature extraction from high-resolution multispectral satellite remote sensing images, and the regularized least squares (RLS) approach with the radial basis function (RBF) kernel was further applied for the classification. Based on the two major independent components, geometrical features were extracted using the EAPs method. Three morphological attributes were calculated and extracted for each independent component, including area, standard deviation, and moment of inertia. The extracted geometrical features were classified using the RLS approach and the commonly used LIB-SVM library of support vector machines. The Worldview-3 and Chinese GF-2 multispectral images were tested, and the results showed that the features extracted by EAPs and ICA can effectively improve the accuracy of high-resolution multispectral image classification, 2% higher than with EAPs and principal component analysis (PCA), and 6% higher than with APs on the original high-resolution multispectral data. Moreover, it is also suggested that both the GURLS and LIB-SVM libraries are well suited for multispectral remote sensing image classification. The GURLS library is easy to use, with automatic parameter selection, but its computation time may be longer than that of the LIB-SVM library. This study would be helpful for the classification of high-resolution multispectral satellite remote sensing images.
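A rough Python sketch of the ICA-plus-RBF-kernel regularized least squares pipeline is shown below. It omits the morphological attribute profile (EAP) step, uses scikit-learn's FastICA and KernelRidge (with ±1 labels and a sign threshold) as stand-ins for the GURLS implementation, and runs on synthetic per-pixel features, so it illustrates the idea rather than the study's method.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
# Synthetic stand-in: mix three non-Gaussian sources into 8 "spectral" bands per pixel.
S = rng.uniform(-1.0, 1.0, size=(500, 3))
A = rng.normal(size=(3, 8))
X = S @ A + 0.05 * rng.normal(size=(500, 8))
y = np.where(S[:, 0] > 0, 1.0, -1.0)          # two land-cover classes coded as +/-1

Z = FastICA(n_components=3, random_state=7).fit_transform(X)   # independent components

X_tr, X_te, y_tr, y_te = train_test_split(Z, y, random_state=7)
# Kernel ridge regression on +/-1 labels is one way to realize RBF-kernel RLS classification.
rls = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5).fit(X_tr, y_tr)
y_hat = np.sign(rls.predict(X_te))
print("RBF-kernel RLS accuracy:", float((y_hat == y_te).mean()))
```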
EEG artifact elimination by extraction of ICA-component features using image processing algorithms.
Radüntz, T; Scouten, J; Hochmuth, O; Meffert, B
2015-03-30
Artifact rejection is a central issue when dealing with electroencephalogram recordings. Although independent component analysis (ICA) separates data into linearly independent components (ICs), the classification of these components as artifact or EEG signal still requires visual inspection by experts. In this paper, we achieve automated artifact elimination using linear discriminant analysis (LDA) for classification of feature vectors extracted from ICA components via image processing algorithms. We compare the performance of this automated classifier to visual classification by experts and identify range filtering as a feature extraction method with great potential for automated IC artifact recognition (accuracy rate 88%). We obtain almost the same level of recognition performance for geometric features and local binary pattern (LBP) features. Compared to existing automated solutions, the proposed method has two main advantages: first, it does not depend on direct recording of artifact signals, which then, e.g., have to be subtracted from the contaminated EEG; second, it is not limited to a specific number or type of artifact. In summary, the present method is an automatic, reliable, real-time capable and practical tool that reduces the time-intensive manual selection of ICs for artifact removal. The results are very promising despite the relatively small channel resolution of 25 electrodes. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
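The classification and back-projection steps can be sketched as follows, assuming feature vectors have already been extracted from the IC scalp-map images (for example by range filtering); the feature extraction itself and the paper's exact training protocol are not reproduced here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_ic_classifier(features, labels):
    """features: (n_components, n_features); labels: 1 = artifact IC, 0 = brain IC."""
    return LinearDiscriminantAnalysis().fit(features, labels)

def remove_artifact_ics(mixing_matrix, sources, classifier, features):
    """Zero out ICs classified as artifacts and re-mix to obtain cleaned EEG.
    mixing_matrix: (n_channels, n_components); sources: (n_components, n_samples)."""
    is_artifact = classifier.predict(features).astype(bool)
    kept = sources.copy()
    kept[is_artifact, :] = 0.0
    return mixing_matrix @ kept
```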
Effective integrated frameworks for assessing mining sustainability.
Virgone, K M; Ramirez-Andreotta, M; Mainhagu, J; Brusseau, M L
2018-05-28
The objectives of this research are to review existing methods used for assessing mining sustainability, analyze the limited prior research that has evaluated the methods, and identify key characteristics that would constitute an enhanced sustainability framework that would serve to improve sustainability reporting in the mining industry. Five of the most relevant frameworks were selected for comparison in this analysis, and the results show that there are many commonalities among the five, as well as some disparities. In addition, relevant components are missing from all five. An enhanced evaluation system and framework were created to provide a more holistic, comprehensive method for sustainability assessment and reporting. The proposed framework has five components that build from and encompass the twelve evaluation characteristics used in the analysis. The components include Foundation, Focus, Breadth, Quality Assurance, and Relevance. The enhanced framework promotes a comprehensive, location-specific reporting approach with a concise set of well-defined indicators. Built into the framework is quality assurance, as well as a defined method to use information from sustainability reports to inform decisions. The framework incorporates human health and socioeconomic aspects via initiatives such as community-engaged research, economic valuations, and community-initiated environmental monitoring.
Zhao, Yan; Qu, Hui-Hua; Wang, Qing-Guo
2013-09-01
Study of the pharmacodynamic material basis of traditional Chinese medicines is one of the key issues for the modernization of traditional Chinese medicine. Having introduced monoclonal antibody technology into the study of the pharmacodynamic material basis of traditional Chinese medicines, the author prepared an immunoaffinity chromatography column using monoclonal antibodies against active components of traditional Chinese medicines, so as to selectively knock out the component from herbs or traditional Chinese medicine compounds, while preserving all of the other components and keeping their amounts and ratios unchanged. A comparative study on pharmacokinetics and pharmacodynamics was made to explicitly reveal the correlation between the component and the main purpose of traditional Chinese medicines and compounds. Analysis of the pharmacodynamic material basis of traditional Chinese medicines using specific knockout technology with monoclonal antibodies is a new method for studying the pharmacodynamic material basis in line with the characteristics of traditional Chinese medicines. Its results can not only help study the material basis from a new perspective, but also help find the modern scientific significance in a single herb or among compounds of traditional Chinese medicines.
Bassolé, Imaël Henri Nestor; Lamien-Meda, Aline; Bayala, Balé; Tirogo, Souleymane; Franz, Chlodwig; Novak, Johannes; Nebié, Roger Charles; Dicko, Mamoudou Hama
2010-11-03
Essential oils from leaves of Lippia multiflora, Mentha x piperita and Ocimum basilicum from Burkina Faso were analysed by GC-FID and GC-MS. Major components were p-cymene, thymol, β-caryophyllene, carvacrol and carvone for L. multiflora; menthol and iso-menthone for M. x piperita; and linalool and eugenol for O. basilicum. The essential oils and their major monoterpene alcohols were tested against nine bacterial strains using the disc diffusion and broth microdilution methods. The essential oils with high phenolic contents were the most effective antimicrobials. The checkerboard method was used to quantify the efficacy of paired combinations of essential oils and their major components. The best synergistic effects among essential oils and major components were obtained with combinations involving O. basilicum essential oil and eugenol, respectively. As phenolic components are characterized by a strong spicy aroma, this study suggests that the selection of certain combinations of EOs could help to reduce the amount of essential oils and consequently reduce any adverse sensory impact in food.
Escudero, Javier; Hornero, Roberto; Abásolo, Daniel; Fernández, Alberto; Poza, Jesús
2007-01-01
The aim of this study was to improve the diagnosis of Alzheimer's disease (AD) patients applying a blind source separation (BSS) and component selection procedure to their magnetoencephalogram (MEG) recordings. MEGs from 18 AD patients and 18 control subjects were decomposed with the algorithm for multiple unknown signals extraction. MEG channels and components were characterized by their mean frequency, spectral entropy, approximate entropy, and Lempel-Ziv complexity. Using Student's t-test, the components which accounted for the most significant differences between groups were selected. Then, these relevant components were used to partially reconstruct the MEG channels. By means of a linear discriminant analysis, we found that the BSS-preprocessed MEGs classified the subjects with an accuracy of 80.6%, whereas 72.2% accuracy was obtained without the BSS and component selection procedure.
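A rough sketch of the component selection and partial reconstruction steps is shown below, using mean frequency as an example feature and assuming the BSS algorithm has already produced a mixing matrix and source time courses; the paper additionally used spectral entropy, approximate entropy and Lempel-Ziv complexity, and its specific BSS algorithm is not reproduced here.

```python
import numpy as np
from scipy import stats

def select_components(feature_ad, feature_ctrl, alpha=0.05):
    """feature_*: (n_subjects, n_components) arrays of one feature per component.
    Returns indices of components with a significant between-group difference."""
    _, p = stats.ttest_ind(feature_ad, feature_ctrl, axis=0)
    return np.where(p < alpha)[0]

def partial_reconstruction(mixing, sources, selected):
    """Rebuild MEG channels from the selected components only.
    mixing: (n_channels, n_components); sources: (n_components, n_samples)."""
    reduced = np.zeros_like(sources)
    reduced[selected, :] = sources[selected, :]
    return mixing @ reduced
```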
A Visual Decision Aid for Gear Materials Selection
NASA Astrophysics Data System (ADS)
Maity, S. R.; Chakraborty, S.
2013-10-01
Materials play an important role during the entire design process, and designers need to identify materials with specific functionalities in order to find feasible design concepts. While selecting materials for engineering designs from an ever-increasing array of alternatives, each having its own characteristics, applications, advantages and limitations, a clear understanding of the functional requirements for each individual component is required and various important criteria need to be considered. Although various approaches have already been adopted by past researchers to solve material selection problems, they all require profound knowledge of mathematics on the part of the designers for their implementation. This paper proposes the application of an integrated preference ranking organization method for enrichment evaluation and geometrical analysis for interactive aid method as a visual decision aid for material selection. Two real gear material selection problems are solved, which demonstrates the potential and usefulness of this combined approach. It is observed that Nitralloy 135M and glass fiber-reinforced Nylon 6/6 are, respectively, the choicest metallic and non-metallic gear materials.
ARL Summer Student Research Symposium. Volume 1: Select Papers
2012-08-01
deploying Android smart phones and tablets on the battlefield, which may be a target for malware. In our research, we attempt to improve static...network. (a) The T1 and MRI images are (b) segmented into different material components. The segmented geometry is then used to create (c) a finite element...towards finding a method to detect mTBI non-invasively. One method in particular includes the use of a magnetic resonance image (MRI)-based imaging
Identification of Coffee Varieties Using Laser-Induced Breakdown Spectroscopy and Chemometrics.
Zhang, Chu; Shen, Tingting; Liu, Fei; He, Yong
2017-12-31
We linked coffee quality to its different varieties. This is of interest because the identification of coffee varieties should help coffee trading and consumption. Laser-induced breakdown spectroscopy (LIBS) combined with chemometric methods was used to identify coffee varieties. Wavelet transform (WT) was used to reduce LIBS spectra noise. Partial least squares-discriminant analysis (PLS-DA), radial basis function neural network (RBFNN), and support vector machine (SVM) were used to build classification models. Loadings of principal component analysis (PCA) were used to select the spectral variables contributing most to the identification of coffee varieties. Twenty wavelength variables corresponding to C I, Mg I, Mg II, Al II, CN, H, Ca II, Fe I, K I, Na I, N I, and O I were selected. PLS-DA, RBFNN, and SVM models on selected wavelength variables showed acceptable results. SVM and RBFNN models performed better with a classification accuracy of over 80% in the prediction set, for both full spectra and the selected variables. The overall results indicated that it was feasible to use LIBS and chemometric methods to identify coffee varieties. For further studies, more samples are needed to produce robust classification models, research should be conducted on which methods to use to select spectral peaks that correspond to the elements contributing most to identification, and the methods for acquiring stable spectra should also be studied.
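The variable-selection idea (keeping the wavelengths with the largest PCA loadings and then training a classifier on them) can be sketched as below; the thresholds, number of components and the use of scikit-learn's SVC are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def select_wavelengths(spectra, n_pcs=3, n_keep=20):
    """spectra: (n_samples, n_wavelengths) LIBS intensities.
    Returns indices of the wavelengths with the largest absolute loadings."""
    pca = PCA(n_components=n_pcs).fit(spectra)
    importance = np.abs(pca.components_).max(axis=0)  # per-wavelength loading
    return np.argsort(importance)[-n_keep:]

def train_variety_model(spectra, varieties, selected_idx):
    """Fit an RBF SVM on the selected spectral variables only."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return model.fit(spectra[:, selected_idx], varieties)
```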
Identification of Coffee Varieties Using Laser-Induced Breakdown Spectroscopy and Chemometrics
Zhang, Chu; Shen, Tingting
2017-01-01
We linked coffee quality to its different varieties. This is of interest because the identification of coffee varieties should help coffee trading and consumption. Laser-induced breakdown spectroscopy (LIBS) combined with chemometric methods was used to identify coffee varieties. Wavelet transform (WT) was used to reduce LIBS spectra noise. Partial least squares-discriminant analysis (PLS-DA), radial basis function neural network (RBFNN), and support vector machine (SVM) were used to build classification models. Loadings of principal component analysis (PCA) were used to select the spectral variables contributing most to the identification of coffee varieties. Twenty wavelength variables corresponding to C I, Mg I, Mg II, Al II, CN, H, Ca II, Fe I, K I, Na I, N I, and O I were selected. PLS-DA, RBFNN, and SVM models on selected wavelength variables showed acceptable results. SVM and RBFNN models performed better with a classification accuracy of over 80% in the prediction set, for both full spectra and the selected variables. The overall results indicated that it was feasible to use LIBS and chemometric methods to identify coffee varieties. For further studies, more samples are needed to produce robust classification models, research should be conducted on which methods to use to select spectral peaks that correspond to the elements contributing most to identification, and the methods for acquiring stable spectra should also be studied. PMID:29301228
Evaluation of two selection tests for recruitment into radiology specialty training.
Patterson, Fiona; Knight, Alec; McKnight, Liam; Booth, Thomas C
2016-07-11
This study evaluated whether two selection tests previously validated for primary care General Practice (GP) trainee selection could provide a valid shortlisting selection method for entry into specialty training for the secondary care specialty of radiology. We conducted a retrospective analysis of data from radiology applicants who also applied to UK GP specialty training or Core Medical Training. The psychometric properties of the two selection tests, a clinical problem solving (CPS) test and situational judgement test (SJT), were analysed to evaluate their reliability. Predictive validity of the tests was analysed by comparing them with the current radiology selection assessments, and the licensure examination results taken after the first stage of training (Fellowship of the Royal College of Radiologists (FRCR) Part 1). The internal reliability of the two selection tests in the radiology applicant sample was good (α ≥ 0.80). The average correlation with radiology shortlisting selection scores was r = 0.26 for the CPS (with p < 0.05 in 5 of 11 shortlisting centres), r = 0.15 for the SJT (with p < 0.05 in 2 of 11 shortlisting centres) and r = 0.25 (with p < 0.05 in 5 of 11 shortlisting centres) for the two tests combined. The CPS test scores significantly correlated with performance in both components of the FRCR Part 1 examinations (r = 0.5 anatomy; r = 0.4 physics; p < 0.05 for both). The SJT did not correlate with either component of the examination. The current CPS test may be an appropriate selection method for shortlisting in radiology but would benefit from further refinement for use in radiology to ensure that the test specification is relevant. The evidence on whether the SJT may be appropriate for shortlisting in radiology is limited. However, these results may be expected to some extent since the SJT is designed to measure non-academic attributes. Further validation work (e.g. with non-academic outcome variables) is required to evaluate whether an SJT will add value in recruitment for radiology specialty training and will further inform construct validity of SJTs as a selection methodology.
Yu, Fangjun; Qian, Hao; Zhang, Jiayu; Sun, Jie; Ma, Zhiguo
2018-04-01
We aim to determine the chemical constituents of Yinchen extract and Yinchen herbs using high-performance liquid chromatography coupled with diode array detection and high-resolution mass spectrometry. The method was developed to analyze eight organic acid components of Yinchen extract (neochlorogenic acid, chlorogenic acid, cryptochlorogenic acid, caffeic acid, 1,3-dicaffeoylquinic acid, 3,4-dicaffeoylquinic acid, 3,5-dicaffeoylquinic acid and 4,5-dicaffeoylquinic acid). The separation was conducted on an Agilent TC-C18 column with acetonitrile - 0.2% formic acid solution as the mobile phase under gradient elution. The analytical method was fully validated in terms of linearity, sensitivity, precision, repeatability and recovery, and was subsequently applied to the quantitative assessment of Yinchen extracts and Yinchen herbs. In addition, the changes of selected markers were studied when Yinchen herbs were decocted in water, and isomerization occurred between the chlorogenic acids. The proposed method enables both qualitative and quantitative analyses and could be developed as a new tool for the quality evaluation of Yinchen extract and Yinchen herbs. The changes of the selected markers during the water decoction process could provide novel ideas when studying the link between substances and drug efficacy. Copyright © 2017. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Dobner, Sven; Fallnich, Carsten
2014-02-01
We present the hyperspectral imaging capabilities of in-line interferometric femtosecond stimulated Raman scattering. The beneficial features of this method, namely the improved signal-to-background ratio compared to other applicable broadband stimulated Raman scattering methods and the simple experimental implementation, allow for a rather fast acquisition of three-dimensional raster-scanned hyperspectral data sets, which is demonstrated for PMMA beads and a lipid droplet in water. A subsequent application of a principal component analysis displays the chemical selectivity of the method.
Ross, Matthew S; Pereira, Alberto dos Santos; Fennell, Jon; Davies, Martin; Johnson, James; Sliva, Lucie; Martin, Jonathan W
2012-12-04
The Canadian oil sands industry stores toxic oil sands process-affected water (OSPW) in large tailings ponds adjacent to the Athabasca River or its tributaries, raising concerns over potential seepage. Naphthenic acids (NAs; C(n)H(2n-Z)O(2)) are toxic components of OSPW, but are also natural components of bitumen and regional groundwaters, and may enter surface waters through anthropogenic or natural sources. This study used a selective high-resolution mass spectrometry method to examine total NA concentrations and NA profiles in OSPW (n = 2), Athabasca River pore water (n = 6, representing groundwater contributions) and surface waters (n = 58) from the Lower Athabasca Region. NA concentrations in surface water (< 2-80.8 μg/L) were 100-fold lower than previously estimated. Principal components analysis (PCA) distinguished sample types based on NA profile, and correlations to water quality variables identified two sources of NAs: natural fatty acids, and bitumen-derived NAs. Analysis of NA data with water quality variables highlighted two tributaries to the Athabasca River (Beaver River and McLean Creek) as possibly receiving OSPW seepage. This study is the first comprehensive analysis of NA profiles in surface waters of the region, and demonstrates the need for highly selective analytical methods for source identification and in monitoring for potential effects of development on ambient water quality.
NASA Astrophysics Data System (ADS)
Hadad, Ghada M.; El-Gindy, Alaa; Mahmoud, Waleed M. M.
2008-08-01
High-performance liquid chromatography (HPLC) and multivariate spectrophotometric methods are described for the simultaneous determination of ambroxol hydrochloride (AM) and doxycycline (DX) in combined pharmaceutical capsules. The chromatographic separation was achieved on a reversed-phase C18 analytical column with a mobile phase consisting of a mixture of 20 mM potassium dihydrogen phosphate, pH 6, and acetonitrile in a ratio of 1:1 (v/v), with UV detection at 245 nm. The resolution has also been accomplished by using numerical spectrophotometric methods, such as classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS-1), applied to the UV spectra of the mixture, and a graphical spectrophotometric method, the first derivative of the ratio spectra (1DD) method. Analytical figures of merit (FOM), such as sensitivity, selectivity, analytical sensitivity, limit of quantitation and limit of detection, were determined for the CLS, PLS-1 and PCR methods. The proposed methods were validated and successfully applied to the analysis of the pharmaceutical formulation and laboratory-prepared mixtures containing the two-component combination.
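The multivariate calibration step can be illustrated with a short sketch: one PLS-1 model per analyte, trained on UV spectra of laboratory mixtures with known concentrations. The number of latent variables, matrix shapes and variable names are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def calibrate_pls1(calibration_spectra, concentrations, n_latent=3):
    """calibration_spectra: (n_mixtures, n_wavelengths) absorbances;
    concentrations: (n_mixtures,) known concentrations of one analyte."""
    return PLSRegression(n_components=n_latent).fit(calibration_spectra, concentrations)

# Separate models for the two analytes (hypothetical data arrays):
# am_model = calibrate_pls1(train_spectra, am_concentrations)
# dx_model = calibrate_pls1(train_spectra, dx_concentrations)
# am_pred = am_model.predict(sample_spectra).ravel()
# dx_pred = dx_model.predict(sample_spectra).ravel()
```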
Hadad, Ghada M; El-Gindy, Alaa; Mahmoud, Waleed M M
2008-08-01
High-performance liquid chromatography (HPLC) and multivariate spectrophotometric methods are described for the simultaneous determination of ambroxol hydrochloride (AM) and doxycycline (DX) in combined pharmaceutical capsules. The chromatographic separation was achieved on a reversed-phase C18 analytical column with a mobile phase consisting of a mixture of 20 mM potassium dihydrogen phosphate, pH 6, and acetonitrile in a ratio of 1:1 (v/v), with UV detection at 245 nm. The resolution has also been accomplished by using numerical spectrophotometric methods, such as classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS-1), applied to the UV spectra of the mixture, and a graphical spectrophotometric method, the first derivative of the ratio spectra (1DD) method. Analytical figures of merit (FOM), such as sensitivity, selectivity, analytical sensitivity, limit of quantitation and limit of detection, were determined for the CLS, PLS-1 and PCR methods. The proposed methods were validated and successfully applied to the analysis of the pharmaceutical formulation and laboratory-prepared mixtures containing the two-component combination.
A new time-frequency method for identification and classification of ball bearing faults
NASA Astrophysics Data System (ADS)
Attoui, Issam; Fergani, Nadir; Boutasseta, Nadir; Oudjani, Brahim; Deliou, Adel
2017-06-01
For fault diagnosis of ball bearings, which are among the most critical components of rotating machinery, this paper presents a time-frequency procedure incorporating a new feature extraction step that combines the classical wavelet packet decomposition energy distribution technique and a new feature extraction technique based on the selection of the most impulsive frequency bands. In the proposed procedure, as a pre-processing step, the most impulsive frequency bands are first selected at different bearing conditions using a combination of the Fast Fourier Transform (FFT) and Short-Frequency Energy (SFE) algorithms. Secondly, once the most impulsive frequency bands are selected, the measured machinery vibration signals are decomposed into different frequency sub-bands using the discrete Wavelet Packet Decomposition (WPD) technique to maximize the detection of their frequency contents, and subsequently the most useful sub-bands are represented in the time-frequency domain using the Short Time Fourier Transform (STFT) algorithm to determine exactly which frequency components are present in those sub-bands. Once the proposed feature vector is obtained, three feature dimensionality reduction techniques are employed: Linear Discriminant Analysis (LDA), a feedback wrapper method, and Locality Sensitive Discriminant Analysis (LSDA). Lastly, the Adaptive Neuro-Fuzzy Inference System (ANFIS) algorithm is used for instantaneous identification and classification of bearing faults. To evaluate the performance of the proposed method, different testing data sets are applied to the trained ANFIS model, covering healthy and faulty bearings under various load levels, fault severities and rotating speeds. The experimental results show that the proposed method can serve as an intelligent bearing fault diagnosis system.
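Two of the feature-extraction building blocks can be sketched as follows: the relative energy of the terminal wavelet packet nodes, and an STFT magnitude map of a selected sub-band. This is only a sketch of those steps; the impulsive-band pre-selection (FFT plus short-frequency energy), the wrapper selection and the ANFIS classifier are not reproduced, and the wavelet, level and window length are illustrative.

```python
import numpy as np
import pywt
from scipy import signal

def wpd_energy_features(x, wavelet="db4", level=3):
    """Relative energy of each terminal wavelet packet node of a vibration signal."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    energies = np.array([np.sum(np.square(node.data)) for node in nodes])
    return energies / energies.sum()

def stft_magnitude(x, fs, nperseg=256):
    """Time-frequency magnitude map used to inspect a selected sub-band."""
    f, t, Z = signal.stft(x, fs=fs, nperseg=nperseg)
    return f, t, np.abs(Z)
```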
Kamitani, Toshiaki; Kuroiwa, Yoshiyuki
2009-01-01
Recent studies demonstrated an altered P3 component and prolonged reaction time during the visual discrimination tasks in multiple system atrophy (MSA). In MSA, however, little is known about the N2 component which is known to be closely related to the visual discrimination process. We therefore compared the N2 component as well as the N1 and P3 components in 17 MSA patients with these components in 10 normal controls, by using a visual selective attention task to color or to shape. While the P3 in MSA was significantly delayed in selective attention to shape, the N2 in MSA was significantly delayed in selective attention to color. N1 was normally preserved both in attention to color and in attention to shape. Our electrophysiological results indicate that the color discrimination process during selective attention is impaired in MSA.
Ecosystem-based fisheries management requires a change to the selective fishing philosophy
Zhou, Shijie; Smith, Anthony D. M.; Punt, André E.; Richardson, Anthony J.; Gibbs, Mark; Fulton, Elizabeth A.; Pascoe, Sean; Bulman, Catherine; Bayliss, Peter; Sainsbury, Keith
2010-01-01
Globally, many fish species are overexploited, and many stocks have collapsed. This crisis, along with increasing concerns over flow-on effects on ecosystems, has caused a reevaluation of traditional fisheries management practices, and a new ecosystem-based fisheries management (EBFM) paradigm has emerged. As part of this approach, selective fishing is widely encouraged in the belief that nonselective fishing has many adverse impacts. In particular, incidental bycatch is seen as wasteful and a negative feature of fishing, and methods to reduce bycatch are implemented in many fisheries. However, recent advances in fishery science and ecology suggest that a selective approach may also result in undesirable impacts both to fisheries and marine ecosystems. Selective fishing applies one or more of the “6-S” selections: species, stock, size, sex, season, and space. However, selective fishing alters biodiversity, which in turn changes ecosystem functioning and may affect fisheries production, hindering rather than helping achieve the goals of EBFM. We argue here that a “balanced exploitation” approach might alleviate many of the ecological effects of fishing by avoiding intensive removal of particular components of the ecosystem, while still supporting sustainable fisheries. This concept may require reducing exploitation rates on certain target species or groups to protect vulnerable components of the ecosystem. Benefits to society could be maintained or even increased because a greater proportion of the entire suite of harvested species is used. PMID:20435916
Ecosystem-based fisheries management requires a change to the selective fishing philosophy.
Zhou, Shijie; Smith, Anthony D M; Punt, André E; Richardson, Anthony J; Gibbs, Mark; Fulton, Elizabeth A; Pascoe, Sean; Bulman, Catherine; Bayliss, Peter; Sainsbury, Keith
2010-05-25
Globally, many fish species are overexploited, and many stocks have collapsed. This crisis, along with increasing concerns over flow-on effects on ecosystems, has caused a reevaluation of traditional fisheries management practices, and a new ecosystem-based fisheries management (EBFM) paradigm has emerged. As part of this approach, selective fishing is widely encouraged in the belief that nonselective fishing has many adverse impacts. In particular, incidental bycatch is seen as wasteful and a negative feature of fishing, and methods to reduce bycatch are implemented in many fisheries. However, recent advances in fishery science and ecology suggest that a selective approach may also result in undesirable impacts both to fisheries and marine ecosystems. Selective fishing applies one or more of the "6-S" selections: species, stock, size, sex, season, and space. However, selective fishing alters biodiversity, which in turn changes ecosystem functioning and may affect fisheries production, hindering rather than helping achieve the goals of EBFM. We argue here that a "balanced exploitation" approach might alleviate many of the ecological effects of fishing by avoiding intensive removal of particular components of the ecosystem, while still supporting sustainable fisheries. This concept may require reducing exploitation rates on certain target species or groups to protect vulnerable components of the ecosystem. Benefits to society could be maintained or even increased because a greater proportion of the entire suite of harvested species is used.
Method for detection of selected chemicals in an open environment
NASA Technical Reports Server (NTRS)
Duong, Tuan (Inventor); Ryan, Margaret (Inventor)
2009-01-01
The present invention relates to a space-invariant independent component analysis and electronic nose for detection of selected chemicals in an unknown environment, and more specifically, an approach to the analysis of sensor responses to mixtures of unknown chemicals by an electronic nose in an open and changing environment. It is intended to fill the gap between an alarm, which has little or no ability to distinguish among chemical compounds causing a response, and an analytical instrument, which can distinguish all compounds present but with no real-time or continuous event monitoring ability.
Benzylic Fluorination of Aza-Heterocycles Induced by Single-Electron Transfer to Selectfluor.
Danahy, Kelley E; Cooper, Julian C; Van Humbeck, Jeffrey F
2018-04-23
A selective and mild method for the benzylic fluorination of aromatic azaheterocycles with Selectfluor is described. These reactions take place by a previously unreported mechanism, in which electron transfer from the heterocyclic substrate to the electrophilic fluorinating agent Selectfluor eventually yields a benzylic radical, thus leading to the desired C-F bond formation. This mechanism enables high intra- and intermolecular selectivity for aza-heterocycles over other benzylic components with similar C-H bond-dissociation energies. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Chavez, P.S.
1988-01-01
Digital analysis of remotely sensed data has become an important component of many earth-science studies. These data are often processed through a set of preprocessing or "clean-up" routines that includes a correction for atmospheric scattering, often called haze. Various methods to correct or remove the additive haze component have been developed, including the widely used dark-object subtraction technique. A problem with most of these methods is that the haze values for each spectral band are selected independently. This can create problems because atmospheric scattering is highly wavelength-dependent in the visible part of the electromagnetic spectrum and the scattering values are correlated with each other. Therefore, multispectral data such as from the Landsat Thematic Mapper and Multispectral Scanner must be corrected with haze values that are spectral band dependent. An improved dark-object subtraction technique is demonstrated that allows the user to select a relative atmospheric scattering model to predict the haze values for all the spectral bands from a selected starting band haze value. The improved method normalizes the predicted haze values for the different gain and offset parameters used by the imaging system. Examples of haze value differences between the old and improved methods for Thematic Mapper Bands 1, 2, 3, 4, 5, and 7 are 40.0, 13.0, 12.0, 8.0, 5.0, and 2.0 vs. 40.0, 13.2, 8.9, 4.9, 16.7, and 3.3, respectively, using a relative scattering model of a clear atmosphere. In one Landsat multispectral scanner image the haze value differences for Bands 4, 5, 6, and 7 were 30.0, 50.0, 50.0, and 40.0 for the old method vs. 30.0, 34.4, 43.6, and 6.4 for the new method using a relative scattering model of a hazy atmosphere. © 1988.
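The band-dependent haze prediction can be sketched as follows, under the assumption that the starting-band haze DN is converted to radiance with that band's gain and offset, scaled by a relative scattering model expressed as a power of wavelength, and converted back to DN for each band. The exponent, gains, offsets and wavelengths below are illustrative placeholders, not the calibration values or models used in the paper.

```python
import numpy as np

def predict_haze_dn(start_dn, start_band, wavelengths, gains, offsets, exponent=-4.0):
    """wavelengths [micrometers], gains [radiance per DN], offsets [radiance], per band.
    exponent = -4 approximates a very clear (Rayleigh-like) atmosphere."""
    start_radiance = gains[start_band] * start_dn + offsets[start_band]
    scale = (wavelengths / wavelengths[start_band]) ** exponent
    haze_radiance = start_radiance * scale
    return (haze_radiance - offsets) / gains   # predicted haze DN for every band

def dark_object_subtract(image_cube, haze_dn):
    """image_cube: (rows, cols, bands); subtract per-band haze and clip at zero."""
    return np.clip(image_cube - haze_dn[None, None, :], 0, None)
```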
NASA Astrophysics Data System (ADS)
Wang, Lei; Liu, Zhiwen; Miao, Qiang; Zhang, Xin
2018-03-01
A time-frequency analysis method based on ensemble local mean decomposition (ELMD) and the fast kurtogram (FK) is proposed for rotating machinery fault diagnosis. Local mean decomposition (LMD), as an adaptive non-stationary and nonlinear signal processing method, provides the capability to decompose a multicomponent modulated signal into a series of demodulated mono-components. However, the mode mixing that occurs is a serious drawback. To alleviate this, ELMD, a noise-assisted version of the method, was developed. Still, environmental noise present in the raw signal remains in the corresponding PF together with the component of interest. FK performs well in impulse detection when strong environmental noise exists, but it is susceptible to non-Gaussian noise. The proposed method combines the merits of ELMD and FK to detect faults in rotating machinery. First, by applying ELMD the raw signal is decomposed into a set of product functions (PFs). Then, the PF that best characterizes the fault information is selected according to the kurtosis index. Finally, the selected PF signal is further filtered by an optimal band-pass filter based on FK to extract the impulse signal. Faults can be identified by the appearance of fault characteristic frequencies in the squared envelope spectrum of the filtered signal. The advantages of ELMD over LMD and EEMD are illustrated in simulation analyses. Furthermore, the efficiency of the proposed method in fault diagnosis for rotating machinery is demonstrated on gearbox and rolling bearing case analyses.
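The selection and demodulation stages can be sketched as below. ELMD itself and the fast kurtogram are assumed to be available elsewhere; here the PF is picked by its kurtosis and a fixed Butterworth band-pass stands in for the kurtogram's optimal band. Band edges and the sampling rate are illustrative.

```python
import numpy as np
from scipy import signal

def pick_pf_by_kurtosis(pfs):
    """pfs: (n_pfs, n_samples) product functions from (E)LMD; returns the most impulsive one."""
    kurt = [np.mean((p - p.mean()) ** 4) / (p.var() ** 2) for p in pfs]
    return pfs[int(np.argmax(kurt))]

def squared_envelope_spectrum(x, fs, band=(2000.0, 4000.0)):
    """Band-pass filter, square the analytic envelope, and return its spectrum."""
    sos = signal.butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = signal.sosfiltfilt(sos, x)
    envelope = np.abs(signal.hilbert(filtered)) ** 2
    freqs = np.fft.rfftfreq(envelope.size, d=1.0 / fs)
    return freqs, np.abs(np.fft.rfft(envelope - envelope.mean()))
```

Fault characteristic frequencies (e.g. ball pass frequencies) would then appear as peaks in the returned spectrum.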
Fan, Bingfei; Li, Qingguo; Liu, Tao
2017-12-28
With the advancements in micro-electromechanical systems (MEMS) technologies, magnetic and inertial sensors are becoming more accurate, lightweight, smaller in size and low-cost, which in turn boosts their application in human movement analysis. However, challenges still exist in the field of sensor orientation estimation, where magnetic disturbance represents one of the obstacles limiting their practical application. The objective of this paper is to systematically analyze exactly how magnetic disturbances affect the attitude and heading estimation of a magnetic and inertial sensor. First, we reviewed the four major components dealing with magnetic disturbance, namely decoupling attitude estimation from the magnetic reading, gyro bias estimation, adaptive strategies for compensating magnetic disturbance, and sensor fusion algorithms, and analyzed the features of the existing methods for each component. Second, to understand each component in magnetic disturbance rejection, four representative sensor fusion methods were implemented: gradient descent algorithms, an improved explicit complementary filter, a dual-linear Kalman filter and an extended Kalman filter. Finally, a new standardized testing procedure was developed to objectively assess the performance of each method against magnetic disturbance. Based upon the testing results, the strengths and weaknesses of the existing sensor fusion methods were easily examined, and suggestions were presented for selecting a proper sensor fusion algorithm or developing a new sensor fusion method.
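One of the reviewed ideas, gating the magnetometer correction when the measured field departs from its expected local magnitude, can be illustrated with a very simple complementary filter. This is a generic sketch, not any of the compared algorithms: Euler angles are used instead of quaternions, the magnetic yaw uses a flat-earth approximation, and the gains and the 50 uT reference are illustrative.

```python
import numpy as np

def complementary_step(angles, gyro, accel, mag, dt,
                       k_tilt=0.02, k_yaw=0.02, mag_ref=50.0, mag_tol=5.0):
    """angles = (roll, pitch, yaw) [rad]; gyro [rad/s]; accel [m/s^2]; mag [uT]."""
    roll, pitch, yaw = np.asarray(angles) + np.asarray(gyro) * dt   # gyro propagation
    acc_roll = np.arctan2(accel[1], accel[2])                        # tilt from gravity
    acc_pitch = np.arctan2(-accel[0], np.hypot(accel[1], accel[2]))
    roll += k_tilt * (acc_roll - roll)
    pitch += k_tilt * (acc_pitch - pitch)
    # Accept the magnetometer only when its norm is close to the expected local field,
    # a crude magnetic-disturbance rejection strategy.
    if abs(np.linalg.norm(mag) - mag_ref) < mag_tol:
        mag_yaw = np.arctan2(-mag[1], mag[0])                        # flat-earth heading
        yaw += k_yaw * (mag_yaw - yaw)
    return np.array([roll, pitch, yaw])
```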
Chavez, P.S.; Kwarteng, A.Y.
1989-01-01
A challenge encountered with Landsat Thematic Mapper (TM) data, which includes data from six reflective spectral bands, is displaying as much information as possible in a three-image set for color compositing or digital analysis. Principal component analysis (PCA) applied to the six TM bands simultaneously is often used to address this problem. However, two problems that can be encountered using the PCA method are that information of interest might be mathematically mapped to one of the unused components and that a color composite can be difficult to interpret. "Selective" PCA can be used to minimize both of these problems. The spectral contrast among several spectral regions was mapped for a northern Arizona site using Landsat TM data. Field investigations determined that most of the spectral contrast seen in this area was due to one of the following: the amount of iron and hematite in the soils and rocks, vegetation differences, standing and running water, or the presence of gypsum, which has a higher moisture retention capability than do the surrounding soils and rocks. -from Authors
Shen, Chung-Wei; Chen, Yi-Hau
2018-03-13
We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of the observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with the existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly. © 2018, The International Biometric Society.
Curry, Wayne; Conway, Samuel; Goodfield, Clara; Miller, Kimberly; Mueller, Ronald L; Polini, Eugene
2010-12-01
The preparation of sterile parenteral products requires careful control of all ingredients, materials, and processes to ensure the final product has the identity and strength, and meets the quality and purity characteristics that it purports to possess. Contamination affecting these critical properties of parenteral products can occur in many ways and from many sources. The use of closures supplied by manufacturers in a ready-to-use state can be an effective method for reducing the risk of contamination and improving the quality of the drug product. This article will address contamination attributable to elastomeric container closure components and the regulatory requirements associated with container closure systems. Possible contaminants, including microorganisms, endotoxins, and chemicals, along with the methods by which these contaminants can enter the product will be reviewed. Such methods include inappropriate material selection, improper closure preparation processes, compromised container closure integrity, degradation of closures, and leaching of compounds from the closures.
NASA Technical Reports Server (NTRS)
Riedell, James A. (Inventor); Easler, Timothy E. (Inventor)
2009-01-01
A precursor of a ceramic adhesive suitable for use in a vacuum, thermal, and microgravity environment. The precursor of the ceramic adhesive includes a silicon-based, preceramic polymer and at least one ceramic powder selected from the group consisting of aluminum oxide, aluminum nitride, boron carbide, boron oxide, boron nitride, hafnium boride, hafnium carbide, hafnium oxide, lithium aluminate, molybdenum silicide, niobium carbide, niobium nitride, silicon boride, silicon carbide, silicon oxide, silicon nitride, tin oxide, tantalum boride, tantalum carbide, tantalum oxide, tantalum nitride, titanium boride, titanium carbide, titanium oxide, titanium nitride, yttrium oxide, zirconium diboride, zirconium carbide, zirconium oxide, and zirconium silicate. Methods of forming the ceramic adhesive and of repairing a substrate in a vacuum and microgravity environment are also disclosed, as is a substrate repaired with the ceramic adhesive.
Cooling method with automated seasonal freeze protection
Cambell, Levi; Chu, Richard; David, Milnes; Ellsworth, Jr, Michael; Iyengar, Madhusudan; Simons, Robert; Singh, Prabjit; Zhang, Jing
2016-05-31
An automated multi-fluid cooling method is provided for cooling an electronic component(s). The method includes obtaining a coolant loop, and providing a coolant tank, multiple valves, and a controller. The coolant loop is at least partially exposed to outdoor ambient air temperature(s) during normal operation, and the coolant tank includes first and second reservoirs containing first and second fluids, respectively. The first fluid freezes at a lower temperature than the second, the second fluid has superior cooling properties compared with the first, and the two fluids are soluble. The multiple valves are controllable to selectively couple the first or second fluid into the coolant in the coolant loop, wherein the coolant includes at least the second fluid. The controller automatically controls the valves to vary first fluid concentration level in the coolant loop based on historical, current, or anticipated outdoor air ambient temperature(s) for a time of year.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-26
... controls on trading; information and data relating to the index, including the design, computation and... futures contract raises novel or complex issues that require additional time for review, or if the foreign... composition, computation, or method of selection of component entities of an index referenced and defined in...
Genetic evaluation of rapid height growth in pot- and nursery-grown Scotch pine
Maurice E., Jr. Demeritt; Henry D. Gerhold; Henry D. Gerhold
1985-01-01
Genetic and environmental components of variance for 2-year pot and nursery heights of offspring from inter- and intraprovenance matings in Scotch pine were studied to determine which provenances and selection methods should be used in an ornamental and Christmas tree improvement program. Nursery evaluation was preferred to pot evaluation because heritability estimates...
Array-Based Discovery of Aptamer Pairs
2014-12-11
affinities greatly exceeding either monovalent component. DNA aptamers are especially well-suited for such constructs, because they can be linked via...standard synthesis techniques without requiring chemical conjugation. Unfortunately, aptamer pairs are difficult to generate, primarily because...conventional selection methods preferentially yield aptamers that recognize a dominant "hot spot" epitope.
ERIC Educational Resources Information Center
Allen, Charlie Joe
Using techniques of the reputational method to study community power structure, this report identifies components of power structure in a Tennessee school district, demonstrates that proven methodologies can facilitate educational leaders' reform efforts, and serves as a pilot study for further investigation. Researchers investigated the district…
1983-06-16
has been advocated by Gnanadesikan and Wilk (1969), and others in the literature. This suggests that, if we use the formal significance test type...American Statistical Asso., 62, 1159-1178. Gnanadesikan, R., and Wilk, M.B. (1969). Data Analytic Methods in Multivariate Statistical Analysis. In
ERIC Educational Resources Information Center
Stoltzfus, Lorna
Described is a one-hour overview of the unit processes which comprise a municipal wastewater treatment system. Topics covered in this instructor's guide include types of pollutants encountered, treatment methods, and procedures by which wastewater treatment processes are selected. A slide-tape program is available to supplement this component of…
ERIC Educational Resources Information Center
Lekes, Natasha; Bragg, Debra. D.; Loeb, Jane W.; Oleksiw, Catherine A.; Marszalek, Jacob; Brooks-LaRaviere, Margaret; Zhu, Rongchun; Kremidas, Chloe C.; Akukwe, Grace; Lee, Hyeong-Jong; Hood, Lisa K.
2007-01-01
This mixed method study examined secondary student matriculation to two selected community colleges offering career and technical education (CTE) transition programs through partnerships with K-12 and secondary districts having numerous high schools. The study had two distinct components: (1) a secondary study that compared CTE and non-CTE…
Reliability Assessment for COTS Components in Space Flight Applications
NASA Technical Reports Server (NTRS)
Krishnan, G. S.; Mazzuchi, Thomas A.
2001-01-01
Systems built for space flight applications usually demand a very high degree of performance and a very high level of accuracy. Hence, design engineers are often prone to selecting state-of-the-art technologies for inclusion in their system design. Shrinking budgets also necessitate the use of COTS (Commercial Off-The-Shelf) components, which are construed as being less expensive. The performance and accuracy requirements for space flight applications are much more stringent than those for commercial applications, and the quantity of systems designed and developed for space applications is much lower than that produced for commercial applications. With a given set of requirements, are these COTS components reliable? This paper presents a model for assessing the reliability of COTS components in space applications and the associated effect on system reliability. We illustrate the method with a real application.
NASA Astrophysics Data System (ADS)
Yang, Yang; Li, Xiukun
2016-06-01
Separation of the components of rigid acoustic scattering by underwater objects is essential for obtaining the structural characteristics of such objects. To overcome the problem that rigid structures appear to have the same spectral structure in the time domain, time-frequency Blind Source Separation (BSS) can be used in combination with image morphology to separate the rigid scattering components of different objects. Based on a highlight model, the separation of the rigid scattering structure of objects with a time-frequency distribution is deduced. Using a morphological filter, the different characteristics observed in the Wigner-Ville Distribution (WVD) for single auto-terms and cross-terms can be exploited to remove cross-term interference. By selecting the time and frequency points of the auto-term signal, the accuracy of BSS can be improved. An experimental simulation has been used, with changes in the pulse width of the transmitted signal, the relative amplitude and the time delay parameter, in order to analyze the feasibility of this new method. Simulation results show that the new method is not only able to separate rigid scattering components, but can also separate the components when elastic scattering and rigid scattering exist at the same time. Experimental results confirm that the new method can be used to separate the rigid scattering structure of underwater objects.
Exploration of complex visual feature spaces for object perception
Leeds, Daniel D.; Pyles, John A.; Tarr, Michael J.
2014-01-01
The mid- and high-level visual properties supporting object perception in the ventral visual pathway are poorly understood. In the absence of well-specified theory, many groups have adopted a data-driven approach in which they progressively interrogate neural units to establish each unit's selectivity. Such methods are challenging in that they require search through a wide space of feature models and stimuli using a limited number of samples. To more rapidly identify higher-level features underlying human cortical object perception, we implemented a novel functional magnetic resonance imaging method in which visual stimuli are selected in real-time based on BOLD responses to recently shown stimuli. This work was inspired by earlier primate physiology work, in which neural selectivity for mid-level features in IT was characterized using a simple parametric approach (Hung et al., 2012). To extend such work to human neuroimaging, we used natural and synthetic object stimuli embedded in feature spaces constructed on the basis of the complex visual properties of the objects themselves. During fMRI scanning, we employed a real-time search method to control continuous stimulus selection within each image space. This search was designed to maximize neural responses across a pre-determined 1 cm3 brain region within ventral cortex. To assess the value of this method for understanding object encoding, we examined both the behavior of the method itself and the complex visual properties the method identified as reliably activating selected brain regions. We observed: (1) Regions selective for both holistic and component object features and for a variety of surface properties; (2) Object stimulus pairs near one another in feature space that produce responses at the opposite extremes of the measured activity range. Together, these results suggest that real-time fMRI methods may yield more widely informative measures of selectivity within the broad classes of visual features associated with cortical object representation. PMID:25309408
46 CFR 56.10-1 - Selection and limitations of piping components (replaces 105 through 108).
Code of Federal Regulations, 2010 CFR
2010-10-01
... (CONTINUED) MARINE ENGINEERING PIPING SYSTEMS AND APPURTENANCES Components § 56.10-1 Selection and... piping system components, shall meet material and standard requirements of subpart 56.60 and shall meet...
Acoustic guide for noise-transmission testing of aircraft
NASA Technical Reports Server (NTRS)
Vaicaitis, Rimas (Inventor)
1987-01-01
Selective testing of aircraft or other vehicular components without requiring disassembly of the vehicle or components was accomplished by using a portable guide apparatus. The device consists of a broadband noise source, a guide to direct the acoustic energy, soft sealing insulation to seal the guide to the noise source and to the vehicle component, and noise measurement microphones, both outside the vehicle at the acoustic guide output and inside the vehicle to receive attenuated sound. By directing acoustic energy only to selected components of a vehicle via the acoustic guide, it is possible to test a specific component, such as a door or window, without picking up extraneous noise which may be transmitted to the vehicle interior through other components or structure. This effect is achieved because no acoustic energy strikes the vehicle exterior except at the selected component. Also, since the test component remains attached to the vehicle, component dynamics with vehicle frame are not altered.
Genome-wide selection components analysis in a fish with male pregnancy.
Flanagan, Sarah P; Jones, Adam G
2017-04-01
A major goal of evolutionary biology is to identify the genome-level targets of natural and sexual selection. With the advent of next-generation sequencing, whole-genome selection components analysis provides a promising avenue in the search for loci affected by selection in nature. Here, we implement a genome-wide selection components analysis in the sex-role-reversed Gulf pipefish, Syngnathus scovelli. Our approach involves a double-digest restriction-site associated DNA sequencing (ddRAD-seq) technique, applied to adult females, nonpregnant males, pregnant males, and their offspring. An FST comparison of allele frequencies among these groups reveals 47 genomic regions putatively experiencing sexual selection, as well as 468 regions showing a signature of differential viability selection between males and females. A complementary likelihood ratio test identifies similar patterns in the data as the FST analysis. Sexual selection and viability selection both tend to favor the rare alleles in the population. Ultimately, we conclude that genome-wide selection components analysis can be a useful tool to complement other approaches in the effort to pinpoint genome-level targets of selection in the wild. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
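The allele-frequency comparison can be illustrated with a textbook two-group FST computed per locus; this is only a sketch under the assumption of biallelic loci and equal group weighting, not the paper's exact estimator or its likelihood-ratio counterpart.

```python
import numpy as np

def fst_per_locus(p_group1, p_group2):
    """p_group1, p_group2: reference-allele frequencies per locus in two groups."""
    p1 = np.asarray(p_group1, dtype=float)
    p2 = np.asarray(p_group2, dtype=float)
    p_bar = (p1 + p2) / 2.0
    h_total = 2.0 * p_bar * (1.0 - p_bar)        # pooled expected heterozygosity
    h_within = (2.0 * p1 * (1.0 - p1) + 2.0 * p2 * (1.0 - p2)) / 2.0
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(h_total > 0.0, (h_total - h_within) / h_total, 0.0)

# e.g. pregnant males vs. adult females to screen for signatures of sexual selection
# candidates = np.where(fst_per_locus(p_pregnant_males, p_adult_females) > threshold)[0]
```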
NASA Astrophysics Data System (ADS)
Díaz-Ayil, G.; Amouroux, M.; Blondel, W. C. P. M.; Bourg-Heckly, G.; Leroux, A.; Guillemin, F.; Granjon, Y.
2009-07-01
This paper deals with the development and application of in vivo spatially-resolved bimodal spectroscopy (AutoFluorescence, AF, and Diffuse Reflectance, DR) to discriminate various stages of skin precancer in a preclinical model (UV-irradiated mouse): Compensatory Hyperplasia (CH), Atypical Hyperplasia (AH) and Dysplasia (D). A programmable instrumentation was developed for acquiring AF emission spectra using 7 excitation wavelengths (360, 368, 390, 400, 410, 420 and 430 nm) and DR spectra in the 390-720 nm wavelength range. After various steps of intensity spectra preprocessing (filtering, spectral correction and intensity normalization), several sets of spectral characteristics were extracted and selected based on their discrimination power, statistically tested for every pair-wise comparison of histological classes. Data reduction with Principal Components Analysis (PCA) was performed and 3 classification methods were implemented (k-NN, LDA and SVM) in order to compare the diagnostic performance of each method. Diagnostic performance was assessed in terms of sensitivity (Se) and specificity (Sp) as a function of the selected features, of the combinations of 3 different inter-fiber distances and of the number of principal components, such that Se and Sp ≈ 100% when discriminating CH vs. others; Sp ≈ 100% and Se > 95% when discriminating Healthy vs. AH or D; and Sp ≈ 74% and Se ≈ 63% for AH vs. D.
Dunn, Abe; Liebman, Eli; Rittmueller, Lindsey; Shapiro, Adam Hale
2017-04-01
To provide guidelines to researchers measuring health expenditures by disease and compare these methodologies' implied inflation estimates. A convenience sample of commercially insured individuals over the 2003 to 2007 period from Truven Health. Population weights are applied, based on age, sex, and region, to make the sample of over 4 million enrollees representative of the entire commercially insured population. Different methods are used to allocate medical-care expenditures to distinct condition categories. We compare the estimates of disease-price inflation by method. Across a variety of methods, the compound annual growth rate stays within the range 3.1 to 3.9 percentage points. Disease-specific inflation measures are more sensitive to the selected methodology. The selected allocation method impacts aggregate inflation rates, but considering the variety of methods applied, the differences appear small. Future research is necessary to better understand these differences in other population samples and to connect disease expenditures to measures of quality. © Health Research and Educational Trust.
NASA Astrophysics Data System (ADS)
Nikolić, G. S.; Žerajić, S.; Cakić, M.
2011-10-01
Multivariate calibration is a powerful mathematical tool that can be applied in analytical chemistry when the analytical signals are highly overlapped. A method with regression by partial least squares is proposed for the simultaneous spectrophotometric determination of adrenergic vasoconstrictors in a decongestive solution containing two active components: phenylephrine hydrochloride and trimazoline hydrochloride. These sympathomimetic agents are frequently associated in pharmaceutical formulations against the common cold. The proposed method, which is simple and rapid, offers the advantages of sensitivity and a wide range of determinations without the need for extraction of the vasoconstrictors. In order to minimize the number of factors necessary to obtain the calibration matrix by multivariate calibration, different parameters were evaluated. The adequate selection of the spectral regions proved to be important for the number of factors. In order to simultaneously quantify both hydrochlorides among the excipients, the spectral region between 250 and 290 nm was selected. Recoveries for the vasoconstrictors were 98-101%. The developed method was applied to the assay of two decongestive pharmaceutical preparations.
Depth-Based Selective Blurring in Stereo Images Using Accelerated Framework
NASA Astrophysics Data System (ADS)
Mukherjee, Subhayan; Guddeti, Ram Mohana Reddy
2014-09-01
We propose a hybrid method for stereo disparity estimation by combining block-based and region-based stereo matching approaches. It generates dense depth maps from disparity measurements of only 18% of the image pixels (left or right). The methodology involves segmenting pixel lightness values using a fast K-Means implementation, refining segment boundaries using morphological filtering and connected components analysis, and then determining the boundaries' disparities using the sum of absolute differences (SAD) cost function. Complete disparity maps are reconstructed from the boundaries' disparities. We consider an application of our method for depth-based selective blurring of non-interest regions of stereo images, using Gaussian blur to de-focus users' non-interest regions. Experiments on the Middlebury dataset demonstrate that our method outperforms traditional disparity estimation approaches using SAD and normalized cross correlation by up to 33.6% and some recent methods by up to 6.1%. Further, our method is highly parallelizable using a CPU-GPU framework based on Java Thread Pool and APARAPI, with a speed-up of 5.8 for 250 stereo video frames (4,096 × 2,304).
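The blurring application itself can be sketched in a few lines, assuming a dense disparity map has already been produced (for instance by the boundary-based SAD matching described above): pixels whose disparity falls below a threshold are treated as distant, non-interest regions and defocused. The threshold and blur strength are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def selective_blur(image, disparity, disparity_threshold=20.0, sigma=5.0):
    """image: (rows, cols, 3) float array; disparity: (rows, cols) disparity map.
    Blurs pixels whose disparity is below the threshold (i.e. far from the camera)."""
    mask = disparity < disparity_threshold
    blurred = np.stack(
        [gaussian_filter(image[..., c], sigma) for c in range(image.shape[-1])],
        axis=-1,
    )
    out = image.copy()
    out[mask] = blurred[mask]
    return out
```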
Korany, Mohamed A; Abdine, Heba H; Ragab, Marwa A A; Aboras, Sara I
2015-05-15
This paper discusses a general method for the use of orthogonal polynomials for unequal intervals (OPUI) to eliminate interferences in two-component spectrophotometric analysis. A new approach was developed by using the first derivative (D1) curve, instead of the absorbance curve, for convolution by the OPUI method for the determination of metronidazole (MTR) and nystatin (NYS) in their mixture. After applying derivative treatment to the absorption data, many maxima and minima appear, giving a characteristic shape for each drug and allowing a different number of points to be selected for the OPUI method for each drug. This allows the specific and selective determination of each drug in the presence of the other and of any matrix interference. The method is particularly useful when the two absorption spectra overlap considerably. The results obtained are encouraging and suggest that the method can be widely applied to similar problems. Copyright © 2015 Elsevier B.V. All rights reserved.
Sandia Corporation (Albuquerque, NM)
Ewsuk, Kevin G [Albuquerque, NM; Arguello, Jr., Jose G.
2006-01-31
A method of designing a primary geometry, such as a forming die, for use in a powder pressing application. The geometry is simulated using a combination of axisymmetric geometric shapes, transition radii, and transition spaces, where the shapes can be selected from a predetermined list or menu of axisymmetric shapes, and a finite element mesh is then developed to represent the geometry. This mesh, along with the material properties of the component to be designed and of the powder, is input to a standard deformation finite element code to evaluate the deformation characteristics of the component being designed. The user can develop the geometry interactively with a computer interface in minutes and execute a complete analysis of the deformation characteristics of the simulated component geometry.
Method for fabricating laminated uranium composites
Chapman, L.R.
1983-08-03
The present invention is directed to a process for fabricating laminated composites of uranium or uranium alloys and at least one other metal or alloy. The laminated composites are fabricated by forming a casting of the molten uranium with the other metal or alloy, which is selectively positioned in the casting, and then hot-rolling the casting into a laminated plate in which the casting components are metallurgically bonded to one another to form the composite. The process of the present invention provides strong metallurgical bonds between the laminate components, primarily because the bond-disrupting surface oxides on the uranium or uranium alloy float to the surface of the casting, effectively removing the oxides from the bonding surfaces of the components.
Ordered transport and identification of particles
Shera, E.B.
1993-05-11
A method and apparatus are provided for application of electrical field gradients to induce particle velocities to enable particle sequence and identification information to be obtained. Particle sequence is maintained by providing electroosmotic flow for an electrolytic solution in a particle transport tube. The transport tube and electrolytic solution are selected to provide an electroosmotic radius of >100 so that a plug flow profile is obtained for the electrolytic solution in the transport tube. Thus, particles are maintained in the same order in which they are introduced in the transport tube. When the particles also have known electrophoretic velocities, the field gradients introduce an electrophoretic velocity component onto the electroosmotic velocity. The time that the particles pass selected locations along the transport tube may then be detected and the electrophoretic velocity component calculated for particle identification. One particular application is the ordered transport and identification of labeled nucleotides sequentially cleaved from a strand of DNA.
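A small worked sketch of the velocity arithmetic described above: the observed velocity is recovered from detection times at two known positions, and the electrophoretic component is obtained by subtracting an assumed plug-flow (electroosmotic) velocity. All numbers are illustrative, not values from the patent.

```python
# Transit-time measurement between two selected detector locations
detector_separation_cm = 2.0      # distance between the two detection points
t1_s, t2_s = 10.0, 14.0           # times at which the particle passes each detector
v_observed = detector_separation_cm / (t2_s - t1_s)   # cm/s

v_electroosmotic = 0.40           # assumed bulk plug-flow velocity, cm/s
v_electrophoretic = v_observed - v_electroosmotic     # component used to identify the particle
print(v_observed, v_electrophoretic)
```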
Apparatus for measurements of thermal and optical stimulated exo-electron emission and luminescence
NASA Astrophysics Data System (ADS)
Pokorný, P.; Novotný, M.; Fitl, P.; Zuklín, J.; Vlček, J.; Nikl, J.; Marešová, E.; Hruška, P.; Bulíř, J.; Drahokoupil, J.; Čerňanský, M.; Lančok, J.
2018-06-01
The purpose of the design, construction and implementation of a vacuum apparatus for simultaneously measuring three or more stimulated phenomena in dielectrics, and possibly semiconductors, is to investigate those phenomena as a function of temperature and wavelength. The equipment and its functionality were tested step by step (apparatus, components and control sample), together with calculation of the main physical parameters. The tests of the individual parts of the apparatus clearly confirmed that the design, construction and selected components meet or even exceed the required specifications. On the basis of measurements of a selected sample, it was shown that even weak signals from the material can be detected for both thermally stimulated luminescence and thermally stimulated exo-electron emission; moreover, transmission and desorption can be measured. NaCl:Ni (0.2%) was chosen as the test material. The activation energies and frequency factor were calculated using the methods of different authors.
Creation of system of computer-aided design for technological objects
NASA Astrophysics Data System (ADS)
Zubkova, T. M.; Tokareva, M. A.; Sultanov, N. Z.
2018-05-01
Owing to competition in the market for process equipment, its production should be flexible, re-tuning to various product configurations, raw materials and productivity levels depending on current market needs. This process is not possible without CAD (computer-aided design). The formation of CAD begins with planning. Synthesizing, analyzing, evaluating and converting operations, as well as visualization and decision-making operations, can be automated. Based on a formal description of the design procedures, the design route is constructed in the form of a directed graph. The decomposition of the design process, represented by the formalized description of the design procedures, makes it possible to make an informed choice of the CAD component for the solution of the task. The object-oriented approach allows the CAD system to be considered as an independent system whose properties are inherited from its components. The first step determines the range of tasks to be performed by the system and a set of components for their implementation; the second is the configuration of the selected components. The interaction between the selected components is carried out using the CALS standards. The chosen CAD/CAE-oriented approach allows a single model to be created, which is stored in the database of the subject area. Each of the integration stages is implemented as a separate functional block. The transformation of the CAD model into the internal-representation model is realized by the block that searches for the geometric parameters of the technological machine, in which an XML model of the construction is obtained on the basis of the feature method from the theory of image recognition. The configuration of integrated components is divided into three consecutive steps: configuring tasks, components and interfaces. The configuration of the components is realized using soft computing, specifically the Mamdani fuzzy inference algorithm.
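The sketch below illustrates a generic Mamdani-style fuzzy inference step (triangular memberships, min implication, max aggregation, centroid defuzzification) of the kind named above for component configuration. The input variables, rule base and membership parameters are invented for illustration and are not taken from the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Crisp inputs: task complexity and required productivity (arbitrary 0-10 scales)
complexity, productivity = 6.5, 4.0

# Rule 1: IF complexity is high AND productivity is low  THEN component suitability is low
# Rule 2: IF complexity is low  AND productivity is high THEN component suitability is high
w1 = min(tri(complexity, 5, 10, 15), tri(productivity, -5, 0, 5))
w2 = min(tri(complexity, -5, 0, 5), tri(productivity, 5, 10, 15))

y = np.linspace(0, 10, 501)                     # output universe: component suitability
aggregated = np.maximum(np.minimum(w1, tri(y, 0, 2, 4)),
                        np.minimum(w2, tri(y, 6, 8, 10)))
suitability = np.trapz(aggregated * y, y) / np.trapz(aggregated, y)   # centroid defuzzification
print(round(suitability, 2))
```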
The causal pie model: an epidemiological method applied to evolutionary biology and ecology
Wensink, Maarten; Westendorp, Rudi G J; Baudisch, Annette
2014-01-01
A general concept for thinking about causality facilitates swift comprehension of results, and the vocabulary that belongs to the concept is instrumental in cross-disciplinary communication. The causal pie model has fulfilled this role in epidemiology and could be of similar value in evolutionary biology and ecology. In the causal pie model, outcomes result from sufficient causes. Each sufficient cause is made up of a “causal pie” of “component causes”. Several different causal pies may exist for the same outcome. If and only if all component causes of a sufficient cause are present, that is, a causal pie is complete, does the outcome occur. The effect of a component cause hence depends on the presence of the other component causes that constitute some causal pie. Because all component causes are equally and fully causative for the outcome, the sum of causes for some outcome exceeds 100%. The causal pie model provides a way of thinking that maps into a number of recurrent themes in evolutionary biology and ecology: It charts when component causes have an effect and are subject to natural selection, and how component causes affect selection on other component causes; which partitions of outcomes with respect to causes are feasible and useful; and how to view the composition of a(n apparently homogeneous) population. The diversity of specific results that is directly understood from the causal pie model is a test for both the validity and the applicability of the model. The causal pie model provides a common language in which results across disciplines can be communicated and serves as a template along which future causal analyses can be made. PMID:24963386
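The sufficient-component-cause logic described above can be written down directly: the outcome occurs if and only if at least one causal pie has all of its component causes present. The pies and component labels in this sketch are illustrative, not taken from the paper.

```python
# Each sufficient cause ("causal pie") is a set of component causes.
causal_pies = [
    {"genotype_A", "exposure_X"},             # sufficient cause 1
    {"genotype_A", "exposure_Y", "age_old"},  # sufficient cause 2
]

def outcome_occurs(present_components: set[str]) -> bool:
    # The outcome occurs iff some pie is complete (all of its components are present).
    return any(pie <= present_components for pie in causal_pies)

print(outcome_occurs({"genotype_A", "exposure_X"}))   # True: pie 1 is complete
print(outcome_occurs({"genotype_A", "age_old"}))      # False: no pie is complete
```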
Analysis of the essential oils of Alpiniae Officinarum Hance in different extraction methods
NASA Astrophysics Data System (ADS)
Yuan, Y.; Lin, L. J.; Huang, X. B.; Li, J. H.
2017-09-01
A method was developed for the analysis of the essential oils of Alpiniae Officinarum Hance extracted by steam distillation (SD), ultrasonic-assisted solvent extraction (UAE) and supercritical fluid extraction (SFE), using gas chromatography-mass spectrometry (GC-MS) combined with the retention index (RI) method. Multiple volatile components were identified in the oils obtained by each of the three extraction methods, and each component was quantified by the area normalization method. The results indicated that the content of 1,8-cineole, the index constituent, was similar for SD and SFE and higher than for UAE. Although UAE was less time-consuming and consumed less energy, the oil quality was poorer owing to the use of organic solvents, which are hard to degrade. In addition, some constituents could be obtained by SFE but not by SD. In conclusion, the essential oils obtained by the different extraction methods from the same batch of material proved broadly similar, although there were some differences in composition and component ratios. Therefore, the extraction method must be selected according to the functional requirements of the product.
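Area normalization amounts to expressing each peak area as a percentage of the total peak area, as in the short sketch below. The peak names and area values are invented for illustration; 1,8-cineole appears only because the abstract names it as the index constituent.

```python
# Semi-quantification of GC-MS peaks by area normalization
peak_areas = {"1,8-cineole": 4.2e6, "alpha-terpineol": 1.1e6, "beta-pinene": 0.7e6}
total = sum(peak_areas.values())
for name, area in peak_areas.items():
    print(f"{name}: {100 * area / total:.1f}%")
```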
Improvement of the material and transport component of the system of construction waste management
NASA Astrophysics Data System (ADS)
Kostyshak, Mikhail; Lunyakov, Mikhail
2017-10-01
The relevance of the selected research topic stems from the growth of construction operations and the growth rate of construction and demolition waste. This article considers modern approaches to managing the turnover of construction waste, the sequence of building reconstruction or demolition processes, the information flow of the complete construction and demolition waste turnover cycle, and methods for improving the material and transport component of the construction waste management system. The analysis performed showed that this construction waste management mechanism can increase the efficiency and environmental safety of the branch and its regions.
Statistical learning and selective inference.
Taylor, Jonathan; Tibshirani, Robert J
2015-06-23
We describe the problem of "selective inference." This addresses the following challenge: Having mined a set of data to find potential associations, how do we properly assess the strength of these associations? The fact that we have "cherry-picked"--searched for the strongest associations--means that we must set a higher bar for declaring significant the associations that we see. This challenge becomes more important in the era of big data and complex statistical modeling. The cherry tree (dataset) can be very large and the tools for cherry picking (statistical learning methods) are now very sophisticated. We describe some recent new developments in selective inference and illustrate their use in forward stepwise regression, the lasso, and principal components analysis.
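The cherry-picking problem the authors describe can be shown with a small simulation: when the strongest of many null associations is selected after looking at the data, its naive p-value rejects far more often than the nominal level. The dimensions and level below are illustrative, and the code demonstrates the problem rather than the authors' corrected selective-inference procedures.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p, reps, alpha = 100, 50, 500, 0.05
false_rejections = 0
for _ in range(reps):
    X = rng.standard_normal((n, p))
    y = rng.standard_normal(n)                     # y is independent of every column of X
    r = np.array([stats.pearsonr(X[:, j], y)[0] for j in range(p)])
    j_best = int(np.argmax(np.abs(r)))             # cherry-pick the strongest association
    naive_p = stats.pearsonr(X[:, j_best], y)[1]   # p-value that ignores the selection step
    false_rejections += naive_p < alpha
print("naive false rejection rate:", false_rejections / reps)   # far above 0.05
```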
Multi-focus image fusion based on window empirical mode decomposition
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao
2017-09-01
In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. WEMD is an improved form of bidimensional empirical mode decomposition (BEMD) whose decomposition process uses the adding-window principle, effectively resolving the signal concealment problem. We used WEMD for multi-focus image fusion and formulated different fusion rules for the bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the concept of the sum-modified-Laplacian was used and a scheme based on visual feature contrast was adopted; when choosing the residue coefficients, a pixel value based on local visibility was selected. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with three other fusion methods. The experimental results show that the proposed fusion approach is effective and performs better at fusing multi-focus images than some traditional methods.
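The sketch below illustrates only the focus measure behind the BIMF fusion rule, the sum-modified-Laplacian (SML), used here for simple per-pixel selection between two source images; the WEMD decomposition itself is not reproduced, and the window size and toy images are assumptions.

```python
import numpy as np

def modified_laplacian(img):
    """|2*I - I_up - I_down| + |2*I - I_left - I_right| on the image interior."""
    ml = np.zeros_like(img, dtype=float)
    ml[1:-1, 1:-1] = (np.abs(2 * img[1:-1, 1:-1] - img[:-2, 1:-1] - img[2:, 1:-1]) +
                      np.abs(2 * img[1:-1, 1:-1] - img[1:-1, :-2] - img[1:-1, 2:]))
    return ml

def sml(img, win=3):
    """Sum of the modified Laplacian over a small local window."""
    ml = modified_laplacian(img)
    pad = win // 2
    padded = np.pad(ml, pad, mode="edge")
    out = np.zeros_like(ml)
    for dy in range(win):
        for dx in range(win):
            out += padded[dy:dy + ml.shape[0], dx:dx + ml.shape[1]]
    return out

rng = np.random.default_rng(2)
a, b = rng.random((64, 64)), rng.random((64, 64))     # stand-ins for two source images
fused = np.where(sml(a) >= sml(b), a, b)              # pixel-wise selection by focus measure
print(fused.shape)
```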
Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael
2009-01-01
Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
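A minimal checksum-based ABFT sketch for matrix multiplication follows: row and column checksums are appended before the multiply and verified afterwards, so a single corrupted element of the result can be detected and located. The matrix sizes and the injected error are illustrative; this is not the BITFLIPS code itself.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.random((4, 4)), rng.random((4, 4))

A_ext = np.vstack([A, A.sum(axis=0)])                   # append column-checksum row to A
B_ext = np.hstack([B, B.sum(axis=1, keepdims=True)])    # append row-checksum column to B
C_ext = A_ext @ B_ext                                   # checksums propagate through the product

C_ext[1, 2] += 0.5                                      # simulate a radiation-induced bit flip

# Recompute checksums of the data block and compare with the carried checksums
row_err = np.abs(C_ext[:-1, :-1].sum(axis=0) - C_ext[-1, :-1]) > 1e-8   # flags the bad column
col_err = np.abs(C_ext[:-1, :-1].sum(axis=1) - C_ext[:-1, -1]) > 1e-8   # flags the bad row
print("corrupted element at", (int(np.argmax(col_err)), int(np.argmax(row_err))))
```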
Grahn, A.R.
1993-05-11
A force sensor and related method for determining force components is described. The force sensor includes a deformable medium having a contact surface against which a force can be applied, a signal generator for generating signals that travel through the deformable medium to the contact surface, a signal receptor for receiving the signal reflected from the contact surface, a generation controller, a reception controller, and a force determination apparatus. The signal generator has one or more signal generation regions for generating the signals. The generation controller selects and activates the signal generation regions. The signal receptor has one or more signal reception regions for receiving signals and for generating detections signals in response thereto. The reception controller selects signal reception regions and detects the detection signals. The force determination apparatus measures signal transit time by timing activation and detection and, optionally, determines force components for selected cross-field intersections. The timer which times by activation and detection can be any means for measuring signal transit time. A cross-field intersection is defined by the overlap of a signal generation region and a signal reception region.
Ghate, Madhav R.; Yang, Ralph T.
1987-01-01
The gaseous components of multi-component gases produced by the gasification of coal, including hydrogen, carbon monoxide, methane, and acid gases (carbon dioxide plus hydrogen sulfide), are bulk-separated by selective adsorption using a pressure swing adsorption technique with activated carbon, zeolite, or a combination thereof as the adsorbent. By charging a column containing the adsorbent with a gas mixture and pressurizing the column to a pressure sufficient to cause the adsorption of the gases and then reducing the partial pressure of the contents of the column, the gases are selectively and sequentially desorbed. Hydrogen, the least adsorbable gas of the gaseous mixture, is the first gas to be desorbed and is removed from the column in a co-current direction, followed by the carbon monoxide and methane. With the pressure in the column reduced to about atmospheric pressure, the column is evacuated in a countercurrent direction to remove the acid gases from the column. The present invention is particularly advantageous as a producer of high purity hydrogen from gaseous products of coal gasification and as an acid gas scrubber.
Quality and methodological challenges in Internet-based mental health trials.
Ye, Xibiao; Bapuji, Sunita Bayyavarapu; Winters, Shannon; Metge, Colleen; Raynard, Mellissa
2014-08-01
To review the quality of Internet-based mental health intervention studies and their methodological challenges. We searched multiple literature databases to identify relevant studies according to the Population, Interventions, Comparators, Outcomes, and Study Design framework. Two reviewers independently assessed selection bias, allocation bias, confounding bias, blinding, data collection methods, and withdrawals/dropouts, using the Quality Assessment Tool for Quantitative Studies. We rated each component as strong, moderate, or weak and assigned a global rating (strong, moderate, or weak) to each study. We discussed methodological issues related to the study quality. Of 122 studies included, 31 (25%), 44 (36%), and 47 (39%) were rated strong, moderate, and weak, respectively. Only five studies were rated strong for all of the six quality components (three of them were published by the same group). Lack of blinding, selection bias, and low adherence were the top three challenges in Internet-based mental health intervention studies. The overall quality of Internet-based mental health intervention needs to improve. In particular, studies need to improve sample selection, intervention allocation, and blinding.
NASA Astrophysics Data System (ADS)
Ab. Aziz, Norshakirah; Ahmad, Rohiza; Dhanapal Durai, Dominic
2011-12-01
Limited trust, cooperation and communication have been identified as some of the issues that hinder collaboration among business partners. This is also true of the acceptance of an e-supply chain integrator among organizations involved in the same industry. On top of that, the huge number of components in the supply chain industry makes it impossible to include all supply chain components in the integrator. Hence, this study proposes a method for identifying "trusted" collaborators for inclusion in an e-supply chain integrator. For the purpose of constructing and validating the method, the Malaysian construction industry is chosen as the case study because of its size and importance to the economy. This paper presents the background of the research, the relevant literature that leads to the formulation of trust value elements, data collection from the Malaysian construction supply chain, and an outline of the proposed method for trusted partner selection. Future work is also presented to highlight the next step of this research.
Assessment of technological level of stem cell research using principal component analysis.
Do Cho, Sung; Hwan Hyun, Byung; Kim, Jae Kyeom
2016-01-01
In general, technological levels have been assessed on the basis of specialists' opinions through methods such as Delphi; in such cases, however, results can be significantly biased by the study design and by individual experts. In this study, therefore, scientific literature and patents were selected by means of analytic indexes for a statistical approach to the technological assessment of the stem cell field. The analytic indexes, namely the numbers and impact indexes of scientific publications and patents, were weighted based on principal component analysis and then summed into a single value. Technological obsolescence was calculated through the cited half-life of patents issued by the United States Patent and Trademark Office and was reflected in the technological level assessment. As a result, each nation's rank with respect to technological level was rated by the proposed method, and the corresponding strengths and weaknesses could be evaluated. Although our empirical research presents faithful results, further study is needed to compare the existing methods with the suggested method.
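A sketch of the PCA-weighting step follows: several bibliometric indicators are standardised, weighted by the loadings of the first principal component, and summed into one composite score per country. The country labels and indicator values are entirely invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

countries = ["A", "B", "C", "D", "E"]
# columns: paper count, paper impact index, patent count, patent impact index
X = np.array([[820, 1.4, 310, 0.9],
              [640, 1.1, 450, 1.3],
              [210, 0.8, 120, 0.6],
              [930, 1.6, 520, 1.5],
              [400, 1.0, 200, 0.8]])

Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=1).fit(Z)
weights = np.abs(pca.components_[0])           # loadings of the first principal component
weights /= weights.sum()
scores = Z @ weights                           # single composite value per country
for c, s in sorted(zip(countries, scores), key=lambda t: -t[1]):
    print(c, round(float(s), 2))
```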
NASA Astrophysics Data System (ADS)
Anees, Amir; Khan, Waqar Ahmad; Gondal, Muhammad Asif; Hussain, Iqtadar
2013-07-01
The aim of this work is to make use of the mean of absolute deviation (MAD) method for the evaluation process of substitution boxes used in the advanced encryption standard. In this paper, we use the MAD technique to analyze some popular and prevailing substitution boxes used in encryption processes. In particular, MAD is applied to advanced encryption standard (AES), affine power affine (APA), Gray, Lui J., Residue Prime, S8 AES, SKIPJACK, and Xyi substitution boxes.
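Below is a generic mean-absolute-deviation (MAD) calculation applied to S-box-substituted data, shown only to illustrate the statistic itself; the exact quantity the authors compute for each S-box is not specified here, so this histogram-based reading is an assumption, and the toy S-box is a random byte permutation rather than AES.

```python
import numpy as np

rng = np.random.default_rng(0)
sbox = rng.permutation(256)                     # stand-in byte substitution box
data = rng.integers(0, 256, size=10_000)        # toy plaintext bytes
substituted = sbox[data]

hist = np.bincount(substituted, minlength=256)
expected = data.size / 256                      # uniform (ideal) bin count
mad = np.mean(np.abs(hist - expected))          # mean of absolute deviations from uniform
print(round(float(mad), 2))
```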
Intelligent Visual Input: A Graphical Method for Rapid Entry of Patient-Specific Data
Bergeron, Bryan P.; Greenes, Robert A.
1987-01-01
Intelligent Visual Input (IVI) provides a rapid, graphical method of data entry for both expert system interaction and medical record keeping purposes. Key components of IVI include: a high-resolution graphic display; an interface supportive of rapid selection, i.e., one utilizing a mouse or light pen; algorithm simplification modules; and intelligent graphic algorithm expansion modules. A prototype IVI system, designed to facilitate entry of physical exam findings, is used to illustrate the potential advantages of this approach.
2010-01-01
Background Cluster analysis, and in particular hierarchical clustering, is widely used to extract information from gene expression data. The aim is to discover new classes, or sub-classes, of either individuals or genes. Performing a cluster analysis commonly involves decisions on how to handle missing values, standardize the data and select genes. In addition, pre-processing, involving various types of filtration and normalization procedures, can have an effect on the ability to discover biologically relevant classes. Here we consider cluster analysis in a broad sense and perform a comprehensive evaluation that covers several aspects of cluster analyses, including normalization. Result We evaluated 2780 cluster analysis methods on seven publicly available 2-channel microarray data sets with common reference designs. Each cluster analysis method differed in data normalization (5 normalizations were considered), missing value imputation (2), standardization of data (2), gene selection (19) or clustering method (11). The cluster analyses are evaluated using known classes, such as cancer types, and the adjusted Rand index. The performances of the different analyses vary between the data sets and it is difficult to give general recommendations. However, normalization, gene selection and clustering method are all variables that have a significant impact on the performance. In particular, gene selection is important and it is generally necessary to include a relatively large number of genes in order to get good performance. Selecting genes with high standard deviation or using principal component analysis are shown to be the preferred gene selection methods. Hierarchical clustering using Ward's method, k-means clustering and Mclust are the clustering methods considered in this paper that achieve the highest adjusted Rand index. Normalization can have a significant positive impact on the ability to cluster individuals, and there are indications that background correction is preferable, in particular if the gene selection is successful. However, this is an area that needs to be studied further in order to draw any general conclusions. Conclusions The choice of cluster analysis, and in particular gene selection, has a large impact on the ability to cluster individuals correctly based on expression profiles. Normalization has a positive effect, but the relative performance of different normalizations is an area that needs more research. In summary, although clustering, gene selection and normalization are considered standard methods in bioinformatics, our comprehensive analysis shows that selecting the right methods, and the right combinations of methods, is far from trivial and that much is still unexplored in what is considered to be the most basic analysis of genomic data. PMID:20937082
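A compact sketch of the kind of pipeline compared above: select high-variance genes, cluster samples with Ward's hierarchical method, and score the result against known classes with the adjusted Rand index. The data are simulated stand-ins for normalised expression values, and the gene counts and class structure are assumptions.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
n_per_class, n_genes = 15, 500
classes = np.repeat([0, 1, 2], n_per_class)
X = rng.normal(0, 1, size=(classes.size, n_genes))
X[:, :30] += classes[:, None] * 1.5            # 30 genes carry the class signal

top = np.argsort(X.std(axis=0))[::-1][:50]     # gene selection by standard deviation
labels = AgglomerativeClustering(n_clusters=3, linkage="ward").fit_predict(X[:, top])
print("adjusted Rand index:", round(adjusted_rand_score(classes, labels), 2))
```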
Zu, Chen; Jie, Biao; Liu, Mingxia; Chen, Songcan
2015-01-01
Multimodal classification methods using different modalities of imaging and non-imaging data have recently shown great advantages over traditional single-modality-based ones for diagnosis and prognosis of Alzheimer’s disease (AD), as well as its prodromal stage, i.e., mild cognitive impairment (MCI). However, to the best of our knowledge, most existing methods focus on mining the relationship across multiple modalities of the same subjects, while ignoring the potentially useful relationship across different subjects. Accordingly, in this paper, we propose a novel learning method for multimodal classification of AD/MCI, by fully exploring the relationships across both modalities and subjects. Specifically, our proposed method includes two sequential components, i.e., label-aligned multi-task feature selection and multimodal classification. In the first step, feature selection from multiple modalities is treated as a set of different learning tasks and a group sparsity regularizer is imposed to jointly select a subset of relevant features. Furthermore, to utilize the discriminative information among labeled subjects, a new label-aligned regularization term is added into the objective function of standard multi-task feature selection, where label-alignment means that all multi-modality subjects with the same class labels should be closer in the new feature-reduced space. In the second step, a multi-kernel support vector machine (SVM) is adopted to fuse the selected features from multi-modality data for final classification. To validate our method, we perform experiments on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database using baseline MRI and FDG-PET imaging data. The experimental results demonstrate that our proposed method achieves better classification performance compared with several state-of-the-art methods for multimodal classification of AD/MCI. PMID:26572145
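A minimal multi-kernel SVM sketch follows: one RBF kernel per modality, combined as a weighted sum and passed to an SVM with a precomputed kernel. The simulated "MRI" and "PET" feature blocks, kernel widths and weights are illustrative, and the label-aligned multi-task feature selection step is not reproduced.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 80
y = np.repeat([0, 1], n // 2)
mri = rng.normal(0, 1, (n, 30)) + y[:, None] * 0.8     # modality 1 features (toy)
pet = rng.normal(0, 1, (n, 20)) + y[:, None] * 0.5     # modality 2 features (toy)

train = rng.permutation(n)[:60]
test = np.setdiff1d(np.arange(n), train)

def combined_kernel(Xa_list, Xb_list, weights=(0.6, 0.4), gamma=0.05):
    """Weighted sum of per-modality RBF kernels."""
    return sum(w * rbf_kernel(Xa, Xb, gamma=gamma)
               for w, Xa, Xb in zip(weights, Xa_list, Xb_list))

K_train = combined_kernel([mri[train], pet[train]], [mri[train], pet[train]])
K_test = combined_kernel([mri[test], pet[test]], [mri[train], pet[train]])

clf = SVC(kernel="precomputed").fit(K_train, y[train])
print("test accuracy:", round(clf.score(K_test, y[test]), 2))
```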
Myers, James FM; Rosso, Lula; Watson, Ben J; Wilson, Sue J; Kalk, Nicola J; Clementi, Nicoletta; Brooks, David J; Nutt, David J; Turkheimer, Federico E; Lingford-Hughes, Anne R
2012-01-01
This positron emission tomography (PET) study aimed to further define selectivity of [11C]Ro15-4513 binding to the GABARα5 relative to the GABARα1 benzodiazepine receptor subtype. The impact of zolpidem, a GABARα1-selective agonist, on [11C]Ro15-4513, which shows selectivity for GABARα5, and the nonselective benzodiazepine ligand [11C]flumazenil binding was assessed in humans. Compartmental modelling of the kinetics of [11C]Ro15-4513 time-activity curves was used to describe distribution volume (VT) differences in regions populated by different GABA receptor subtypes. Those with low α5 were best fitted by one-tissue compartment models; and those with high α5 required a more complex model. The heterogeneity between brain regions suggested spectral analysis as a more appropriate method to quantify binding as it does not a priori specify compartments. Spectral analysis revealed that zolpidem caused a significant VT decrease (∼10%) in [11C]flumazenil, but no decrease in [11C]Ro15-4513 binding. Further analysis of [11C]Ro15-4513 kinetics revealed additional frequency components present in regions containing both α1 and α5 subtypes compared with those containing only α1. Zolpidem reduced one component (mean±s.d.: 71%±41%), presumed to reflect α1-subtype binding, but not another (13%±22%), presumed to reflect α5. The proposed method for [11C]Ro15-4513 analysis may allow more accurate selective binding assays and estimation of drug occupancy for other nonselective ligands. PMID:22214903
Nompari, Luca; Orlandini, Serena; Pasquini, Benedetta; Campa, Cristiana; Rovini, Michele; Del Bubba, Massimo; Furlanetto, Sandra
2018-02-01
Bexsero is the first approved vaccine for active immunization of individuals from 2 months of age and older to prevent invasive disease caused by Neisseria meningitidis serogroup B. The active components of the vaccine are Neisseria Heparin Binding Antigen, factor H binding protein, Neisseria adhesin A, produced in Escherichia coli cells by recombinant DNA technology, and Outer Membrane Vesicles (expressing Porin A and Porin B), produced by fermentation of Neisseria meningitidis strain NZ98/254. All the Bexsero active components are adsorbed on aluminum hydroxide and the unadsorbed antigen content is a product critical quality attribute. In this paper the development of a fast, selective and sensitive ultra-high-performance liquid chromatography (UHPLC) method for the determination of the Bexsero antigens in the vaccine supernatant is presented. For the first time in the literature, the Quality by Design (QbD) principles were applied to the development of an analytical method aimed at the quality control of a vaccine product. The UHPLC method was fully developed within the QbD framework, the new paradigm of quality outlined in International Conference on Harmonisation guidelines. Critical method attributes (CMAs) were identified as the capacity factor of Neisseria Heparin Binding Antigen, antigen resolution and peak areas. After a scouting phase, aimed at selecting a suitable and fast UHPLC operative mode for the separation of the vaccine antigens, risk assessment tools were employed to define the critical method parameters to be considered in the screening phase. Screening designs were applied for investigating at first the effects of vial type and sample concentration, and then the effects of injection volume, column type, organic phase starting concentration, ramp time and temperature. Response Surface Methodology pointed out the presence of several significant interaction effects, and with the support of Monte-Carlo simulations led to map out the design space, at a selected probability level, for the desired CMAs. The selected working conditions gave a complete separation of the antigens in about 5 min. Robustness testing was carried out by a multivariate approach and a control strategy was implemented by defining system suitability tests. The method was qualified for the analysis of the Bexsero vaccine. Copyright © 2017 Elsevier B.V. All rights reserved.
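A small sketch of the response-surface and Monte-Carlo logic described above: a quadratic model of a critical method attribute (here a generic resolution) is fitted over two method parameters from a factorial design, and random draws over the parameter ranges estimate the probability of meeting a target. The factor ranges, simulated responses and the 1.5 resolution target are invented, not the paper's conditions.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Factorial design: gradient ramp time (min) and column temperature (deg C)
ramp = np.array([4, 4, 4, 8, 8, 8, 12, 12, 12], float)
temp = np.array([30, 40, 50, 30, 40, 50, 30, 40, 50], float)
resolution = 1.2 + 0.08 * ramp - 0.01 * temp + rng.normal(0, 0.03, ramp.size)  # simulated CMA

X = np.column_stack([ramp, temp])
poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), resolution)

# Monte-Carlo exploration of the parameter space to estimate P(CMA meets the target)
draws = np.column_stack([rng.uniform(4, 12, 20_000), rng.uniform(30, 50, 20_000)])
pred = model.predict(poly.fit_transform(draws))
print("P(resolution >= 1.5):", round(float(np.mean(pred >= 1.5)), 3))
```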
Sohrabi, Mahmoud Reza; Tayefeh Zarkesh, Mahshid
2014-05-01
In the present paper, two spectrophotometric methods based on signal processing are proposed for the simultaneous determination of the two components of an anti-HIV combination, lamivudine (LMV) and zidovudine (ZDV). The proposed methods are applied to synthetic binary mixtures and commercial pharmaceutical tablets without the need for any chemical separation procedures. The developed methods are based on the application of the Continuous Wavelet Transform (CWT) and Derivative Spectrophotometry (DS) combined with the zero-crossing point technique. The Daubechies (db5) wavelet family (242 nm) and the Dmey wavelet family (236 nm) were found to give the best results under optimum conditions for the simultaneous analysis of lamivudine and zidovudine, respectively. In addition, the first-derivative absorption spectra were selected for the determination of lamivudine and zidovudine at 266 nm and 248 nm, respectively. The presented methods were validated by assaying various synthetic mixtures of the components. Mean recovery values were found to be 100.31% and 100.2% for CWT and 99.42% and 97.37% for DS, for the determination of LMV and ZDV respectively. The results obtained from analyzing the real samples by the proposed methods were compared to the HPLC reference method. A one-way ANOVA test at the 95% confidence level was applied to the results. The statistical data from comparing the proposed methods with the reference method showed no significant differences. Copyright © 2014 Elsevier B.V. All rights reserved.
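The zero-crossing idea behind the derivative method can be sketched as follows: in a binary mixture, the first-derivative amplitude read at the wavelength where one component's derivative crosses zero depends only on the other component. The band shapes, positions and wavelength grid are synthetic, and the CWT variant is not reproduced.

```python
import numpy as np

wl = np.linspace(220, 300, 801)
def band(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

a_lmv, a_zdv = band(271, 11), band(266, 9)            # stand-in component spectra
d_lmv = np.gradient(a_lmv, wl)                        # first-derivative curve of component 1

# Zero-crossing of component 1's derivative (searched away from the spectrum edges)
interior = slice(100, 701)
zc = wl[interior][np.argmin(np.abs(d_lmv[interior]))]

for c1, c2 in [(1.0, 0.5), (1.0, 1.5)]:               # two mixtures with the same c1
    d_mix = np.gradient(c1 * a_lmv + c2 * a_zdv, wl)
    amp = np.interp(zc, wl, d_mix)                     # amplitude responds to c2 only
    print(round(float(zc), 1), round(float(amp), 4))
```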
Neveu, Pascaline; Priot, Anne-Emmanuelle; Philippe, Matthieu; Fuchs, Philippe; Roumes, Corinne
2015-09-01
Several tests are available to optometrists for investigating accommodation and vergence. This study sought to investigate the agreement between clinical and laboratory methods and to clarify which components are actually measured when tonic and cross-link of accommodation and vergence are assessed. Tonic vergence, tonic accommodation, accommodative vergence (AC/A) and vergence accommodation (CA/C) were measured using several tests. Clinical tests were compared to the laboratory assessment, the latter being regarded as an absolute reference. The repeatability of each test and the degree of agreement between the tests were quantified using Bland-Altman analysis. The values obtained for each test were found to be stable across repetitions; however, in most cases, significant differences were observed between tests supposed to measure the same oculomotor component. Tonic and cross-link components cannot be easily assessed because proximal and instrumental responses interfere with the assessment. Other components interfere with oculomotor assessment. Specifically, accommodative divergence interferes with tonic vergence estimation and the type of accommodation considered in the AC/A ratio affects its magnitude. Results on clinical tonic accommodation and clinical CA/C show that further investigation is needed to clarify the limitations associated with the use of difference of Gaussian as visual targets to open the accommodative loop. Although different optometric tests of accommodation and vergence rely on the same basic principles, the results of this study indicate that clinical and laboratory methods actually involve distinct components. These differences, which are induced by methodological choices, must be taken into account, when comparing studies or when selecting a test to investigate a particular oculomotor component. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.
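A minimal Bland-Altman agreement sketch of the kind used above to compare a clinical test with the laboratory reference follows: mean difference (bias) and 95% limits of agreement. The paired measurements are simulated stand-ins, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
laboratory = rng.normal(4.0, 1.0, 30)                 # reference measurements (e.g. AC/A-like values)
clinical = laboratory + rng.normal(0.3, 0.5, 30)      # clinical test with assumed bias and noise

diff = clinical - laboratory
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                         # half-width of the 95% limits of agreement
print(f"bias = {bias:.2f}, 95% limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}]")
```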
Large forging manufacturing process
Thamboo, Samuel V.; Yang, Ling
2002-01-01
A process for forging large components of Alloy 718 material so that the components do not exhibit abnormal grain growth includes the steps of: a) providing a billet with an average grain size between ASTM 0 and ASTM 3; b) heating the billet to a temperature of between 1750.degree. F. and 1800.degree. F.; c) upsetting the billet to obtain a component part with a minimum strain of 0.125 in at least selected areas of the part; d) reheating the component part to a temperature between 1750.degree. F. and 1800.degree. F.; e) upsetting the component part to a final configuration such that said selected areas receive no strains between 0.01 and 0.125; f) solution treating the component part at a temperature of between 1725.degree. F. and 1750.degree. F.; and g) aging the component part over predetermined times at different temperatures. A modified process achieves abnormal grain growth in selected areas of a component where desirable.
Integrated fluorescence analysis system
Buican, Tudor N.; Yoshida, Thomas M.
1992-01-01
An integrated fluorescence analysis system enables a component part of a sample to be virtually sorted within a sample volume after a spectrum of the component part has been identified from a fluorescence spectrum of the entire sample in a flow cytometer. Birefringent optics enables the entire spectrum to be resolved into a set of numbers representing the intensity of spectral components of the spectrum. One or more spectral components are selected to program a scanning laser microscope, preferably a confocal microscope, whereby the spectrum from individual pixels or voxels in the sample can be compared. Individual pixels or voxels containing the selected spectral components are identified and an image may be formed to show the morphology of the sample with respect to only those components having the selected spectral components. There is no need for any physical sorting of the sample components to obtain the morphological information.
Knowledge-based reusable software synthesis system
NASA Technical Reports Server (NTRS)
Donaldson, Cammie
1989-01-01
The Eli system, a knowledge-based reusable software synthesis system, is being developed for NASA Langley under a Phase 2 SBIR contract. Named after Eli Whitney, the inventor of interchangeable parts, Eli assists engineers of large-scale software systems in reusing components while they are composing their software specifications or designs. Eli will identify reuse potential, search for components, select component variants, and synthesize components into the developer's specifications. The Eli project began as a Phase 1 SBIR to define a reusable software synthesis methodology that integrates reusability into the top-down development process and to develop an approach for an expert system to promote and accomplish reuse. The objectives of the Eli Phase 2 work are to integrate advanced technologies to automate the development of reusable components within the context of large system developments, to integrate with user development methodologies without significant changes in method or learning of special languages, and to make reuse the easiest operation to perform. Eli will try to address a number of reuse problems including developing software with reusable components, managing reusable components, identifying reusable components, and transitioning reuse technology. Eli is both a library facility for classifying, storing, and retrieving reusable components and a design environment that emphasizes, encourages, and supports reuse.
Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li
2011-01-01
Background Support vector machine (SVM) has been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM versus linear one. Here, a more effective non-linear SVM using radial basis function (RBF) kernel is compared with linear SVM. Different from traditional studies, which focused merely on either the evaluation of different types of SVM or the voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification together with voxel selection schemes on classification accuracy and computation time. Methodology/Principal Findings Six different voxel selection methods were employed to decide which voxels of fMRI data would be included in SVM classifiers with linear and RBF kernels in classifying 4-category objects. Then the overall performances of voxel selection and classification methods were compared. Results showed that: (1) Voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low dimensional feature space, RBF SVM outperformed linear SVM significantly; in a relatively high dimensional space, linear SVM performed better than its counterpart; (2) Considering classification accuracy and computation time holistically, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) could achieve better accuracy in less time. Conclusions/Significance The present work provides the first empirical result of linear and RBF SVM in classification of fMRI data, combined with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are two suggested solutions; if users are more concerned about computational time, RBF SVM with a relatively small set of voxels, when part of the principal components are kept as features, is a better choice. PMID:21359184
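A compact sketch of this kind of comparison: "voxels" (features) are selected either by a univariate score or by PCA, and linear and RBF-kernel SVMs are then scored by cross-validation. The simulated data, feature counts and SVM settings are illustrative and far smaller than real fMRI experiments.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, n_voxels = 120, 2000
y = np.repeat([0, 1, 2, 3], n // 4)                    # 4-category objects
X = rng.normal(0, 1, (n, n_voxels))
X[:, :40] += np.eye(4)[y].repeat(10, axis=1) * 0.8     # 40 informative "voxels"

pipelines = {
    "linear SVM, 500 voxels": make_pipeline(StandardScaler(),
                                            SelectKBest(f_classif, k=500),
                                            SVC(kernel="linear", C=1.0)),
    "RBF SVM, 20 PCs": make_pipeline(StandardScaler(),
                                     PCA(n_components=20),
                                     SVC(kernel="rbf", C=1.0, gamma="scale")),
}
for name, pipe in pipelines.items():
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```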