DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Yong Joon; Yoo, Jun Soo; Smith, Curtis Lee
2015-09-01
This INL plan comprehensively describes the Requirements Traceability Matrix (RTM) for the main physics and numerical methods of RELAP-7. The plan also describes the testing-based software verification and validation (SV&V) process—a set of specially designed software models used to test RELAP-7.
Establishing Quantitative Software Metrics in Department of the Navy Programs
2016-04-01
Quality characteristics to metrics dependency matrix...In accomplishing this goal, a need exists for a formalized set of software quality metrics. This document establishes the validity of those necessary
Adolescent Personality: A Five-Factor Model Construct Validation
ERIC Educational Resources Information Center
Baker, Spencer T.; Victor, James B.; Chambers, Anthony L.; Halverson, Jr., Charles F.
2004-01-01
The purpose of this study was to investigate convergent and discriminant validity of the five-factor model of adolescent personality in a school setting using three different raters (methods): self-ratings, peer ratings, and teacher ratings. The authors investigated validity through a multitrait-multimethod matrix and a confirmatory factor…
Study on the Algorithm of Judgment Matrix in Analytic Hierarchy Process
NASA Astrophysics Data System (ADS)
Lu, Zhiyong; Qin, Futong; Jin, Yican
2017-10-01
A new algorithm is proposed for handling non-consistent judgment matrices in the Analytic Hierarchy Process (AHP). First, a primary judgment matrix is generated by pre-ordering the targeted factor set, and a compared matrix is built through the top integral function. A relative error matrix is then created by comparing the compared matrix with the primary judgment matrix, which is adjusted step by step under the control of the relative error matrix and the dissimilarity degree of the matrix. Finally, the targeted judgment matrix is generated so as to satisfy the consistency requirement with the least dissimilarity degree. The feasibility and validity of the proposed method are verified by simulation results.
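The consistency requirement discussed in this abstract can be made concrete. In conventional AHP (not the authors' new algorithm, but the standard check it builds on), a judgment matrix is accepted when its consistency ratio CR = CI / RI falls below about 0.1, where CI is derived from the principal eigenvalue and RI is Saaty's random index. A minimal sketch:

```python
import numpy as np

# Saaty's random consistency index, keyed by matrix order n.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def consistency_ratio(A):
    """Consistency ratio CR = CI / RI for a positive reciprocal judgment matrix A."""
    n = A.shape[0]
    lam_max = max(np.linalg.eigvals(A).real)  # principal (Perron) eigenvalue
    ci = (lam_max - n) / (n - 1)              # consistency index
    return ci / RI[n]
```

A perfectly consistent matrix (a_ij = w_i / w_j for some weight vector w) has lam_max = n and therefore CR = 0; strongly contradictory pairwise judgments push CR well above the usual 0.1 acceptance threshold.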
A unique set of micromechanics equations for high temperature metal matrix composites
NASA Technical Reports Server (NTRS)
Hopkins, D. A.; Chamis, C. C.
1985-01-01
A unique set of micromechanics equations is presented for high temperature metal matrix composites. The set includes expressions to predict mechanical properties, thermal properties and constituent microstresses for the unidirectional fiber-reinforced ply. The equations are derived from a mechanics-of-materials formulation assuming a square-array unit cell model of a single fiber, surrounding matrix, and an interphase that accounts for the chemical reaction which commonly occurs between fiber and matrix. A three-dimensional finite element analysis was used to perform a preliminary validation of the equations. Excellent agreement between properties predicted using the micromechanics equations and properties simulated by the finite element analyses is demonstrated. Implementation of the micromechanics equations as part of an integrated computational capability for nonlinear structural analysis of high temperature multilayered fiber composites is illustrated.
Prediction of Metastasis Using Second Harmonic Generation
2016-07-01
extracellular matrix through which metastasizing cells must travel. We and others have demonstrated that tumor collagen structure, as measured with the...algorithm using separate training and validation sets, etc. Keywords: metastasis, overtreatment, extracellular matrix, collagen, second harmonic...optical process called second harmonic generation (SHG), influences tumor metastasis. This suggests that collagen structure may provide prognostic
NASA Astrophysics Data System (ADS)
Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin; Yang, Zuogang; Du, Jinglei
2018-06-01
A method of selecting appropriate singular values of the transmission matrix to improve the precision of incident wavefront retrieval in focusing light through scattering media is proposed. The optimal singular values selected by this method effectively reduce the ill-conditioning of the transmission matrix, which means that the incident wavefront retrieved from the optimal set of singular values is more accurate than that retrieved from other sets of singular values. The validity of this method is verified by numerical simulation and by actual measurements of the incident wavefront of coherent light through ground glass.
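The principle of keeping only well-behaved singular values has a compact linear-algebra analogue. The sketch below is illustrative only (a plain truncated-SVD pseudoinverse, not the authors' selection rule): the transmission matrix is inverted using only its k largest singular values, so the ill-conditioned directions that amplify noise are simply discarded.

```python
import numpy as np

def truncated_svd_retrieve(T, y, k):
    """Estimate the incident field x from y = T @ x using only the k largest
    singular values of T; small singular values are dropped to tame
    the ill-conditioning of the transmission matrix."""
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])
```

With k equal to the full rank of a well-conditioned T this reduces to an exact solve; shrinking k trades fidelity for noise robustness.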
Visualisation Enhancement of HoloCatT Matrix
NASA Astrophysics Data System (ADS)
Rosli, Nor Azlin; Mohamed, Azlinah; Khan, Rahmattullah
Graphology and personality psychology are two different analysis approaches performed by two different groups of people, yet both address the personality of the person being analyzed. It is of interest to build a system that would aid personality identification given information visualization of these two domains. Therefore, research identifying the relationship between the two domains has been carried out by producing the HoloCatT Matrix, a combination of graphology features and a selected personality traits approach. The objectives of this research are to identify new features for the existing HoloCatT Matrix and to validate the new version of the matrix with two related groups of experts. A set of questionnaires was distributed to a group of personologists to identify the relationship, and an interview was conducted with a graphologist to validate the matrix. Based on the analysis, 87.5% of the relations were confirmed by both groups of experts, and subsequently the third version of the HoloCatT Matrix was obtained.
Degree of coupling and efficiency of energy converters far-from-equilibrium
NASA Astrophysics Data System (ADS)
Vroylandt, Hadrien; Lacoste, David; Verley, Gatien
2018-02-01
In this paper, we introduce a real symmetric and positive semi-definite matrix, which we call the non-equilibrium conductance matrix, and which generalizes the Onsager response matrix for a system in a non-equilibrium stationary state. We then express the thermodynamic efficiency in terms of the coefficients of this matrix using a parametrization similar to the one used near equilibrium. This framework, valid arbitrarily far from equilibrium, allows us to set bounds on the thermodynamic efficiency by a universal function depending only on the degree of coupling between input and output currents. It also leads to new general power-efficiency trade-offs valid for macroscopic machines, which are compared to trade-offs previously obtained from uncertainty relations. We illustrate our results on a unicycle heat-to-heat converter and on a discrete model of a molecular motor.
The High School & Beyond Data Set: Academic Self-Concept Measures.
ERIC Educational Resources Information Center
Strein, William
A series of confirmatory factor analyses using both LISREL VI (maximum likelihood method) and LISCOMP (weighted least squares method using covariance matrix based on polychoric correlations) and including cross-validation on independent samples were applied to items from the High School and Beyond data set to explore the measurement…
A Method of Q-Matrix Validation for the Linear Logistic Test Model
Baghaei, Purya; Hohensinn, Christine
2017-01-01
The linear logistic test model (LLTM) is a well-recognized psychometric model for examining the components of difficulty in cognitive tests and validating construct theories. The plausibility of the construct model, summarized in a matrix of weights, known as the Q-matrix or weight matrix, is tested by (1) comparing the fit of LLTM with the fit of the Rasch model (RM) using the likelihood ratio (LR) test and (2) by examining the correlation between the Rasch model item parameters and LLTM reconstructed item parameters. The problem with the LR test is that it is almost always significant and, consequently, LLTM is rejected. The drawback of examining the correlation coefficient is that there is no cut-off value or lower bound for the magnitude of the correlation coefficient. In this article we suggest a simulation method to set a minimum benchmark for the correlation between item parameters from the Rasch model and those reconstructed by the LLTM. If the cognitive model is valid then the correlation coefficient between the RM-based item parameters and the LLTM-reconstructed item parameters derived from the theoretical weight matrix should be greater than those derived from the simulated matrices. PMID:28611721
Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards
2013-01-01
Kernel methods have difficulties scaling to large modern data sets. The scalability issues stem from the computational and memory requirements of working with a large matrix. These requirements have been addressed over the years by using low-rank kernel approximations or by improving the solvers' scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model, and the overall computational complexity associated with tuning hyperparameters, are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real-world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.
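The O(n log n) claim rests on a standard fact: the discrete Fourier transform diagonalizes circulant matrices, so a linear system with a circulant coefficient matrix can be solved with a pair of FFTs. The following sketch shows that basic building block only (a single-level circulant solve, not the paper's multi-level kernel approximation):

```python
import numpy as np

def circulant_solve(c, b):
    """Solve C @ x = b where C is the circulant matrix whose first column is c.
    Since C = F^{-1} diag(fft(c)) F, the whole solve costs O(n log n)."""
    return np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))
```

Contrast this with the O(n^3) cost of a dense factorization, which is exactly the bottleneck the multi-level circulant approximation is designed to avoid.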
NASA Astrophysics Data System (ADS)
Qin, Xulei; Cong, Zhibin; Fei, Baowei
2013-11-01
An automatic segmentation framework is proposed to segment the right ventricle (RV) in echocardiographic images. The method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining sparse matrix transform, a training model, and a localized region-based level set. First, the sparse matrix transform extracts main motion regions of the myocardium as eigen-images by analyzing the statistical information of the images. Second, an RV training model is registered to the eigen-images in order to locate the position of the RV. Third, the training model is adjusted and then serves as an optimized initialization for the segmentation of each image. Finally, based on the initializations, a localized, region-based level set algorithm is applied to segment both epicardial and endocardial boundaries in each echocardiograph. Three evaluation methods were used to validate the performance of the segmentation framework. The Dice coefficient measures the overall agreement between the manual and automatic segmentation. The absolute distance and the Hausdorff distance between the boundaries from manual and automatic segmentation were used to measure the accuracy of the segmentation. Ultrasound images of human subjects were used for validation. For the epicardial and endocardial boundaries, the Dice coefficients were 90.8 ± 1.7% and 87.3 ± 1.9%, the absolute distances were 2.0 ± 0.42 mm and 1.79 ± 0.45 mm, and the Hausdorff distances were 6.86 ± 1.71 mm and 7.02 ± 1.17 mm, respectively. The automatic segmentation method based on a sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.
Validating a Monotonically-Integrated Large Eddy Simulation Code for Subsonic Jet Acoustics
NASA Technical Reports Server (NTRS)
Ingraham, Daniel; Bridges, James
2017-01-01
The results of subsonic jet validation cases for the Naval Research Lab's Jet Engine Noise REduction (JENRE) code are reported. Two set points from the Tanna matrix, set point 3 (Ma = 0.5, unheated) and set point 7 (Ma = 0.9, unheated), are attempted on three different meshes. After a brief discussion of the JENRE code and the meshes constructed for this work, the turbulent statistics for the axial velocity are presented and compared to experimental data, with favorable results. Preliminary simulations for set point 23 (Ma = 0.5, Tj/T1 = 1.764) on one of the meshes are also described. Finally, the proposed configuration for far-field noise prediction with JENRE's Ffowcs Williams-Hawkings solver is detailed.
Robert, Christelle; Brasseur, Pierre-Yves; Dubois, Michel; Delahaut, Philippe; Gillard, Nathalie
2016-08-01
A new multi-residue method for the analysis of veterinary drugs, namely amoxicillin, chlortetracycline, colistins A and B, doxycycline, fenbendazole, flubendazole, ivermectin, lincomycin, oxytetracycline, sulfadiazine, tiamulin, tilmicosin and trimethoprim, was developed and validated for feed. After acidic extraction, the samples were centrifuged, purified by SPE and analysed by ultra-high-performance liquid chromatography coupled to tandem mass spectrometry. Quantitative validation was done in accordance with the guidelines laid down in European Commission Decision 2002/657/CE. Matrix-matched calibration with internal standards was used to reduce matrix effects. The target level was set at the authorised carryover level (1%) and validation levels were set at 0.5%, 1% and 1.5%. Method performance was evaluated by the following parameters: linearity (0.986 < R(2) < 0.999), precision (repeatability < 12.4% and reproducibility < 14.0%), accuracy (89% < recovery < 107%), sensitivity, decision limit (CCα), detection capability (CCβ), selectivity and expanded measurement uncertainty (k = 2). This method has been used successfully for three years for routine monitoring of antibiotic residues in feeds, during which period 20% of samples were found to exceed the 1% authorised carryover limit and were deemed non-compliant.
NASA Astrophysics Data System (ADS)
Qin, Xulei; Cong, Zhibin; Halig, Luma V.; Fei, Baowei
2013-03-01
An automatic framework is proposed to segment the right ventricle in ultrasound images. The method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining a sparse matrix transform (SMT), a training model, and a localized region-based level set. First, the sparse matrix transform extracts the main motion regions of the myocardium as eigenimages by analyzing the statistical information of the images. Second, a training model of the right ventricle is registered to the extracted eigenimages in order to automatically detect the location of the right ventricle and the corresponding transform relationship between the training model and the SMT-extracted results in the series. Third, the training model is then adjusted as an adapted initialization for the segmentation of each image in the series. Finally, based on the adapted initializations, a localized region-based level set algorithm is applied to segment both epicardial and endocardial boundaries of the right ventricle from the whole series. Experimental results from real subject data validated the performance of the proposed framework in segmenting the right ventricle from echocardiography. The mean Dice scores for the epicardial and endocardial boundaries are 89.1%+/-2.3% and 83.6%+/-7.3%, respectively. The automatic segmentation method based on sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.
NASA Astrophysics Data System (ADS)
Dinç, Erdal; Ertekin, Zehra Ceren; Büker, Eda
2017-09-01
In this study, excitation-emission matrix datasets with strongly overlapping bands were processed using four chemometric calibration algorithms, parallel factor analysis, Tucker3, three-way partial least squares and unfolded partial least squares, for the simultaneous quantitative estimation of valsartan and amlodipine besylate in tablets. No preliminary separation step was used before applying these approaches to the analysis of the related drug substances in samples. A three-way excitation-emission matrix data array was obtained by concatenating the excitation-emission matrices of the calibration set, validation set and commercial tablet samples. This data array was used to build the parallel factor analysis, Tucker3, three-way partial least squares and unfolded partial least squares calibrations and to predict the amounts of valsartan and amlodipine besylate in the samples. For all methods, calibration and prediction of valsartan and amlodipine besylate were performed in the working concentration range of 0.25-4.50 μg/mL. The validity and performance of all the proposed methods were checked using validation parameters. From the analysis results, it was concluded that the described two-way and three-way algorithmic methods are very useful for the simultaneous quantitative resolution and routine analysis of the related drug substances in marketed samples.
The effect of leverage and/or influential on structure-activity relationships.
Bolboacă, Sorana D; Jäntschi, Lorentz
2013-05-01
In the spirit of reporting valid and reliable Quantitative Structure-Activity Relationship (QSAR) models, the aim of our research was to assess how the leverage (analysis with the hat matrix, h(i)) and the influence (analysis with Cook's distance, D(i)) of compounds in QSAR models may reflect the models' reliability and characteristics. The datasets included in this research were collected from previously published papers. Seven datasets that met the imposed inclusion criteria were analyzed. Three models were obtained for each dataset (full model, h(i)-model and D(i)-model) and several statistical validation criteria were applied to the models. In 5 out of 7 sets the correlation coefficient increased when compounds with either h(i) or D(i) higher than the threshold were removed. Withdrawn compounds varied from 2 to 4 for h(i)-models and from 1 to 13 for D(i)-models. Validation statistics showed that D(i)-models systematically possess better agreement than both full models and h(i)-models. Removal of influential compounds from the training set significantly improves the model and is recommended in the process of developing quantitative structure-activity relationships. The Cook's distance approach should be combined with hat matrix analysis in order to identify compounds that are candidates for removal.
Genomic predictability of single-step GBLUP for production traits in US Holstein
USDA-ARS?s Scientific Manuscript database
The objective of this study was to validate genomic predictability of single-step genomic BLUP for 305-day protein yield for US Holsteins. The genomic relationship matrix was created with the Algorithm of Proven and Young (APY) with 18,359 core animals. The full data set consisted of phenotypes coll...
Ongay, Sara; Hendriks, Gert; Hermans, Jos; van den Berge, Maarten; ten Hacken, Nick H T; van de Merbel, Nico C; Bischoff, Rainer
2014-01-24
In spite of the data suggesting the potential of urinary desmosine (DES) and isodesmosine (IDS) as biomarkers for elevated lung elastic fiber turnover, further validation in large-scale studies of COPD populations, as well as the analysis of longitudinal samples, is required. Validated analytical methods that allow the accurate and precise quantification of DES and IDS in human urine are mandatory in order to properly evaluate the outcome of such clinical studies. In this work, we present the development and full validation of two methods that allow DES and IDS measurement in human urine, one for the free and one for the total (free+peptide-bound) forms. To this end we compared the two principal approaches that are used for the absolute quantification of endogenous compounds in biological samples, analysis against calibrators containing authentic analyte in surrogate matrix or containing surrogate analyte in authentic matrix. The validated methods were employed for the analysis of a small set of samples including healthy never-smokers, healthy current-smokers and COPD patients. This is the first time that the analysis of urinary free DES, free IDS, total DES, and total IDS has been fully validated and that the surrogate analyte approach has been evaluated for their quantification in biological samples. Results indicate that the presented methods have the necessary quality and level of validation to assess the potential of urinary DES and IDS levels as biomarkers for the progression of COPD and the effect of therapeutic interventions. Copyright © 2014 Elsevier B.V. All rights reserved.
Jia, Hongjun; Martinez, Aleix M
2009-05-01
The task of finding a low-rank (r) matrix that best fits an original data matrix of higher rank is a recurring problem in science and engineering. The problem becomes especially difficult when the original data matrix has some missing entries and contains an unknown additive noise term in the remaining elements. The former problem can be solved by concatenating a set of r-column matrices that share a common single r-dimensional solution space. Unfortunately, the number of possible submatrices is generally very large and, hence, the results obtained with one set of r-column matrices will generally differ from those captured by a different set. Ideally, we would like to find the solution that is least affected by noise. This requires that we determine which of the r-column matrices (i.e., which of the original feature points) are less influenced by the unknown noise term. This paper presents a criterion to successfully carry out such a selection. Our key result is to formally prove that the more distinct the r vectors of the r-column matrices are, the less they are swayed by noise. This key result is then combined with the use of a noise model to derive an upper bound for the effect that noise and occlusions have on each of the r-column matrices. It is shown how this criterion can be effectively used to recover the noise-free matrix of rank r. Finally, we derive the affine and projective structure-from-motion (SFM) algorithms using the proposed criterion. Extensive validation on synthetic and real data sets shows the superiority of the proposed approach over the state of the art.
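In the idealized setting with no missing entries, the best rank-r fit has a closed form given by the Eckart-Young theorem: truncate the singular value decomposition. The sketch below shows only that textbook baseline, which the paper's column-selection criterion refines for data with occlusions and noise:

```python
import numpy as np

def best_rank_r(A, r):
    """Best rank-r approximation of A in the Frobenius norm (Eckart-Young):
    keep the r largest singular values and their singular vectors."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]
```

When A already has rank r, the truncation reproduces it exactly; otherwise it is the closest rank-r matrix in both the Frobenius and spectral norms.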
Robust validation of approximate 1-matrix functionals with few-electron harmonium atoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cioslowski, Jerzy, E-mail: jerzy@wmf.univ.szczecin.pl; Piris, Mario; Matito, Eduard
2015-12-07
A simple comparison between the exact and approximate correlation components U of the electron-electron repulsion energy of several states of few-electron harmonium atoms with varying confinement strengths provides a stringent validation tool for 1-matrix functionals. The robustness of this tool is clearly demonstrated in a survey of 14 known functionals, which reveals their substandard performance within different electron correlation regimes. Unlike spot-testing that employs dissociation curves of diatomic molecules or more extensive benchmarking against experimental atomization energies of molecules comprising some standard set, the present approach not only uncovers the flaws and patent failures of the functionals but, even more importantly, also allows for pinpointing their root causes. Since the approximate values of U are computed at exact 1-densities, the testing requires minimal programming and thus is particularly suitable for rapid screening of new functionals.
Identifying FGA peptides as nasopharyngeal carcinoma-associated biomarkers by magnetic beads.
Tao, Ya-Lan; Li, Yan; Gao, Jin; Liu, Zhi-Gang; Tu, Zi-Wei; Li, Guo; Xu, Bing-Qing; Niu, Dao-Li; Jiang, Chang-Bin; Yi, Wei; Li, Zhi-Qiang; Li, Jing; Wang, Yi-Ming; Cheng, Zhi-Bin; Liu, Qiao-Dan; Bai, Li; Zhang, Chun; Zhang, Jing-Yu; Zeng, Mu-Sheng; Xia, Yun-Fei
2012-07-01
Early diagnosis and treatment are known to improve prognosis for nasopharyngeal carcinoma (NPC). This study determined specific peptide profiles by comparing serum differences between NPC patients and healthy controls, and provided the basis for a diagnostic model and the identification of specific biomarkers of NPC. Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) can be used to detect the molecular mass of peptides. Mass spectra of peptides were generated after extraction and purification, using weak cationic-exchanger magnetic beads, of 40 NPC samples in the training set, 21 in the single-center validation set and 99 in the multicenter validation set. The spectra were analyzed statistically using FlexAnalysis™ and ClinProt™ bioinformatics software. The four most significant peaks were selected to train a genetic algorithm model to diagnose NPC. The diagnostic sensitivity and specificity were 100% and 100% in the training set, 90.5% and 88.9% in the single-center validation set, and 91.9% and 83.3% in the multicenter validation set, and the false positive rate (FPR) and false negative rate (FNR) were markedly lower in the NPC group (FPR, 16.7%; FNR, 8.1%) than in the other-cancer group (FPR, 39%; FNR, 61%). Thus, the diagnostic model comprising four peptides is suitable for NPC but not for other cancers. The FGA peptide fragments identified may serve as tumor-associated biomarkers for NPC. Copyright © 2012 Wiley Periodicals, Inc.
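The sensitivity, specificity, FPR and FNR figures reported in abstracts like this one are all ratios of confusion-matrix counts, and they pair up as complements (FPR = 1 - specificity, FNR = 1 - sensitivity). A minimal helper with illustrative counts makes the relationships explicit:

```python
def diagnostic_rates(tp, fn, fp, tn):
    """Standard diagnostic rates from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # fraction of true cases detected
    specificity = tn / (tn + fp)  # fraction of controls correctly cleared
    fpr = fp / (fp + tn)          # false positive rate = 1 - specificity
    fnr = fn / (fn + tp)          # false negative rate = 1 - sensitivity
    return sensitivity, specificity, fpr, fnr
```

The counts passed in below are hypothetical, chosen only to exercise the identities; they are not the study's data.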
NASA Astrophysics Data System (ADS)
Yan, Wen-juan; Yang, Ming; He, Guo-quan; Qin, Lin; Li, Gang
2014-11-01
In order to identify diabetic patients using the tongue's near-infrared (NIR) spectrum, a spectral classification model of the NIR reflectivity of the tongue tip is proposed, based on the partial least squares (PLS) method. Sample data of tongue-tip NIR spectra were harvested from healthy people and diabetic patients, respectively. After pretreatment of the reflectivity, the spectral data were set as the independent variable matrix and the classification information as the dependent variable matrix. The samples were divided into two groups, 53 samples as the calibration set and 25 as the prediction set, and PLS was used to build the classification model. The model constructed from the 53 samples has a correlation of 0.9614 and a root mean square error of cross-validation (RMSECV) of 0.1387. The predictions for the 25 samples have a correlation of 0.9146 and an RMSECV of 0.2122. The experimental results show that the PLS method can achieve good classification of healthy people and diabetic patients.
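A PLS model of the kind described, a spectral matrix regressed against class information, can be sketched with a minimal NIPALS PLS1 fit. This is a generic illustration of the algorithm, not the authors' pretreatment or exact model; with as many components as predictors it coincides with ordinary least squares.

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal NIPALS PLS1 regression: returns coefficients and centering terms."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xr, yr = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr
        w = w / np.linalg.norm(w)      # weight vector
        t = Xr @ w                     # score
        tt = t @ t
        p = Xr.T @ t / tt              # X-loading
        c = (yr @ t) / tt              # y-loading
        Xr = Xr - np.outer(t, p)       # deflate X
        yr = yr - c * t                # deflate y
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)  # regression coefficients
    return B, x_mean, y_mean

def pls1_predict(X, B, x_mean, y_mean):
    return (X - x_mean) @ B + y_mean
```

For classification, class labels are encoded as 0/1 in y and new spectra are assigned by thresholding the predicted value, e.g. at 0.5.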
An invariant asymptotic formula for solutions of second-order linear ODE's
NASA Technical Reports Server (NTRS)
Gingold, H.
1988-01-01
An invariant-matrix technique for the approximate solution of second-order ordinary differential equations (ODEs) of the form y'' = phi(x)y is developed analytically and demonstrated. A set of linear transformations for the companion matrix differential system is proposed; the diagonalization procedure employed in the final stage of the asymptotic decomposition is explained; and a scalar formulation of solutions for the ODEs is obtained. Several typical ODEs are analyzed, and it is shown that the Liouville-Green or WKB approximation is a special case of the present formula, which provides an approximation that is valid on the entire interval (0, infinity).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pritychenko, B.
The precision of double-beta (ββ) decay experimental half-lives and their uncertainties is reanalyzed. The method of Benford's distributions has been applied to nuclear reaction, structure and decay data sets. The first-digit distribution trend for ββ-decay T1/2(2ν) is consistent with large nuclear reaction and structure data sets and provides validation of the experimental half-lives. A complementary analysis of the decay uncertainties indicates deficiencies due to the small size of statistical samples and the incomplete collection of experimental information. Further experimental and theoretical efforts would lead toward more precise values of ββ-decay half-lives and nuclear matrix elements.
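Benford's first-digit law, the tool applied in this analysis, predicts P(d) = log10(1 + 1/d) for leading digit d. The sketch below shows how an empirical first-digit distribution is compared against that prediction; it uses synthetic log-uniform data for illustration, not the nuclear data sets of the paper.

```python
import math
import random

def first_digit(x):
    """Leading decimal digit of a positive number."""
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def benford_pmf(d):
    """Benford's-law probability of leading digit d (d = 1..9)."""
    return math.log10(1 + 1 / d)
```

Quantities spread log-uniformly over several decades, as many physical data sets are, reproduce the Benford frequencies closely, which is what makes the law usable as a data-validation check.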
Limited-memory BFGS based least-squares pre-stack Kirchhoff depth migration
NASA Astrophysics Data System (ADS)
Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu
2015-08-01
Least-squares migration (LSM) is a linearized inversion technique for subsurface reflectivity estimation. Compared to conventional migration algorithms, it can improve spatial resolution significantly within a few iterative calculations. There are three key steps in LSM: (1) calculate data residuals between the observed data and data demigrated from the inverted reflectivity model; (2) migrate the data residuals to form the reflectivity gradient; and (3) update the reflectivity model using optimization methods. In order to obtain an accurate and high-resolution inversion result, a good estimate of the inverse Hessian matrix plays a crucial role. However, due to the large size of the Hessian matrix, computing its inverse is always a tough task. The limited-memory BFGS (L-BFGS) method can evaluate the Hessian matrix indirectly using a limited amount of computer memory, maintaining only a history of the past m gradients (often m < 10). We combine the L-BFGS method with least-squares pre-stack Kirchhoff depth migration and validate the introduced approach on the 2-D Marmousi synthetic data set and a 2-D marine data set. The results show that the introduced method can effectively recover the reflectivity model and has a faster convergence rate than two comparison gradient methods. It might be significant for imaging generally complex subsurface structures.
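The memory-limited Hessian approximation that the abstract relies on is the classic L-BFGS two-loop recursion, which turns the last m step/gradient-difference pairs (s, y) into a search direction without ever forming a matrix. A generic sketch on a small convex quadratic (an illustration of the optimizer only, not the migration operator) looks like:

```python
import numpy as np

def lbfgs_direction(g, s_hist, y_hist):
    """Two-loop recursion: apply the implicit inverse-Hessian estimate to g."""
    q = g.copy()
    stack = []
    for s, y in reversed(list(zip(s_hist, y_hist))):
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q = q - a * y
        stack.append((a, rho, s, y))
    if s_hist:  # scale the initial Hessian with the latest curvature pair
        gamma = (s_hist[-1] @ y_hist[-1]) / (y_hist[-1] @ y_hist[-1])
    else:
        gamma = 1.0
    r = gamma * q
    for a, rho, s, y in reversed(stack):
        b = rho * (y @ r)
        r = r + (a - b) * s
    return -r

def minimize_lbfgs(f, grad, x0, m=5, iters=100, tol=1e-8):
    """Plain L-BFGS with Armijo backtracking line search (illustrative sketch)."""
    x, g = x0.copy(), grad(x0)
    s_hist, y_hist = [], []
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        d = lbfgs_direction(g, s_hist, y_hist)
        step = 1.0
        while f(x + step * d) > f(x) + 1e-4 * step * (g @ d):
            step *= 0.5
        x_new = x + step * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if y @ s > 1e-12:  # keep only pairs satisfying the curvature condition
            s_hist.append(s); y_hist.append(y)
            if len(s_hist) > m:
                s_hist.pop(0); y_hist.pop(0)
        x, g = x_new, g_new
    return x
```

Only the m stored vector pairs are kept in memory, which is what makes the method viable when the Hessian itself (here, of the migration misfit) is far too large to form.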
Development and Validation of a Job Exposure Matrix for Physical Risk Factors in Low Back Pain
Solovieva, Svetlana; Pehkonen, Irmeli; Kausto, Johanna; Miranda, Helena; Shiri, Rahman; Kauppinen, Timo; Heliövaara, Markku; Burdorf, Alex; Husgafvel-Pursiainen, Kirsti; Viikari-Juntura, Eira
2012-01-01
Objectives The aim was to construct and validate a gender-specific job exposure matrix (JEM) for physical exposures to be used in epidemiological studies of low back pain (LBP). Materials and Methods We utilized two large Finnish population surveys, one to construct the JEM and another to test matrix validity. The exposure axis of the matrix included exposures relevant to LBP (heavy physical work, heavy lifting, awkward trunk posture and whole body vibration) and exposures that increase the biomechanical load on the low back (arm elevation) or those that in combination with other known risk factors could be related to LBP (kneeling or squatting). Job titles with similar work tasks and exposures were grouped. Exposure information was based on face-to-face interviews. Validity of the matrix was explored by comparing the JEM (group-based) binary measures with individual-based measures. The predictive validity of the matrix against LBP was evaluated by comparing the associations of the group-based (JEM) exposures with those of individual-based exposures. Results The matrix includes 348 job titles, representing 81% of all Finnish job titles in the early 2000s. The specificity of the constructed matrix was good, especially in women. The validity measured with kappa-statistic ranged from good to poor, being fair for most exposures. In men, all group-based (JEM) exposures were statistically significantly associated with one-month prevalence of LBP. In women, four out of six group-based exposures showed an association with LBP. Conclusions The gender-specific JEM for physical exposures showed relatively high specificity without compromising sensitivity. The matrix can therefore be considered as a valid instrument for exposure assessment in large-scale epidemiological studies, when more precise but more labour-intensive methods are not feasible. 
Although the matrix was based on Finnish data we foresee that it could be applicable, with some modifications, in other countries with a similar level of technology. PMID:23152793
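The kappa-based validation described above can be sketched in a few lines: Cohen's kappa compares the JEM (group-based) binary assignment with the individual-based measure, correcting for chance agreement. The exposure vectors below are illustrative, not the Finnish survey data:

```python
import numpy as np

def cohens_kappa(a, b):
    """Agreement between two binary raters beyond chance."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                        # observed agreement
    p_yes = np.mean(a) * np.mean(b)             # chance both say "exposed"
    p_no = (1 - np.mean(a)) * (1 - np.mean(b))  # chance both say "unexposed"
    pe = p_yes + p_no
    return (po - pe) / (1 - pe)

# Illustrative data: JEM (group-based) vs. interview (individual-based)
jem = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
self_report = [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]
print(round(cohens_kappa(jem, self_report), 2))
```

Values around 0.2 are conventionally read as "fair" and around 0.6 as "good", which is the scale the abstract refers to.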
Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.
Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong
2016-01-01
In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.
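The workhorse inside nuclear norm-based methods is the proximal operator of the nuclear norm, singular value thresholding (SVT); a minimal sketch on a synthetic low-rank "error image" (not the face data used in the paper):

```python
import numpy as np

def svt(E, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# A rank-3 "error image" plus small noise; SVT recovers the low-rank structure
rng = np.random.default_rng(0)
L = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank 3
E = L + 0.01 * rng.standard_normal((20, 20))
denoised = svt(E, tau=0.5)
print(np.linalg.matrix_rank(denoised))
```

Thresholding the singular values is what encodes the assumption that a contiguous occlusion produces a structured, low-rank residual rather than pixelwise noise.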
Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph
2016-02-26
Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.
POLYMAT-C: a comprehensive SPSS program for computing the polychoric correlation matrix.
Lorenzo-Seva, Urbano; Ferrando, Pere J
2015-09-01
We provide a free noncommercial SPSS program that implements procedures for (a) obtaining the polychoric correlation matrix between a set of ordered categorical measures, so that it can be used as input for the SPSS factor analysis (FA) program; (b) testing the null hypothesis of zero population correlation for each element of the matrix by using appropriate simulation procedures; (c) obtaining valid and accurate confidence intervals via bootstrap resampling for those correlations found to be significant; and (d) performing, if necessary, a smoothing procedure that makes the matrix amenable to any FA estimation procedure. For the main purpose (a), the program uses a robust unified procedure that allows four different types of estimates to be obtained at the user's choice. Overall, we hope the program will be a very useful tool for the applied researcher, not only because it provides an appropriate input matrix for FA, but also because it allows the researcher to carefully check the appropriateness of the matrix for this purpose. The SPSS syntax, a short manual, and data files related to this article are available as Supplemental materials that are available for download with this article.
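The smoothing step (d) is commonly implemented by eigenvalue clipping followed by rescaling to unit diagonal; a sketch of that generic idea (not necessarily the exact procedure POLYMAT-C uses):

```python
import numpy as np

def smooth_corr(R, eps=1e-6):
    """Clip negative eigenvalues and rescale to unit diagonal so the
    matrix is usable by any factor-analysis estimation procedure."""
    w, V = np.linalg.eigh(R)
    w = np.maximum(w, eps)            # make the spectrum positive
    S = V @ np.diag(w) @ V.T
    d = np.sqrt(np.diag(S))
    return S / np.outer(d, d)         # restore 1s on the diagonal

# An indefinite "pseudo-correlation" matrix (pairwise polychorics can do this)
R = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.9],
              [0.0, 0.9, 1.0]])
print(np.linalg.eigvalsh(R).min())    # negative: not positive semidefinite
Rs = smooth_corr(R)
print(np.linalg.eigvalsh(Rs).min())   # positive after smoothing
```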
McArtor, Daniel B.; Lubke, Gitta H.; Bergeman, C. S.
2017-01-01
Person-centered methods are useful for studying individual differences in terms of (dis)similarities between response profiles on multivariate outcomes. Multivariate distance matrix regression (MDMR) tests the significance of associations of response profile (dis)similarities and a set of predictors using permutation tests. This paper extends MDMR by deriving and empirically validating the asymptotic null distribution of its test statistic, and by proposing an effect size for individual outcome variables, which is shown to recover true associations. These extensions alleviate the computational burden of permutation tests currently used in MDMR and render more informative results, thus making MDMR accessible to new research domains. PMID:27738957
McArtor, Daniel B; Lubke, Gitta H; Bergeman, C S
2017-12-01
Person-centered methods are useful for studying individual differences in terms of (dis)similarities between response profiles on multivariate outcomes. Multivariate distance matrix regression (MDMR) tests the significance of associations of response profile (dis)similarities and a set of predictors using permutation tests. This paper extends MDMR by deriving and empirically validating the asymptotic null distribution of its test statistic, and by proposing an effect size for individual outcome variables, which is shown to recover true associations. These extensions alleviate the computational burden of permutation tests currently used in MDMR and render more informative results, thus making MDMR accessible to new research domains.
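The permutation test whose cost motivates the asymptotic results can be sketched as follows; `mdmr_stat` is a pseudo-F-type statistic built from the Gower-centered distance matrix, and all data below are simulated:

```python
import numpy as np

def mdmr_stat(D, X):
    """Pseudo-F-type statistic from an n x n distance matrix D and an
    n x q predictor matrix X (include an intercept column)."""
    n = len(D)
    C = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * C @ (D ** 2) @ C                # Gower-centered inner products
    H = X @ np.linalg.pinv(X.T @ X) @ X.T      # hat matrix
    return np.trace(H @ G) / np.trace((np.eye(n) - H) @ G)

def mdmr_pvalue(D, X, n_perm=999, seed=0):
    """Jointly permute rows/columns of D and compare to the observed statistic."""
    rng = np.random.default_rng(seed)
    obs = mdmr_stat(D, X)
    hits = sum(mdmr_stat(D[np.ix_(p, p)], X) >= obs
               for p in (rng.permutation(len(D)) for _ in range(n_perm)))
    return (1 + hits) / (1 + n_perm)

# Demo: response profiles on 4 outcomes driven by a single predictor
rng = np.random.default_rng(1)
x = rng.standard_normal(30)
Y = np.outer(x, np.ones(4)) + 0.1 * rng.standard_normal((30, 4))
D = np.sqrt(((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1))
X = np.column_stack([np.ones(30), x])
print(mdmr_pvalue(D, X, n_perm=199))
```

Every extra permutation repeats the full trace computation, which is exactly the burden the paper's asymptotic null distribution removes.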
Meyerson, Paul; Tryon, Warren W
2003-11-01
This study evaluated the psychometric equivalency of Web-based research. The Sexual Boredom Scale was presented via the World-Wide Web along with five additional scales used to validate it. A subset of 533 participants that matched a previously published sample (Watt & Ewing, 1996) on age, gender, and race was identified. An 8 x 8 correlation matrix from the matched Internet sample was compared via structural equation modeling with a similar 8 x 8 correlation matrix from the previously published study. The Internet and previously published samples were psychometrically equivalent. Coefficient alpha values calculated on the matched Internet sample yielded reliability coefficients almost identical to those for the previously published sample. Factors such as computer administration and uncontrollable administration settings did not appear to affect the results. Demographic data indicated an overrepresentation of males by about 6% and Caucasians by about 13% relative to the U.S. Census (2000). A total of 2,230 participants were obtained in about 8 months without remuneration. These results suggest that data collection on the Web is (1) reliable, (2) valid, (3) reasonably representative, (4) cost effective, and (5) efficient.
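Coefficient alpha, used above to compare reliabilities across administration modes, is short enough to sketch directly (the item scores below are synthetic, not Sexual Boredom Scale data):

```python
import numpy as np

def cronbach_alpha(items):
    """items: n_respondents x k_items matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Four items driven by one latent trait -> high internal consistency
rng = np.random.default_rng(0)
latent = rng.standard_normal(500)
items = latent[:, None] + 0.5 * rng.standard_normal((500, 4))
print(round(cronbach_alpha(items), 2))
```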
Burckhardt, Bjoern B.; Laeer, Stephanie
2015-01-01
In the USA and Europe, medicines agencies are pushing the development of child-appropriate medications and intend to increase the availability of information on pediatric use. This calls for bioanalytical methods that can deal with small sample volumes, as the trial-related blood loss permitted in children is very restricted. The broadly used HPLC-MS/MS, while able to cope with small volumes, is susceptible to matrix effects, which hamper precise drug quantification by, for example, causing signal suppression. Sophisticated sample preparation and purification utilizing solid-phase extraction was applied to reduce and control matrix effects. A scale-up from a vacuum manifold to a positive pressure manifold was conducted to meet the demands of high throughput within a clinical setting. Challenges faced, advances, and experiences in solid-phase extraction are presented using the bioanalytical method development and validation of low-volume samples (50 μL serum) as an example. Enalapril, enalaprilat, and benazepril served as sample drugs. The applied sample preparation and extraction successfully reduced the absolute and relative matrix effects to comply with international guidelines. Recoveries ranged from 77 to 104% for enalapril and from 93 to 118% for enalaprilat. The bioanalytical method, comprising sample extraction by solid-phase extraction, was fully validated according to FDA and EMA bioanalytical guidelines and was used in a Phase I study in 24 volunteers. PMID:25873972

Sankar, A S Kamatchi; Vetrichelvan, Thangarasu; Venkappaya, Devashya
2011-09-01
In the present work, three different spectrophotometric methods for simultaneous estimation of ramipril, aspirin and atorvastatin calcium in raw materials and in formulations are described. Overlapped data was quantitatively resolved by using chemometric methods, viz. inverse least squares (ILS), principal component regression (PCR) and partial least squares (PLS). Calibrations were constructed using the absorption data matrix corresponding to the concentration data matrix. The linearity range was found to be 1-5, 10-50 and 2-10 μg mL-1 for ramipril, aspirin and atorvastatin calcium, respectively. The absorbance matrix was obtained by measuring the zero-order absorbance in the wavelength range between 210 and 320 nm. A training set design of the concentration data corresponding to the ramipril, aspirin and atorvastatin calcium mixtures was organized statistically to maximize the information content from the spectra and to minimize the error of multivariate calibrations. By applying the respective algorithms for PLS 1, PCR and ILS to the measured spectra of the calibration set, a suitable model was obtained. This model was selected on the basis of RMSECV and RMSEP values. The same was applied to the prediction set and capsule formulation. Mean recoveries of the commercial formulation set together with the figures of merit (calibration sensitivity, selectivity, limit of detection, limit of quantification and analytical sensitivity) were estimated. Validity of the proposed approaches was successfully assessed for analyses of drugs in the various prepared physical mixtures and formulations.
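Of the three calibrations, ILS is the most compact to sketch: concentrations are regressed directly on absorbances at a few selected wavelengths (ILS needs at least as many calibration samples as wavelengths). Everything below is simulated; the spectra and noise level are illustrative, not the reported drug data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_wl = 3                                   # a few selected wavelengths
S = rng.uniform(0.1, 1.0, (3, n_wl))       # hypothetical pure-component spectra

# Training concentrations spanning the reported linearity ranges
C_train = np.column_stack([rng.uniform(1, 5, 20),    # "ramipril"
                           rng.uniform(10, 50, 20),  # "aspirin"
                           rng.uniform(2, 10, 20)])  # "atorvastatin"
# Beer-Lambert mixtures plus instrument noise
A_train = C_train @ S + 1e-3 * rng.standard_normal((20, n_wl))

# ILS calibration: B maps an absorbance row directly to a concentration row
B = np.linalg.lstsq(A_train, C_train, rcond=None)[0]

# Predict an "unknown" mixture
c_true = np.array([[3.0, 25.0, 6.0]])
a_new = c_true @ S + 1e-3 * rng.standard_normal((1, n_wl))
print(np.round(a_new @ B, 1))
```

PCR and PLS differ only in first compressing the absorbance matrix onto latent factors before the regression step, which is what lets them use the full 210-320 nm range.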
Manifold Preserving: An Intrinsic Approach for Semisupervised Distance Metric Learning.
Ying, Shihui; Wen, Zhijie; Shi, Jun; Peng, Yaxin; Peng, Jigen; Qiao, Hong
2017-05-18
In this paper, we address the semisupervised distance metric learning problem and its applications in classification and image retrieval. First, we formulate a semisupervised distance metric learning model by considering the metric information of inner classes and interclasses. In this model, an adaptive parameter is designed to balance the inner metrics and intermetrics by using the data structure. Second, we convert the model to a minimization problem whose variable is a symmetric positive-definite matrix. Third, in implementation, we exploit the manifold structure of the symmetric positive-definite matrix manifold to deduce an intrinsic steepest descent method, which assures that the metric matrix is strictly symmetric positive-definite at each iteration. Finally, we test the proposed algorithm on conventional data sets and compare it with four other representative methods. The numerical results validate that the proposed method significantly improves the classification with the same computational efficiency.
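The intrinsic step that keeps the metric matrix strictly symmetric positive-definite can be sketched with the standard SPD-manifold retraction; this is a generic construction under those assumptions, not necessarily the authors' exact update:

```python
import numpy as np

def spd_step(X, G, t):
    """One intrinsic steepest-descent step on the SPD manifold:
    X+ = X^{1/2} expm(-t * X^{-1/2} G X^{-1/2}) X^{1/2}.
    The matrix exponential of a symmetric matrix keeps the iterate
    strictly symmetric positive-definite for any step size t."""
    w, V = np.linalg.eigh(X)
    Xh = V @ np.diag(np.sqrt(w)) @ V.T          # X^{1/2}
    Xmh = V @ np.diag(1.0 / np.sqrt(w)) @ V.T   # X^{-1/2}
    M = Xmh @ G @ Xmh
    M = (M + M.T) / 2
    mw, mV = np.linalg.eigh(M)
    return Xh @ (mV @ np.diag(np.exp(-t * mw)) @ mV.T) @ Xh

# With t = 0.5, the Euclidean update I - t*G would have a negative
# eigenvalue (1 - 0.5*4 = -1); the intrinsic step stays SPD.
X = np.eye(3)
G = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 4.0]])
for _ in range(2):
    X = spd_step(X, G, 0.5)
print(np.linalg.eigvalsh(X).min() > 0)
```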
Fully Decentralized Semi-supervised Learning via Privacy-preserving Matrix Completion.
Fierimonte, Roberto; Scardapane, Simone; Uncini, Aurelio; Panella, Massimo
2016-08-26
Distributed learning refers to the problem of inferring a function when the training data are distributed among different nodes. While significant work has been done in the contexts of supervised and unsupervised learning, the intermediate case of semi-supervised learning in the distributed setting has received less attention. In this paper, we propose an algorithm for this class of problems by extending the framework of manifold regularization. The main component of the proposed algorithm consists of a fully distributed computation of the adjacency matrix of the training patterns. To this end, we propose a novel algorithm for low-rank distributed matrix completion, based on the framework of diffusion adaptation. Overall, the distributed semi-supervised algorithm is efficient and scalable, and it can preserve privacy by the inclusion of flexible privacy-preserving mechanisms for similarity computation. The experimental results and comparison on a wide range of standard semi-supervised benchmarks validate our proposal.
A Parallel Multiclassification Algorithm for Big Data Using an Extreme Learning Machine.
Duan, Mingxing; Li, Kenli; Liao, Xiangke; Li, Keqin
2018-06-01
As data sets become larger and more complicated, an extreme learning machine (ELM) that runs in a traditional serial environment cannot realize its ability to be fast and effective. Although a parallel ELM (PELM) based on MapReduce to process large-scale data shows more efficient learning speed than identical ELM algorithms in a serial environment, some operations, such as intermediate results stored on disks and multiple copies for each task, are indispensable; these operations create a large amount of extra overhead and degrade the learning speed and efficiency of the PELMs. In this paper, an efficient ELM based on the Spark framework (SELM), which includes three parallel subalgorithms, is proposed for big data classification. By partitioning the corresponding data sets reasonably, the hidden layer output matrix calculation algorithm, matrix multiplication algorithm, and matrix decomposition algorithm perform most of the computations locally. At the same time, they retain the intermediate results in distributed memory and cache the diagonal matrix as broadcast variables instead of several copies for each task to reduce a large amount of the costs, and these actions strengthen the learning ability of the SELM. Finally, we implement our SELM algorithm to classify large data sets. Extensive experiments have been conducted to validate the effectiveness of the proposed algorithms. As shown, our SELM achieves substantial speedups on clusters ranging from 10 to 35 nodes, with the speedup increasing with the number of nodes.
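The serial core that SELM parallelizes is small: a random hidden layer plus a single least-squares solve for the output weights. A minimal (non-parallel) sketch on synthetic data:

```python
import numpy as np

def elm_train(X, Y, n_hidden, seed=0):
    """Random input weights; only the output weights beta are learned,
    via one least-squares solve -- the step the paper distributes."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden layer output matrix
    beta = np.linalg.pinv(H) @ Y             # Moore-Penrose solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Two-class demo with one-hot targets
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (100, 2)), rng.normal(2, 0.5, (100, 2))])
Y = np.repeat(np.eye(2), 100, axis=0)
W, b, beta = elm_train(X, Y, n_hidden=20)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
true = np.repeat([0, 1], 100)
print((pred == true).mean())
```

Computing H and pinv(H) is exactly where the hidden-layer-output and matrix-decomposition subalgorithms spend their time at scale.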
The Importance of Method Selection in Determining Product Integrity for Nutrition Research1234
Mudge, Elizabeth M; Brown, Paula N
2016-01-01
The American Herbal Products Association estimates that there are as many as 3000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. PMID:26980823
The Importance of Method Selection in Determining Product Integrity for Nutrition Research.
Mudge, Elizabeth M; Betz, Joseph M; Brown, Paula N
2016-03-01
The American Herbal Products Association estimates that there are as many as 3000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. © 2016 American Society for Nutrition.
Speeding up the Consensus Clustering methodology for microarray data analysis
2011-01-01
Background The inference of the number of clusters in a dataset, a fundamental problem in Statistics, Data Analysis and Classification, is usually addressed via internal validation measures. The stated problem is quite difficult, in particular for microarrays, since the inferred prediction must be sensible enough to capture the inherent biological structure in a dataset, e.g., functionally related genes. Despite the rich literature present in that area, the identification of an internal validation measure that is both fast and precise has proved to be elusive. In order to partially fill this gap, we propose a speed-up of Consensus (Consensus Clustering), a methodology whose purpose is the provision of a prediction of the number of clusters in a dataset, together with a dissimilarity matrix (the consensus matrix) that can be used by clustering algorithms. As detailed in the remainder of the paper, Consensus is a natural candidate for a speed-up. Results Since the time-precision performance of Consensus depends on two parameters, our first task is to show that a simple adjustment of the parameters is not enough to obtain a good precision-time trade-off. Our second task is to provide a fast approximation algorithm for Consensus: the closely related algorithm FC (Fast Consensus), which has the same precision as Consensus with substantially better time performance. The performance of FC has been assessed via extensive experiments on twelve benchmark datasets that summarize key features of microarray applications, such as cancer studies, gene expression with up and down patterns, and a full spectrum of dimensionality up to over a thousand. Based on their outcome, compared with previous benchmarking results available in the literature, FC turns out to be among the fastest internal validation methods, while retaining the same outstanding precision of Consensus.
Moreover, it also provides a consensus matrix that can be used as a dissimilarity matrix, guaranteeing the same performance as the corresponding matrix produced by Consensus. We have also experimented with the use of Consensus and FC in conjunction with NMF (Nonnegative Matrix Factorization), in order to identify the correct number of clusters in a dataset. Although NMF is an increasingly popular technique for biological data mining, our results are somewhat disappointing and complement quite well the state of the art about NMF, shedding further light on its merits and limitations. Conclusions In summary, FC with a parameter setting that makes it robust with respect to small and medium-sized datasets, i.e., number of items to cluster in the hundreds and number of conditions up to a thousand, seems to be the internal validation measure of choice. Moreover, the technique we have developed here can be used in other contexts, in particular for the speed-up of stability-based validation measures. PMID:21235792
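The consensus matrix at the heart of both Consensus and FC can be sketched as a co-clustering frequency over subsampled runs; the tiny k-means and data below are illustrative, not the benchmark microarrays:

```python
import numpy as np

def kmeans(X, k, rng, n_iter=50):
    """Plain Lloyd's algorithm, enough for a demo."""
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(n_iter):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(0)
    return labels

def consensus_matrix(X, k, n_resample=30, frac=0.8, seed=0):
    """M[i, j]: fraction of subsampled clustering runs in which items i and j
    were clustered together, among the runs that sampled both; 1 - M can be
    fed to any clustering algorithm as a dissimilarity matrix."""
    rng = np.random.default_rng(seed)
    n = len(X)
    together = np.zeros((n, n))
    sampled = np.zeros((n, n))
    for _ in range(n_resample):
        idx = rng.choice(n, int(frac * n), replace=False)
        labels = kmeans(X[idx], k, rng)
        sampled[np.ix_(idx, idx)] += 1
        for c in range(k):
            members = idx[labels == c]
            together[np.ix_(members, members)] += 1
    return together / np.maximum(sampled, 1)

# Two clearly separated groups: consensus near 1 within, near 0 between
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(5, 0.1, (10, 2))])
M = consensus_matrix(X, k=2)
print(round(M[:10, :10].mean(), 2), round(M[:10, 10:].mean(), 2))
```

The resampling loop is the expensive part, which is what FC attacks: fewer, cheaper runs with the same block structure in M.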
Fast and Adaptive Sparse Precision Matrix Estimation in High Dimensions
Liu, Weidong; Luo, Xi
2014-01-01
This paper proposes a new method for estimating sparse precision matrices in the high dimensional setting. It has been popular to study fast computation and adaptive procedures for this problem. We propose a novel approach, called Sparse Column-wise Inverse Operator, to address these two issues. We analyze an adaptive procedure based on cross validation, and establish its convergence rate under the Frobenius norm. The convergence rates under other matrix norms are also established. This method also enjoys the advantage of fast computation for large-scale problems, via a coordinate descent algorithm. Numerical merits are illustrated using both simulated and real datasets. In particular, it performs favorably on an HIV brain tissue dataset and an ADHD resting-state fMRI dataset. PMID:25750463
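The column-wise objective behind the Sparse Column-wise Inverse Operator can be minimized with a simple coordinate descent; a sketch on a population covariance with tridiagonal precision (illustrative only; the paper's algorithm, cross-validated tuning, and theory go well beyond this):

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * max(abs(x) - t, 0.0)

def scio_column(S, i, lam, n_sweep=200):
    """Coordinate descent for column i of the precision matrix:
    minimize 0.5 * b'Sb - b[i] + lam * ||b||_1."""
    p = S.shape[0]
    b = np.zeros(p)
    for _ in range(n_sweep):
        for j in range(p):
            r = (1.0 if j == i else 0.0) - S[j] @ b + S[j, j] * b[j]
            b[j] = soft(r, lam) / S[j, j]
    return b

def scio(S, lam):
    O = np.column_stack([scio_column(S, i, lam) for i in range(len(S))])
    return (O + O.T) / 2   # symmetrize

# Tridiagonal truth: entries two or more off the diagonal are exactly zero
Omega = np.eye(5) + np.diag([0.4] * 4, 1) + np.diag([0.4] * 4, -1)
Sigma = np.linalg.inv(Omega)
O_hat = scio(Sigma, lam=0.01)
print(np.round(O_hat[0], 2))
```

Each column is an independent small problem, which is what makes the approach fast and embarrassingly parallel in high dimensions.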
Psychometric properties of the Dutch version of the self-sufficiency matrix (SSM-D).
Fassaert, Thijs; Lauriks, Steve; van de Weerd, Stef; Theunissen, Jan; Kikkert, Martijn; Dekker, Jack; Buster, Marcel; de Wit, Matty
2014-07-01
Measuring treatment outcomes can be challenging in patients who experience multiple interlinked problems, as is the case in public mental health care (PMHC). This study describes the development and psychometric properties of a Dutch version of the self-sufficiency matrix (SSM-D), an instrument that measures outcomes and originates from the US. In two different settings, clients were rated using the SSM-D in combination with the Health of the Nation Outcome Scales (HoNOS) and the Camberwell assessment of need short appraisal schedule (CANSAS). The results provided support for adequate psychometric properties of the SSM-D. The SSM-D had a solid single factor structure and internal consistency of the scale was excellent. In addition, convergent validity of the SSM-D was indicated by strong correlations between HoNOS and CANSAS, as well as between several subdomains. Further research is needed to establish whether the results presented here can be obtained in other PMHC settings.
A Newton-Raphson Method Approach to Adjusting Multi-Source Solar Simulators
NASA Technical Reports Server (NTRS)
Snyder, David B.; Wolford, David S.
2012-01-01
NASA Glenn Research Center has been using an in-house designed X25-based multi-source solar simulator since 2003. The simulator is set up for triple-junction solar cells prior to measurements by adjusting the three sources to produce the correct short-circuit current, Isc, in each of three AM0-calibrated sub-cells. The past practice has been to adjust one source on one sub-cell at a time, iterating until all the sub-cells have the calibrated Isc. The new approach is to create a matrix of measured Isc for small source changes on each sub-cell. A matrix, A, is produced. This is normalized to unit changes in the sources so that A·Δs = ΔIsc. This matrix can now be inverted and used with the known Isc differences from the AM0-calibrated values to indicate changes in the source settings, Δs = A⁻¹·ΔIsc. This approach is still an iterative one, but all sources are changed during each iteration step. It typically takes four to six steps to converge on the calibrated Isc values. Even though the source lamps may degrade over time, the initial matrix evaluation is not performed each time, since the measurement matrix needs to be only approximate. Because an iterative approach is used, the method will continue to be valid. This method may become more important as state-of-the-art solar cell junction responses overlap the sources of the simulator. Also, as the number of cell junctions and sources increases, this method should remain applicable.
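The adjustment loop can be sketched as follows; the 3x3 response matrix standing in for the real simulator measurement is purely illustrative:

```python
import numpy as np

# Hypothetical linearized simulator: each source contributes to every
# sub-cell's short-circuit current (numbers are illustrative only).
R_true = np.array([[1.00, 0.20, 0.05],
                   [0.30, 1.00, 0.20],
                   [0.05, 0.30, 1.00]])
isc_target = np.array([1.8, 1.6, 1.5])   # AM0-calibrated Isc values

def measure(s):                          # stand-in for the real measurement
    return R_true @ s

# Build the sensitivity matrix A once, from small perturbations of each
# source, normalized to unit source changes
s = np.ones(3)
ds = 0.01
A = np.column_stack([(measure(s + ds * e) - measure(s)) / ds
                     for e in np.eye(3)])

# Newton-Raphson: all sources updated each step, delta_s = A^-1 delta_Isc
for _ in range(6):
    s = s + np.linalg.solve(A, isc_target - measure(s))
print(np.round(measure(s), 3))
```

Because A only needs to be approximately right, the loop tolerates lamp drift between full recalibrations of the sensitivity matrix, which is the point the abstract makes.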
Interactive Computer Graphics for System Analysis.
1983-12-01
seven matrices have a maximum size of 10x10 and are named: AMAT, BMAT, CMAT, DMAT, KMAT, FMAT, GMAT. ICECAP-II has available four transfer functions which B-1...is: COPY (source) (destination). The valid source and destination variables are: AMAT, CMAT, PMAT, GTF, KMAT, BMAT, DMAT, GMAT, HTF, OLTF, CLTF. Transfer functions: GTF, HTF, OLTF, CLTF. Enter choice of Matrix: AMAT, BMAT, CMAT, DMAT, FMAT, GMAT, KMAT. Enter SETUP in order to set up the State Space Model.
Trabelsi, W; Franklin, H; Tinel, A
2016-05-01
The resonance spectrum of sets of two to five infinitely long parallel cylindrical glass inclusions in a fluid saturated porous matrix of unconsolidated glass beads is investigated. The ratio of bead diameters to inclusion diameters is 1/5. The far field form functions and the related phase derivatives are calculated by using an exact multiple scattering formalism and by assuming that the porous medium obeys Biot's model. In order to validate this hypothesis, comparisons between theory and experiments are done in the special case of a fast incident wave on a set of two and three inclusions.
Suhr, Anna Catharina; Vogeser, Michael; Grimm, Stefanie H
2016-05-30
For quotable quantitative analysis of endogenous analytes in complex biological samples by isotope dilution LC-MS/MS, the creation of appropriate calibrators is a challenge, since analyte-free authentic material is in general not available. Thus, surrogate matrices are often used to prepare calibrators and controls. However, currently employed validation protocols do not include specific experiments to verify the suitability of a surrogate matrix calibration for quantification of authentic matrix samples. The aim of the study was the development of a novel validation experiment to test whether surrogate matrix based calibrators enable correct quantification of authentic matrix samples. The key element of the novel validation experiment is the inversion of nonlabelled analytes and their stable isotope labelled (SIL) counterparts in respect to their functions, i.e. SIL compound is the analyte and nonlabelled substance is employed as internal standard. As a consequence, both surrogate and authentic matrix are analyte-free regarding SIL analytes, which allows a comparison of both matrices. We called this approach Isotope Inversion Experiment. As figure of merit we defined the accuracy of inverse quality controls in authentic matrix quantified by means of a surrogate matrix calibration curve. As a proof-of-concept application a LC-MS/MS assay addressing six corticosteroids (cortisol, cortisone, corticosterone, 11-deoxycortisol, 11-deoxycorticosterone, and 17-OH-progesterone) was chosen. The integration of the Isotope Inversion Experiment in the validation protocol for the steroid assay was successfully realized. The accuracy results of the inverse quality controls were all in all very satisfying. As a consequence the suitability of a surrogate matrix calibration for quantification of the targeted steroids in human serum as authentic matrix could be successfully demonstrated. 
The Isotope Inversion Experiment fills a gap in the validation process for LC-MS/MS assays quantifying endogenous analytes. We consider it a valuable and convenient tool to evaluate the correct quantification of authentic matrix samples based on a calibration curve in surrogate matrix. Copyright © 2016 Elsevier B.V. All rights reserved.
Thermo-mechanical evaluation of carbon-carbon primary structure for SSTO vehicles
NASA Astrophysics Data System (ADS)
Croop, Harold C.; Lowndes, Holland B.; Hahn, Steven E.; Barthel, Chris A.
1998-01-01
An advanced development program to demonstrate carbon-carbon composite structure for use as primary load carrying structure has entered the experimental validation phase. The component being evaluated is a wing torque box section for a single-stage-to-orbit (SSTO) vehicle. The validation or demonstration component features an advanced carbon-carbon design incorporating 3D woven graphite preforms, integral spars, oxidation inhibited matrix, chemical vapor deposited (CVD) oxidation protection coating, and ceramic matrix composite fasteners. The validation component represents the culmination of a four phase design and fabrication development effort. Extensive developmental testing was performed to verify material properties and integrity of basic design features before committing to fabrication of the full scale box. The wing box component is now being set up for testing in the Air Force Research Laboratory Structural Test Facility at Wright-Patterson Air Force Base, Ohio. One of the important developmental tests performed in support of the design and planned testing of the full scale box was the fabrication and test of a skin/spar trial subcomponent. The trial subcomponent incorporated critical features of the full scale wing box design. This paper discusses the results of the trial subcomponent test which served as a pathfinder for the upcoming full scale box test.
Novel image analysis methods for quantification of in situ 3-D tendon cell and matrix strain.
Fung, Ashley K; Paredes, J J; Andarawis-Puri, Nelly
2018-01-23
Macroscopic tendon loads modulate the cellular microenvironment leading to biological outcomes such as degeneration or repair. Previous studies have shown that damage accumulation and the phases of tendon healing are marked by significant changes in the extracellular matrix, but it remains unknown how mechanical forces of the extracellular matrix are translated to mechanotransduction pathways that ultimately drive the biological response. Our overarching hypothesis is that the unique relationship between extracellular matrix strain and cell deformation will dictate biological outcomes, prompting the need for quantitative methods to characterize the local strain environment. While 2-D methods have successfully calculated matrix strain and cell deformation, 3-D methods are necessary to capture the increased complexity that can arise due to high levels of anisotropy and out-of-plane motion, particularly in the disorganized, highly cellular, injured state. In this study, we validated the use of digital volume correlation methods to quantify 3-D matrix strain using images of naïve tendon cells, the collagen fiber matrix, and injured tendon cells. Additionally, naïve tendon cell images were used to develop novel methods for 3-D cell deformation and 3-D cell-matrix strain, which is defined as a quantitative measure of the relationship between matrix strain and cell deformation. The results support that these methods can be used to detect strains with high accuracy and can be further extended to an in vivo setting for observing temporal changes in cell and matrix mechanics during degeneration and healing. Copyright © 2017. Published by Elsevier Ltd.
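Once a displacement field has been correlated, local strain follows from the deformation gradient F = I + du/dX; a minimal sketch with a hand-picked gradient (not actual DVC output):

```python
import numpy as np

def green_lagrange_strain(F):
    """E = 0.5 * (F^T F - I) from the deformation gradient F."""
    return 0.5 * (F.T @ F - np.eye(3))

# Deformation gradient estimated (e.g. by digital volume correlation)
# from the local displacement field u(X):  F = I + du/dX
du_dX = np.array([[0.05, 0.00, 0.00],   # 5% stretch along the fiber axis
                  [0.00, -0.01, 0.00],
                  [0.00, 0.00, -0.01]])
F = np.eye(3) + du_dX
E = green_lagrange_strain(F)
print(np.round(np.diag(E), 4))
```

A finite-strain measure like this captures the out-of-plane and anisotropic components that 2-D small-strain methods miss, which is the motivation stated above.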
van der Heijden, R T; Heijnen, J J; Hellinga, C; Romein, B; Luyben, K C
1994-01-05
Measurements provide the basis for process monitoring and control as well as for model development and validation. Systematic approaches to increase the accuracy and credibility of the empirical data set are therefore of great value. In (bio)chemical conversions, linear conservation relations, such as the balance equations for charge, enthalpy, and/or chemical elements, can be employed to relate conversion rates. In a practical situation, some of these rates will be measured (in effect, be calculated directly from primary measurements of, e.g., concentrations and flow rates), while others may or may not be calculable from the measured ones. When certain measured rates can also be calculated from other measured rates, the set of equations is redundant, and the accuracy and credibility of the measured rates can indeed be improved by, respectively, balancing and gross error diagnosis. The balanced conversion rates are more accurate and form a consistent set of data, which is more suitable for further application (e.g., to calculate nonmeasured rates) than the raw measurements. Such an approach has drawn attention in previous studies. The current study deals mainly with the problem of mathematically classifying the conversion rates into balanceable and calculable rates, given the subset of measured rates. The significance of this problem is illustrated with some examples. It is shown that a simple matrix equation can be derived that contains the vector of measured conversion rates and the redundancy matrix R. Matrix R plays a predominant role in the classification problem. In supplementary articles, the significance of the redundancy matrix R for an improved gross error diagnosis approach will be shown. In addition, efficient equations have been derived to calculate the balanceable and/or calculable rates. The method is completely based on matrix algebra (principally different from the graph-theoretical approach), and it is easily implemented in a computer program.
(c) 1994 John Wiley & Sons, Inc.
An active learning representative subset selection method using net analyte signal.
He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi
2018-05-05
To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, in general, it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference in the Euclidean norm of the net analyte signal (NAS) vector between the candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vectors, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying the projection matrix with the spectra of the samples. The scalar value of the NAS is obtained by norm computation. The distance between the candidate set and the selected set is computed, and samples with the largest distance are added to the selected set sequentially. Last, the concentration of the analyte is measured so that the sample can be used as a calibration sample. Using a validation test, it is shown that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced. Copyright © 2018 Elsevier B.V. All rights reserved.
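The selection loop described above can be sketched as follows: candidates are ranked by how far their NAS norm lies from the already-selected samples, and the farthest candidate is added at each step. The spectra below are synthetic, and the projection-matrix construction (orthogonal to the space spanned by interferent spectra) follows the usual NAS definition, which is an assumption about the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_wavelengths = 50
analyte = rng.normal(size=n_wavelengths)            # pure analyte spectrum
interferents = rng.normal(size=(3, n_wavelengths))  # interferent spectra

# Projection onto the orthogonal complement of the interferent space.
P = np.eye(n_wavelengths) - interferents.T @ np.linalg.pinv(interferents.T)

def nas_norm(spectrum):
    """Scalar NAS value: norm of the interference-free part."""
    return np.linalg.norm(P @ spectrum)

# Synthetic candidates: analyte at varying level plus interferences.
levels = rng.uniform(0.0, 2.0, size=20)
candidates = (levels[:, None] * analyte
              + rng.normal(size=(20, 3)) @ interferents)

scores = np.array([nas_norm(s) for s in candidates])

selected = [int(np.argmax(scores))]      # start from the largest NAS value
for _ in range(4):                       # greedily pick 4 more samples
    dist = np.array([min(abs(scores[i] - scores[j]) for j in selected)
                     for i in range(len(scores))])
    dist[selected] = -1.0                # never re-pick a selected sample
    selected.append(int(np.argmax(dist)))
```

Only the samples in `selected` would then go on to reference concentration measurement, which is the cost-saving step the abstract emphasizes.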
Hao, Z Q; Li, C M; Shen, M; Yang, X Y; Li, K H; Guo, L B; Li, X Y; Lu, Y F; Zeng, X Y
2015-03-23
Laser-induced breakdown spectroscopy (LIBS) with partial least squares regression (PLSR) has been applied to measuring the acidity of iron ore, which can be defined by the concentrations of the oxides CaO, MgO, Al₂O₃, and SiO₂. With conventional internal standard calibration, it is difficult to establish the calibration curves of CaO, MgO, Al₂O₃, and SiO₂ in iron ore due to serious matrix effects. PLSR is effective at addressing this problem due to its excellent performance in compensating for matrix effects. In this work, fifty samples were used to construct the PLSR calibration models for the above-mentioned oxides. These calibration models were validated by the 10-fold cross-validation method with the minimum root-mean-square errors (RMSE). Another ten samples were used as a test set. The acidities were calculated according to the estimated concentrations of CaO, MgO, Al₂O₃, and SiO₂ using the PLSR models. The average relative error (ARE) and RMSE of the acidity reached 3.65% and 0.0048, respectively, for the test samples.
Dyrlund, Thomas F; Poulsen, Ebbe T; Scavenius, Carsten; Sanggaard, Kristian W; Enghild, Jan J
2012-09-01
Data processing and analysis of proteomics data are challenging and time-consuming. In this paper, we present MS Data Miner (MDM) (http://sourceforge.net/p/msdataminer), a freely available web-based software solution aimed at minimizing the time required for the analysis, validation, data comparison, and presentation of data files generated in MS software, including Mascot (Matrix Science), Mascot Distiller (Matrix Science), and ProteinPilot (AB Sciex). The program was developed to significantly decrease the time required to process large proteomic data sets for publication. This open-source system includes a spectra validation system and an automatic screenshot generation tool for Mascot-assigned spectra. In addition, a Gene Ontology term analysis function and a tool for generating comparative Excel data reports are included. We illustrate the benefits of MDM during a proteomics study comprised of more than 200 LC-MS/MS analyses recorded on an AB Sciex TripleTOF 5600, identifying more than 3000 unique proteins and 3.5 million peptides. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Medeiros Turra, Kely; Pineda Rivelli, Diogo; Berlanga de Moraes Barros, Silvia; Mesquita Pasqualoto, Kerly Fernanda
2016-07-01
A receptor-independent (RI) four-dimensional structure-activity relationship (4D-QSAR) formalism was applied to a set of sixty-four β-N-biaryl ether sulfonamide hydroxamate derivatives, previously reported as potent inhibitors of matrix metalloproteinase subtype 9 (MMP-9). MMP-9 belongs to a group of enzymes related to the cleavage of several extracellular matrix components and has been associated with cancer invasiveness/metastasis. The best RI 4D-QSAR model was statistically significant (N=47; r²=0.91; q²=0.83; LSE=0.09; LOF=0.35; outliers=0). Leave-N-out (LNO) and y-randomization approaches indicated that the QSAR model was robust and presented no chance correlation, respectively. Furthermore, it also had good external predictability (82%) regarding the test set (N=17). In addition, the grid cell occupancy descriptors (GCOD) of the predicted bioactive conformation for the most potent inhibitor were successfully interpreted when docked into the MMP-9 active site. The 3D-pharmacophore findings were used to predict novel ligands and exploit the MMP-9 calculated binding affinity through a molecular docking procedure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Steinmann, I C; Pflüger, V; Schaffner, F; Mathis, A; Kaufmann, C
2013-03-01
Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) was evaluated for the rapid identification of ceratopogonid larvae. Optimal sample preparation, as evaluated with laboratory-reared biting midges Culicoides nubeculosus, was the homogenization of gut-less larvae in 10% formic acid, and analysis of 0.2 mg/ml crude protein homogenate mixed with SA matrix at a ratio of 1:1.5. Using 5 larvae each of 4 ceratopogonid species (C. nubeculosus, C. obsoletus, C. decor, and Dasyhelea sp.) and of 2 culicid species (Aedes aegypti, Ae. japonicus), biomarker mass sets between 27 and 33 masses were determined. In a validation study, 67 larvae belonging to the target species were correctly identified by automated database-based identification (91%) or manual full comparison (9%). Four specimens of non-target species did not yield identification. As anticipated for holometabolous insects, the biomarker mass sets of adults cannot be used for the identification of larvae, and vice versa, because they share only very few similar masses, as shown for C. nubeculosus, C. obsoletus, and Ae. japonicus. Thus, protein profiling by MALDI-TOF MS, as a quick, inexpensive, and accurate alternative tool, is applicable to identifying insect larvae of vector species collected in the field.
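The database-based identification step described above amounts to matching an observed peak list against per-species biomarker mass sets within a mass tolerance and choosing the species with the highest fraction of matched biomarkers. The m/z values below are invented for illustration (real biomarker sets in the study contained 27-33 masses per species), and the scoring rule is a plausible sketch, not the authors' exact algorithm.

```python
# Hypothetical biomarker mass sets (m/z, Da) for two species.
BIOMARKERS = {
    "C. nubeculosus": [3412.1, 4518.7, 5123.9, 6210.4],
    "Ae. aegypti":    [3398.5, 4711.2, 5340.8, 6122.0],
}

def identify(peaks, biomarkers=BIOMARKERS, tol=2.0):
    """Return (best_species, match_fraction) for an observed peak list.

    A biomarker counts as matched if any observed peak lies within
    `tol` Da of it; the species with the highest matched fraction wins.
    """
    def frac(masses):
        hits = sum(any(abs(p - m) <= tol for p in peaks) for m in masses)
        return hits / len(masses)

    scores = {sp: frac(masses) for sp, masses in biomarkers.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Three of the four C. nubeculosus biomarkers appear in this peak list.
species, score = identify([3412.8, 5124.5, 6209.9, 7001.3])
```

In practice a minimum match fraction would be enforced so that non-target specimens, like the four in the validation study, yield no identification rather than a poor best hit.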
Mishuris, G; Rogosin, S.
2018-01-01
From the classic work of Gohberg & Krein (1958 Uspekhi Mat. Nauk. XIII, 3–72. (Russian).), it is well known that the set of partial indices of a non-singular matrix function may change depending on the properties of the original matrix. More precisely, it was shown that if the difference between the largest and the smallest partial indices is larger than unity then, in any neighbourhood of the original matrix function, there exists another matrix function possessing a different set of partial indices. As a result, the factorization of matrix functions, being an extremely difficult process itself even in the case of the canonical factorization, remains unresolvable or even questionable in the case of a non-stable set of partial indices. Such a situation, in turn, has become an unavoidable obstacle to the application of the factorization technique. This paper sets out to answer a less ambitious question than that of effectively factorizing matrix functions with non-stable sets of partial indices, and instead focuses on determining the conditions which, given a known factorization of the limiting matrix function, allow one to construct another family of matrix functions with the same origin that preserves the non-stable partial indices and is close to the original set of the matrix functions. PMID:29434502
Shin, Sangmun; Choi, Du Hyung; Truong, Nguyen Khoa Viet; Kim, Nam Ah; Chu, Kyung Rok; Jeong, Seong Hoon
2011-04-04
A new experimental design methodology was developed by integrating the response surface methodology and the time series modeling. The major purposes were to identify significant factors in determining swelling and release rate from matrix tablets and their relative factor levels for optimizing the experimental responses. Properties of tablet swelling and drug release were assessed with ten factors and two default factors, a hydrophilic model drug (terazosin) and magnesium stearate, and compared with target values. The selected input control factors were arranged in a mixture simplex lattice design with 21 experimental runs. The obtained optimal settings for gelation were PEO, LH-11, Syloid, and Pharmacoat with weight ratios of 215.33 (88.50%), 5.68 (2.33%), 19.27 (7.92%), and 3.04 (1.25%), respectively. The optimal settings for drug release were PEO and citric acid with weight ratios of 191.99 (78.91%) and 51.32 (21.09%), respectively. Based on the results of matrix swelling and drug release, the optimal solutions, target values, and validation experiment results over time were similar and showed consistent patterns with very small biases. The experimental design methodology could be a very promising experimental design method to obtain maximum information with limited time and resources. It could also be very useful in formulation studies by providing a systematic and reliable screening method to characterize significant factors in the sustained release matrix tablet. Copyright © 2011 Elsevier B.V. All rights reserved.
Physician trust in the patient: development and validation of a new measure.
Thom, David H; Wong, Sabrina T; Guzman, David; Wu, Amery; Penko, Joanne; Miaskowski, Christine; Kushel, Margot
2011-01-01
Mutual trust is an important aspect of the patient-physician relationship with positive consequences for both parties. Previous measures have been limited to patient trust in the physician. We set out to develop and validate a measure of physician trust in the patient. We identified candidate items for the scale by content analysis of a previous qualitative study of patient-physician trust and developed and validated a scale among 61 primary care clinicians (50 physicians and 11 nonphysicians) with respect to 168 patients as part of a community-based study of prescription opioid use for chronic, nonmalignant pain in HIV-positive adults. Polychoric factor structure analysis using the Pratt D matrix was used to reduce the number of items and describe the factor structure. Construct validity was tested by comparing mean clinician trust scores for patients by clinician and patient behaviors expected to be associated with clinician trust using a generalized linear mixed model. The final 12-item scale had high internal reliability (Cronbach α =.93) and a distinct 2-factor pattern with the Pratt matrix D. Construct validity was demonstrated with respect to clinician-reported self-behaviors including toxicology screening (P <.001), and refusal to prescribe opioids (P <.001) and with patient behaviors including reporting opioids lost or stolen (P=.008), taking opioids to get high (P <.001), and selling opioids (P<.001). If validated in other populations, this measure of physician trust in the patient will be useful in investigating the antecedents and consequences of mutual trust, and the relationship between mutual trust and processes of care, which can help improve the delivery of clinical care.
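The internal-reliability figure reported above (Cronbach α = .93) is computed from the items-by-respondents score matrix. A minimal sketch of that computation, on synthetic item scores rather than the study's data, is:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of scale totals
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Synthetic 12-item scale: each item reflects one latent "trust" level
# plus independent noise, so internal consistency should be high.
rng = np.random.default_rng(1)
trait = rng.normal(size=200)                      # latent level per clinician-patient pair
items = trait[:, None] + rng.normal(size=(200, 12))
alpha = cronbach_alpha(items)
```

With items that share a common latent component, as here, alpha lands close to 1; uncorrelated items would drive it toward 0.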
Estimation and analysis of interannual variations in tropical oceanic rainfall using data from SSM/I
NASA Technical Reports Server (NTRS)
Berg, Wesley
1992-01-01
Rainfall over tropical ocean regions, particularly in the tropical Pacific, is estimated using Special Sensor Microwave/Imager (SSM/I) data. Instantaneous rainfall estimates are derived from brightness temperature values obtained from the satellite data using the Hughes D-Matrix algorithm. Comparisons with other satellite techniques are made to validate the SSM/I results for the tropical Pacific. The correlation coefficients are relatively high for the three data sets investigated, especially for the annual case.
Direct Demonstration of the Concept of Unrestricted Effective-Medium Approximation
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Zhanna M.; Zakharova, Nadezhda T.
2014-01-01
The modified unrestricted effective-medium refractive index is defined as one that yields accurate values of a representative set of far-field scattering characteristics (including the scattering matrix) for an object made of randomly heterogeneous materials. We validate the concept of the modified unrestricted effective-medium refractive index by comparing numerically exact superposition T-matrix results for a spherical host randomly filled with a large number of identical small inclusions and Lorenz-Mie results for a homogeneous spherical counterpart. A remarkable quantitative agreement between the superposition T-matrix and Lorenz-Mie scattering matrices over the entire range of scattering angles demonstrates unequivocally that the modified unrestricted effective-medium refractive index is a sound (albeit still phenomenological) concept provided that the size parameter of the inclusions is sufficiently small and their number is sufficiently large. Furthermore, it appears that in cases when the concept of the modified unrestricted effective-medium refractive index works, its actual value is close to that predicted by the Maxwell-Garnett mixing rule.
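The Maxwell-Garnett mixing rule mentioned in the closing sentence has a closed form: for a host of permittivity ε_h containing a volume fraction f of small inclusions of permittivity ε_i, the effective permittivity is ε_eff = ε_h (ε_i + 2ε_h + 2f(ε_i − ε_h)) / (ε_i + 2ε_h − f(ε_i − ε_h)). A sketch with illustrative refractive indices (not the values used in the article):

```python
import numpy as np

def maxwell_garnett(m_host, m_incl, f):
    """Effective refractive index via the Maxwell-Garnett mixing rule.

    m_host, m_incl: complex refractive indices of host and inclusions;
    f: inclusion volume fraction (0 <= f <= 1).
    """
    eps_h, eps_i = m_host**2, m_incl**2
    eps_eff = eps_h * (eps_i + 2*eps_h + 2*f*(eps_i - eps_h)) \
                    / (eps_i + 2*eps_h -   f*(eps_i - eps_h))
    return np.sqrt(eps_eff)

# Illustrative: water-like host with 10% higher-index absorbing inclusions.
m_eff = maxwell_garnett(1.33 + 0j, 1.55 + 0.003j, f=0.1)
```

Sanity checks built into the rule: f = 0 returns the host index, f = 1 returns the inclusion index, and intermediate fractions interpolate between them.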
Tranel, Daniel; Manzel, Kenneth; Anderson, Steven W.
2008-01-01
Patients with prefrontal damage and severe defects in decision making and emotional regulation often have a remarkable absence of intellectual impairment, as measured by conventional IQ tests such as the WAIS/WAIS-R. This enigma might be explained by shortcomings in the tests, which tend to emphasize measures of “crystallized” (e.g., vocabulary, fund of information) more than “fluid” (e.g., novel problem solving) intelligence. The WAIS-III added the Matrix Reasoning subtest to enhance measurement of fluid reasoning. In a set of four studies, we investigated Matrix Reasoning performances in 80 patients with damage to various sectors of the prefrontal cortex, and contrasted these with the performances of 80 demographically matched patients with damage outside the frontal lobes. The results failed to support the hypothesis that prefrontal damage would disproportionately impair fluid intelligence, and every prefrontal subgroup we studied (dorsolateral, ventromedial, dorsolateral + ventromedial) had Matrix Reasoning scores (as well as IQ scores more generally) that were indistinguishable from those of the brain-damaged comparison groups. Our findings do not support a connection between fluid intelligence and the frontal lobes, although a viable alternative interpretation is that the Matrix Reasoning subtest lacks construct validity as a measure of fluid intelligence. PMID:17853146
Wave vector modification of the infinite order sudden approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sachs, J.G.; Bowman, J.M.
1980-10-15
A simple method is proposed to modify the infinite order sudden approximation (IOS) in order to extend its region of quantitative validity. The method involves modifying the phase of the IOS scattering matrix to include a part calculated at the outgoing relative kinetic energy as well as a part calculated at the incoming kinetic energy. An immediate advantage of this modification is that the resulting S matrix is symmetric. We also present a closely related method in which the relative kinetic energies used in the calculation of the phase are determined from quasiclassical trajectory calculations. A set of trajectories is run with the initial state being the incoming state, and another set is run with the initial state being the outgoing state, and the average final relative kinetic energy of each set is obtained. One part of the S-operator phase is then calculated at each of these kinetic energies. We apply these methods to vibrationally inelastic collinear collisions of an atom and a harmonic oscillator, and calculate transition probabilities Pni→nf for three model systems. For systems which are sudden, or nearly so, the agreement with exact quantum close-coupling calculations is substantially improved over standard IOS ones when Δn=|nf-ni| is large, and the corresponding transition probability is small, i.e., less than 0.1. However, the modifications we propose will not improve the accuracy of the IOS transition probabilities for any collisional system unless the standard form of IOS already gives at least qualitative agreement with exact quantal calculations. We also suggest comparisons between some classical quantities and sudden predictions which should help in determining the validity of the sudden approximation. This is useful when exact quantal data is not available for comparison.
NASA Astrophysics Data System (ADS)
Decraene, Carolina; Dijckmans, Arne; Reynders, Edwin P. B.
2018-05-01
A method is developed for computing the mean and variance of the diffuse field sound transmission loss of finite-sized layered wall and floor systems that consist of solid, fluid, and/or poroelastic layers. This is achieved by coupling a transfer matrix model of the wall or floor to statistical energy analysis subsystem models of the adjacent room volumes. The modal behavior of the wall is approximately accounted for by projecting the wall displacement onto a set of sinusoidal lateral basis functions. This hybrid modal transfer matrix-statistical energy analysis method is validated on multiple wall systems: a thin steel plate, a polymethyl methacrylate panel, a thick brick wall, a sandwich panel, a double-leaf wall with poro-elastic material in the cavity, and a double glazing. The predictions are compared with experimental data and with results obtained using alternative prediction methods such as the transfer matrix method with spatial windowing, the hybrid wave based-transfer matrix method, and the hybrid finite element-statistical energy analysis method. These comparisons confirm the prediction accuracy of the proposed method and its computational efficiency relative to the conventional hybrid finite element-statistical energy analysis method.
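A core ingredient of any diffuse-field transmission-loss prediction is the angular average of the angle-dependent transmissivity τ(θ) with a sin θ cos θ weighting. The sketch below uses the simple mass law for an infinite thin panel as a stand-in for the full layered transfer-matrix model of the abstract; the plate, frequency, and 78° field-incidence limit are illustrative assumptions, not the article's cases.

```python
import numpy as np

rho_c = 1.21 * 343.0           # characteristic impedance of air, Pa*s/m
surface_mass = 7800.0 * 0.002  # 2 mm steel plate, kg/m^2
freq = 1000.0                  # Hz

def tau(theta):
    """Mass-law transmissivity of an infinite panel at angle theta (rad)."""
    omega = 2.0 * np.pi * freq
    return 1.0 / np.abs(1.0 + 1j * omega * surface_mass * np.cos(theta)
                        / (2.0 * rho_c))**2

# Diffuse-field average over incidence angles with sin*cos weighting,
# truncated at a common 78-degree field-incidence limit.
theta = np.linspace(0.0, np.radians(78.0), 2000)
w = np.sin(theta) * np.cos(theta)
tau_d = float(np.sum(tau(theta) * w) / np.sum(w))

TL = -10.0 * np.log10(tau_d)   # diffuse-field transmission loss, dB
```

In the hybrid method above, τ(θ) would instead come from the layered transfer matrix with the modal correction, but the averaging step is the same.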
Ruuska, Salla; Hämäläinen, Wilhelmiina; Kajava, Sari; Mughal, Mikaela; Matilainen, Pekka; Mononen, Jaakko
2018-03-01
The aim of the present study was to empirically evaluate confusion matrices in device validation. We compared the confusion matrix method to linear regression and error indices in the validation of a device measuring the feeding behaviour of dairy cattle. In addition, we studied how to extract additional information on classification errors with confusion probabilities. The data consisted of 12 h behaviour measurements from five dairy cows; feeding and other behaviour were detected simultaneously with a device and from video recordings. The resulting 216,000 pairs of classifications were used to construct confusion matrices and calculate performance measures. In addition, hourly durations of each behaviour were calculated and the accuracy of measurements was evaluated with linear regression and error indices. All three validation methods agreed when the behaviour was detected very accurately or inaccurately. Otherwise, in the intermediate cases, the confusion matrix method and error indices produced relatively concordant results, but the linear regression method often disagreed with them. Our study supports the use of confusion matrix analysis in validation since it is robust to any data distribution and type of relationship, it makes a stringent evaluation of validity, and it offers extra information on the type and sources of errors. Copyright © 2018 Elsevier B.V. All rights reserved.
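The confusion-matrix validation described above cross-tabulates paired classifications from the device and from video (the gold standard), then reads sensitivity, specificity, and accuracy off the table. A minimal sketch with synthetic per-second labels (the counts are invented, not the study's data):

```python
def confusion(video, device, positive="feeding"):
    """Cross-tabulate gold-standard (video) vs device classifications."""
    tp = sum(v == positive and d == positive for v, d in zip(video, device))
    fn = sum(v == positive and d != positive for v, d in zip(video, device))
    fp = sum(v != positive and d == positive for v, d in zip(video, device))
    tn = sum(v != positive and d != positive for v, d in zip(video, device))
    return tp, fp, fn, tn

# 200 synthetic seconds: 80 feeding, 120 other, with a few device errors.
video  = ["feeding"] * 80 + ["other"] * 120
device = (["feeding"] * 72 + ["other"] * 8      # 8 missed feeding seconds
          + ["feeding"] * 6 + ["other"] * 114)  # 6 false feeding detections

tp, fp, fn, tn = confusion(video, device)
sensitivity = tp / (tp + fn)    # how much true feeding the device catches
specificity = tn / (tn + fp)    # how well it rejects non-feeding
accuracy = (tp + tn) / len(video)
```

Unlike a regression of hourly durations, this tabulation distinguishes misses from false alarms, which is exactly the extra error-type information the abstract highlights.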
Masuda, Y; Misztal, I; Tsuruta, S; Legarra, A; Aguilar, I; Lourenco, D A L; Fragomeni, B O; Lawlor, T J
2016-03-01
The objectives of this study were to develop and evaluate an efficient implementation in the computation of the inverse of genomic relationship matrix with the recursion algorithm, called the algorithm for proven and young (APY), in single-step genomic BLUP. We validated genomic predictions for young bulls with more than 500,000 genotyped animals in final score for US Holsteins. Phenotypic data included 11,626,576 final scores on 7,093,380 US Holstein cows, and genotypes were available for 569,404 animals. Daughter deviations for young bulls with no classified daughters in 2009, but at least 30 classified daughters in 2014 were computed using all the phenotypic data. Genomic predictions for the same bulls were calculated with single-step genomic BLUP using phenotypes up to 2009. We calculated the inverse of the genomic relationship matrix G_APY^(-1) based on a direct inversion of genomic relationship matrix on a small subset of genotyped animals (core animals) and extended that information to noncore animals by recursion. We tested several sets of core animals including 9,406 bulls with at least 1 classified daughter, 9,406 bulls and 1,052 classified dams of bulls, 9,406 bulls and 7,422 classified cows, and random samples of 5,000 to 30,000 animals. Validation reliability was assessed by the coefficient of determination from regression of daughter deviation on genomic predictions for the predicted young bulls. The reliabilities were 0.39 with 5,000 randomly chosen core animals, 0.45 with the 9,406 bulls, and 7,422 cows as core animals, and 0.44 with the remaining sets. With phenotypes truncated in 2009 and the preconditioned conjugate gradient to solve mixed model equations, the number of rounds to convergence for core animals defined by bulls was 1,343; defined by bulls and cows, 2,066; and defined by 10,000 random animals, at most 1,629. With complete phenotype data, the number of rounds decreased to 858, 1,299, and at most 1,092, respectively.
Setting up G_APY^(-1) for 569,404 genotyped animals with 10,000 core animals took 1.3 h and 57 GB of memory. The validation reliability with APY reaches a plateau when the number of core animals is at least 10,000. Predictions with APY have little differences in reliability among definitions of core animals. Single-step genomic BLUP with APY is applicable to millions of genotyped animals. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
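The APY idea above can be sketched on a toy matrix: only the core block of G is inverted densely, noncore animals are expressed by recursion on the core, and the residual block is diagonal. The construction below builds a small G that satisfies the APY structure exactly so the block inverse can be checked; real genomic matrices only satisfy it approximately, which is why validation reliability plateaus with enough core animals.

```python
import numpy as np

rng = np.random.default_rng(2)

n_core, n_non = 5, 3
L = rng.normal(size=(n_core, n_core))
G_cc = L @ L.T + n_core * np.eye(n_core)        # core block, positive definite
P = rng.normal(size=(n_non, n_core))            # recursion coefficients
G_nc = P @ G_cc                                 # noncore-core block
M = np.diag(rng.uniform(0.5, 1.5, size=n_non))  # diagonal residual block
G_nn = P @ G_cc @ P.T + M                       # noncore block

G = np.block([[G_cc, G_nc.T],
              [G_nc, G_nn ]])

# APY inverse: G_cc is the only dense inverse; M inverts elementwise,
# so memory and time scale with the core size, not the full matrix.
G_cc_inv = np.linalg.inv(G_cc)
M_inv = np.diag(1.0 / np.diag(M))
G_apy_inv = np.block([
    [G_cc_inv + P.T @ M_inv @ P, -P.T @ M_inv],
    [-M_inv @ P,                  M_inv      ],
])
```

This is a generic sketch of the published APY block structure; the variable names (`P`, `M`) are illustrative, and production code would never form the dense noncore blocks at all.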
NASA Technical Reports Server (NTRS)
Sottos, Nancy R. (Inventor); Keller, Michael W. (Inventor); White, Scott R. (Inventor)
2009-01-01
A composite material includes an elastomer matrix, a set of first capsules containing a polymerizer, and a set of second capsules containing a corresponding activator for the polymerizer. The polymerizer may be a polymerizer for an elastomer. The composite material may be prepared by combining a first set of capsules containing a polymerizer, a second set of capsules containing a corresponding activator for the polymerizer, and a matrix precursor, and then solidifying the matrix precursor to form an elastomeric matrix.
Grande-Martínez, Ángel; Arrebola, Francisco Javier; Moreno, Laura Díaz; Vidal, José Luis Martínez; Frenich, Antonia Garrido
2015-01-01
A rapid and sensitive multiresidue method was developed and validated for the determination of around 100 pesticides in dry samples (rice and wheat flour) by ultra-performance LC coupled to a triple quadrupole mass analyzer working in tandem mode (UPLC/QqQ-MS/MS). The sample preparation step was optimized for both matrixes. Pesticides were extracted from rice samples using aqueous ethyl acetate, while aqueous acetonitrile extraction [modified QuEChERS (quick, easy, cheap, effective, rugged, and safe) method] was used for wheat flour matrixes. In both cases the extracts were then cleaned up by dispersive solid phase extraction with MgSO4 and primary secondary amine+C18 sorbents. A further cleanup step with Florisil was necessary to remove fat in wheat flour. The method was validated at two concentration levels (3.6 and 40 μg/kg for most compounds), obtaining recoveries ranging from 70 to 120%, intraday and interday precision values≤20% expressed as RSDs, and expanded uncertainty values≤50%. The LOQ values ranged between 3.6 and 20 μg/kg, although it was set at 3.6 μg/kg for the majority of the pesticides. The method was applied to the analysis of 20 real samples, and no pesticides were detected.
Budget Online Learning Algorithm for Least Squares SVM.
Jian, Ling; Shen, Shuqian; Li, Jundong; Liang, Xijun; Li, Lei
2017-09-01
Batch-mode least squares support vector machine (LSSVM) is often associated with an unbounded number of support vectors (SVs), making it unsuitable for applications involving large-scale streaming data. Limited-scale LSSVM, which allows efficient updating, seems to be a good solution to tackle this issue. In this paper, to train the limited-scale LSSVM dynamically, we present a budget online LSSVM (BOLSSVM) algorithm. Methodologically, by setting a fixed budget for SVs, we are able to update the LSSVM model according to the updated SV set dynamically without retraining from scratch. In particular, when a new small chunk of SVs substitutes for the old ones, the proposed algorithm employs a low-rank correction technology and the Sherman-Morrison-Woodbury formula to compute the inverse of the saddle point matrix derived from the LSSVM's Karush-Kuhn-Tucker (KKT) system, which, in turn, updates the LSSVM model efficiently. In this way, the proposed BOLSSVM algorithm is especially useful for online prediction tasks. Another merit of the proposed BOLSSVM is that it can be used for k-fold cross validation. Specifically, compared with batch-mode learning methods, the computational complexity of the proposed BOLSSVM method is significantly reduced from O(n⁴) to O(n³) for leave-one-out cross validation with n training samples. The experimental results of classification and regression on benchmark data sets and real-world applications show the validity and effectiveness of the proposed BOLSSVM algorithm.
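The low-rank update at the heart of the budget scheme is the Sherman-Morrison-Woodbury identity: when swapping a small chunk of support vectors changes the KKT matrix by a rank-k term, A_new = A + U V, the stored inverse is corrected through a small k-by-k solve instead of a fresh O(n³) inversion. The matrices below are random stand-ins for the LSSVM saddle-point system, not the paper's actual KKT blocks.

```python
import numpy as np

rng = np.random.default_rng(3)

n, k = 50, 2                                  # system size, update rank
A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned base matrix
A_inv = np.linalg.inv(A)                      # maintained by the algorithm

U = rng.normal(size=(n, k))                   # rank-k change: A_new = A + U @ V
V = rng.normal(size=(k, n))

def smw_update(A_inv, U, V):
    """(A + U V)^-1 from A^-1 at O(n^2 k) cost instead of O(n^3)."""
    S = np.eye(U.shape[1]) + V @ A_inv @ U    # small k-by-k capacitance matrix
    return A_inv - A_inv @ U @ np.linalg.solve(S, V @ A_inv)

A_new_inv = smw_update(A_inv, U, V)
```

Leave-one-out cross validation benefits for the same reason: removing one sample is itself a low-rank change to the KKT system, so each fold reuses the full-data inverse.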
Muehlwald, S; Buchner, N; Kroh, L W
2018-03-23
Because of the high number of possible pesticide residues and their chemical complexity, it is necessary to develop methods which cover a broad range of pesticides. In this work, a qualitative multi-screening method for pesticides was developed by use of HPLC-ESI-Q-TOF. 110 pesticides were chosen for the creation of a personal compound database and library (PCDL). The MassHunter Qualitative Analysis software from Agilent Technologies was used to identify the analytes. The software parameter settings were optimised to produce a low number of false positive as well as false negative results. The method was validated for 78 selected pesticides. However, the validation criteria were not fulfilled for 45 analytes. Due to this result, investigations were started to elucidate reasons for the low detectability. It could be demonstrated that the three main causes of the signal suppression were the co-eluting matrix (matrix effect), the low sensitivity of the analyte in standard solution and the fragmentation of the analyte in the ion source (in-source collision-induced dissociation). In this paper different examples are discussed showing that the impact of these three causes is different for each analyte. For example, it is possible that an analyte with low signal intensity and an intense fragmentation in the ion source is detectable in a difficult matrix, whereas an analyte with a high sensitivity and a low fragmentation is not detectable in a simple matrix. Additionally, it could be shown that in-source fragments are a helpful tool for an unambiguous identification. Copyright © 2018 Elsevier B.V. All rights reserved.
Zhou, Yan; Cao, Hui
2013-01-01
We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis against component information loss. Raman spectral signals with low analyte concentration correlations were selected and used as substitutes for unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set recorded from an analyte concentration determination experiment using Raman spectroscopy. A 2-fold cross-validation with a Venetian blinds strategy was used to evaluate the predictive power of the proposed method. One-way analysis of variance (ANOVA) was used to assess the difference in predictive power between the proposed method and the existing methods. Results indicated that the proposed method is effective at increasing the robustness of the traditional CLS model against component information loss, and that its predictive power is comparable to that of PLS or PCR.
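The leave-one-out RMSECV curve used to pick the number of augmented signals rests on a standard CLS calibrate/predict loop, which might be sketched as follows. The spectra are synthetic and the CLS formulation is the textbook one, not necessarily the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
n, w, k = 20, 60, 3                          # samples, wavelengths, components

C = rng.uniform(0.1, 1.0, (n, k))            # reference concentrations
K_true = rng.uniform(0.0, 1.0, (k, w))       # pure-component spectra
S = C @ K_true + 0.01 * rng.standard_normal((n, w))   # measured spectra

def loo_rmsecv(C, S):
    """Leave-one-out RMSECV for a classical least squares (CLS) model."""
    errs = []
    for i in range(len(C)):
        keep = np.arange(len(C)) != i
        K = np.linalg.pinv(C[keep]) @ S[keep]          # calibrate: S ~ C K
        c_hat = S[i] @ np.linalg.pinv(K)               # predict left-out sample
        errs.append(np.sum((c_hat - C[i]) ** 2))
    return np.sqrt(np.mean(errs))

print(f"RMSECV = {loo_rmsecv(C, S):.4f}")
```

In the ACLS setting, this loop would be repeated with increasing numbers of augmenting signals appended to C, and the minimum of the resulting RMSECV curve would select how many to keep.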
Jiang, Jian; James, Christopher A; Wong, Philip
2016-09-05
An LC-MS/MS method has been developed and validated for the determination of glycine in human cerebrospinal fluid (CSF). The validated method used artificial cerebrospinal fluid as a surrogate matrix for calibration standards. The calibration curve range for the assay was 100-10,000 ng/mL, and ¹³C₂,¹⁵N-glycine was used as an internal standard (IS). Pre-validation experiments were performed to demonstrate parallelism using the surrogate matrix and standard addition methods. The mean endogenous glycine concentration in pooled human CSF, determined on three days using artificial CSF as a surrogate matrix and using the method of standard addition, was 748±30.6 and 768±18.1 ng/mL, respectively. A percentage difference of -2.6% indicated that artificial CSF could be used as a surrogate calibration matrix for the determination of glycine in human CSF. Quality control (QC) samples, except the lower limit of quantitation (LLOQ) QC and low QC samples, were prepared by spiking glycine into aliquots of a pooled human CSF sample. The low QC sample was prepared from a separate pooled human CSF sample containing low endogenous glycine concentrations, while the LLOQ QC sample was prepared in artificial CSF. Standard addition was used extensively to evaluate matrix effects during validation. The validated method was used to determine endogenous glycine concentrations in human CSF samples. Incurred sample reanalysis demonstrated the reproducibility of the method.
Goicoechea, H C; Olivieri, A C
2001-07-01
A newly developed multivariate method involving net analyte preprocessing (NAP) was tested using central composite calibration designs of progressively decreasing size for the simultaneous multivariate spectrophotometric determination of three active components (phenylephrine, diphenhydramine and naphazoline) and one excipient (methylparaben) in nasal solutions. Its performance was evaluated and compared with that of partial least squares (PLS-1). Minimisation of the calibration predicted residual error sum of squares (PRESS) as a function of a moving spectral window helped to select appropriate working spectral ranges for both methods. The comparison of NAP and PLS results was carried out using two tests: (1) the elliptical joint confidence region for the slope and intercept of a predicted versus actual concentrations plot for a large validation set of samples and (2) the D-optimality criterion concerning the information content of the calibration data matrix. Extensive simulations and experimental validation showed that, unlike PLS, the NAP method is able to furnish highly satisfactory results when the calibration set is reduced from a full four-component central composite to a fractional central composite, as expected from the modelling requirements of net analyte based methods.
Update of Standard Practices for New Method Validation in Forensic Toxicology.
Wille, Sarah M R; Coucke, Wim; De Baere, Thierry; Peters, Frank T
2017-01-01
International agreement concerning validation guidelines is important for obtaining quality forensic bioanalytical research and routine applications, as it all starts with the reporting of reliable analytical data. Standards for fundamental validation parameters are provided in guidelines such as those from the US Food and Drug Administration (FDA), the European Medicines Agency (EMA), the German-speaking Gesellschaft für Toxikologie und Forensische Chemie (GTFCh) and the Scientific Working Group for Forensic Toxicology (SWGTOX). These validation parameters include selectivity, matrix effects, method limits, calibration, accuracy and stability, as well as other parameters such as carryover, dilution integrity and incurred sample reanalysis. It is, however, not easy for laboratories to implement these guidelines in practice, as these international guidelines remain nonbinding protocols that depend on the applied analytical technique and need to be updated according to the analyst's method requirements and the application type. In this manuscript, a review of the current guidelines and literature concerning bioanalytical validation parameters in a forensic context is given and discussed. In addition, suggestions for the experimental set-up, the pros and cons of statistical approaches, and adequate acceptance criteria for the validation of bioanalytical applications are given.
NASA Astrophysics Data System (ADS)
Zhang, Jie; Nixon, Andrew; Barber, Tom; Budyn, Nicolas; Bevan, Rhodri; Croxford, Anthony; Wilcox, Paul
2018-04-01
In this paper, a methodology is proposed for using a finite element (FE) model to validate a ray-based model in the simulation of full matrix capture (FMC) ultrasonic array data sets. The overall aim is to separate the signal contributions from different interactions in the FE results so that each can be compared more easily with the corresponding individual component in the ray-based model results. This is achieved by combining the results from multiple FE models of the system of interest that include progressively more geometrical features while preserving the same mesh structure. It is shown that the proposed techniques allow the interactions from a large number of different ray paths to be isolated in the FE results and compared directly to the results from a ray-based forward model.
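The core idea, subtracting data from FE models that differ by one geometrical feature but share a mesh, can be illustrated with toy FMC arrays. All signals here are synthetic Gaussian echoes, purely for illustration:

```python
import numpy as np

n_tx, n_rx, n_t = 4, 4, 500                  # transmitters, receivers, samples
t = np.arange(n_t)

def pulse(center):
    """A toy Gaussian echo centred at the given sample index."""
    return np.exp(-0.5 * ((t - center) / 8.0) ** 2)

# FMC data cubes (tx x rx x time) from two FE models sharing the same mesh:
# model A contains only the front-wall echo, model B adds a backwall reflector.
front_wall = np.array([[pulse(100) for _ in range(n_rx)] for _ in range(n_tx)])
backwall = np.array([[0.3 * pulse(300) for _ in range(n_rx)] for _ in range(n_tx)])
fmc_a = front_wall
fmc_b = front_wall + backwall

# Because the meshes are identical, the shared contributions cancel,
# isolating the backwall ray path for comparison with a ray-based prediction.
isolated = fmc_b - fmc_a
print(np.allclose(isolated, backwall))       # True
```

In real FE output the cancellation is only as clean as the mesh and time stepping are identical, which is exactly why the paper stresses preserving the mesh structure between models.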
Discriminant Validity of the WISC-IV Culture-Language Interpretive Matrix
ERIC Educational Resources Information Center
Styck, Kara M.; Watkins, Marley W.
2014-01-01
The Culture-Language Interpretive Matrix (C-LIM) was developed to help practitioners determine the validity of test scores obtained from students who are culturally and linguistically different from the normative group of a test. The present study used an idiographic approach to investigate the diagnostic utility of the C-LIM for the Wechsler…
NASA Technical Reports Server (NTRS)
Kapoor, Manju M.; Mehta, Manju
2010-01-01
The goal of this paper is to emphasize the importance of developing complete and unambiguous requirements early in the project cycle (prior to the Preliminary Design Phase). Having a complete set of requirements early in the project cycle allows sufficient time to generate a traceability matrix. Requirements traceability and analysis are the key elements in improving the verification and validation process, and thus overall software quality. Traceability can be most beneficial when the system changes: if changes are made to high-level requirements, the corresponding low-level requirements need to be modified. Traceability ensures that requirements are appropriately and efficiently verified at various levels, whereas analysis ensures that a correctly interpreted set of requirements is produced.
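A traceability matrix of the kind described can be sketched minimally as a mapping from high-level to low-level requirements; the identifiers below are hypothetical:

```python
# Hypothetical requirement identifiers; a traceability matrix maps each
# high-level requirement to the low-level requirements that refine it.
trace = {
    "HLR-1": ["LLR-1.1", "LLR-1.2"],
    "HLR-2": ["LLR-2.1"],
    "HLR-3": [],                      # gap: nothing traces down from HLR-3
}

# Forward check: every high-level requirement must trace downward.
untraced = [hlr for hlr, llrs in trace.items() if not llrs]
print(untraced)  # ['HLR-3']

# Impact analysis: if HLR-1 changes, these low-level requirements
# must be re-examined and re-verified.
print(trace["HLR-1"])  # ['LLR-1.1', 'LLR-1.2']
```

Even this trivial structure supports the two uses the paper highlights: finding unverified requirements and scoping the impact of a high-level change.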
A projection operator method for the analysis of magnetic neutron form factors
NASA Astrophysics Data System (ADS)
Kaprzyk, S.; Van Laar, B.; Maniawski, F.
1981-03-01
A set of projection operators in matrix form has been derived on the basis of a decomposition of the spin density into a series of fully symmetrized cubic harmonics. This set of projection operators allows the Fourier analysis of magnetic form factors to be formulated in a convenient way. The presented method is capable of checking the validity of the various theoretical models used to date for spin density analysis. The general formalism is worked out in explicit form for the fcc and bcc structures and deals with the part of the spin density contained within the sphere inscribed in the Wigner-Seitz cell. This projection operator method has been tested on the magnetic form factors of nickel and iron.
Galaxy two-point covariance matrix estimation for next generation surveys
NASA Astrophysics Data System (ADS)
Howlett, Cullan; Percival, Will J.
2017-12-01
We perform a detailed analysis of the covariance matrix of the spherically averaged galaxy power spectrum and present a new, practical method for estimating this within an arbitrary survey without the need for running mock galaxy simulations that cover the full survey volume. The method uses theoretical arguments to modify the covariance matrix measured from a set of small-volume cubic galaxy simulations, which are computationally cheap to produce compared to larger simulations and match the measured small-scale galaxy clustering more accurately than is possible using theoretical modelling. We include prescriptions to analytically account for the window function of the survey, which convolves the measured covariance matrix in a non-trivial way. We also present a new method to include the effects of super-sample covariance and modes outside the small simulation volume which requires no additional simulations and still allows us to scale the covariance matrix. As validation, we compare the covariance matrix estimated using our new method to that from a brute-force calculation using 500 simulations originally created for analysis of the Sloan Digital Sky Survey Main Galaxy Sample. We find excellent agreement on all scales of interest for large-scale structure analysis, including those dominated by the effects of the survey window, and on scales where theoretical models of the clustering normally break down, but the new method produces a covariance matrix with significantly better signal-to-noise ratio. Although only formally correct in real space, we also discuss how our method can be extended to incorporate the effects of redshift space distortions.
Population clustering based on copy number variations detected from next generation sequencing data.
Duan, Junbo; Zhang, Ji-Gang; Wan, Mingxi; Deng, Hong-Wen; Wang, Yu-Ping
2014-08-01
Copy number variations (CNVs) can be used as significant biomarkers, and next generation sequencing (NGS) provides high-resolution detection of these CNVs. However, how to extract features from CNVs and further apply them to genomic studies such as population clustering has become a significant challenge. In this paper, we propose a novel method for population clustering based on CNVs from NGS. First, CNVs are extracted from each sample to form a feature matrix. Then, this feature matrix is decomposed into a source matrix and a weight matrix with non-negative matrix factorization (NMF). The source matrix consists of common CNVs that are shared by all the samples from the same group, and the weight matrix indicates the corresponding level of CNVs in each sample. Therefore, using NMF of CNVs one can differentiate samples from different ethnic groups, i.e. perform population clustering. To validate the approach, we applied it to the analysis of both simulated data and two real data sets from the 1000 Genomes Project. The results on simulated data demonstrate that the proposed method can recover the true common CNVs with high quality. The results on the first real data set show that the proposed method can cluster two family trios with different ancestries into two ethnic groups, and the results on the second real data set show that the method can be applied genome-wide to a large sample comprising multiple groups. Both results demonstrate the potential of the proposed method for population clustering.
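The NMF-based clustering step can be sketched roughly with synthetic CNV profiles and scikit-learn's NMF; the CNV-calling and preprocessing details of the actual method are not reproduced here:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)

# Synthetic CNV feature matrix: rows = genomic regions, columns = samples.
# Two groups of 5 samples share different sets of common CNVs.
pattern_a = np.zeros(100); pattern_a[:10] = 2.0     # common CNVs of group A
pattern_b = np.zeros(100); pattern_b[50:60] = 2.0   # common CNVs of group B
X = np.column_stack(
    [pattern_a + 0.05 * rng.random(100) for _ in range(5)]
    + [pattern_b + 0.05 * rng.random(100) for _ in range(5)]
)

# Decompose X ~ W H: W holds the shared CNV "sources", H the per-sample
# weights; each sample is assigned to the source with the largest weight.
model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(X)   # 100 regions x 2 sources
H = model.components_        # 2 sources x 10 samples
labels = H.argmax(axis=0)
print(labels)  # first five samples share one label, last five the other
```

The argmax over weight-matrix columns is one simple way to turn the factorization into cluster assignments; the paper's real data are far noisier, but the roles of the source and weight matrices are the same.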
Development of a clinical diagnostic matrix for characterizing inherited epidermolysis bullosa.
Yenamandra, V K; Moss, C; Sreenivas, V; Khan, M; Sivasubbu, S; Sharma, V K; Sethuraman, G
2017-06-01
Accurately diagnosing the subtype of epidermolysis bullosa (EB) is critical for management and genetic counselling. Modern laboratory techniques are largely inaccessible in developing countries, where the diagnosis remains clinical and often inaccurate. To develop a simple clinical diagnostic tool to aid in the diagnosis and subtyping of EB, we developed a matrix indicating the presence or absence of a set of distinctive clinical features (as rows) for the nine most prevalent EB subtypes (as columns). To test an individual patient, the presence or absence of these features was compared with the findings expected in each of the nine subtypes to see which corresponded best. If two or more diagnoses scored equally, the diagnosis with the greatest number of specific features was selected. The matrix was tested using findings from 74 genetically characterized patients with EB aged > 6 months by an investigator blinded to the molecular diagnosis. For concordance, matrix diagnoses were compared with molecular diagnoses. Overall, concordance between the matrix and molecular diagnoses for the four major types of EB was 91.9%, with a kappa coefficient of 0.88 [95% confidence interval (CI) 0.81-0.95; P < 0.001]. The matrix achieved 75.7% agreement in classifying EB into its nine subtypes, with a kappa coefficient of 0.73 (95% CI 0.69-0.77; P < 0.001). The matrix appears to be simple, valid and useful in predicting the type and subtype of EB. An electronic version will facilitate further testing.
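The matrix lookup with tie-breaking can be sketched as follows. The feature rows and expected-presence values below are invented for illustration and are not taken from the published matrix; only the three major subtype abbreviations are real:

```python
# A toy version of the diagnostic matrix: rows are clinical features,
# columns are EB subtypes; 1 = feature expected present, 0 = absent.
features = ["blistering at birth", "nail dystrophy", "milia", "mucosal involvement"]
matrix = {
    "EBS": [1, 0, 0, 0],
    "JEB": [1, 1, 0, 1],
    "DEB": [1, 1, 1, 1],
}

def diagnose(observed):
    """Score each subtype by agreement between observed and expected features."""
    scores = {
        subtype: sum(o == e for o, e in zip(observed, expected))
        for subtype, expected in matrix.items()
    }
    best = max(scores.values())
    # Tie-break as in the abstract: prefer the diagnosis with the greatest
    # number of specific (expected-present) features, here the column sum.
    tied = [s for s, v in scores.items() if v == best]
    return max(tied, key=lambda s: sum(matrix[s]))

print(diagnose([1, 1, 0, 1]))  # JEB: all four observations agree
```

The real matrix has nine columns and many more rows, but the comparison-and-score logic is the same shape.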
Gene selection heuristic algorithm for nutrigenomics studies.
Valour, D; Hue, I; Grimard, B; Valour, B
2013-07-15
Large datasets from -omics studies need to be deeply investigated. The aim of this paper is to provide a new method (the LEM method) for the search for transcriptome and metabolome connections. The heuristic algorithm described here extends classical canonical correlation analysis (CCA) to a high number of variables (without regularization) and combines good conditioning with fast computation in R. Reduced CCA models are summarized in PageRank matrices, the product of which gives a stochastic matrix that summarizes the self-avoiding walk covered by the algorithm. Then, a homogeneous Markov process applied to this stochastic matrix converges to the probabilities of interconnection between genes, providing a selection of disjoint subsets of genes. This is an alternative to regularized generalized CCA for the determination of blocks within the structure matrix. Each gene subset is thus linked to the whole metabolic or clinical dataset that represents the biological phenotype of interest. Moreover, this selection process meets the aim of biologists, who often need small sets of genes for further validation or extended phenotyping. The algorithm is shown to work efficiently on three published datasets, resulting in meaningfully broadened gene networks.
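The convergence step, a homogeneous Markov process iterated on a stochastic matrix, can be sketched with an illustrative 3×3 matrix; the values are arbitrary, standing in for the product of PageRank matrices:

```python
import numpy as np

# A small column-stochastic matrix standing in for the product of
# PageRank matrices described above (entries are illustrative only).
P = np.array([
    [0.5, 0.2, 0.1],
    [0.3, 0.6, 0.2],
    [0.2, 0.2, 0.7],
])
assert np.allclose(P.sum(axis=0), 1.0)       # each column sums to 1

# Homogeneous Markov process: iterate p <- P p from a uniform start until
# the probabilities of interconnection settle at the stationary distribution.
p = np.full(3, 1.0 / 3.0)
for _ in range(200):
    p = P @ p

print(np.round(p, 3))                        # stationary probabilities
print(np.isclose(p.sum(), 1.0))              # still a probability vector
```

For an irreducible, aperiodic chain like this one, the power iteration converges geometrically, so a few hundred iterations are far more than needed; the converged probabilities are what the algorithm thresholds to select gene subsets.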
NASA Technical Reports Server (NTRS)
Swift, C. T.; Goodberlet, M. A.; Wilkerson, J. C.
1990-01-01
For the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), an operational wind speed algorithm was developed. The algorithm is based on the D-matrix approach, which seeks a linear relationship between measured SSM/I brightness temperatures and environmental parameters. D-matrix performance was validated by comparing algorithm-derived wind speeds with near-simultaneous and co-located measurements made by off-shore ocean buoys. Other topics include error budget modeling, alternate wind speed algorithms, and D-matrix performance with one or more inoperative SSM/I channels.
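A D-matrix-style linear fit between brightness temperatures and wind speed can be sketched with synthetic data; the coefficients, noise level and channel count below are illustrative and are not the operational algorithm's values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic training set: brightness temperatures (K) from 4 SSM/I-like
# channels and collocated buoy wind speeds (m/s).
n = 200
Tb = 150.0 + 100.0 * rng.random((n, 4))
true_d = np.array([0.05, -0.03, 0.08, -0.02])            # illustrative only
wind = 6.0 + Tb @ true_d + 0.3 * rng.standard_normal(n)  # "buoy truth"

# Fit the linear D-matrix relation  wind = d0 + Tb . d  by least squares.
A = np.column_stack([np.ones(n), Tb])
coef, *_ = np.linalg.lstsq(A, wind, rcond=None)

pred = A @ coef
rmse = np.sqrt(np.mean((pred - wind) ** 2))
print(f"fit RMSE = {rmse:.2f} m/s")
```

The validation described in the abstract amounts to computing exactly this kind of residual statistic, but against independent buoy measurements rather than the training data.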
NASA Astrophysics Data System (ADS)
Sekihara, Kensuke; Kawabata, Yuya; Ushio, Shuta; Sumiya, Satoshi; Kawabata, Shigenori; Adachi, Yoshiaki; Nagarajan, Srikantan S.
2016-06-01
Objective. In functional electrophysiological imaging, signals are often contaminated by interference that can be of considerable magnitude compared to the signals of interest. This paper proposes a novel algorithm for removing such interference that does not require separate noise measurements. Approach. The algorithm is based on a dual definition of the signal subspace in the spatial and time domains. Since the algorithm makes use of this duality, it is named the dual signal subspace projection (DSSP). The DSSP algorithm first projects the columns of the measured data matrix onto the inside and outside of the spatial-domain signal subspace, creating a set of two preprocessed data matrices. The intersection of the row spans of these two matrices is estimated as the time-domain interference subspace. The original data matrix is then projected onto the subspace orthogonal to this interference subspace. Main results. The DSSP algorithm is validated using computer simulation and two sets of real biomagnetic data: spinal cord evoked field data measured from a healthy volunteer and magnetoencephalography data from a patient with a vagus nerve stimulator. Significance. The proposed DSSP algorithm is effective for removing overlapping interference in a wide variety of biomagnetic measurements.
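The two projection steps can be sketched on synthetic data as follows. This is a simplified reading of the DSSP idea (known spatial subspace, rank-1 signal and rank-1 interference), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 10, 300                               # channels, time samples
t = np.arange(n)

# Assumed-known spatial signal subspace Us (e.g. from a source model)
Us, _ = np.linalg.qr(rng.standard_normal((m, 3)))
a = Us @ np.array([1.0, 0.5, -0.3])          # signal topography, inside Us
c = rng.standard_normal(m)                   # interference topography
s = np.sin(2 * np.pi * 5 * t / n)            # signal time course
q = np.sin(2 * np.pi * 13 * t / n)           # interference time course
B = np.outer(a, s) + 3.0 * np.outer(c, q)    # measured data matrix

# Step 1: project the columns of B inside/outside the spatial subspace
P = Us @ Us.T
B_in, B_out = P @ B, (np.eye(m) - P) @ B

# Step 2: estimate the time-domain interference subspace as the
# intersection of the row spans of the two preprocessed matrices
def row_basis(X, tol=1e-8):
    _, sv, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[sv > tol * sv[0]].T            # orthonormal basis, n x rank

V1, V2 = row_basis(B_in), row_basis(B_out)
u, sv, _ = np.linalg.svd(V1.T @ V2)
k = int((sv > 0.99).sum())                   # shared directions (cosine ~ 1)
common = V1 @ u[:, :k]                       # basis of the intersection

# Step 3: project the data onto the orthogonal complement
B_clean = B @ (np.eye(n) - common @ common.T)

signal = np.outer(a, s)
err = np.linalg.norm(B_clean - signal) / np.linalg.norm(signal)
print(f"relative residual after cleaning: {err:.2e}")
```

Because the interference time course lies in both row spans while the signal time course appears only inside the spatial subspace, the intersection picks out exactly the interference direction, and the final projection removes it with essentially no damage to the signal.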
Investigating the incremental validity of cognitive variables in early mathematics screening.
Clarke, Ben; Shanley, Lina; Kosty, Derek; Baker, Scott K; Cary, Mari Strand; Fien, Hank; Smolkowski, Keith
2018-03-26
The purpose of this study was to investigate the incremental validity of a set of domain-general cognitive measures added to a traditional screening battery of early numeracy measures. The sample consisted of 458 kindergarten students, of whom 285 were designated as severely at risk for mathematics difficulty. Hierarchical multiple regression results indicated that the Wechsler Abbreviated Scales of Intelligence (WASI) Matrix Reasoning and Vocabulary subtests and the Digit Span Forward and Backward measures explained a small but unique portion of the variance in kindergarten students' mathematics performance on the Test of Early Mathematics Ability-Third Edition (TEMA-3) when controlling for Early Numeracy Curriculum Based Measurement (EN-CBM) screening measures (R² change = .01). Furthermore, the incremental validity of the domain-general cognitive measures was relatively stronger for the severely at-risk sample. We discuss the results in light of instructional decision making and note that the findings do not justify adding domain-general cognitive assessments to mathematics screening batteries.
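The R² change statistic at the heart of this design can be sketched with simulated data; the effect sizes and number of predictors below are invented, and only the computation mirrors the study:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 458                                      # sample size as in the study

# Simulated scores: block 1 = early numeracy screeners, block 2 = domain
# general cognitive measures carrying only a small unique contribution.
numeracy = rng.standard_normal((n, 2))
cognitive = rng.standard_normal((n, 2))
outcome = (numeracy @ np.array([0.6, 0.4])
           + cognitive @ np.array([0.1, 0.1])
           + rng.standard_normal(n))

def r_squared(X, y):
    """R-squared of an ordinary least squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_base = r_squared(numeracy, outcome)
r2_full = r_squared(np.column_stack([numeracy, cognitive]), outcome)
print(f"R-squared change = {r2_full - r2_base:.3f}")   # small positive value
```

For nested OLS models the change is never negative, so the substantive question is only whether it is large enough to justify the extra testing time, which is the trade-off the authors weigh.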
Semiotic indexing of digital resources
Parker, Charles T; Garrity, George M
2014-12-02
A method of classifying a plurality of documents. The method includes steps of providing a first set of classification terms and a second set of classification terms, the second set of classification terms being different from the first set of classification terms; generating a first frequency array of a number of occurrences of each term from the first set of classification terms in each document; generating a second frequency array of a number of occurrences of each term from the second set of classification terms in each document; generating a first similarity matrix from the first frequency array; generating a second similarity matrix from the second frequency array; determining an entrywise combination of the first similarity matrix and the second similarity matrix; and clustering the plurality of documents based on the result of the entrywise combination.
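The claimed steps can be sketched end to end on four toy documents. The term sets, similarity measure (cosine) and threshold clustering below are one possible instantiation, not necessarily the patented one:

```python
import numpy as np

docs = [
    "gene expression in yeast",
    "yeast gene regulation",
    "stock market volatility",
    "market risk and volatility",
]
# Two hypothetical classification-term vocabularies
terms_a = ["gene", "yeast"]
terms_b = ["market", "volatility"]

def frequency_array(docs, terms):
    """Number of occurrences of each classification term in each document."""
    return np.array([[d.split().count(t) for t in terms] for d in docs])

def similarity(freq):
    """Cosine similarity between documents based on a frequency array."""
    norms = np.linalg.norm(freq, axis=1, keepdims=True)
    unit = np.divide(freq, norms, out=np.zeros(freq.shape), where=norms > 0)
    return unit @ unit.T

sim_a = similarity(frequency_array(docs, terms_a))
sim_b = similarity(frequency_array(docs, terms_b))

# Entrywise combination of the two similarity matrices (elementwise max here)
combined = np.maximum(sim_a, sim_b)

# Greedy single-pass clustering on the combined similarities
labels = [-1] * len(docs)
next_label = 0
for i in range(len(docs)):
    if labels[i] == -1:
        labels[i] = next_label
        for j in range(i + 1, len(docs)):
            if combined[i, j] > 0.5:
                labels[j] = next_label
        next_label += 1
print(labels)  # [0, 0, 1, 1]
```

The point of the entrywise combination is that documents similar under either vocabulary end up similar in the combined matrix, so the two biology documents and the two finance documents each collapse into one cluster.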
Kriegsmann, Mark; Casadonte, Rita; Kriegsmann, Jörg; Dienemann, Hendrik; Schirmacher, Peter; Hendrik Kobarg, Jan; Schwamborn, Kristina; Stenzinger, Albrecht; Warth, Arne; Weichert, Wilko
2016-01-01
Histopathological subtyping of non-small cell lung cancer (NSCLC) into adenocarcinoma (ADC) and squamous cell carcinoma (SqCC) is of utmost relevance for treatment stratification. However, current immunohistochemistry (IHC) based typing approaches on biopsies are imperfect; therefore, novel analytical methods for reliable subtyping are needed. We analyzed formalin-fixed paraffin-embedded tissue cores of NSCLC by matrix-assisted laser desorption/ionization (MALDI) imaging on tissue microarrays to identify and validate discriminating MALDI imaging profiles for NSCLC subtyping. 110 ADC and 98 SqCC were used to train a linear discriminant analysis (LDA) model. Results were validated on a separate set of 58 ADC and 60 SqCC. Selected differentially expressed proteins were identified by tandem mass spectrometry and validated by IHC. The LDA classification model incorporated 339 m/z values. In the validation cohort, in 117 cases (99.1%) the MALDI classification on tissue cores was in accordance with the pathological diagnosis made on the resection specimen. Overall, three cases in the combined cohorts were discordant; after reevaluation, two had initially been misclassified by pathology whereas one was classified incorrectly by MALDI. Identification of differentially expressed peptides detected well-known IHC discriminators (CK5, CK7) but also less well known differentially expressed proteins (CK15, HSP27). In conclusion, MALDI imaging on NSCLC tissue cores as small biopsy equivalents is capable of discriminating lung ADC and SqCC with very high accuracy. In addition, replacing multislide IHC with a one-slide MALDI approach may also save tissue for subsequent predictive molecular testing. We therefore advocate pursuing routine diagnostic implementation strategies for MALDI imaging in solid tumor typing. PMID:27473201
PACIC Instrument: disentangling dimensions using published validation models.
Iglesias, K; Burnand, B; Peytremann-Bridevaux, I
2014-06-01
To better understand the structure of the Patient Assessment of Chronic Illness Care (PACIC) instrument, and more specifically to test all published validation models using a single data set and appropriate statistical tools, we conducted a validation study using data from a cross-sectional survey of a population-based sample of non-institutionalized adults with diabetes residing in Switzerland (canton of Vaud), based on the French version of the 20-item PACIC instrument (5-point response scale). We conducted validation analyses using confirmatory factor analysis (CFA). The original five-dimension model and other published models were tested with three types of CFA, based on: (i) a Pearson estimator of the variance-covariance matrix, (ii) a polychoric correlation matrix and (iii) a likelihood estimation with a multinomial distribution for the manifest variables. All models were assessed using loadings and goodness-of-fit measures. The analytical sample included 406 patients. Mean age was 64.4 years and 59% were men. Medians of item responses varied between 1 and 4 (range 1-5), and the proportion of missing values ranged between 5.7 and 12.3%. Strong floor and ceiling effects were present. Even though the loadings of the tested models were relatively high, the only model showing acceptable fit was the 11-item single-dimension model. PACIC scores were associated with the expected variables of the field. Our results showed that the model considering 11 items in a single dimension exhibited the best fit for our data. A single score, complementing the consideration of single-item results, might be used instead of the five dimensions usually described.
Choi, Hyun Duck; Ahn, Choon Ki; Karimi, Hamid Reza; Lim, Myo Taeg
2017-10-01
This paper studies delay-dependent exponential dissipative and l2-l∞ filtering problems for discrete-time switched neural networks (DSNNs) including time-delayed states. By introducing a novel discrete-time inequality, which is a discrete-time version of the continuous-time Wirtinger-type inequality, we establish new sets of linear matrix inequality (LMI) criteria such that the discrete-time filtering error systems are exponentially stable with guaranteed performance in the exponential dissipative and l2-l∞ senses. The design of the desired exponential dissipative and l2-l∞ filters for DSNNs can be achieved by solving the proposed sets of LMI conditions. Via numerical simulation results, we show the validity of the proposed discrete-time filter design approach.
Assessing Fit of Item Response Models Using the Information Matrix Test
ERIC Educational Resources Information Center
Ranger, Jochen; Kuhn, Jorg-Tobias
2012-01-01
The information matrix can equivalently be determined via the expectation of the Hessian matrix or the expectation of the outer product of the score vector. The identity of these two matrices, however, is only valid in case of a correctly specified model. Therefore, differences between the two versions of the observed information matrix indicate…
Managing design excellence tools during the development of new orthopaedic implants.
Défossez, Henri J P; Serhan, Hassan
2013-11-01
Design excellence (DEX) tools have been widely used for years in some industries for their potential to facilitate new product development. The medical sector, targeted by cost pressures, has therefore started adopting them. Numerous tools are available; however, only appropriate deployment during the new product development stages can optimize the overall process. The primary study objectives were to describe generic tools, illustrate their implementation and management during the development of new orthopaedic implants, and compile a reference package. Secondary objectives were to present the DEX tool investment costs and savings, since the method can require significant resources for which companies must carefully plan. The publicly available DEX method "Define Measure Analyze Design Verify Validate" was adopted and implemented during the development of a new spinal implant. Several tools proved most successful at developing the correct product, addressing clinical needs and increasing market penetration potential, while reducing design iterations and manufacturing validations. Cost analysis and a Pugh Matrix coupled with multi-generation planning enabled the development of a strong rationale to activate the project and to set the vision and goals. Improved risk management and a product usage map established a robust technical verification-validation program. Design of experiments and process quantification facilitated design for manufacturing of critical features as early as the concept phase. Biomechanical testing with analysis of variance provided a validation model with a recognized statistical performance baseline.
Among those tools, only certain ones required minimal resources (i.e., business case, multi-generational plan, project value proposition, Pugh Matrix, critical-to-quality process validation techniques), while others required significant investments (i.e., voice of customer, product usage map, improved risk management, design of experiments, biomechanical testing techniques). All of the techniques used provided savings exceeding their investment costs. Some other tools were considered and found less relevant. A matrix summarized the investment costs and the estimated savings generated. Globally, all companies can benefit from using DEX by smartly selecting and estimating those tools with the best return on investment at the start of the project. For this, a good understanding of the available company resources, background and development strategy is needed. In conclusion, it was possible to illustrate that appropriate management of design excellence tools can greatly facilitate the development of new orthopaedic implant systems.
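A minimal Pugh Matrix of the kind referenced above might look as follows; the criteria, concept names and scores are hypothetical:

```python
# Hypothetical Pugh Matrix comparing implant design concepts against a
# datum (reference) design: +1 better, 0 same, -1 worse per criterion.
criteria = ["fixation strength", "insertion time",
            "manufacturing cost", "MRI compatibility"]
concepts = {
    "Concept A": [+1, 0, -1, +1],
    "Concept B": [0, +1, +1, 0],
    "Datum":     [0, 0, 0, 0],
}

for name, scores in concepts.items():
    print(f"{name}: net score {sum(scores):+d} "
          f"(pluses {scores.count(1)}, minuses {scores.count(-1)})")
# Concept B scores highest here and would be carried forward or hybridized
```

In practice the matrix is used less for the raw totals than for the discussion it forces about why each concept wins or loses a criterion, which is how it supports the project rationale described above.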
Front-line ordering clinicians: matching workforce to workload.
Fieldston, Evan S; Zaoutis, Lisa B; Hicks, Patricia J; Kolb, Susan; Sladek, Erin; Geiger, Debra; Agosto, Paula M; Boswinkel, Jan P; Bell, Louis M
2014-07-01
Matching workforce to workload is particularly important in healthcare delivery, where an excess of workload for the available workforce may negatively impact processes and outcomes of patient care and resident learning. Hospitals currently lack a means to measure and match dynamic workload and workforce factors. This article describes our work to develop and obtain consensus for use of an objective tool to dynamically match the front-line ordering clinician (FLOC) workforce to clinical workload in a variety of inpatient settings. We undertook development of a tool to represent hospital workload and workforce based on literature reviews, discussions with clinical leadership, and repeated validation sessions. We met with physicians and nurses from every clinical care area of our large, urban children's hospital at least twice. We successfully created a tool in a matrix format that is objective and flexible and can be applied to a variety of settings. We presented the tool in 14 hospital divisions and received widespread acceptance among physician, nursing, and administrative leadership. The hospital uses the tool to identify gaps in FLOC coverage and guide staffing decisions. Hospitals can better match workload to workforce if they can define and measure these elements. The Care Model Matrix is a flexible, objective tool that quantifies the multidimensional aspects of workload and workforce. The tool, which uses multiple variables that are easily modifiable, can be adapted to a variety of settings.
NASA Astrophysics Data System (ADS)
Sun, Yang; Liao, Kuo-Chih; Sun, Yinghua; Park, Jesung; Marcu, Laura
2008-02-01
A unique tissue phantom is reported here that mimics the optical and acoustical properties of biological tissue and enables testing and validation of a dual-modality clinical diagnostic system combining time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) and ultrasound backscatter microscopy (UBM). The phantom consisted of contrast agents including silicon dioxide particles with a range of diameters from 0.5 to 10 μm acting as optical and acoustical scatterers, and FITC-conjugated dextran mimicking the endogenous fluorophore in tissue. The agents were encapsulated in a polymer bead attached to the end of an optical fiber with a 200 μm diameter using a UV-induced polymerization technique. A set of beads with fibers were then implanted into a gel-based matrix with controlled patterns, including a design with lateral distribution and a design with successively changing depth. The configuration presented here allowed validation of the hybrid fluorescence spectroscopic and ultrasonic system by detecting the lateral and depth distribution of the contrast agents, as well as coregistration of the ultrasonic image with spectroscopic data. In addition, the depth of the beads in the gel matrix was changed to explore the effect of different concentration ratios of the mixture on the emitted fluorescence signal.
Muscle synergies during bench press are reliable across days.
Kristiansen, Mathias; Samani, Afshin; Madeleine, Pascal; Hansen, Ernst Albin
2016-10-01
Muscle synergies have been investigated during different types of human movement using nonnegative matrix factorization. However, no reports are available on the between-day reliability of the method. To evaluate between-day reliability, 21 subjects performed bench press in two test sessions separated by approximately 7 days. The movement consisted of 3 sets of 8 repetitions at 60% of the three-repetition maximum in bench press. Muscle synergies were extracted from electromyography data of 13 muscles using nonnegative matrix factorization. To evaluate between-day reliability, we performed a cross-correlation analysis and a cross-validation analysis, in which the synergy components extracted in the first test session were recomputed using the fixed synergy components from the second test session. Two muscle synergies accounted for >90% of the total variance and reflected the concentric and eccentric phases, respectively. The cross-correlation values were strong to very strong (r-values between 0.58 and 0.89), while the cross-validation values ranged from substantial to almost perfect (ICC(3,1) values between 0.70 and 0.95). The present findings revealed that the same general structure of the muscle synergies was present across days; the extraction of muscle synergies is thus deemed reliable. Copyright © 2016 Elsevier Ltd. All rights reserved.
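The factorization step at the core of this approach can be sketched with generic multiplicative-update NMF in NumPy. This is a minimal illustration of the idea, not the authors' pipeline; the iteration budget and the synthetic EMG data below are assumptions.

```python
import numpy as np

def extract_synergies(emg, n_synergies, n_iter=1000, seed=0):
    """Factor a nonnegative EMG matrix (muscles x time samples) into
    synergy weights W (muscles x synergies) and activations H
    (synergies x time) using Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, t = emg.shape
    W = rng.random((m, n_synergies)) + 1e-4
    H = rng.random((n_synergies, t)) + 1e-4
    eps = 1e-9  # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ emg) / (W.T @ W @ H + eps)
        W *= (emg @ H.T) / (W @ H @ H.T + eps)
    return W, H

def vaf(emg, W, H):
    """Variance accounted for by the reconstruction W @ H."""
    return 1.0 - ((emg - W @ H) ** 2).sum() / (emg ** 2).sum()
```

On exact rank-2 data, two synergies recover essentially all of the variance; a between-day cross-validation of the kind reported above would then refit H while holding W fixed across sessions.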
The Development of a Finite Volume Method for Modeling Sound in Coastal Ocean Environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Wen; Yang, Zhaoqing; Copping, Andrea E.
With the rapid growth of marine renewable energy and offshore wind energy, there have been concerns that the noise generated from construction and operation of the devices may interfere with marine animals' communication. In this research, an underwater sound model is developed to simulate sound propagation generated by marine-hydrokinetic energy (MHK) devices or offshore wind (OSW) energy platforms. Finite volume and finite difference methods are developed to solve the 3D Helmholtz equation of sound propagation in the coastal environment. For the finite volume method, the grid system consists of triangular grids in the horizontal plane and sigma layers in the vertical dimension. A 3D sparse matrix solver with complex coefficients is formed for solving the resulting acoustic pressure field. The Complex Shifted Laplacian Preconditioner (CSLP) method is applied to efficiently solve the matrix system iteratively with MPI parallelization on a high-performance cluster. The sound model is then coupled with the Finite Volume Community Ocean Model (FVCOM) for simulating sound propagation generated by human activities in a range-dependent setting, such as offshore wind energy platform construction and tidal stream turbines. As a proof of concept, initial validation of the finite difference solver is presented for two coastal wedge problems. Validation of the finite volume method will be reported separately.
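The linear-algebra kernel described here, a Helmholtz discretization with complex coefficients, can be illustrated in one dimension. This toy uses a dense direct solver and a crude complex-wavenumber absorbing layer; the wavenumber, layer width, and source position are arbitrary choices, unlike the paper's 3D finite-volume system with CSLP-preconditioned iteration.

```python
import numpy as np

def helmholtz_1d(n=400, k=30.0, src=200):
    """Finite-difference 1D Helmholtz equation u'' + k^2 u = f on [0, 1]
    with homogeneous Dirichlet ends; a complex-shifted wavenumber near
    both ends acts as a simple absorbing layer (a toy stand-in for
    radiation conditions)."""
    h = 1.0 / (n + 1)
    damp = np.zeros(n)
    w = int(0.1 * n)                     # absorbing-layer width
    damp[:w] = np.linspace(1, 0, w)
    damp[-w:] = np.linspace(0, 1, w)
    k2 = (k * (1 + 0.5j * damp)) ** 2    # complex coefficients
    A = np.zeros((n, n), dtype=complex)
    idx = np.arange(n)
    A[idx, idx] = -2.0 / h**2 + k2
    A[idx[:-1], idx[:-1] + 1] = 1.0 / h**2
    A[idx[1:], idx[1:] - 1] = 1.0 / h**2
    f = np.zeros(n, dtype=complex)
    f[src] = 1.0 / h                     # discrete point source
    u = np.linalg.solve(A, f)
    return A, f, u
```

The complex shift used here for absorption is also the idea behind CSLP, which preconditions the iteration with a complex-shifted version of the Helmholtz operator.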
Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph
2016-01-01
Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified. PMID:26927169
Hagelstein, V; Ortland, I; Wilmer, A; Mitchell, S A; Jaehde, U
2016-12-01
Integrating the patient's perspective has become an increasingly important component of adverse event reporting. The National Cancer Institute has developed a Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE™). This instrument has been translated into German and linguistically validated; however, its quantitative measurement properties have not been evaluated. A German-language survey that included 31 PRO-CTCAE items, as well as the EORTC QLQ-C30 and the Oral Mucositis Daily Questionnaire (OMDQ), was distributed at 10 cancer treatment settings in Germany and Austria. Item quality was assessed by analysis of acceptability and comprehensibility. Reliability was evaluated using Cronbach's alpha, and validity by principal components analysis (PCA), the multitrait-multimethod matrix (MTMM), and known-groups validity techniques. Of 660 surveys distributed to the study centres, 271 were returned (return rate 41%), and data from 262 were available for analysis. Participants' median age was 59.7 years, and 69.5% of the patients were female. Analysis of item quality supported the comprehensibility of the 31 PRO-CTCAE items. Reliability was very good; Cronbach's alpha coefficients were >0.9 for almost all item clusters. Construct validity of the PRO-CTCAE core item set was shown by identifying 10 conceptually meaningful item clusters via PCA. Moreover, construct validity was confirmed by the MTMM: the monotrait-heteromethod comparison showed 100% high correlation, whereas the heterotrait-monomethod comparison indicated 0% high correlation. Known-groups validity was supported; PRO-CTCAE scores were significantly lower for those with impaired versus preserved health-related quality of life. A set of 31 items drawn from the German PRO-CTCAE item library demonstrated favourable measurement properties.
These findings add to the body of evidence that PRO-CTCAE provides a rigorous method to capture patient self-reports of symptomatic toxicity for use in cancer clinical trials. © The Author 2016. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
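The reliability statistic reported above is straightforward to compute; a minimal NumPy version of Cronbach's alpha follows, with invented item scores rather than PRO-CTCAE responses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Identical items give alpha = 1 exactly, and alpha approaches 1 as items share more common variance, which is why near-duplicate symptom items in a cluster push the coefficient above 0.9.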
2014-01-01
Background The aim of this discovery study was the identification of peptide serum biomarkers for detecting biliary tract cancer (BTC) using samples from healthy volunteers and benign cases of biliary disease as control groups. This work was based on the hypothesis that cancer-specific exopeptidases exist and that their activities in serum can generate cancer-predictive peptide fragments from circulating proteins during coagulation. Methods This case control study used a semi-automated platform incorporating polypeptide extraction linked to matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOF MS) to profile 92 patient serum samples. Predictive models were generated to test a validation serum set from BTC cases and healthy volunteers. Results Several peptide peaks were found that could significantly differentiate BTC patients from healthy controls and benign biliary disease. A predictive model resulted in a sensitivity of 100% and a specificity of 93.8% in detecting BTC in the validation set, whilst another model gave a sensitivity of 79.5% and a specificity of 83.9% in discriminating BTC from benign biliary disease samples in the training set. Discriminatory peaks were identified by tandem MS as fragments of abundant clotting proteins. Conclusions Serum MALDI MS peptide signatures can accurately discriminate patients with BTC from healthy volunteers. PMID:24495412
ERIC Educational Resources Information Center
Van Deth, Leah M.
2013-01-01
The purpose of the present study was to investigate the validity of the Culture-Language Interpretive Matrix (C-LIM; Flanagan, Ortiz, & Alfonso, 2013) when applied to scores from the Kaufman Assessment Battery for Children, 2nd Edition (KABC-II; Kaufman & Kaufman, 2004). Data were analyzed from the KABC-II standardization sample as well as…
NASA Astrophysics Data System (ADS)
Ishida, Keiichi
2018-05-01
This paper aims to show the capability of Jacques Bertin's Orderable Matrix, a visualization method for analyzing and recognizing data. The matrix displays data by replacing numbers with visual elements. As an example, using a set of data on natural hazard rankings for selected metropolitan cities around the world, this paper describes how the Orderable Matrix handles the data set and reveals its characteristic factors. Beyond presenting a risk ranking of cities, the Orderable Matrix shows how the cities differ from one another in the hazards they face. Furthermore, the data visualized by the Orderable Matrix allows us to see the characteristics of the data set comprehensively and at a glance.
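A crude automatic stand-in for Bertin's reordering can be sketched by sorting rows (cities) and columns (hazards) by their marginal sums; genuine Orderable Matrix work permutes interactively or by profile similarity, and the tiny 0/1 table below is invented.

```python
import numpy as np

def reorder(M):
    """Sort rows and columns by decreasing marginal sums so that
    heavy profiles cluster in one corner of the display - a simple
    stand-in for Bertin's manual row/column permutation."""
    r = np.argsort(-M.sum(axis=1), kind="stable")
    c = np.argsort(-M.sum(axis=0), kind="stable")
    return M[np.ix_(r, c)], r, c

# Invented city-by-hazard indicator table.
hazards = np.array([[0, 1, 0],
                    [1, 1, 1],
                    [1, 0, 1]])
ordered, rows, cols = reorder(hazards)
```

After reordering, the most hazard-exposed city and the most widespread hazard sit in the top-left corner, which is what makes the pattern readable at a glance.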
Mukhopadhyay, Anirban; Maulik, Ujjwal; Bandyopadhyay, Sanghamitra
2012-01-01
Identification of potential viral-host protein interactions is a vital and useful approach towards development of new drugs targeting those interactions. Computational tools are increasingly being utilized for predicting viral-host interactions. Recently a database containing records of experimentally validated interactions between a set of HIV-1 proteins and a set of human proteins has been published. The problem of predicting new interactions based on this database is usually posed as a classification problem. However, posing the problem as a classification one suffers from the lack of biologically validated negative interactions. Therefore, it would be beneficial to use the existing database for predicting new viral-host interactions without the need of negative samples. Motivated by this, in this article, the HIV-1–human protein interaction database has been analyzed using association rule mining. The main objective is to identify a set of association rules both among the HIV-1 proteins and among the human proteins, and use these rules for predicting new interactions. In this regard, a novel association rule mining technique based on biclustering has been proposed for discovering frequent closed itemsets followed by the association rules from the adjacency matrix of the HIV-1–human interaction network. Novel HIV-1–human interactions have been predicted based on the discovered association rules and tested for biological significance. For validation of the predicted new interactions, gene ontology-based and pathway-based studies have been performed. These studies show that the human proteins which are predicted to interact with a particular viral protein share many common biological activities. Moreover, a literature survey has been used for validation purposes to identify some predicted interactions that are already validated experimentally but not present in the database. Comparison with other prediction methods is also discussed. PMID:22539940
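The rule-mining step can be sketched generically: from a binary adjacency matrix of known interactions, compute itemset supports and rule confidences. This is plain Apriori-style counting, not the authors' biclustering-based closed-itemset method, and the tiny matrix and thresholds are synthetic.

```python
import numpy as np
from itertools import combinations

def mine_rules(adj, min_support=0.4, min_conf=0.8):
    """adj: rows act as transactions (e.g. HIV-1 proteins), columns as
    items (e.g. human proteins). Returns single-item rules x -> y as
    (x, y, support, confidence) tuples that clear both thresholds."""
    n = adj.shape[0]
    rules = []
    for a, b in combinations(range(adj.shape[1]), 2):
        both = (adj[:, a] & adj[:, b]).sum() / n   # support of {a, b}
        for x, y in ((a, b), (b, a)):
            sx = adj[:, x].sum() / n               # support of {x}
            if both >= min_support and sx > 0 and both / sx >= min_conf:
                rules.append((x, y, both, both / sx))
    return rules
```

A mined rule x -> y then suggests a candidate interaction wherever a viral protein is known to bind x but has no recorded link to y.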
Snorradóttir, Bergthóra S; Jónsdóttir, Fjóla; Sigurdsson, Sven Th; Másson, Már
2014-08-01
A model is presented for transdermal drug delivery from single-layered silicone matrix systems. The work is based on our previous results that, in particular, extend the well-known Higuchi model. Recently, we have introduced a numerical transient model describing matrix systems where the drug dissolution can be non-instantaneous. Furthermore, our model can describe complex interactions within a multi-layered matrix and the matrix to skin boundary. The power of the modelling approach presented here is further illustrated by allowing the possibility of a donor solution. The model is validated by a comparison with experimental data, as well as validating the parameter values against each other, using various configurations with donor solution, silicone matrix and skin. Our results show that the model is a good approximation to real multi-layered delivery systems. The model offers the ability of comparing drug release for ibuprofen and diclofenac, which cannot be analysed by the Higuchi model because the dissolution in the latter case turns out to be limited. The experiments and numerical model outlined in this study could also be adjusted to more general formulations, which enhances the utility of the numerical model as a design tool for the development of drug-loaded matrices for trans-membrane and transdermal delivery. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
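The kind of transient release such a model produces can be sketched with a one-dimensional explicit finite-difference scheme for dissolved drug diffusing out of a slab matrix into a perfect sink. The geometry, diffusivity, and time step below are arbitrary, and the authors' model additionally covers non-instantaneous dissolution, multiple layers, matrix-to-skin coupling, and a donor solution.

```python
import numpy as np

def matrix_release(D=1e-6, L=0.1, n=201, t_end=200.0, dt=0.01):
    """Explicit FD solution of c_t = D c_xx on [0, L]: no-flux
    backing at x=0, perfect sink at x=L. Returns sample times and
    the cumulative released fraction."""
    dx = L / (n - 1)
    r = D * dt / dx**2
    assert r < 0.5                      # explicit-scheme stability
    c = np.ones(n)                      # uniform dissolved loading
    c[-1] = 0.0
    total0 = c.sum()
    times, released = [], []
    for s in range(int(t_end / dt)):
        c[1:-1] += r * (c[2:] - 2 * c[1:-1] + c[:-2])
        c[0] = c[1]                     # no-flux mirror
        c[-1] = 0.0                     # perfect sink
        if s % 100 == 0:
            times.append((s + 1) * dt)
            released.append(1.0 - c.sum() / total0)
    return np.array(times), np.array(released)
```

At early times the released fraction grows like the square root of time, which is the Higuchi signature; the dissolution-limited cases discussed in the paper are exactly those that deviate from this behaviour.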
Sparse PCA with Oracle Property
Gu, Quanquan; Wang, Zhaoran; Liu, Han
2014-01-01
In this paper, we study the estimation of the k-dimensional sparse principal subspace of covariance matrix Σ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank-k, and attains a s/n statistical rate of convergence with s being the subspace sparsity level and n the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator that enjoys the oracle property, we prove that, another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets. PMID:25684971
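The support-recovery goal can be illustrated with a much simpler relative of the paper's semidefinite-relaxation estimators: a truncated power iteration that hard-thresholds to the s largest-magnitude coordinates each step. The spiked covariance below is synthetic, and this method is not the authors'.

```python
import numpy as np

def truncated_power_pca(Sigma, s, n_iter=100):
    """Estimate a sparse leading eigenvector of Sigma by power
    iteration with hard truncation to the s largest-|.| entries."""
    p = Sigma.shape[0]
    v = np.ones(p) / np.sqrt(p)
    for _ in range(n_iter):
        v = Sigma @ v
        drop = np.argsort(np.abs(v))[:-s]   # all but the top-s entries
        v[drop] = 0.0
        v /= np.linalg.norm(v)
    return v

# Spiked covariance with a 3-sparse leading eigenvector.
u = np.zeros(20)
u[[0, 3, 7]] = [0.8, 0.5, 0.6]
u /= np.linalg.norm(u)
Sigma = np.eye(20) + 5.0 * np.outer(u, u)
v = truncated_power_pca(Sigma, 3)
```

On this easy spiked instance the iteration recovers the true support exactly; the paper's contribution is precisely that its estimators achieve such recovery, with rates, without assuming the spiked covariance model.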
The multilingual matrix test: Principles, applications, and comparison across languages: A review.
Kollmeier, Birger; Warzybok, Anna; Hochmuth, Sabine; Zokoll, Melanie A; Uslar, Verena; Brand, Thomas; Wagener, Kirsten C
2015-01-01
A review of the development, evaluation, and application of the so-called 'matrix sentence test' for speech intelligibility testing in a multilingual society is provided. The format allows for repeated use with the same patient in her or his native language even if the experimenter does not understand the language. Using a closed-set format, the syntactically fixed, semantically unpredictable sentences (e.g. 'Peter bought eight white ships') provide a vocabulary of 50 words (10 alternatives for each position in the sentence). The principles (i.e. construction, optimization, evaluation, and validation) for 14 different languages are reviewed. Studies of the influence of talker, language, noise, the training effect, open vs. closed conduct of the test, and the subjects' language proficiency are reported and application examples are discussed. The optimization principles result in a steep intelligibility function and a high homogeneity of the speech materials presented and test lists employed, yielding a high efficiency and excellent comparability across languages. The characteristics of speakers generally dominate the differences across languages. The matrix test format with the principles outlined here is recommended for producing efficient, reliable, and comparable speech reception thresholds across different languages.
Matrix metalloproteinase-2 plays a critical role in overload induced skeletal muscle hypertrophy.
Zhang, Qia; Joshi, Sunil K; Lovett, David H; Zhang, Bryon; Bodine, Sue; Kim, Hubert T; Liu, Xuhui
2014-01-01
Extracellular matrix (ECM) components are instrumental in maintaining homeostasis and muscle fiber functional integrity. Skeletal muscle hypertrophy is associated with ECM remodeling. Specifically, recent studies have reported the involvement of matrix metalloproteinases (MMPs) in muscle ECM remodeling. However, the functional role of MMPs in muscle hypertrophy remains largely unknown. In this study, we examined the role of MMP-2 in skeletal muscle hypertrophy using a previously validated method in which the plantaris muscles of mice were subjected to mechanical overload through the surgical removal of synergist muscles (gastrocnemius and soleus). Following two weeks of overload, we observed a significant increase in MMP-2 activity and up-regulation of ECM components and remodeling enzymes in the plantaris muscles of wild-type mice. However, MMP-2 knockout mice developed significantly less hypertrophy and ECM remodeling in response to overload compared to their wild-type littermates. Investigation of protein synthesis rate and Akt/mTOR signaling revealed no difference between wild-type and MMP-2 knockout mice, suggesting that the difference in hypertrophy was independent of protein synthesis. Taken together, our results suggest that MMP-2 is a key mediator of ECM remodeling in the setting of skeletal muscle hypertrophy.
A three-dimensional nonlinear Timoshenko beam based on the core-congruential formulation
NASA Technical Reports Server (NTRS)
Crivelli, Luis A.; Felippa, Carlos A.
1992-01-01
A three-dimensional, geometrically nonlinear two-node Timoshenko beam element based on the total Lagrangian description is derived. The element behavior is assumed to be linear elastic, but no restrictions are placed on the magnitude of finite rotations. The resulting element has twelve degrees of freedom: six translational components and six rotational-vector components. The formulation uses the Green-Lagrange strains and second Piola-Kirchhoff stresses as energy-conjugate variables and accounts for the bending-stretching and bending-torsional coupling effects without special provisions. The core-congruential formulation (CCF) is used to derive the discrete equations in a staged manner. Core equations involving the internal force vector and tangent stiffness matrix are developed at the particle level. A sequence of matrix transformations carries these equations to beam cross-sections and finally to the element nodal degrees of freedom. The choice of finite rotation measure is made in the next-to-last transformation stage, and the choice of over-the-element interpolation in the last one. The tangent stiffness matrix is found to retain symmetry if the rotational vector is chosen to measure finite rotations. An extensive set of numerical examples is presented to test and validate the present element.
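The linear small-rotation limit of such an element is compact enough to write down. The 2D two-node Timoshenko bending stiffness below (with shear parameter Phi) is the standard textbook element, not the paper's geometrically nonlinear 3D CCF formulation, and the section properties used are arbitrary.

```python
import numpy as np

def timoshenko_stiffness(EI, GAs, L):
    """Stiffness matrix of a linear 2-node Timoshenko beam element
    with DOFs (w1, theta1, w2, theta2). Phi = 12 EI / (G As L^2) is
    the shear-flexibility ratio; Phi -> 0 recovers Euler-Bernoulli."""
    Phi = 12.0 * EI / (GAs * L**2)
    k = EI / ((1.0 + Phi) * L**3)
    return k * np.array([
        [12.0,           6 * L,          -12.0,           6 * L],
        [6 * L, (4 + Phi) * L**2,        -6 * L, (2 - Phi) * L**2],
        [-12.0,         -6 * L,           12.0,          -6 * L],
        [6 * L, (2 - Phi) * L**2,        -6 * L, (4 + Phi) * L**2],
    ])
```

Rigid translation and rigid rotation must produce zero nodal forces, a basic sanity check any beam element (linear or geometrically nonlinear) has to pass.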
NASA Technical Reports Server (NTRS)
Zhu, Lin-Fa; Kim, Soo; Chattopadhyay, Aditi; Goldberg, Robert K.
2004-01-01
A numerical procedure has been developed to investigate the nonlinear and strain rate dependent deformation response of polymer matrix composite laminated plates under high strain rate impact loadings. A recently developed strength of materials based micromechanics model, incorporating a set of nonlinear, strain rate dependent constitutive equations for the polymer matrix, is extended to account for the transverse shear effects during impact. Four different assumptions of transverse shear deformation are investigated in order to improve the developed strain rate dependent micromechanics model. The validity of these assumptions is investigated using numerical and theoretical approaches. A method to determine the through-the-thickness strain and transverse Poisson's ratio of the composite is developed. The revised micromechanics model is then implemented into a higher order laminated plate theory which is modified to include the effects of inelastic strains. Parametric studies are conducted to investigate the mechanical response of composite plates under high strain rate loadings. Results show the transverse shear stresses cannot be neglected in the impact problem. A significant level of strain rate dependency and material nonlinearity is found in the deformation response of representative composite specimens.
Hwang, Yoo Na; Lee, Ju Hwan; Kim, Ga Young; Shin, Eun Seok; Kim, Sung Min
2018-01-01
The purpose of this study was to propose a hybrid ensemble classifier to characterize coronary plaque regions in intravascular ultrasound (IVUS) images. Pixels were allocated to one of four tissues (fibrous tissue (FT), fibro-fatty tissue (FFT), necrotic core (NC), and dense calcium (DC)) through processes of border segmentation, feature extraction, feature selection, and classification. Grayscale IVUS images and their corresponding virtual histology images were acquired from 11 patients with known or suspected coronary artery disease using a 20 MHz catheter. A total of 102 hybrid textural features including first order statistics (FOS), gray level co-occurrence matrix (GLCM), extended gray level run-length matrix (GLRLM), Laws, local binary pattern (LBP), intensity, and discrete wavelet features (DWF) were extracted from IVUS images. To select optimal feature sets, a genetic algorithm was implemented. A hybrid ensemble classifier based on histogram and texture information was then used for plaque characterization in this study. The optimal feature set was used as the input to this ensemble classifier. After tissue characterization, parameters including sensitivity, specificity, and accuracy were calculated to validate the proposed approach. A ten-fold cross validation approach was used to determine the statistical significance of the proposed method. Our experimental results showed that the proposed method had reliable performance for tissue characterization in IVUS images. The hybrid ensemble classification method outperformed other existing methods by achieving characterization accuracy of 81% for FFT and 75% for NC. In addition, this study showed that Laws features (SSV and SAV) were key indicators for coronary tissue characterization. The proposed method had high clinical applicability for image-based tissue characterization. Copyright © 2017 Elsevier B.V. All rights reserved.
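One of the texture families used here (GLCM) is easy to compute directly. A minimal NumPy version for a single offset follows, with a toy image; the study itself used 102 hybrid features and a genetic-algorithm selection step on real IVUS data.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for offset (dy, dx):
    g[i, j] is the joint frequency of gray levels i and j at that offset."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def contrast(g):
    """Haralick contrast: weights co-occurrences by squared level gap."""
    i, j = np.indices(g.shape)
    return ((i - j) ** 2 * g).sum()

def energy(g):
    """Haralick energy (angular second moment)."""
    return (g ** 2).sum()
```

Scalar summaries like these, computed per region of interest, are what feed the feature-selection and classification stages described above.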
Design and analysis of a high power moderate band radiator using a switched oscillator
NASA Astrophysics Data System (ADS)
Armanious, Miena Magdi Hakeem
Quarter-wave switched oscillators (SWOs) are an important technology for the generation of high-power, moderate bandwidth (mesoband) waveforms. The use of SWOs in high power microwave sources has been discussed for the past 10 years [1--6], but a detailed discussion of the design of this type of oscillator for particular waveforms has been lacking. In this dissertation I develop a design methodology for a realization of SWOs, also known as MATRIX oscillators in the scientific community. A key element in the design of SWOs is the self-breakdown switch, which is closed by a large electric field. In order for the switch to close as expected from the design, it is essential to manage the electrostatic field distribution inside the oscillator during the charging time. This enforces geometric constraints on the shape of the conductors inside MATRIX. At the same time, the electrodynamic operation of MATRIX is dependent on the geometry of the structure. In order to generate a geometry that satisfies both the electrostatic and electrodynamic constraints, a new approach is developed that generates this geometry using the 2-D static solution of the Laplace equation, subject to a particular set of boundary conditions. These boundary conditions are manipulated to generate equipotential lines with specific dimensions that satisfy the electrodynamic constraints. Meanwhile, these equipotential lines naturally support an electrostatic field distribution that meets the requirements for the switch operation. To study the electrodynamic aspects of MATRIX, three different (but interrelated) numerical models are built. Depending on the assumptions made in each model, different information about the electrodynamic properties of the designed SWO is obtained. In addition, the agreement and consistency between the different models validate and give confidence in the calculated results.
Another important aspect of the design process is understanding the relationship between the geometric parameters of MATRIX and the output waveforms. Using the numerical models, the relationship between the dimensions of MATRIX and its calculated resonant parameters is studied. For a given set of geometric constraints, this provides more flexibility in meeting the output specifications. Finally, I present a comprehensive design methodology that generates the geometry of a MATRIX system from the desired specification and then calculates the radiated waveform.
Appraising the risk matrix 2000 static sex offender risk assessment tool.
Tully, Ruth J; Browne, Kevin D
2015-02-01
This critical appraisal explores the reliability and validity of the Risk Matrix 2000 static sex offender risk assessment tool that is widely used in the United Kingdom. The Risk Matrix 2000 has to some extent been empirically validated for use with adult male sex offenders; however, this review highlights that further research into the validity of this static tool with sex offender subgroups or types is necessary in order to improve practical utility. The Risk Matrix 2000 relies on static risk predictors and is thus limited in scope. This article argues that the addition of dynamic items shown to be predictive of sexual recidivism would further enhance the tool and would fit better with a rehabilitative approach to sex offender risk management and assessment. This would also provide a means by which to effectively plan sex offender treatment and evaluate individual offenders' progress in treatment; however, difficulties remain in identifying and assessing dynamic risk factors of sexual offending, and so further research is required. © The Author(s) 2013.
The improved Apriori algorithm based on matrix pruning and weight analysis
NASA Astrophysics Data System (ADS)
Lang, Zhenhong
2018-04-01
This paper draws on matrix compression and weight analysis algorithms and proposes an improved Apriori algorithm based on matrix pruning and weight analysis. After the transactional database is scanned only once, the algorithm constructs a boolean transaction matrix. By counting the ones in the rows and columns of the matrix, infrequent itemsets are pruned and a new candidate itemset is formed. Then the item weights and the transaction weights, as well as the weighted support for items, are calculated, and thus the frequent itemsets are obtained. The experimental results show that the improved Apriori algorithm not only reduces the number of repeated scans of the database, but also improves the efficiency of data correlation mining.
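The matrix-based counting idea can be sketched directly: prune infrequent single items from column sums of the boolean matrix, then count larger candidates with column-wise AND, so the database itself is never rescanned. This sketch omits the paper's weighting scheme, and the tiny transaction set is invented.

```python
import numpy as np
from itertools import combinations

def frequent_itemsets(B, min_support):
    """B: boolean transaction matrix (transactions x items).
    Returns a dict mapping frequent itemsets (tuples of column
    indices) to their support fractions."""
    n = B.shape[0]
    support = B.sum(axis=0) / n            # one pass over columns
    keep = np.flatnonzero(support >= min_support)
    frequent = {(int(i),): float(support[i]) for i in keep}
    items, k = list(keep), 2
    while True:
        found = {}
        for combo in combinations(items, k):
            # support of the candidate via column-wise AND
            s = B[:, list(combo)].all(axis=1).mean()
            if s >= min_support:
                found[tuple(int(i) for i in combo)] = float(s)
        if not found:
            break
        frequent.update(found)
        k += 1
    return frequent
```

Because every support after the first scan comes from in-memory boolean operations, the single-scan property claimed above holds by construction in this sketch.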
Comprehensive GMO detection using real-time PCR array: single-laboratory validation.
Mano, Junichi; Harada, Mioko; Takabatake, Reona; Furui, Satoshi; Kitta, Kazumi; Nakamura, Kosuke; Akiyama, Hiroshi; Teshima, Reiko; Noritake, Hiromichi; Hatano, Shuko; Futo, Satoshi; Minegishi, Yasutaka; Iizuka, Tayoshi
2012-01-01
We have developed a real-time PCR array method to comprehensively detect genetically modified (GM) organisms. In the method, genomic DNA extracted from an agricultural product is analyzed using various qualitative real-time PCR assays on a 96-well PCR plate, targeting individual GM events, recombinant DNA (r-DNA) segments, taxon-specific DNAs, and donor organisms of the respective r-DNAs. In this article, we report the single-laboratory validation of both the DNA extraction methods and the component PCR assays constituting the real-time PCR array. We selected some DNA extraction methods for specified plant matrixes, i.e., maize flour, soybean flour, and ground canola seeds, then evaluated the DNA quantity, DNA fragmentation, and PCR inhibition of the resultant DNA extracts. For the component PCR assays, we evaluated the specificity and LOD. All DNA extraction methods and component PCR assays satisfied the criteria set on the basis of previous reports.
Interval-valued intuitionistic fuzzy matrix games based on Archimedean t-conorm and t-norm
NASA Astrophysics Data System (ADS)
Xia, Meimei
2018-04-01
Fuzzy game theory has been applied in many decision-making problems. The matrix game with interval-valued intuitionistic fuzzy numbers (IVIFNs) is investigated based on Archimedean t-conorm and t-norm. The existing matrix games with IVIFNs are all based on the Algebraic t-conorm and t-norm, which are special cases of Archimedean t-conorms and t-norms. In this paper, the intuitionistic fuzzy aggregation operators based on Archimedean t-conorm and t-norm are employed to aggregate the payoffs of players. To derive the solution of the matrix game with IVIFNs, several mathematical programming models are developed based on Archimedean t-conorm and t-norm. The proposed models can be transformed into a pair of primal-dual linear programming models, from which the solution of the matrix game with IVIFNs is obtained. It is proved that the theorems valid in the existing matrix games with IVIFNs remain true when the general aggregation operator is used in the proposed matrix game with IVIFNs. The proposed method is an extension of the existing ones and can provide more choices for players. An example is given to illustrate the validity and the applicability of the proposed method.
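For the crisp special case (real-valued payoffs rather than IVIFNs), the primal-dual linear-programming solution of a zero-sum matrix game underlying approaches like this one can be sketched as follows. The function name and example matrix are illustrative, and `scipy.optimize.linprog` is assumed to be available; the fuzzy aggregation machinery of the paper is not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(A):
    """Game value and optimal mixed strategy for the row player of a
    zero-sum matrix game, via the standard LP reformulation."""
    A = np.asarray(A, dtype=float)
    shift = 1.0 - A.min()              # make all payoffs strictly positive
    B = A + shift
    m = B.shape[0]
    # minimize sum(y) subject to B^T y >= 1, y >= 0
    res = linprog(c=np.ones(m), A_ub=-B.T, b_ub=-np.ones(B.shape[1]),
                  bounds=[(0, None)] * m, method="highs")
    v_shifted = 1.0 / res.x.sum()      # value of the shifted game
    return v_shifted - shift, res.x * v_shifted   # (value, strategy)

value, strategy = solve_matrix_game([[1, -1], [-1, 1]])  # matching pennies
```

For matching pennies the value is 0 and the optimal strategy mixes both rows equally, as expected.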
Mathematical model of water transport in Bacon and alkaline matrix-type hydrogen-oxygen fuel cells
NASA Technical Reports Server (NTRS)
Prokopius, P. R.; Easter, R. W.
1972-01-01
Based on general mass continuity and diffusive transport equations, a mathematical model was developed that simulates the transport of water in Bacon and alkaline-matrix fuel cells. The derived model was validated by using it to analytically reproduce various Bacon and matrix-cell experimental water transport transients.
The XTT Cell Proliferation Assay Applied to Cell Layers Embedded in Three-Dimensional Matrix
Huyck, Lynn; Ampe, Christophe
2012-01-01
Cell proliferation, a main target in cancer therapy, is influenced by the surrounding three-dimensional (3D) extracellular matrix (ECM). In vitro drug screening is, thus, optimally performed under conditions in which cells are grown (embedded or trapped) in dense 3D matrices, as these most closely mimic the adhesive and mechanical properties of natural ECM. Measuring cell proliferation under these conditions is, however, technically more challenging compared with two-dimensional (2D) culture and other "3D culture conditions," such as growth on top of a matrix (pseudo-3D) or in spongy scaffolds with large pore sizes. Consequently, such measurements are only slowly being adopted on a wider scale. To advance this, we report on the equal quality (dynamic range, background, linearity) of measuring the proliferation of cell layers embedded in dense 3D matrices (collagen, Matrigel) compared with cells in 2D culture, using the easy, one-step 2,3-bis-(2-methoxy-4-nitro-5-sulfophenyl)-2H-tetrazolium-5-carboxanilide (XTT) assay, which is well validated in 2D. The comparison stresses the differences in proliferation kinetics and drug sensitivity of matrix-embedded cells versus 2D culture. Using the specific cell-layer-embedded 3D matrix setup, quantitative measurements of cell proliferation and cell invasion are shown to be possible under similar assay conditions, and cytostatic, cytotoxic, and anti-invasive drug effects can thus be reliably determined and compared in physiologically relevant settings. This approach in the 3D matrix holds promise for improving early-stage, high-throughput drug screening, targeting highly invasive or highly proliferative subpopulations of cancers or both.
Fingerprint recognition of alien invasive weeds based on the texture character and machine learning
NASA Astrophysics Data System (ADS)
Yu, Jia-Jia; Li, Xiao-Li; He, Yong; Xu, Zheng-Hao
2008-11-01
A multi-spectral imaging technique based on texture analysis and machine learning was proposed to discriminate alien invasive weeds with similar outlines but different categories. The objectives of this study were to investigate the feasibility of using multi-spectral imaging, especially the near-infrared (NIR) channel (800 nm +/- 10 nm), to find the weeds' fingerprints, and to validate the performance with specific eigenvalues from the co-occurrence matrix. Veronica polita Fries, Veronica persica Poir., longtube ground ivy, and Lamium amplexicaule Linn., which have different effects in the field and are alien invasive species in China, were selected for this study. 307 weed-leaf images were randomly selected for the calibration set, and the remaining 207 samples formed the prediction set. All images were pretreated with a Wallis filter to reduce noise from uneven lighting. The gray-level co-occurrence matrix was applied to extract texture characters describing the density, randomness, correlation, contrast, and homogeneity of texture with different algorithms. Three channels (a green channel at 550 nm +/- 10 nm, a red channel at 650 nm +/- 10 nm, and the NIR channel at 800 nm +/- 10 nm) were calculated separately to obtain the eigenvalues. Least-squares support vector machines (LS-SVM) were applied to discriminate the weed categories from the co-occurrence-matrix eigenvalues. Finally, a recognition ratio of 83.35% was obtained with the NIR channel, better than the results with the green channel (76.67%) and the red channel (69.46%). The prediction results of 81.35% indicated that the selected eigenvalues reflected the main characteristics of the weeds' fingerprints based on multi-spectral imaging (especially the NIR channel) and the LS-SVM model.
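Gray-level co-occurrence statistics of the kind used in this study can be computed directly. Below is a minimal sketch (the function names and toy image are invented for illustration) of a single-displacement GLCM together with contrast, homogeneity, and energy features derived from it.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one displacement (dx, dy),
    normalized to a joint probability table."""
    C = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            C[img[y, x], img[y + dy, x + dx]] += 1
    return C / C.sum()

def texture_features(P):
    """Classic Haralick-style scalars computed from a normalized GLCM."""
    i, j = np.indices(P.shape)
    return {"contrast":    np.sum(P * (i - j) ** 2),
            "homogeneity": np.sum(P / (1.0 + (i - j) ** 2)),
            "energy":      np.sum(P ** 2)}

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
feats = texture_features(glcm(img))
```

In practice one GLCM per spectral channel (green, red, NIR) would be computed and its features fed to the classifier, as the abstract describes.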
A mapping from the unitary to doubly stochastic matrices and symbols on a finite set
NASA Astrophysics Data System (ADS)
Karabegov, Alexander V.
2008-11-01
We prove that the mapping from the unitary to doubly stochastic matrices that maps a unitary matrix (ukl) to the doubly stochastic matrix (|ukl|2) is a submersion at a generic unitary matrix. The proof uses the framework of operator symbols on a finite set.
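The mapping itself is elementary to state computationally: take entrywise squared moduli of the unitary matrix. A small numpy sketch (the rotation example is illustrative; any unitary works, and unitarity is what makes the image doubly stochastic):

```python
import numpy as np

def to_doubly_stochastic(U):
    """Map a unitary matrix (u_kl) to the doubly stochastic matrix (|u_kl|^2)."""
    return np.abs(U) ** 2

# Example: a 2x2 real orthogonal (hence unitary) rotation matrix
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
D = to_doubly_stochastic(U)
# rows and columns of D each sum to 1 because the rows/columns of U
# are orthonormal
```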
Bellez, Sami; Bourlier, Christophe; Kubické, Gildas
2015-03-01
This paper deals with the evaluation of electromagnetic scattering from a three-dimensional structure consisting of two nested homogeneous dielectric bodies with arbitrary shape. The scattering problem is formulated in terms of a set of Poggio-Miller-Chang-Harrington-Wu integral equations that are afterwards converted into a system of linear equations (impedance matrix equation) by applying the Galerkin method of moments (MoM) with Rao-Wilton-Glisson basis functions. The MoM matrix equation is then solved by deploying the iterative propagation-inside-layer expansion (PILE) method to obtain the unknown surface current densities, which are thereafter used to compute the radar cross-section (RCS) patterns. Some numerical results for various structures, including canonical geometries, are presented and compared with those of the FEKO software in order to validate the PILE-based approach and to show its efficiency in analyzing the fully polarized RCS patterns.
Coupled Modeling of Hydrodynamics and Sound in Coastal Ocean for Renewable Ocean Energy Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Wen; Jung, Ki Won; Yang, Zhaoqing
An underwater sound model was developed to simulate sound propagation from marine and hydrokinetic energy (MHK) devices or offshore wind (OSW) energy platforms. Finite difference methods were developed to solve the 3D Helmholtz equation for sound propagation in the coastal environment. A 3D sparse matrix solver with complex coefficients was formed for solving the resulting acoustic pressure field. The Complex Shifted Laplacian Preconditioner (CSLP) method was applied to solve the matrix system iteratively with MPI parallelization using a high performance cluster. The sound model was then coupled with the Finite Volume Community Ocean Model (FVCOM) for simulating sound propagation generated by human activities, such as construction of OSW turbines or tidal stream turbine operations, in a range-dependent setting. As a proof of concept, initial validation of the solver is presented for two coastal wedge problems. This sound model can be useful for evaluating impacts on marine mammals due to deployment of MHK devices and OSW energy platforms.
Xiao, Lin; Liao, Bolin; Li, Shuai; Chen, Ke
2018-02-01
In order to solve general time-varying linear matrix equations (LMEs) more efficiently, this paper proposes two nonlinear recurrent neural networks based on two nonlinear activation functions. According to Lyapunov theory, the two nonlinear recurrent neural networks are proved to be convergent within finite time. Besides, by solving a differential equation, the upper bounds of the finite convergence time are determined analytically. Compared with existing recurrent neural networks, the proposed networks have a better convergence property (i.e., a lower upper bound), and thus the accurate solutions of general time-varying LMEs can be obtained in less time. Finally, various situations have been considered by setting different coefficient matrices of general time-varying LMEs, and a great variety of computer simulations (including an application to robot manipulators) have been conducted to validate the better finite-time convergence of the proposed two nonlinear recurrent neural networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
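The paper's finite-time networks rely on specially chosen nonlinear activations, which are not reproduced here. As a simplified, constant-coefficient illustration of the underlying recurrent dynamics, the gradient-based design dX/dt = -γ·Aᵀ(AX − B) can be integrated with explicit Euler steps (all matrices and step sizes below are invented for the sketch):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])
B = np.array([[4.0, 2.0],
              [5.0, 9.0]])

# Gradient-based recurrent dynamics dX/dt = -gamma * A^T (A X - B),
# discretized with explicit Euler steps from X(0) = 0.
X = np.zeros_like(B)
gamma, dt = 1.0, 0.05
for _ in range(2000):
    X = X - dt * gamma * (A.T @ (A @ X - B))

# X converges (exponentially, not in finite time) to the exact
# solution of A X = B; finite-time designs sharpen this convergence.
```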
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferraioli, Luigi; Hueller, Mauro; Vitale, Stefano
The scientific objectives of the LISA Technology Package experiment on board of the LISA Pathfinder mission demand accurate calibration and validation of the data analysis tools in advance of the mission launch. The level of confidence required in the mission outcomes can be reached only by intensively testing the tools on synthetically generated data. A flexible procedure allowing the generation of a cross-correlated stationary noise time series was set up. A multichannel time series with the desired cross-correlation behavior can be generated once a model for a multichannel cross-spectral matrix is provided. The core of the procedure comprises a noise-coloring, multichannel filter designed via a frequency-by-frequency eigendecomposition of the model cross-spectral matrix and a subsequent fit in the Z domain. The common problem of initial transients in a filtered time series is solved with a proper initialization of the filter recursion equations. The noise generator performance was tested in a two-dimensional case study of the closed-loop LISA Technology Package dynamics along the two principal degrees of freedom.
Tensile Properties of Polymeric Matrix Composites Subjected to Cryogenic Environments
NASA Technical Reports Server (NTRS)
Whitley, Karen S.; Gates, Thomas S.
2004-01-01
Polymer matrix composites (PMCs) have seen limited use as structural materials in cryogenic environments. One reason for the limited use of PMCs in cryogenic structures is a design philosophy that typically requires a large, validated database of material properties in order to ensure a reliable and defect-free structure. It is the intent of this paper to provide an initial set of mechanical properties developed from experimental data of an advanced PMC (IM7/PETI-5) exposed to cryogenic temperatures and mechanical loading. The application of this data is to assist in the materials down-select and design of cryogenic fuel tanks for future reusable space vehicles. The details of the material system, test program, and experimental methods are outlined. Tension modulus and strength were measured at room temperature, -196 °C, and -269 °C on five different laminates. These properties were also tested after aging at -186 °C with and without loading applied. Microcracking was observed in one laminate.
NASA Astrophysics Data System (ADS)
Ramella-Roman, Jessica C.; Gonzalez, Mariacarla; Chue-Sang, Joseph; Montejo, Karla; Krup, Karl; Srinivas, Vijaya; DeHoog, Edward; Madhivanan, Purnima
2018-04-01
Mueller matrix polarimetry can provide useful information about the function and structure of the extracellular matrix. Mueller matrix systems are sophisticated and costly optical tools that have been used primarily in laboratory or hospital settings. Here we introduce a low-cost snapshot Mueller matrix polarimeter that does not require external power, has no moving parts, and can acquire a full Mueller matrix in less than 50 milliseconds. We utilized this technology in the study of cervical cancer in Mysore, India, yet the system could be translated to multiple diagnostic applications.
Dinç, Erdal; Ustündağ, Ozgür; Baleanu, Dumitru
2010-08-01
The sole use of pyridoxine hydrochloride during treatment of tuberculosis gives rise to pyridoxine deficiency. Therefore, a combination of pyridoxine hydrochloride and isoniazid is used in pharmaceutical dosage form in tuberculosis treatment to reduce this side effect. In this study, two chemometric methods, partial least squares (PLS) and principal component regression (PCR), were applied to the simultaneous determination of pyridoxine (PYR) and isoniazid (ISO) in their tablets. A concentration training set comprising 20 different binary mixtures of PYR and ISO was randomly prepared in 0.1 M HCl. Both multivariate calibration models were constructed using the relationships between the concentration data matrix and the absorbance data matrix in the spectral region 200-330 nm. The accuracy and the precision of the proposed chemometric methods were validated by analyzing synthetic mixtures containing the investigated drugs. The recoveries obtained by applying the PCR and PLS calibrations to the artificial mixtures were between 100.0 and 100.7%. Satisfactory results were obtained by applying the PLS and PCR methods to both artificial and commercial samples. These results strongly support the use of both methods for the quality control and routine analysis of marketed tablets containing the PYR and ISO drugs. Copyright © 2010 John Wiley & Sons, Ltd.
A vector matching method for analysing logic Petri nets
NASA Astrophysics Data System (ADS)
Du, YuYue; Qi, Liang; Zhou, MengChu
2011-11-01
Batch processing functions and passing-value indeterminacy in cooperative systems can be described and analysed by logic Petri nets (LPNs). To directly analyse the properties of LPNs, the concept of transition enabling vector sets is presented and a vector matching method to judge the enabled transitions is proposed in this article. The incidence matrix of LPNs is defined; an equation for the marking change due to a transition's firing is given; and a reachability tree is constructed. The state-space explosion is mitigated to a certain extent by directly analysing LPNs. Finally, the validity and reliability of the proposed method are illustrated by an example in electronic commerce.
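The marking-change equation referred to above is, in the ordinary Petri-net setting, the state equation M' = M + C·σ, where C = Post − Pre is the incidence matrix and σ the firing-count vector. A minimal sketch with an invented two-place, two-transition net (the LPN-specific logic expressions are not modeled here):

```python
import numpy as np

# Pre[p, t]: tokens place p must supply for transition t to fire;
# Post[p, t]: tokens place p receives when t fires.
Pre  = np.array([[1, 0],
                 [0, 1]])
Post = np.array([[0, 1],
                 [1, 0]])
C = Post - Pre          # incidence matrix

def fire(marking, firing_vector):
    """State equation M' = M + C.sigma, valid when each fired
    transition is enabled (its Pre-conditions are satisfied)."""
    return marking + C @ firing_vector

M0 = np.array([1, 0])
M1 = fire(M0, np.array([1, 0]))   # fire t1 once: token moves p1 -> p2
```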
A slewing control experiment for flexible structures
NASA Technical Reports Server (NTRS)
Juang, J.-N.; Horta, L. G.; Robertshaw, H. H.
1985-01-01
A hardware set-up has been developed to study slewing control for flexible structures, including a steel beam and a solar panel. A linear optimal terminal control law is used to design active controllers, which are implemented on an analog computer. The objective of this experiment is to demonstrate and verify the dynamics and optimal terminal control laws as applied to flexible structures in large-angle maneuvers. Actuation is provided by an electric motor, while sensing is provided by strain gages and an angle potentiometer. Experimental measurements are compared with analytical predictions in terms of the modal parameters of the system stability matrix, and sufficient agreement is achieved to validate the theory.
Validation studies and proficiency testing.
Ankilam, Elke; Heinze, Petra; Kay, Simon; Van den Eede, Guy; Popping, Bert
2002-01-01
Genetically modified organisms (GMOs) entered the European food market in 1996. Current legislation demands the labeling of food products if they contain more than 1% GMO, as assessed for each ingredient of the product. To create confidence in the testing methods and to complement enforcement requirements, there is an urgent need for internationally validated methods that could serve as reference methods. To date, several methods have been submitted to validation trials at an international level; approaches now exist that can be used in different circumstances and for different food matrixes. Moreover, the requirement for the formal validation of methods is clearly accepted; several national and international bodies are active in organizing studies. Further validation studies, especially on the quantitative polymerase chain reaction methods, need to be performed to cover the rising demand for new extraction methods and other background matrixes, as well as for novel GMO constructs.
SAR matrices: automated extraction of information-rich SAR tables from large compound data sets.
Wassermann, Anne Mai; Haebel, Peter; Weskamp, Nils; Bajorath, Jürgen
2012-07-23
We introduce the SAR matrix data structure that is designed to elucidate SAR patterns produced by groups of structurally related active compounds, which are extracted from large data sets. SAR matrices are systematically generated and sorted on the basis of SAR information content. Matrix generation is computationally efficient and enables processing of large compound sets. The matrix format is reminiscent of SAR tables, and SAR patterns revealed by different categories of matrices are easily interpretable. The structural organization underlying matrix formation is more flexible than standard R-group decomposition schemes. Hence, the resulting matrices capture SAR information in a comprehensive manner.
Selection of representative embankments based on rough set - fuzzy clustering method
NASA Astrophysics Data System (ADS)
Bin, Ou; Lin, Zhi-xiang; Fu, Shu-yan; Gao, Sheng-song
2018-02-01
The premise of a comprehensive evaluation of embankment safety is the selection of representative unit embankments; on the basis of dividing the levee into units, the influencing factors and the classification of the unit embankments are drafted. Based on the rough set - fuzzy clustering method, the influencing factors of the unit embankments are measured by quantitative and qualitative indexes. A fuzzy similarity matrix of the standard embankments is constructed, and its fuzzy equivalence matrix is then calculated by the square method. By setting a threshold on the fuzzy equivalence matrix, the unit embankments are clustered, and the representative unit embankment is selected from each class of embankments.
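The "square method" mentioned above is the standard transitive-closure construction for fuzzy clustering: repeatedly compose the fuzzy similarity matrix with itself under max-min composition until it stops changing, then cluster by thresholding (λ-cuts). A minimal sketch with an invented 3×3 similarity matrix:

```python
import numpy as np

def max_min_compose(A, B):
    """Max-min composition: (A o B)_ij = max_k min(A_ik, B_kj)."""
    return np.max(np.minimum(A[:, :, None], B[None, :, :]), axis=1)

def fuzzy_equivalence(R, max_iter=32):
    """Square method: square the fuzzy similarity matrix with max-min
    composition until a fixed point (the transitive closure) is reached."""
    for _ in range(max_iter):
        R2 = max_min_compose(R, R)
        if np.array_equal(R2, R):
            return R
        R = R2
    return R

R = np.array([[1.0, 0.8, 0.3],
              [0.8, 1.0, 0.5],
              [0.3, 0.5, 1.0]])
T = fuzzy_equivalence(R)
# thresholding T at lambda = 0.6 groups units {0, 1} and leaves {2} apart
```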
Rojas, Cristian; Duchowicz, Pablo R; Tripaldi, Piercosimo; Pis Diez, Reinaldo
2015-11-27
A quantitative structure-property relationship (QSPR) was developed for modeling the retention index of 1184 flavor and fragrance compounds measured using a Carbowax 20M glass capillary gas chromatography column. The 4885 molecular descriptors were calculated using Dragon software and were then analyzed simultaneously through multivariable linear regression using the replacement method (RM) variable subset selection technique. We proceeded in three steps: the first considering all descriptor blocks, the second excluding conformational descriptor blocks, and the last analyzing only 3D-descriptor families. The models were validated through an external test set of compounds. Cross-validation methods such as leave-one-out and leave-many-out were applied, together with Y-randomization and applicability domain analysis. The developed model was used to estimate the retention index of a set of 22 molecules. The results clearly suggest that 3D-descriptors do not offer relevant information for modeling the retention index, while a topological index such as the Randić-like index from the reciprocal squared distance matrix has high relevance for this purpose. Copyright © 2015 Elsevier B.V. All rights reserved.
Decoding and optimized implementation of SECDED codes over GF(q)
Ward, H. Lee; Ganti, Anand; Resnick, David R
2013-10-22
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
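For the simplest distance-3 binary case, the column-filtering idea described in this patent abstract reduces to keeping nonzero, pairwise-distinct columns, since any two such columns are linearly independent over GF(2). The sketch below builds the Hamming(7,4) check-matrix columns this way and shows the resulting single-error location by syndrome. Function names are invented for illustration, and the patented method addresses GF(q) and general distance d, which this sketch does not.

```python
from itertools import product

def build_check_matrix(r):
    """Greedy column selection for a distance-3 binary check matrix:
    keep candidates that are nonzero and distinct from chosen columns
    (i.e., every pair of columns stays linearly independent over GF(2))."""
    candidates = [v for v in product([0, 1], repeat=r) if any(v)]
    chosen = []
    for v in candidates:          # filter candidates one by one
        if v not in chosen:       # adding a repeat would break independence
            chosen.append(v)
    return chosen                 # 2^r - 1 columns, e.g. Hamming(7,4) for r=3

def syndrome(H_cols, word):
    """Syndrome = GF(2) sum of the columns where the word has a 1."""
    s = [0] * len(H_cols[0])
    for bit, col in zip(word, H_cols):
        if bit:
            s = [a ^ b for a, b in zip(s, col)]
    return tuple(s)

H = build_check_matrix(3)
err = [0] * 7
err[4] = 1                        # single-bit error in position 4
# syndrome(H, err) equals H[4], locating the flipped bit
```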
Design, decoding and optimized implementation of SECDED codes over GF(q)
Ward, H Lee; Ganti, Anand; Resnick, David R
2014-06-17
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
Decoding and optimized implementation of SECDED codes over GF(q)
Ward, H Lee; Ganti, Anand; Resnick, David R
2014-11-18
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
Sakhavand, Navid; Shahsavari, Rouzbeh
2015-03-16
Many natural and biomimetic platelet-matrix composites, such as nacre, silk, and clay-polymer composites, exhibit a remarkable balance of strength, toughness, and/or stiffness, which calls for a universal measure to quantify this outstanding feature given the structure and material characteristics of the constituents. Analogously, there is an urgent need to quantify the mechanics of emerging electronic and photonic systems such as stacked heterostructures. Here we report the development of a unified framework to construct universal composition-structure-property diagrams that decode the interplay between various geometries and inherent material features in both platelet-matrix composites and stacked heterostructures. We study the effects of elastic and elastic-perfectly plastic matrices, overlap offset ratio, and the competing mechanisms of platelet versus matrix failure. Validated by several 3D-printed specimens and a wide range of natural and synthetic materials across scales, the proposed universally valid diagrams have important implications for science-based engineering of numerous platelet-matrix composites and stacked heterostructures.
Xu, Jing; Xu, Bin; Tang, Chuanhao; Li, Xiaoyan; Qin, Haifeng; Wang, Weixia; Wang, Hong; Wang, Zhongyuan; Li, Liangliang; Li, Zhihua; Gao, Hongjun; He, Kun; Liu, Xiaoqing
2017-01-01
Background. Diagnosis of malignant pleural effusion (MPE) is a crucial problem in clinics. In our study, we compared the peptide profiles of MPE and tuberculosis pleural effusion (TPE) to investigate the value of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) in the diagnosis of MPE. Material and Methods. The 46 MPE and 32 TPE samples were randomly assigned to a training set and a validation set. Peptides were isolated by weak cation exchange magnetic beads, and peaks in the m/z range of 800-10000 Da were analyzed. Comparing the peptide profiles of the 30 MPE and 22 TPE samples in the training set with ClinProTools software, we screened the specific biomarkers and established a MALDI-TOF-MS classification of MPE. Finally, the other 16 MPE and 10 TPE samples were included to verify the model. We additionally determined carcinoembryonic antigen (CEA) in MPE and TPE samples using an electrochemiluminescent immunoassay. Results. Five peptide peaks (917.37 Da, 4469.39 Da, 1466.5 Da, 4585.21 Da, and 3216.87 Da) were selected to separate MPE and TPE by MALDI-TOF-MS. The sensitivity, specificity, and accuracy of the classification were 93.75%, 100%, and 96.15%, respectively, after blinded testing. The sensitivity of CEA was significantly lower than that of the MALDI-TOF-MS classification (P = 0.035). Conclusions. The results indicate MALDI-TOF-MS is a potential method for diagnosing MPE.
Near infrared spectroscopy for prediction of antioxidant compounds in the honey.
Escuredo, Olga; Seijo, M Carmen; Salvador, Javier; González-Martín, M Inmaculada
2013-12-15
The selection of antioxidant variables in honey is considered for the first time using the near-infrared (NIR) spectroscopic technique. A total of 60 honey samples were used to develop calibration models using the modified partial least squares (MPLS) regression method, and 15 samples were used for external validation. Calibration models on the honey matrix for the estimation of phenols, flavonoids, vitamin C, antioxidant capacity (DPPH), oxidation index, and copper using near-infrared (NIR) spectroscopy have been satisfactorily obtained. These models were optimised by cross-validation, and the best model was evaluated according to the multiple correlation coefficient (RSQ), standard error of cross-validation (SECV), ratio performance deviation (RPD), and root mean standard error (RMSE) in the prediction set. These statistics suggest that the equations developed could be used for rapid determination of antioxidant compounds in honey. This work shows that near-infrared spectroscopy can be considered a rapid tool for the nondestructive measurement of antioxidant constituents such as phenols, flavonoids, vitamin C, and copper, as well as the antioxidant capacity of honey. Copyright © 2013 Elsevier Ltd. All rights reserved.
Rank estimation and the multivariate analysis of in vivo fast-scan cyclic voltammetric data
Keithley, Richard B.; Carelli, Regina M.; Wightman, R. Mark
2010-01-01
Principal component regression has been used in the past to separate current contributions from different neuromodulators measured with in vivo fast-scan cyclic voltammetry. Traditionally, a percent cumulative variance approach has been used to determine the rank of the training set voltammetric matrix during model development; however, this approach suffers from several disadvantages, including the use of arbitrary percentages and the requirement of extreme precision in training sets. Here we propose that Malinowski's F-test, a method based on a statistical analysis of the variance contained within the training set, can be used to improve factor selection for the analysis of in vivo fast-scan cyclic voltammetric data. These two methods of rank estimation were compared at all steps in the calibration protocol, including the number of principal components retained, overall noise levels, model validation as determined using a residual analysis procedure, and predicted concentration information. By analyzing 119 training sets from two different laboratories amassed over several years, we were able to gain insight into the heterogeneity of in vivo fast-scan cyclic voltammetric data and study how differences in factor selection propagate throughout the entire principal component regression analysis procedure. Visualizing cyclic voltammetric representations of the data contained in the retained and discarded principal components showed that using Malinowski's F-test for rank estimation of in vivo training sets allowed noise to be removed more accurately. Malinowski's F-test also improved the robustness of our criterion for judging multivariate model validity, even though signal-to-noise ratios of the data varied. In addition, pH change was the majority noise carrier of in vivo training sets, while dopamine prediction was more sensitive to noise.
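The percent-cumulative-variance baseline the authors criticize is easy to state concretely; Malinowski's F-test itself depends on the error eigenvalue distribution and is not sketched here. The data below are synthetic, and the function name is invented for illustration.

```python
import numpy as np

def rank_by_cumulative_variance(X, threshold=0.99):
    """Baseline rank estimate: smallest number of singular values whose
    cumulative variance fraction reaches the (arbitrary) threshold."""
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    var = s ** 2 / np.sum(s ** 2)
    return int(np.searchsorted(np.cumsum(var), threshold) + 1)

# Synthetic rank-2 "training set" (100 voltammograms x 30 points) plus noise
rng = np.random.default_rng(0)
scores = rng.normal(size=(100, 2))
loadings = rng.normal(size=(2, 30))
X = scores @ loadings + 0.01 * rng.normal(size=(100, 30))
k = rank_by_cumulative_variance(X)
```

The arbitrariness of the threshold is exactly the weakness discussed in the abstract: with noisier data, a fixed percentage can over- or under-estimate the chemical rank.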
System for information discovery
Pennock, Kelly A [Richland, WA]; Miller, Nancy E [Kennewick, WA]
2002-11-19
A sequence of word filters is used to eliminate terms in the database which do not discriminate document content, resulting in a filtered word set and a topic word set whose members are highly predictive of content. These two word sets are then formed into a two-dimensional matrix with matrix entries calculated as the conditional probability that a document will contain the word in a row given that it contains the word in a column. The matrix representation allows the resultant vectors to be utilized to interpret document contents.
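A minimal sketch of the conditional-probability matrix described above (the toy corpus and function name are hypothetical; the system's filtering stages are omitted):

```python
def cooccurrence_matrix(docs, words):
    """M[i][j] = P(doc contains words[i] | doc contains words[j]),
    the conditional-probability entries described in the abstract."""
    n = len(words)
    contains = [set(d) for d in docs]
    count = [sum(1 for d in contains if w in d) for w in words]
    M = [[0.0] * n for _ in range(n)]
    for j in range(n):
        if count[j] == 0:
            continue  # column word never occurs; leave column at zero
        for i in range(n):
            both = sum(1 for d in contains if words[i] in d and words[j] in d)
            M[i][j] = both / count[j]
    return M

docs = [{"matrix", "vector"}, {"matrix", "rank"}, {"vector"}]
M = cooccurrence_matrix(docs, ["matrix", "vector", "rank"])
```

Each column of `M` is then a profile vector for one topic word, usable for comparing document contents.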
ERIC Educational Resources Information Center
Meredith, Keith E.; Sabers, Darrell L.
Data required for evaluating a Criterion Referenced Measurement (CRM) is described with a matrix. The information within the matrix consists of the "pass-fail" decisions of two CRMs. By differentially defining these two CRMs, different concepts of reliability and validity can be examined. Indices suggested for analyzing the matrix are listed with…
2013-01-01
Background Pancreatic cancer (PC) is an aggressive disease with an urgent need for biomarkers. Hallmarks of PC include increased collagen deposition (desmoplasia) and increased matrix metalloproteinase (MMP) activity. The aim of this study was to investigate whether protein fingerprints of specific MMP-generated collagen fragments differentiate PC patients from healthy controls when measured in serum. Methods The levels of biomarkers reflecting MMP-mediated degradation of type I (C1M), type III (C3M) and type IV (C4M, C4M12a1) collagen were assessed in serum samples from PC patients (n = 15) and healthy controls (n = 33) using well-characterized and validated competitive ELISAs. Results The MMP-generated collagen fragments were significantly elevated in serum from PC patients as compared to controls. The diagnostic power of C1M, C3M, C4M and C4M12 were ≥83% (p < 0.001) and when combining all biomarkers 99% (p < 0.0001). Conclusions A panel of serum biomarkers reflecting altered MMP-mediated collagen turnover is able to differentiate PC patients from healthy controls. These markers may increase the understanding of mode of action of the disease and, if validated in larger clinical studies, provide an improved and additional tool in the PC setting. PMID:24261855
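The reported "diagnostic power" presumably refers to the area under the ROC curve. A generic, self-contained AUC computation (toy scores and function name are illustrative, not the study's analysis):

```python
def auc(case_scores, control_scores):
    """Probability that a randomly chosen case scores higher than a
    randomly chosen control (ties count half): the ROC AUC."""
    wins = 0.0
    for x in case_scores:
        for y in control_scores:
            if x > y:
                wins += 1.0
            elif x == y:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# hypothetical biomarker levels: cases tend to score higher than controls
print(auc([2.0, 3.0, 4.0], [1.0, 2.0]))
```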
Three-Class Mammogram Classification Based on Descriptive CNN Features
Jadoon, M Mohsin; Zhang, Qianni; Haq, Ihsan Ul; Butt, Sharjeel; Jadoon, Adeel
2017-01-01
In this paper, a novel classification technique for a large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we present two methods, namely convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed into four subbands by means of the two-dimensional discrete wavelet transform (2D-DWT), while in the second method the discrete curvelet transform (DCT) is used. In both methods, dense scale-invariant feature transform (DSIFT) descriptors are extracted for all subbands. An input data matrix containing the subband features of all the mammogram patches is created and processed as input to a convolutional neural network (CNN). A softmax layer and a support vector machine (SVM) layer are used to train the CNN for classification. The proposed methods have been compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT achieved accuracy rates of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of our proposed model as compared to other well-known existing techniques. PMID:28191461
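One level of the 2D-DWT subband split can be sketched with an unnormalized Haar transform (a simplification: practical codes use properly normalized wavelet filter banks, and the paper's exact wavelet is not specified here):

```python
def haar2d(img):
    """One level of a 2-D Haar-style decomposition: returns the LL, LH,
    HL, HH subbands of the kind used as inputs in the scheme above.
    Uses plain averaging/differencing (unnormalized, for clarity)."""
    def rows_pass(m):
        lo = [[(r[2*i] + r[2*i + 1]) / 2 for i in range(len(r) // 2)] for r in m]
        hi = [[(r[2*i] - r[2*i + 1]) / 2 for i in range(len(r) // 2)] for r in m]
        return lo, hi
    def transpose(m):
        return [list(c) for c in zip(*m)]
    lo, hi = rows_pass(img)                                  # filter rows
    ll, lh = (transpose(s) for s in rows_pass(transpose(lo)))  # then columns
    hl, hh = (transpose(s) for s in rows_pass(transpose(hi)))
    return ll, lh, hl, hh

img = [[8.0] * 4 for _ in range(4)]  # toy 4x4 constant "image"
ll, lh, hl, hh = haar2d(img)
```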
A note on the upper bound of the spectral radius for SOR iteration matrix
NASA Astrophysics Data System (ADS)
Chang, Da-Wei
2004-05-01
Recently, Wang and Huang (J. Comput. Appl. Math. 135 (2001) 325, Corollary 4.7) established the following estimation on the upper bound of the spectral radius for the successive overrelaxation (SOR) iteration matrix: ρSOR ≤ 1 - ω + ωρGS, under the condition that the coefficient matrix A is a nonsingular M-matrix and ω ≥ 1, where ρSOR and ρGS are the spectral radii of the SOR iteration matrix and the Gauss-Seidel iteration matrix, respectively. In this note, we point out that the above estimation is not valid in general.
Lozano, Ana; Rajski, Łukasz; Uclés, Samanta; Belmonte-Valles, Noelia; Mezcua, Milagros; Fernández-Alba, Amadeo R
2014-01-01
Two sorbents containing ZrO₂ (Z-Sep and Z-Sep+) were tested as a d-SPE clean-up, in combination with the QuEChERS and ethyl acetate multiresidue methods, for the extraction of pesticide residues from avocado. All extracts were analysed using gas chromatography coupled with a triple quadrupole mass spectrometer working in multiple reaction monitoring mode. GC-QToF was used to compare the amount of matrix compounds present in the final extracts prepared according to the different protocols. The highest number of pesticides with acceptable recoveries and the lowest amount of coextracted matrix compounds were provided by QuEChERS with Z-Sep. Subsequently, this method was fully validated in avocado and almonds. Validation studies were carried out according to DG SANCO guidelines, including the evaluation of recoveries at two levels (10 and 50 μg/kg), limit of quantitation, linearity, matrix effects, as well as interday and intraday precision. In avocado, 166 pesticides were fully validated, compared to 119 in almonds. The method was operated satisfactorily in routine analysis and was applied to real samples. © 2013 Published by Elsevier B.V.
Vlieg-Boerstra, Berber J; Bijleveld, Charles M A; van der Heide, Sicco; Beusekamp, Berta J; Wolt-Plompen, Saskia A A; Kukler, Jeanet; Brinkman, Joep; Duiverman, Eric J; Dubois, Anthony E J
2004-02-01
The use of double-blind, placebo-controlled food challenges (DBPCFCs) is considered the gold standard for the diagnosis of food allergy. Despite this, materials and methods used in DBPCFCs have not been standardized. The purpose of this study was to develop and validate recipes for use in DBPCFCs in children by using allergenic foods, preferably in their usual edible form. Recipes containing milk, soy, cooked egg, raw whole egg, peanut, hazelnut, and wheat were developed. For each food, placebo and active test food recipes were developed that met the requirements of acceptable taste, allowance of a challenge dose high enough to elicit reactions in an acceptable volume, optimal matrix ingredients, and good matching of sensory properties of placebo and active test food recipes. Validation was conducted on the basis of sensory tests for difference by using the triangle test and the paired comparison test. Recipes were first tested by volunteers from the hospital staff and subsequently by a professional panel of food tasters in a food laboratory designed for sensory testing. Recipes were considered to be validated if no statistically significant differences were found. Twenty-seven recipes were developed and found to be valid by the volunteer panel. Of these 27 recipes, 17 could be validated by the professional panel. Sensory testing with appropriate statistical analysis allows for objective validation of challenge materials. We recommend the use of professional tasters in the setting of a food laboratory for best results.
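In a triangle test, a taster must pick the odd sample out of three, so the chance success rate under the null hypothesis of indistinguishable recipes is 1/3, and significance can be checked with a one-sided binomial test. A generic sketch (function name is hypothetical; this is not the authors' statistical software):

```python
from math import comb

def triangle_test_p(correct, n):
    """One-sided P(X >= correct) under Binomial(n, 1/3): the chance of
    doing at least this well by guessing in a triangle test."""
    return sum(comb(n, k) * (1 / 3) ** k * (2 / 3) ** (n - k)
               for k in range(correct, n + 1))

# e.g. 11 correct identifications out of 20 tasters:
p = triangle_test_p(11, 20)
```

A recipe pair would be considered validated (indistinguishable) when `p` is not below the chosen significance level.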
On using the Hilbert transform for blind identification of complex modes: A practical approach
NASA Astrophysics Data System (ADS)
Antunes, Jose; Debut, Vincent; Piteau, Philippe; Delaune, Xavier; Borsoi, Laurent
2018-01-01
The modal identification of dynamical systems under operational conditions, when subjected to wide-band unmeasured excitations, is today a viable alternative to more traditional modal identification approaches based on processing sets of measured FRFs or impulse responses. Among current techniques for performing operational modal identification, the so-called blind identification methods are the subject of considerable investigation. In particular, the SOBI (Second-Order Blind Identification) method was found to be quite efficient. SOBI was originally developed for systems with normal modes. To address systems with complex modes, various extension approaches have been proposed, in particular: (a) Using a first-order state-space formulation for the system dynamics; (b) Building complex analytic signals from the measured responses using the Hilbert transform. In this paper we further explore the latter option, which is conceptually interesting while preserving the model order and size. Focus is on applicability of the SOBI technique for extracting the modal responses from analytic signals built from a set of vibratory responses. The novelty of this work is to propose a straightforward computational procedure for obtaining the complex cross-correlation response matrix to be used for the modal identification procedure. After clarifying subtle aspects of the general theoretical framework, we demonstrate that the correlation matrix of the analytic responses can be computed through a Hilbert transform of the real correlation matrix, so that the actual time-domain responses are no longer required for modal identification purposes. 
The numerical validation of the proposed technique is presented based on time-domain simulations of a conceptual physical multi-modal system, designed to display modes ranging from normal to highly complex, while keeping modal damping low and nearly independent of the modal complexity, and which can prove very interesting in test bench applications. Numerical results for complex modal identifications are presented, and the quality of the identified modal matrix and modal responses, extracted using the complex SOBI technique and implementing the proposed formulation, is assessed.
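The analytic-signal construction at the heart of the approach can be sketched with a naive O(N²) DFT for transparency (illustrative only; in practice an FFT-based Hilbert transform would be used, and the paper's point is that the same operation can be applied directly to the correlation matrix):

```python
import cmath
import math

def analytic_signal(x):
    """Build the analytic signal of a real sequence: zero the negative
    DFT frequencies, double the positive ones, keep DC and Nyquist."""
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
         for k in range(n)]
    for k in range(n):
        if 0 < k < n / 2:
            X[k] *= 2
        elif k > n / 2:
            X[k] = 0
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

# for a pure cosine the imaginary part is its Hilbert transform, a sine
x = [math.cos(2 * math.pi * t / 8) for t in range(8)]
z = analytic_signal(x)
```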
NASA Astrophysics Data System (ADS)
Gupta, Nikhil; Paramsothy, Muralidharan
2014-06-01
The special topic "Metal- and Polymer-Matrix Composites" is intended to capture the state of the art in the research and practice of functional composites. The current set of articles related to metal-matrix composites includes reviews on functionalities such as self-healing, self-lubricating, and self-cleaning capabilities; research results on a variety of aluminum-matrix composites; and investigations on advanced composites manufacturing methods. In addition, the processing and properties of carbon nanotube-reinforced polymer-matrix composites and adhesive bonding of laminated composites are discussed. The literature on functional metal-matrix composites is relatively scarce compared to functional polymer-matrix composites. The demand for lightweight composites in the transportation sector is fueling the rapid development in this field, which is captured in the current set of articles. The possibility of simultaneously tailoring several desired properties is attractive but very challenging, and it requires significant advancements in the science and technology of composite materials. The progress captured in the current set of articles shows promise for developing materials that seem capable of moving this field from laboratory-scale prototypes to actual industrial applications.
Turbine component, turbine blade, and turbine component fabrication process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delvaux, John McConnell; Cairo, Ronald Ralph; Parolini, Jason Robert
A turbine component, a turbine blade, and a turbine component fabrication process are disclosed. The turbine component includes ceramic matrix composite plies and a feature configured for preventing interlaminar tension of the ceramic matrix composite plies. The feature is selected from the group consisting of ceramic matrix composite tows or precast insert tows extending through at least a portion of the ceramic matrix composite plies, a woven fabric having fiber tows or a precast insert preventing contact between a first set of the ceramic matrix composite plies and a second set of the ceramic matrix composite plies, and combinations thereof. The process includes laying up ceramic matrix composite plies in a preselected arrangement and securing a feature configured for preventing interlaminar tension.
Matrix method for acoustic levitation simulation.
Andrade, Marco A B; Perez, Nicolas; Buiochi, Flavio; Adamowski, Julio C
2011-08-01
A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort.
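For orientation, the potential computed by the matrix method reduces, for an ideal 1D standing wave and a small sphere, to the classical Gor'kov-limit radiation force. The sketch below uses that textbook limit with illustrative parameter values; it is not the paper's Rayleigh-integral model:

```python
import math

def acoustic_force(z, p_a=1e3, f=37.9e3, a=1e-3,
                   rho_f=1.2, c_f=343.0, rho_p=25.0, c_p=900.0):
    """Gor'kov-limit radiation force on a small sphere at height z in a
    1D standing wave: F = 4*pi*Phi*k*a^3*E_ac*sin(2kz).
    All parameter values here are illustrative assumptions."""
    k = 2 * math.pi * f / c_f
    kappa_t = (rho_f * c_f ** 2) / (rho_p * c_p ** 2)   # compressibility ratio
    rho_t = rho_p / rho_f                               # density ratio
    phi = (5 * rho_t - 2) / (2 * rho_t + 1) - kappa_t   # acoustic contrast factor
    e_ac = p_a ** 2 / (4 * rho_f * c_f ** 2)            # acoustic energy density
    return 4 * math.pi * phi * k * a ** 3 * e_ac * math.sin(2 * k * z)
```

The force vanishes at the pressure nodes and antinodes; a dense sphere (positive contrast factor) is pushed toward the nodes, which is what makes levitation and the phase-controlled manipulation described above possible.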
NASA Astrophysics Data System (ADS)
Castellano, Claudio; Pastor-Satorras, Romualdo
2017-10-01
The largest eigenvalue of a network's adjacency matrix and its associated principal eigenvector are key elements for determining the topological structure and the properties of dynamical processes mediated by it. We present a physically grounded expression relating the value of the largest eigenvalue of a given network to the largest eigenvalue of two network subgraphs, considered as isolated: the hub with its immediate neighbors and the densely connected set of nodes with maximum K -core index. We validate this formula by showing that it predicts, with good accuracy, the largest eigenvalue of a large set of synthetic and real-world topologies. We also present evidence of the consequences of these findings for broad classes of dynamics taking place on the networks. As a by-product, we reveal that the spectral properties of heterogeneous networks built according to the linear preferential attachment model are qualitatively different from those of their static counterparts.
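The hub-with-neighbors subgraph in the formula is a star graph, whose largest adjacency eigenvalue is the square root of the hub degree. A small pure-Python power-iteration check (illustrative; the +I shift makes the dominant eigenvalue unique even for bipartite graphs such as the star, whose spectrum contains the pair ±√k):

```python
import math

def largest_eigenvalue(adj, iters=200):
    """Power iteration on A + I; returns the largest eigenvalue of A."""
    n = len(adj)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) + v[i] for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient of A + I, minus the shift
    av = [sum(adj[i][j] * v[j] for j in range(n)) + v[i] for i in range(n)]
    return sum(v[i] * av[i] for i in range(n)) - 1.0

# star graph: hub 0 joined to 9 leaves; largest eigenvalue = sqrt(9) = 3
n = 10
adj = [[0] * n for _ in range(n)]
for leaf in range(1, n):
    adj[0][leaf] = adj[leaf][0] = 1
```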
Bartke, Stephan; Hagemann, Nina; Harries, Nicola; Hauck, Jennifer; Bardos, Paul
2018-04-01
A deliberate expert-based scenario approach is applied to better understand the likely determinants of the evolution of the market for nanoparticle use in remediation in Europe until 2025. An initial set of factors was obtained from a literature review and complemented by a workshop and key-informant interviews. In further expert-engaging formats - focus groups, workshops, conferences, surveys - this initial set of factors was condensed, and the engaged experts scored the factors by how strongly each was likely to influence the market development. An interaction matrix was obtained, identifying the factors most active in shaping the market development in Europe by 2025, namely "Science-Policy-Interface" and "Validated information on nanoparticle application potential". Based on these, potential scenarios were determined and the development of factors discussed. Conclusions are offered on achievable interventions to enhance nanoremediation deployment. Copyright © 2017 Elsevier B.V. All rights reserved.
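In a cross-impact reading of such an interaction matrix, row sums measure how actively a factor drives the others and column sums how strongly it is driven. A toy sketch with hypothetical factor names and scores (not the study's data):

```python
def activity_passivity(matrix, factors):
    """Cross-impact sums for an interaction matrix: row sums = activity
    (how strongly a factor drives the others), column sums = passivity."""
    active = {f: sum(row) for f, row in zip(factors, matrix)}
    passive = {f: sum(col) for f, col in zip(factors, zip(*matrix))}
    return active, passive

factors = ["science-policy interface", "validated information", "regulation"]
m = [[0, 3, 2],   # hypothetical influence scores, row -> column
     [3, 0, 1],
     [1, 1, 0]]
active, passive = activity_passivity(m, factors)
most_active = max(active, key=active.get)
```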
Principal component analysis for designed experiments.
Konishi, Tomokazu
2015-01-01
Principal component analysis is used to summarize matrix data, such as those found in transcriptome, proteome or metabolome studies and medical examinations, into fewer dimensions by fitting the matrix to orthogonal axes. Although this methodology is frequently used in multivariate analyses, it has disadvantages when applied to experimental data. First, the identified principal components have poor generality; since the size and directions of the components are dependent on the particular data set, the components are valid only within that data set. Second, the method is sensitive to experimental noise and bias between sample groups. It cannot reflect the experimental design that is planned to manage the noise and bias; rather, it assigns the same weight and independence to all the samples in the matrix. Third, the resulting components are often difficult to interpret. To address these issues, several options were introduced to the methodology. First, the principal axes were identified using training data sets and shared across experiments. These training data reflect the design of experiments, and their preparation allows noise to be reduced and group bias to be removed. Second, the center of the rotation was determined in accordance with the experimental design. Third, the resulting components were scaled to unify their size unit. The effects of these options were observed in microarray experiments, and showed an improvement in the separation of groups and robustness to noise. The range of scaled scores was unaffected by the number of items. Additionally, unknown samples were appropriately classified using pre-arranged axes. Furthermore, these axes well reflected the characteristics of groups in the experiments. As was observed, the scaling of the components and sharing of axes enabled comparisons of the components beyond experiments. The use of training data reduced the effects of noise and bias in the data, facilitating the physical interpretation of the principal axes. 
Together, these introduced options result in improved generality and objectivity of the analytical results. The methodology has thus become more like a set of multiple regression analyses that find independent models that specify each of the axes.
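Identifying an axis on training data and projecting unknown samples onto it can be sketched in two dimensions, where the covariance eigenvector has a closed form (function names and data are illustrative, not the paper's method):

```python
import math

def first_axis_2d(data):
    """First principal axis of 2-D data, from the closed-form dominant
    eigenvector of the 2x2 covariance matrix [[a, b], [b, c]].
    This plays the role of a training-set axis to be shared."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    a = sum((x - mx) ** 2 for x, _ in data) / n
    b = sum((x - mx) * (y - my) for x, y in data) / n
    c = sum((y - my) ** 2 for _, y in data) / n
    lam = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    vx, vy = b, lam - a            # eigenvector for eigenvalue lam (b != 0)
    norm = math.hypot(vx, vy)
    return (mx, my), (vx / norm, vy / norm)

def score(sample, center, axis):
    """Project an unknown sample onto a pre-arranged training axis."""
    return (sample[0] - center[0]) * axis[0] + (sample[1] - center[1]) * axis[1]

train = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]   # toy training set along y = x
center, axis = first_axis_2d(train)
```

Because `center` and `axis` come from the training set, new samples from later experiments are scored on the same fixed axis rather than refitting PCA each time.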
Batch Effect Confounding Leads to Strong Bias in Performance Estimates Obtained by Cross-Validation
Delorenzi, Mauro
2014-01-01
Background With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences (“batch effects”) as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. Focus The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared to previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. Data We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., ‘control’) or group 2 (e.g., ‘treated’). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. Methods We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, is performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data. PMID:24967636
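The core finding can be reproduced in a toy simulation (the nearest-centroid classifier, all names, and parameter values are illustrative, not the paper's setup): when class is fully confounded with batch, cross-validation reports near-perfect accuracy even though the true class signal is zero:

```python
import random

random.seed(0)

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def loocv_accuracy(x0, x1):
    """Leave-one-out CV of a nearest-centroid classifier on two groups."""
    correct = 0
    for cls, group, other in ((0, x0, x1), (1, x1, x0)):
        for i, s in enumerate(group):
            own = centroid(group[:i] + group[i + 1:])
            pred = cls if dist2(s, own) < dist2(s, centroid(other)) else 1 - cls
            correct += pred == cls
    return correct / (len(x0) + len(x1))

def sample(n, shift):
    return [[random.gauss(shift, 1.0) for _ in range(5)] for _ in range(n)]

# fully confounded design: all controls in batch A, all treated in batch B;
# the apparent "class" signal is in fact a pure batch effect of size 2
controls, treated = sample(20, 0.0), sample(20, 2.0)
cv_acc = loocv_accuracy(controls, treated)

# independent, batch-free data: the true class signal is zero
c0, c1 = centroid(controls), centroid(treated)
new0, new1 = sample(20, 0.0), sample(20, 0.0)
hits = sum(dist2(s, c0) < dist2(s, c1) for s in new0) \
     + sum(dist2(s, c1) < dist2(s, c0) for s in new1)
test_acc = hits / 40
```

Here `cv_acc` is close to 1 while `test_acc` hovers near chance, illustrating the estimation bias the study quantifies.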
Dolch, Michael E; Janitza, Silke; Boulesteix, Anne-Laure; Graßmann-Lichtenauer, Carola; Praun, Siegfried; Denzer, Wolfgang; Schelling, Gustav; Schubert, Sören
2016-12-01
Identification of microorganisms in positive blood cultures still relies on standard techniques such as Gram staining followed by culturing with definite microorganism identification. Alternatively, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry or the analysis of headspace volatile compound (VC) composition produced by cultures can help to differentiate between microorganisms under experimental conditions. This study assessed the efficacy of volatile compound based microorganism differentiation into Gram-negatives and -positives in unselected positive blood culture samples from patients. Headspace gas samples of positive blood culture samples were transferred to sterilized, sealed, and evacuated 20 ml glass vials and stored at -30 °C until batch analysis. Headspace gas VC content analysis was carried out via an auto sampler connected to an ion-molecule reaction mass spectrometer (IMR-MS). Measurements covered a mass range from 16 to 135 u including CO2, H2, N2, and O2. Prediction rules for microorganism identification based on VC composition were derived using a training data set and evaluated using a validation data set within a random split validation procedure. One-hundred-fifty-two aerobic samples growing 27 Gram-negatives, 106 Gram-positives, and 19 fungi and 130 anaerobic samples growing 37 Gram-negatives, 91 Gram-positives, and two fungi were analysed. In anaerobic samples, ten discriminators were identified by the random forest method allowing for bacteria differentiation into Gram-negative and -positive (error rate: 16.7 % in validation data set). For aerobic samples the error rate was not better than random. In anaerobic blood culture samples of patients IMR-MS based headspace VC composition analysis facilitates bacteria differentiation into Gram-negative and -positive.
Acoustical characterization and parameter optimization of polymeric noise control materials
NASA Astrophysics Data System (ADS)
Homsi, Emile N.
2003-10-01
The sound transmission loss (STL) characteristics of polymer-based materials are considered. Analytical models that predict, characterize and optimize the STL of polymeric materials, with respect to physical parameters that affect performance, are developed for single layer panel configuration and adapted for layered panel construction with homogenous core. An optimum set of material parameters is selected and translated into practical applications for validation. Sound attenuating thermoplastic materials designed to be used as barrier systems in the automotive and consumer industries have certain acoustical characteristics that vary in function of the stiffness and density of the selected material. The validity and applicability of existing theory is explored, and since STL is influenced by factors such as the surface mass density of the panel's material, a method is modified to improve STL performance and optimize load-bearing attributes. An experimentally derived function is applied to the model for better correlation. In-phase and out-of-phase motion of top and bottom layers are considered. It was found that the layered construction of the co-injection type would exhibit fused planes at the interface and move in-phase. The model for the single layer case is adapted to the layered case where it would behave as a single panel. Primary physical parameters that affect STL are identified and manipulated. Theoretical analysis is linked to the resin's matrix attribute. High STL material with representative characteristics is evaluated versus standard resins. It was found that high STL could be achieved by altering materials' matrix and by integrating design solution in the low frequency range. A suggested numerical approach is described for STL evaluation of simple and complex geometries. In practice, validation on actual vehicle systems proved the adequacy of the acoustical characterization process.
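The surface-mass-density dependence mentioned above is usually summarized by the field-incidence mass law; a sketch (the -47 dB constant is the common textbook approximation, and real panels deviate near coincidence and resonance):

```python
import math

def mass_law_stl(f, m_s):
    """Field-incidence mass-law estimate of sound transmission loss (dB)
    for a single limp panel: STL ~ 20*log10(f * m_s) - 47.
    f: frequency in Hz, m_s: surface mass density in kg/m^2."""
    return 20 * math.log10(f * m_s) - 47
```

The law implies roughly 6 dB of extra transmission loss per doubling of either frequency or surface mass, which is the lever a polymer formulator pulls by changing density and thickness.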
Costi, Esther María; Sicilia, María Dolores; Rubio, Soledad
2010-10-01
A multiresidue method was described for determining eight sulfonamides, SAs (sulfadiazine, sulfamerazine, sulfamethoxypyridazine, sulfachloropyridazine, sulfadoxine, sulfamethoxazole, sulfadimethoxine and sulfaquinoxaline) in animal muscle tissues (pork, chicken, turkey, lamb and beef) at concentrations below the maximum residue limit (100 μg kg(-1)) set by the European Commission. The method was based on the microextraction of SAs in 300-mg muscle samples with 1 mL of a supramolecular solvent made up of reverse micelles of decanoic acid (DeA) and posterior determination of SAs in the extract by LC/fluorescence detection, after in situ derivatization with fluorescamine. Recoveries were quantitative (98-109%) and matrix-independent, no concentration of the extracts was required, the microextraction took about 30 min and several samples could be simultaneously treated. Formation of multiple hydrogen bonds between the carboxylic groups of the solvent and the target SAs (hydrogen donor and acceptor sum between 9 and 11) were considered as the major forces driving microextraction. The method was validated according to the European Union regulation 2002/657/EC. Analytical performance in terms of linearity, selectivity, trueness, precision, stability of SAs, decision limit and detection capability were determined. Quantitation limits for the different SAs ranged between 12 μg kg(-1) and 44 μg kg(-1), they being nearly independent of matrix composition. Repeatability and reproducibility, expressed as relative standard deviation, were in the ranges 1.8-3.6% and 3.3-6.1%. The results of the validation process proved that the method is suitable for determining sulfonamide residues in surveillance programs. Copyright © 2010 Elsevier B.V. All rights reserved.
Wyatt, Mark F; Havard, Stephen; Stein, Bridget K; Brenton, A Gareth
2008-01-01
Transition-metal acetylacetonate complexes of the form Metal(acac)2, where Metal = Fe(II), Co(II), Ni(II), Cu(II), and Zn(II), and Metal(acac)3, where Metal = V(III), Cr(III), Mn(III), Fe(III), and Co(III), were investigated by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOFMS). The data were acquired using the aprotic electron-transfer matrix 2-[(2E)-3-(4-tert-butylphenyl)-2-methylprop-2-enylidene]malononitrile (DCTB), and the observation of positive radical ions is shown clearly to depend on the metal element and the oxidation state it occupies. The ionization energy of DCTB was calculated to be 8.08 eV by density functional theory methods, which is notably lower than the experimental value but within the range of other computational values. This value is very close to those of the analytes, so the existing electron-transfer mechanism, which is based on the ionization energies of the matrix and analyte, cannot be used predictively. Similarly, the data neither prove nor disprove the validity of the existing electron-transfer ionization mechanism with respect to metal coordination complexes without strong chromophores. In this case, periodic trends may be more useful in explaining the observed species and predicting species from sets of similar complexes. The addition of a sodium salt benefits the MALDI-TOFMS characterization of certain compounds studied, but the benefit of adding ammonium or silver salts is negligible.
Evolution of In-Situ Generated Reinforcement Precipitates in Metal Matrix Composites
NASA Technical Reports Server (NTRS)
Sen, S.; Kar, S. K.; Catalina, A. V.; Stefanescu, D. M.; Dhindaw, B. K.
2004-01-01
Due to certain inherent advantages, in-situ production of Metal Matrix Composites (MMCs) has received considerable attention in the recent past. In-situ techniques typically involve a chemical reaction that results in the precipitation of a ceramic reinforcement phase. The size and spatial distribution of these precipitates ultimately determine the mechanical properties of these MMCs. In this paper we investigate the validity of using classical growth laws and analytical expressions describing the interaction between a precipitate and a solid-liquid interface (SLI) to predict the size and spatial evolution of the in-situ generated precipitates. Measurements made on the size and distribution of TiC precipitates in a NiAl matrix will be presented to test the validity of such an approach.
On the stiffness matrix of the intervertebral joint: application to total disk replacement.
O'Reilly, Oliver M; Metzger, Melodie F; Buckley, Jenni M; Moody, David A; Lotz, Jeffrey C
2009-08-01
The traditional method of establishing the stiffness matrix associated with an intervertebral joint is valid only for infinitesimal rotations, whereas the rotations featured in spinal motion are often finite. In the present paper, a new formulation of this stiffness matrix is presented, which is valid for finite rotations. This formulation uses Euler angles to parametrize the rotation, an associated basis, which is known as the dual Euler basis, to describe the moments, and it enables a characterization of the nonconservative nature of the joint caused by energy loss in the poroviscoelastic disk and ligamentous support structure. As an application of the formulation, the stiffness matrix of a motion segment is experimentally determined for the case of an intact intervertebral disk and compared with the matrices associated with the same segment after the insertion of a total disk replacement system. In this manner, the matrix is used to quantify the changes in the intervertebral kinetics associated with total disk replacements. As a result, this paper presents the first such characterization of the kinetics of a total disk replacement.
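The dual Euler basis itself is beyond a short sketch, but the finite-rotation parametrization it is built on starts from an Euler-angle rotation matrix. A sketch, with the z-y-x angle convention assumed here (the paper's convention may differ):

```python
import math

def euler_zyx(psi, theta, phi):
    """Rotation matrix R = Rz(psi) * Ry(theta) * Rx(phi): a finite-rotation
    parametrization of the kind the stiffness formulation requires, in
    contrast to the infinitesimal-rotation assumption it replaces."""
    def rz(a): return [[math.cos(a), -math.sin(a), 0],
                       [math.sin(a),  math.cos(a), 0],
                       [0, 0, 1]]
    def ry(a): return [[math.cos(a), 0, math.sin(a)],
                       [0, 1, 0],
                       [-math.sin(a), 0, math.cos(a)]]
    def rx(a): return [[1, 0, 0],
                       [0, math.cos(a), -math.sin(a)],
                       [0, math.sin(a),  math.cos(a)]]
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    return matmul(rz(psi), matmul(ry(theta), rx(phi)))

R = euler_zyx(0.3, -0.2, 0.1)
```

For small angles this reduces to the infinitesimal-rotation matrix of the traditional formulation; for the finite rotations of spinal motion, moments must additionally be expressed in the dual Euler basis, as the paper describes.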
Superallowed nuclear beta decay: Precision measurements for basic physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, J. C.
2012-11-20
For 60 years, superallowed 0{sup +}{yields}0{sup +} nuclear beta decay has been used to probe the weak interaction, currently verifying the conservation of the vector current (CVC) to high precision ({+-}0.01%) and anchoring the most demanding available test of the unitarity of the Cabibbo-Kobayashi-Maskawa (CKM) matrix ({+-}0.06%), a fundamental pillar of the electroweak standard model. Each superallowed transition is characterized by its ft-value, a result obtained from three measured quantities: the total decay energy of the transition, its branching ratio, and the half-life of the parent state. Today's data set is composed of some 150 independent measurements of 13 separate superallowed transitions covering a wide range of parent nuclei from {sup 10}C to {sup 74}Rb. Excellent consistency among the average results for all 13 transitions - a prediction of CVC - also confirms the validity of the small transition-dependent theoretical corrections that have been applied to account for isospin symmetry breaking. With CVC consistency established, the value of the vector coupling constant, G{sub V}, has been extracted from the data and used to determine the top left element of the CKM matrix, V{sub ud}. With this result the top-row unitarity test of the CKM matrix yields the value 0.99995(61), a result that sets a tight limit on possible new physics beyond the standard model. To have any impact on these fundamental weak-interaction tests, any measurement must be made with a precision of 0.1% or better - a substantial experimental challenge well beyond the requirements of most nuclear physics measurements. I review the current state of the field and outline some of the requirements that must be met by experimentalists aiming to make measurements at this high level of precision.
Schmidt, Kathrin S; Mankertz, Joachim
2018-06-01
A sensitive and robust LC-MS/MS method allowing the rapid screening and confirmation of selective androgen receptor modulators in bovine urine was developed and successfully validated according to Commission Decision 2002/657/EC, chapter 3.1.3 'alternative validation', by applying a matrix-comprehensive in-house validation concept. The confirmation of the analytes in the validation samples was achieved both on the basis of the MRM ion ratios as laid down in Commission Decision 2002/657/EC and by comparison of their enhanced product ion (EPI) spectra with a reference mass spectral library by making use of the QTRAP technology. Here, in addition to the MRM survey scan, EPI spectra were generated in a data-dependent way according to an information-dependent acquisition criterion. Moreover, stability studies according to an isochronous approach proved the stability of the analytes, both in solution and in matrix, for at least the duration of the validation study. To identify factors that have a significant influence on the test method in routine analysis, a factorial effect analysis was performed. To this end, factors considered to be relevant for the method in routine analysis (e.g. operator, storage duration of the extracts before measurement, different cartridge lots and different hydrolysis conditions) were systematically varied on two levels. The examination of the extent to which these factors influence the measurement results of the individual analytes showed that none of the validation factors exerts a significant influence on the measurement results.
Wieghaus, Kristen A; Gianchandani, Erwin P; Neal, Rebekah A; Paige, Mikell A; Brown, Milton L; Papin, Jason A; Botchwey, Edward A
2009-07-01
We are creating synthetic pharmaceuticals with angiogenic activity and the potential to promote vascular invasion. We previously demonstrated that one of these molecules, phthalimide neovascular factor 1 (PNF1), significantly expands microvascular networks in vivo following sustained release from poly(lactic-co-glycolic acid) (PLAGA) films. In addition, to probe the PNF1 mode of action, we recently applied a novel pathway-based compendium analysis to a multi-timepoint, controlled microarray data set of PNF1-treated (vs. control) human microvascular endothelial cells (HMVECs), and we identified induction of tumor necrosis factor-alpha (TNF-alpha) and, subsequently, transforming growth factor-beta (TGF-beta) signaling networks by PNF1. Here we validate this microarray data set with quantitative real-time polymerase chain reaction (RT-PCR) analysis. Subsequently, we probe this data set and identify three specific TGF-beta-induced genes with regulation by PNF1 conserved over multiple timepoints: amyloid beta (A4) precursor protein (APP), early growth response 1 (EGR-1), and matrix metalloproteinase 14 (MMP14 or MT1-MMP), all of which are also implicated in angiogenesis. We further focus on MMP14 given its unique role in angiogenesis, and we validate MT1-MMP modulation by PNF1 with an in vitro fluorescence assay that demonstrates the direct effects that PNF1 exerts on functional metalloproteinase activity. We also utilize endothelial cord formation in collagen gels to show that PNF1-induced stimulation of endothelial cord network formation in vitro is in some way MT1-MMP-dependent. Ultimately, this new network analysis of our transcriptional footprint characterizing PNF1 activity 1-48 h post-supplementation in HMVECs, coupled with corresponding validating experiments, suggests a key set of specific targets that are involved in the PNF1 mode of action and important for successful promotion of the neovascularization that we have observed by the drug in vivo.
Cole-Cole broadening in dielectric relaxation and strange kinetics.
Puzenko, Alexander; Ishai, Paul Ben; Feldman, Yuri
2010-07-16
We present a fresh appraisal of the Cole-Cole (CC) description of dielectric relaxation. While the approach is phenomenological, it demonstrates a fundamental connection between the parameters of the CC dispersion. Based on the fractal nature of the time set representing the interaction of the relaxing dipole with its encompassing matrix, and the Kirkwood-Froehlich correlation factor, a new 3D phase space linking together the kinetic and structural properties is proposed. The evolution of the relaxation process is represented in this phase space by a trajectory, which is determined by the variation of external macroscopic parameters. As an example, the validity of the approach is demonstrated on two porous silica glasses exhibiting a CC relaxation process.
Identification of thrombospondin-1 as a novel amelogenin interactor by functional proteomics.
NASA Astrophysics Data System (ADS)
Capolupo, Angela; Cassiano, Chiara; Casapullo, Agostino; Andreotti, Giuseppina; Cubellis, Maria V.; Riccio, Andrea; Riccio, Raffaele; Monti, Maria C.
2017-10-01
Amelogenins are a set of low molecular-weight enamel proteins belonging to a group of extracellular matrix (ECM) proteins with a key role in tooth enamel development and in other regeneration processes, such as wound healing and angiogenesis. Since only a few data are available to unravel the amelogenin mechanism of action in chronic skin healing restoration, we moved to the full characterization of the human amelogenin isoform 2 interactome in the secretome and lysate of Human Umbilical Vein Endothelial cells (HUVEC), using a functional proteomic approach. Thrombospondin-1 has been identified as a novel and interesting partner of human amelogenin isoform 2, and their direct binding has been validated through orthogonal biophysical approaches.
Context-sensitive autoassociative memories as expert systems in medical diagnosis
Pomi, Andrés; Olivera, Fernando
2006-01-01
Background The complexity of our contemporary medical practice has impelled the development of different decision-support aids based on artificial intelligence and neural networks. Distributed associative memories are neural network models that fit perfectly well with the vision of cognition emerging from current neurosciences. Methods We present the context-dependent autoassociative memory model. The sets of diseases and symptoms are mapped onto a pair of bases of orthogonal vectors. A matrix memory stores the associations between the signs and symptoms, and their corresponding diseases. A minimal numerical example is presented to show how to instruct the memory and how the system works. In order to provide a quick appreciation of the validity of the model and its potential clinical relevance we implemented an application with real data. A memory was trained with published data of neonates with suspected late-onset sepsis in a neonatal intensive care unit (NICU). A set of personal clinical observations was used as a test set to evaluate the capacity of the model to discriminate between septic and non-septic neonates on the basis of clinical and laboratory findings. Results We show here that matrix memory models with associations modulated by context can perform automatic medical diagnosis. The sequential availability of new information over time makes the system progress in a narrowing process that reduces the range of diagnostic possibilities. At each step the system provides a probabilistic map of the different possible diagnoses to that moment. The system can incorporate the clinical experience, building in that way a representative database of historical data that captures geo-demographical differences between patient populations.
The trained model succeeds in diagnosing late-onset sepsis within the test set of infants in the NICU: sensitivity 100%; specificity 80%; percentage of true positives 91%; percentage of true negatives 100%; accuracy (true positives plus true negatives over the totality of patients) 93.3%; and Cohen's kappa index 0.84. Conclusion Context-dependent associative memories can operate as medical expert systems. The model is presented in a simple and tutorial way to encourage straightforward implementations by medical groups. An application with real data, presented as a primary evaluation of the validity and potential of the model in medical diagnosis, shows that the model is a highly promising alternative in the development of accurate diagnostic tools. PMID:17121675
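As an illustration of the matrix-memory mechanism described above, the following minimal sketch shows how associations modulated by context can narrow a diagnosis as new findings arrive. The two-disease, three-symptom example is hypothetical, not the published NICU model:

```python
import numpy as np

# Diseases and symptoms are mapped onto orthonormal basis vectors.
d1, d2 = np.eye(2)            # disease basis (rows of the identity)
s1, s2, s3 = np.eye(3)        # symptom basis

# Store associations: each symptom, in the context of the other findings,
# points to its disease. Contexts are built with Kronecker products.
M = (np.outer(d1, np.kron(s1, s2))    # disease 1 <- symptoms 1 and 2
   + np.outer(d2, np.kron(s1, s3)))   # disease 2 <- symptoms 1 and 3

# Querying with symptom 1 in a broad context (all compatible symptoms)
# activates both diseases; adding symptom 2 narrows it to disease 1.
broad = M @ np.kron(s1, s2 + s3)
narrow = M @ np.kron(s1, s2)
print(broad)    # both diseases activated
print(narrow)   # only disease 1 remains
```

Normalizing the output vector gives the probabilistic map over diagnoses mentioned in the abstract.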
Wille, Sarah M R; Di Fazio, Vincent; Ramírez-Fernandez, Maria del Mar; Kummer, Natalie; Samyn, Nele
2013-02-01
"Driving under the influence of drugs" (DUID) has a large impact on the worldwide mortality risk. Therefore, DUID legislations based on impairment or analytical limits are adopted. Drug detection in oral fluid is of interest due to the ease of sampling during roadside controls. The prevalence of Δ9-tetrahydrocannabinol (THC) in seriously injured drivers ranges from 0.5% to 7.6% in Europe. For these reasons, the quantification of THC in oral fluid collected with 3 alternative on-site collectors is presented and discussed in this publication. An ultra-performance liquid chromatography-mass spectrometric quantification method for THC in oral fluid samples collected with the StatSure (Diagnostic Systems), Quantisal (Immunalysis), and Certus (Concateno) devices was validated according to the international guidelines. Small sample volumes of 100-200 μL were extracted using hexane. Special attention was paid to factors such as matrix effects, THC adsorption onto the collector, and stability in the collection fluid. A relatively high-throughput analysis was developed and validated according to ISO 17025 requirements. Although the effects of the matrix on the quantification could be minimized using a deuterated internal standard, and stability was acceptable according the validation data, adsorption of THC onto the collectors was a problem. For the StatSure device, THC was totally recovered from the collector pad after storage for 24 hours at room temperature or 7 days at 4°C. A loss of 15%-25% was observed for the Quantisal collector, whereas the recovery from the Certus device was irreproducible (relative standard deviation, 44%-85%) and low (29%-80%). During the roadside setting, a practical problem arose: small volumes of oral fluid (eg, 300 μL) were collected. However, THC was easily detected and concentrations ranged from 8 to 922 ng/mL in neat oral fluid. 
A relatively high-throughput analysis (40 samples in 4 hours) adapted for routine DUID analysis was developed and validated for THC quantification in oral fluid samples collected from drivers under the influence of cannabis.
Urinary Collagen Fragments Are Significantly Altered in Diabetes: A Link to Pathophysiology
Argilés, Àngel; Cerna, Marie; Delles, Christian; Dominiczak, Anna F.; Gayrard, Nathalie; Iphöfer, Alexander; Jänsch, Lothar; Jerums, George; Medek, Karel; Mischak, Harald; Navis, Gerjan J.; Roob, Johannes M.; Rossing, Kasper; Rossing, Peter; Rychlík, Ivan; Schiffer, Eric; Schmieder, Roland E.; Wascher, Thomas C.; Winklhofer-Roob, Brigitte M.; Zimmerli, Lukas U.; Zürbig, Petra; Snell-Bergeon, Janet K.
2010-01-01
Background The pathogenesis of diabetes mellitus (DM) is variable, comprising different inflammatory and immune responses. Proteome analysis holds the promise of delivering insight into the pathophysiological changes associated with diabetes. Recently, we identified and validated urinary proteomics biomarkers for diabetes. Based on these initial findings, we aimed to further validate urinary proteomics biomarkers specific for diabetes in general, and particularly those associated with either type 1 (T1D) or type 2 diabetes (T2D). Methodology/Principal Findings Therefore, the low-molecular-weight urinary proteome of 902 subjects from 10 different centers, 315 controls and 587 patients with T1D (n = 299) or T2D (n = 288), was analyzed using capillary-electrophoresis mass-spectrometry. The 261 urinary biomarkers (100 were sequenced) previously discovered in 205 subjects were validated in an additional 697 subjects to distinguish DM subjects (n = 382) from control subjects (n = 315) with 94% (95% CI: 92–95) accuracy in this study. To identify biomarkers that differentiate T1D from T2D, a subset of normoalbuminuric patients with T1D (n = 68) and T2D (n = 42) was employed, enabling identification of 131 biomarker candidates (40 were sequenced) differentially regulated between T1D and T2D. These biomarkers distinguished T1D from T2D in an independent validation set of normoalbuminuric patients (n = 108) with 88% (95% CI: 81–94%) accuracy, and in patients with impaired renal function (n = 369) with 85% (95% CI: 81–88%) accuracy. Specific collagen fragments were associated with diabetes and type of diabetes, indicating changes in collagen turnover and the extracellular matrix as one hallmark of the molecular pathophysiology of diabetes. Additional biomarkers including inflammatory processes and pro-thrombotic alterations were observed.
Conclusions/Significance These findings, based on the largest proteomic study performed to date on subjects with DM, validate the previously described biomarkers for DM, and pinpoint differences in the urinary proteome of T1D and T2D, indicating significant differences in extracellular matrix remodeling. PMID:20927192
DTI segmentation by statistical surface evolution.
Lenglet, Christophe; Rousson, Mikaël; Deriche, Rachid
2006-06-01
We address the problem of the segmentation of cerebral white matter structures from diffusion tensor images (DTI). A DTI produces, from a set of diffusion-weighted MR images, tensor-valued images where each voxel is assigned with a 3 x 3 symmetric, positive-definite matrix. This second order tensor is simply the covariance matrix of a local Gaussian process, with zero-mean, modeling the average motion of water molecules. As we will show in this paper, the definition of a dissimilarity measure and statistics between such quantities is a nontrivial task which must be tackled carefully. We claim and demonstrate that, by using the theoretically well-founded differential geometrical properties of the manifold of multivariate normal distributions, it is possible to improve the quality of the segmentation results obtained with other dissimilarity measures such as the Euclidean distance or the Kullback-Leibler divergence. The main goal of this paper is to prove that the choice of the probability metric, i.e., the dissimilarity measure, has a deep impact on the tensor statistics and, hence, on the achieved results. We introduce a variational formulation, in the level-set framework, to estimate the optimal segmentation of a DTI according to the following hypothesis: Diffusion tensors exhibit a Gaussian distribution in the different partitions. We must also respect the geometric constraints imposed by the interfaces existing among the cerebral structures and detected by the gradient of the DTI. We show how to express all the statistical quantities for the different probability metrics. We validate and compare the results obtained on various synthetic data-sets, a biological rat spinal cord phantom and human brain DTIs.
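The geometry-aware dissimilarity measure discussed above can be sketched as follows. This is a generic implementation of the affine-invariant Riemannian distance between symmetric positive-definite tensors, not the authors' full segmentation pipeline:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def affine_invariant_distance(A, B):
    """Geodesic distance between two SPD diffusion tensors under the
    affine-invariant Riemannian metric: ||log(A^{-1/2} B A^{-1/2})||_F."""
    A_inv_sqrt = fractional_matrix_power(A, -0.5)
    M = A_inv_sqrt @ B @ A_inv_sqrt
    return np.linalg.norm(logm(M), 'fro')

# Example: for isotropic tensors the distance grows logarithmically with
# scale, unlike the Euclidean distance between the matrices.
A = np.eye(3)
B = 4.0 * np.eye(3)
print(affine_invariant_distance(A, B))   # sqrt(3) * ln(4)
```

Averages and variances of tensors under this metric (needed for the Gaussian-per-partition hypothesis) are computed in the tangent space at the mean rather than entrywise.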
Closed-form integrator for the quaternion (Euler angle) kinematics equations
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A. (Inventor)
2000-01-01
The invention is embodied in a method of integrating kinematics equations for updating a set of vehicle attitude angles of a vehicle using 3-dimensional angular velocities of the vehicle, which includes computing an integrating factor matrix from quantities corresponding to the 3-dimensional angular velocities, computing a total integrated angular rate from the quantities corresponding to the 3-dimensional angular velocities, computing a state transition matrix as a sum of (a) a first complementary function of the total integrated angular rate and (b) the integrating factor matrix multiplied by a second complementary function of the total integrated angular rate, and updating the set of vehicle attitude angles using the state transition matrix. Preferably, the method further includes computing a quaternion vector from the quantities corresponding to the 3-dimensional angular velocities, in which case the updating of the set of vehicle attitude angles using the state transition matrix is carried out by (a) updating the quaternion vector by multiplying the quaternion vector by the state transition matrix to produce an updated quaternion vector and (b) computing an updated set of vehicle attitude angles from the updated quaternion vector. The first and second functions are complementary trigonometric functions, such as a sine and a cosine. The quantities corresponding to the 3-dimensional angular velocities include respective averages of the 3-dimensional angular velocities over plural time frames. The updating of the quaternion vector preserves the norm of the vector, whereby the updated set of vehicle attitude angles is virtually error-free.
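A minimal sketch of this kind of closed-form, norm-preserving quaternion update is given below, assuming constant body rates over the step. The component ordering and the exact factoring into the integrating-factor and state-transition matrices are illustrative rather than taken from the patent:

```python
import numpy as np

def quaternion_update(q, w, dt):
    """Closed-form quaternion propagation for constant body rates w (rad/s)
    over a step dt. The state transition matrix is
    cos(|w|dt/2) I + sin(|w|dt/2)/|w| * Omega(w), where Omega is the 4x4
    skew-symmetric rate matrix. Convention assumed here: q = [x, y, z, w];
    other sources order the components differently."""
    wx, wy, wz = w
    Omega = np.array([[0.0,  wz, -wy,  wx],
                      [-wz, 0.0,  wx,  wy],
                      [ wy, -wx, 0.0,  wz],
                      [-wx, -wy, -wz, 0.0]])
    theta = np.linalg.norm(w) * dt          # total integrated angular rate
    if theta < 1e-12:
        return q                            # no rotation this step
    Phi = (np.cos(theta / 2.0) * np.eye(4)
           + np.sin(theta / 2.0) / np.linalg.norm(w) * Omega)
    return Phi @ q

# A quarter-turn about z from the identity attitude.
q0 = np.array([0.0, 0.0, 0.0, 1.0])
q1 = quaternion_update(q0, np.array([0.0, 0.0, np.pi / 2]), 1.0)
print(np.linalg.norm(q1))   # norm preserved (≈ 1.0)
```

Because the cosine and sine terms satisfy cos² + sin² = 1 and Omega is skew-symmetric, Phi is orthogonal, which is what preserves the quaternion norm with no drift.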
Guild, Georgia E.; Stangoulis, James C. R.
2016-01-01
Within the HarvestPlus program there are many collaborators currently using X-Ray Fluorescence (XRF) spectroscopy to measure Fe and Zn in their target crops. In India, five HarvestPlus wheat collaborators have laboratories that conduct this analysis and their throughput has increased significantly. The benefits of using XRF are its ease of use, minimal sample preparation and high throughput analysis. The lack of commercially available calibration standards has led to a need for alternative calibration arrangements for many of the instruments. Consequently, the majority of instruments have either been installed with an electronic transfer of an original grain calibration set developed by a preferred lab, or a locally supplied calibration. Unfortunately, neither of these methods has been entirely successful. The electronic transfer is unable to account for small variations between the instruments, whereas the use of a locally provided calibration set is heavily reliant on the accuracy of the reference analysis method, which is particularly difficult to achieve when analyzing low levels of micronutrient. Consequently, we have developed a calibration method that uses non-matrix matched glass disks. Here we present the validation of this method and show this calibration approach can improve the reproducibility and accuracy of whole grain wheat analysis on 5 different XRF instruments across the HarvestPlus breeding program. PMID:27375644
Sawamura, Jitsuki; Morishita, Shigeru; Ishigooka, Jun
2016-02-09
Previously, we applied basic group theory and related concepts to scales of measurement of clinical disease states and clinical findings (including laboratory data). To gain a more concrete comprehension, we here apply the concept of matrix representation, which was not explicitly exploited in our previous work. Starting with a set of orthonormal vectors, called the basis, an operator Rj (an N-tuple patient disease state at the j-th session) was expressed as a set of stratified vectors representing plural operations on individual components, so as to satisfy the group matrix representation. The stratified vectors containing individual unit operations were combined into one-dimensional square matrices [Rj]s. The [Rj]s meet the matrix representation of a group (ring) as a K-algebra. Using the same-sized matrix of stratified vectors, we can also express changes in the plural set of [Rj]s. The method is demonstrated on simple examples. Despite the incompleteness of our model, the group matrix representation of stratified vectors offers a formal mathematical approach to clinical medicine, aligning it with other branches of natural science.
Jones, Barry R; Schultz, Gary A; Eckstein, James A; Ackermann, Bradley L
2012-10-01
Quantitation of biomarkers by LC-MS/MS is complicated by the presence of endogenous analytes. This challenge is most commonly overcome by calibration using an authentic standard spiked into a surrogate matrix devoid of the target analyte. A second approach involves use of a stable-isotope-labeled standard as a surrogate analyte to allow calibration in the actual biological matrix. For both methods, parallelism between calibration standards and the target analyte in biological matrix must be demonstrated in order to ensure accurate quantitation. In this communication, the surrogate matrix and surrogate analyte approaches are compared for the analysis of five amino acids in human plasma: alanine, valine, methionine, leucine and isoleucine. In addition, methodology based on standard addition is introduced, which enables a robust examination of parallelism in both surrogate analyte and surrogate matrix methods prior to formal validation. Results from additional assays are presented to introduce the standard-addition methodology and to highlight the strengths and weaknesses of each approach. For the analysis of amino acids in human plasma, comparable precision and accuracy were obtained by the surrogate matrix and surrogate analyte methods. Both assays were well within tolerances prescribed by regulatory guidance for validation of xenobiotic assays. When stable-isotope-labeled standards are readily available, the surrogate analyte approach allows for facile method development. By comparison, the surrogate matrix method requires greater up-front method development; however, this deficit is offset by the long-term advantage of simplified sample analysis.
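The standard-addition idea described above can be sketched as follows, with hypothetical numbers. Known amounts of standard are spiked into aliquots of the actual biological matrix, and the endogenous concentration is read off as the magnitude of the fitted line's x-intercept:

```python
import numpy as np

# Hypothetical standard-addition data for an endogenous analyte:
# aliquots of the same plasma sample spiked with increasing standard.
spiked = np.array([0.0, 10.0, 20.0, 40.0])      # added concentration (uM)
response = np.array([5.1, 10.0, 15.2, 25.1])    # instrument response

# Fit response = slope * spiked + intercept; extrapolating to zero
# response gives the endogenous level as intercept / slope.
slope, intercept = np.polyfit(spiked, response, 1)
endogenous = intercept / slope
print(round(endogenous, 2))
```

A linear fit across the spiking range is also a direct check of parallelism: curvature or a slope change between surrogate and authentic matrix signals a parallelism failure before formal validation.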
Derivation of a formula for the resonance integral for a nonorthogonal basis set
Yim, Yung-Chang; Eyring, Henry
1981-01-01
In a self-consistent field calculation, a formula for the off-diagonal matrix elements of the core Hamiltonian is derived for a nonorthogonal basis set by a polyatomic approach. A set of parameters is then introduced for the repulsion integral formula of Mataga-Nishimoto to fit the experimental data. The matrix elements computed for the nonorthogonal basis set in the π-electron approximation are transformed to those for an orthogonal basis set by the Löwdin symmetrical orthogonalization. PMID:16593009
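The Löwdin symmetric orthogonalization step mentioned above can be sketched generically as follows. The overlap and Hamiltonian matrices are hypothetical two-basis-function examples, not values from the paper:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

# Matrix elements H in a nonorthogonal basis with overlap S transform to
# an orthogonal (Löwdin) basis via H' = S^{-1/2} H S^{-1/2}.
S = np.array([[1.0, 0.4],
              [0.4, 1.0]])       # overlap matrix of the nonorthogonal basis
H = np.array([[-2.0, -0.9],
              [-0.9, -2.0]])     # core-Hamiltonian matrix in that basis

S_inv_half = fractional_matrix_power(S, -0.5)
H_orth = S_inv_half @ H @ S_inv_half

# In the Löwdin basis the overlap matrix becomes the identity.
print(np.allclose(S_inv_half @ S @ S_inv_half, np.eye(2)))   # True
```

The symmetric choice S^{-1/2} (rather than, say, Gram-Schmidt) yields the orthogonal basis closest in a least-squares sense to the original one, which is why it is preferred for transforming π-electron matrix elements.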
New inclusion sets for singular values.
He, Jun; Liu, Yan-Min; Tian, Jun-Kang; Ren, Ze-Rong
2017-01-01
In this paper, for a given matrix [Formula: see text], in terms of [Formula: see text] and [Formula: see text], where [Formula: see text], [Formula: see text], some new inclusion sets for singular values of the matrix are established. It is proved that the new inclusion sets are tighter than the Geršgorin-type sets (Qi in Linear Algebra Appl. 56:105-119, 1984) and the Brauer-type sets (Li in Comput. Math. Appl. 37:9-15, 1999). A numerical experiment shows the efficiency of our new results.
NASA Astrophysics Data System (ADS)
Zhang, Xing; Carter, Emily A.
2018-01-01
We revisit the static response function-based Kohn-Sham (KS) inversion procedure for determining the KS effective potential that corresponds to a given target electron density within finite atomic orbital basis sets. Instead of expanding the potential in an auxiliary basis set, we directly update the potential in its matrix representation. Through numerical examples, we show that the reconstructed density rapidly converges to the target density. Preliminary results are presented to illustrate the possibility of obtaining a local potential in real space from the optimized potential in its matrix representation. We have further applied this matrix-based KS inversion approach to density functional embedding theory. A proof-of-concept study of a solvated proton transfer reaction demonstrates the method's promise.
Truzzi, Cristina; Annibaldi, Anna; Illuminati, Silvia; Finale, Carolina; Scarponi, Giuseppe
2014-05-01
The study compares official spectrophotometric methods for the determination of proline content in honey - those of the International Honey Commission (IHC) and the Association of Official Analytical Chemists (AOAC) - with the original Ough method. Results show that the extra time-consuming treatment stages added by the IHC method with respect to the Ough method are pointless. We demonstrate that the AOAC method proves to be the best in terms of accuracy and time saving. The optimized waiting time for the absorbance recording is set at 35 min from the removal of the reaction tubes from the boiling bath used in the sample treatment. The optimized method was validated in the matrix: linearity up to 1800 mg L(-1), limit of detection 20 mg L(-1), limit of quantification 61 mg L(-1). The method was applied to 43 unifloral honey samples from the Marche region, Italy. Copyright © 2013 Elsevier Ltd. All rights reserved.
Zeric Stosic, Marina Z; Jaksic, Sandra M; Stojanov, Igor M; Apic, Jelena B; Ratajac, Radomir D
2016-11-01
A high-performance liquid chromatography (HPLC) method with diode array detection (DAD) was optimized and validated for the separation and determination of tetramethrin in an antiparasitic human shampoo. In order to optimize the separation conditions, two different columns, different column oven temperatures, and the mobile phase composition and ratio were tested. The best separation was achieved on the Supelcosil TM LC-18-DB column (4.6 x 250 mm), particle size 5 μm, with a mobile phase of methanol : water (78 : 22, v/v) at a flow rate of 0.8 mL/min and a temperature of 30°C. The detection wavelength was set at 220 nm. Under the optimum chromatographic conditions, the standard calibration curve showed good linearity (r2 = 0.9997). The accuracy of the method, defined as the mean recovery of tetramethrin from the shampoo matrix, was 100.09%. The advantage of this method is that it can easily be used for the routine analysis of tetramethrin in pharmaceutical formulations and in pharmaceutical research involving tetramethrin.
Fukui, Atsuko; Fujii, Ryuta; Yonezawa, Yorinobu; Sunada, Hisakazu
2002-11-01
The release properties of phenylpropanolamine hydrochloride (PPA) from ethylcellulose (EC, ethylcellulose 10 cps (EC#10) and/or 100 cps (EC#100)) matrix granules prepared by the extrusion granulation method were examined. The release process could be divided into two parts, which were well analyzed by applying the square-root time law and cube root law equations, respectively. The validity of the treatments was confirmed by the fitness of the simulation curve to the measured curve. At the initial stage, PPA was released from the gel layer of swollen EC in the matrix granules. At the second stage, the drug existing below the gel layer dissolved and was released through the gel layer. The time and release ratio at the connection point of the simulation curves were also examined to determine the validity of the analysis. Comparing the release properties of PPA from the two types of EC matrix granules, EC#100 showed more effective sustained release than EC#10. On the other hand, changes in the release property of the EC#10 matrix granules were clearer than those of the EC#100 matrix granules. Thus, EC#10 appears more suitable for controlled and sustained release formulations than EC#100.
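The two-stage analysis described above can be sketched with the standard forms of the two laws. The rate constants and time grid below are hypothetical, not fitted to the paper's data:

```python
import numpy as np

# Initial stage: square-root-of-time (Higuchi-type) law, F = k1 * sqrt(t),
# for diffusion through the swollen gel layer.
def higuchi(t, k1):
    """Fraction released during diffusion through the gel layer."""
    return k1 * np.sqrt(t)

# Second stage: cube root law, 1 - (1 - F)^(1/3) = k2 * t, solved for F,
# for dissolution of the drug remaining below the gel layer.
def cube_root(t, k2):
    """Fraction released when dissolution of the remaining core dominates."""
    return 1.0 - (1.0 - k2 * t) ** 3

t = np.linspace(0.0, 4.0, 9)            # time (h), hypothetical grid
print(higuchi(t, 0.2))                  # early-stage profile
print(cube_root(t, 0.05))               # late-stage profile
```

In the paper's treatment the two simulated curves are joined at a connection point, and the fit of the composite curve to the measured release profile is what confirms the validity of the analysis.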
Zhang, Mengliang; Harrington, Peter de B
2015-01-01
A multivariate partial least-squares (PLS) method was applied to the quantification of two complex polychlorinated biphenyl (PCB) commercial mixtures, Aroclor 1254 and 1260, in a soil matrix. PCBs in soil samples were extracted by headspace solid phase microextraction (SPME) and determined by gas chromatography/mass spectrometry (GC/MS). Decachlorinated biphenyl (deca-CB) was used as the internal standard. After baseline correction was applied, four data representations including extracted ion chromatograms (EIC) for Aroclor 1254, EIC for Aroclor 1260, EIC for both Aroclors, and two-way data sets were constructed for PLS-1 and PLS-2 calibrations and evaluated with respect to quantitative prediction accuracy. The PLS model was optimized with respect to the number of latent variables using cross validation of the calibration data set. The validation of the method was performed with certified soil samples and real field soil samples, and the predicted concentrations for both Aroclors using EIC data sets agreed with the certified values. The linear range of the method was from 10 μg kg(-1) to 1000 μg kg(-1) for both Aroclor 1254 and 1260 in soil matrices, and the detection limit was 4 μg kg(-1) for Aroclor 1254 and 6 μg kg(-1) for Aroclor 1260. This holistic approach for the determination of mixtures in complex samples has broad application to environmental forensics and modeling. Copyright © 2014 Elsevier Ltd. All rights reserved.
Akeroyd, Michael A; Arlinger, Stig; Bentler, Ruth A; Boothroyd, Arthur; Dillier, Norbert; Dreschler, Wouter A; Gagné, Jean-Pierre; Lutman, Mark; Wouters, Jan; Wong, Lena; Kollmeier, Birger
2015-01-01
To provide guidelines for the development of two types of closed-set speech-perception tests that can be applied and interpreted in the same way across languages. The guidelines cover the digit triplet and the matrix sentence tests that are most commonly used to test speech recognition in noise. They were developed by a working group on Multilingual Speech Tests of the International Collegium of Rehabilitative Audiology (ICRA). The recommendations are based on reviews of existing evaluations of the digit triplet and matrix tests as well as on the research experience of members of the ICRA Working Group. They represent the results of a consensus process. The resulting recommendations deal with: Test design and word selection; Talker characteristics; Audio recording and stimulus preparation; Masking noise; Test administration; and Test validation. By following these guidelines for the development of any new test of this kind, clinicians and researchers working in any language will be able to perform tests whose results can be compared and combined in cross-language studies.
Monaci, Linda; Brohée, Marcel; Tregoat, Virginie; van Hengel, Arjon
2011-07-15
Milk allergens are common allergens occurring in foods, therefore raising concern in allergic consumers. Enzyme-linked immunosorbent assay (ELISA) is, to date, the method of choice for the detection of food allergens by the food industry, although the performance of ELISA might be compromised when severe food processing techniques are applied to allergen-containing foods. In this paper we investigated the influence of baking time on the detection of milk allergens by using commercial ELISA kits. Baked cookies were chosen as a model food system, and experiments were set up to study the impact of spiking the food matrix either before or after the baking process. Results revealed clear analytical differences between both spiking methods, which stress the importance of choosing appropriate spiking methodologies for method validation purposes. Finally, since the narrow dynamic range of quantification of ELISA implies that dilution of samples is required, the impact of sample dilution on the quantitative results was investigated. All parameters investigated were shown to impact milk allergen detection by means of ELISA. Copyright © 2011 Elsevier Ltd. All rights reserved.
Exact recovery of sparse multiple measurement vectors by [Formula: see text]-minimization.
Wang, Changlong; Peng, Jigen
2018-01-01
The joint sparse recovery problem is a generalization of the single measurement vector problem widely studied in compressed sensing. It aims to recover a set of jointly sparse vectors, i.e., vectors whose nonzero entries are concentrated at common locations. [Formula: see text]-minimization subject to matrices is widely used in algorithms designed for this problem, i.e., [Formula: see text]-minimization [Formula: see text] The main contribution of this paper is two theoretical results about this technique. The first proves that for every multiple system of linear equations there exists a constant [Formula: see text] such that the original unique sparse solution can also be recovered by minimization in the [Formula: see text] quasi-norm subject to matrices whenever [Formula: see text]. The second gives an analytic expression for such [Formula: see text]. Finally, we present one example to confirm the validity of our conclusions, and numerical experiments showing that our results improve the efficiency of algorithms designed for [Formula: see text]-minimization.
Bolormaa, Sunduimijid; Pryce, Jennie E.; Reverter, Antonio; Zhang, Yuandan; Barendse, William; Kemper, Kathryn; Tier, Bruce; Savin, Keith; Hayes, Ben J.; Goddard, Michael E.
2014-01-01
Polymorphisms that affect complex traits or quantitative trait loci (QTL) often affect multiple traits. We describe two novel methods (1) for finding single nucleotide polymorphisms (SNPs) significantly associated with one or more traits using a multi-trait meta-analysis, and (2) for distinguishing between a single pleiotropic QTL and multiple linked QTL. The meta-analysis uses the effect of each SNP on each of n traits, estimated in single-trait genome-wide association studies (GWAS). These effects are expressed as a vector of signed t-values (t), and the error covariance matrix of these t-values is approximated by the correlation matrix of t-values among the traits calculated across the SNPs (V). Consequently, t'V−1t is approximately distributed as a chi-squared with n degrees of freedom. An attractive feature of the meta-analysis is that it uses estimated effects of SNPs from single-trait GWAS, so it can be applied to published data where individual records are not available. We demonstrate that the multi-trait method can be used to increase the power (numbers of SNPs validated in an independent population) of GWAS in a beef cattle data set including 10,191 animals genotyped for 729,068 SNPs with 32 traits recorded, including growth and reproduction traits. We can distinguish between a single pleiotropic QTL and multiple linked QTL because multiple SNPs tagging the same QTL show the same pattern of effects across traits. We confirm this finding by demonstrating that when one SNP is included in the statistical model the other SNPs have a non-significant effect. In the beef cattle data set, cluster analysis yielded four groups of QTL with similar patterns of effects across traits within a group. A linear index was used to validate SNPs having effects on multiple traits and to identify additional SNPs belonging to these four groups. PMID:24675618
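The meta-analysis statistic described above (t'V−1t, approximately chi-squared with n degrees of freedom) can be sketched in a few lines of NumPy. The t-values below are random placeholders, not real GWAS estimates:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder inputs: signed t-values for m SNPs across n traits, as
# would be estimated from n single-trait GWAS (random numbers here).
m, n = 1000, 4
T = rng.standard_normal((m, n))

# Approximate the error covariance of the t-values by their correlation
# matrix among traits, computed across all SNPs.
V = np.corrcoef(T, rowvar=False)
Vinv = np.linalg.inv(V)

# Multi-trait statistic per SNP: t' V^-1 t, approximately chi-squared
# with n degrees of freedom under the null hypothesis of no effect.
chi2 = np.einsum('ij,jk,ik->i', T, Vinv, T)
pvals = stats.chi2.sf(chi2, df=n)
```

Because only single-trait summary statistics enter the calculation, the same code applies to published effect estimates where individual records are unavailable.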
Le Ngoc Huyen, Tran; Queneudec T'kint, Michèle; Remond, Caroline; Chabbert, Brigitte; Dheilly, Rose-Marie
2011-11-01
Because miscanthus does not compete with food or animal feed production, this lignocellulosic species has attracted attention as a possible biofuel resource. However, sustainable ethanol production from lignocellulosic biomass implies reducing the consumption of chemicals and/or energy, but also valorizing the lignocellulosic by-product remaining after enzymatic saccharification. Introducing these by-products into a cementitious matrix could serve to manufacture a lightweight composite. Miscanthus biomass was submitted to chemical pretreatments followed by saccharification using an enzymatic cocktail. Residues from saccharification were then mixed with a cementitious matrix. Given their mechanical properties and the good adherence between cement and by-product, the hardened materials are usable. However, the delay in the onset of setting is too long, which prevents the direct use of the by-product in a cementitious matrix. Preliminary experiments using a setting accelerator in the cementitious matrix significantly reduced the setting delay. Copyright © 2011 Académie des sciences. Published by Elsevier SAS. All rights reserved.
ERIC Educational Resources Information Center
Stevenson, Douglas K.
Recently there has been a renewed international interest in direct oral proficiency measures such as the oral interview. There has also been a growing awareness among some language testing specialists that all proficiency tests must be subjected to construct validation. It seems that the high face validity of oral interviews tends to cloud and…
Streliaeva, A V; Gasparian, E R; Polzikov, V V; Sagieva, A T; Lazareva, N B; Kurilov, D V; Chebyshev, N V; Sadykov, V M; Zuev, S S; Shcheglova, T A
2012-01-01
The investigation was undertaken to study the biology and ecology of Latrodectus, the possibilities of its importation to Russia from other countries, to breed Latrodectus in the laboratory setting, and to design the first homeopathic matrix of Latrodectus for the manufacture of homeopathic remedies. The authors were the first to devise a method for breeding Latrodectus in a laboratory setting in Moscow and its vicinity. The Latrodectus bred in the laboratory is suitable for drug manufacture and does not lose its biological activity in captivity. The authors were also the first to prepare a homeopathic Latrodectus matrix for homeopathic medicines by using a new Russian extractant, petroleum. Chromatography-mass spectrometry was used to identify more than a hundred chemical compounds in the Russian petroleum. The biological activity of the petroleum Latrodectus matrix for the manufacture of homeopathic remedies was highly competitive with that of the traditional Latrodectus venom matrix made using ethyl alcohol. The homeopathic Latrodectus matrix made using glycerol lost its biological activity because of the glycerol. The biological activity of homeopathic matrixes made from Latrodectus inhabiting the USA, Uzbekistan, and the south of Russia, and from that bred in the laboratory, was studied. The homeopathic matrix made from the Latrodectus living in the Samarkand Region, Republic of Uzbekistan, had the highest biological activity.
Tiryaki, Osman
2016-10-02
This study was undertaken to validate the "quick, easy, cheap, effective, rugged and safe" (QuEChERS) method using Golden Delicious and Starking Delicious apple matrices spiked at 0.1 maximum residue limit (MRL), 1.0 MRL and 10 MRL levels of four pesticides (chlorpyrifos, dimethoate, indoxacarb and imidacloprid). For extraction and cleanup, the original QuEChERS method was followed; the samples were then subjected to liquid chromatography-triple quadrupole mass spectrometry (LC-MS/MS) for chromatographic analyses. According to the t test, the matrix effect was not significant for chlorpyrifos in either sample matrix, but it was significant for dimethoate, indoxacarb and imidacloprid in both sample matrices. Thus, matrix-matched calibration (MC) was used to compensate for the matrix effect, and quantifications were carried out using MC. The overall recovery of the method was 90.15% with a relative standard deviation of 13.27% (n = 330). The estimated method detection limits of the analytes were below the MRLs. Other method validation parameters, such as recovery, precision, accuracy and linearity, were found to be within the required ranges.
Mahmud, Ilias; Clarke, Lynda; Nahar, Nazmun; Ploubidis, George B
2018-05-02
Disability depends not only on individuals' health conditions but also on the contextual factors in which individuals live. Therefore, disability measurement scales need to be developed or adapted to the context. Bangladesh lacks any locally developed or validated scales to measure disabilities in adults with mobility impairment. We developed a new Locomotor Disability Scale (LDS) in a previous qualitative study. The present study developed a shorter version of the scale and explored its factorial structure. We administered the LDS to 316 adults with mobility impairments, selected from outpatient and community-based settings of a rehabilitation centre in Bangladesh. We performed exploratory factor analysis (EFA) to determine a shorter version of the LDS and explore its factorial structure. We retained 19 items from the original LDS following evaluation of response rate, floor/ceiling effects, inter-item correlations, and factor loadings in EFA. The eigenvalues-greater-than-one rule and the scree test suggested a two-factor model of measuring locomotor disability (LD) in adults with mobility impairment. These two factors are 'mobility activity limitations' and 'functional activity limitations'. We named the higher-order factor 'locomotor disability'. This two-factor model explained over 68% of the total variance among the LD indicators. The reproduced correlation matrix indicated a good model fit, with 14% non-redundant residuals with absolute values > 0.05. However, the Chi-square test indicated poor model fit (p < .001). Bartlett's test of sphericity confirmed patterned relationships amongst the LD indicators (p < .001). The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy was .94 and the individual diagonal elements in the anti-correlation matrix were > .91. Among the retained 19 items, there was no correlation coefficient > .9 nor a large number of correlation coefficients < .3. The communalities were high: between .495 and .882, with a mean of 0.684.
As evidence of convergent validity, all loadings were above .5 except one. As evidence of discriminant validity, there were no strong (> .3) cross-loadings, and the correlation between the two factors was .657. The 'mobility activity limitations' and 'functional activity limitations' sub-scales demonstrated excellent internal consistency (Cronbach's alpha of .954 and .937, respectively). The 19-item LDS was found to be a reliable and valid scale to measure the latent constructs mobility activity limitations and functional activity limitations among adults with mobility impairments in outpatient and community-based settings in Bangladesh.
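The eigenvalues-greater-than-one (Kaiser) rule used above can be illustrated on simulated data; the sample size matches the study, but the item responses, block loading structure, and noise level below are invented to mimic the reported two-factor result:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated responses: 316 respondents x 19 items, with the first 10
# items loading on one latent factor and the last 9 on a second
# (loading values and noise level are invented for illustration).
n_obs, n_items = 316, 19
loadings = np.zeros((2, n_items))
loadings[0, :10] = 0.8
loadings[1, 10:] = 0.8
factors = rng.standard_normal((n_obs, 2))
X = factors @ loadings + 0.5 * rng.standard_normal((n_obs, n_items))

# Kaiser's eigenvalues-greater-than-one rule on the correlation matrix.
R = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
n_factors = int(np.sum(eigvals > 1.0))
print(n_factors)  # 2 for this simulated two-factor data
```

In practice the rule is usually cross-checked against a scree plot, as the study does.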
Majorization as a Tool for Optimizing a Class of Matrix Functions.
ERIC Educational Resources Information Center
Kiers, Henk A.
1990-01-01
General algorithms are presented that can be used for optimizing matrix trace functions subject to certain constraints on the parameters. The parameter set that minimizes the majorizing function also decreases the matrix trace function, providing a monotonically convergent algorithm for minimizing the matrix trace function iteratively. (SLD)
Trust in Leadership DEOCS 4.1 Construct Validity Summary
2017-08-01
Corrected Item-Total Correlation and Cronbach's Alpha if Item Deleted for the four-point scale items ("I can depend on my immediate supervisor to meet…"). …(1974) were used to assess the fit between the data and the factor. The BTS hypothesizes that the correlation matrix is an identity matrix. The…to reject the null hypothesis that the correlation matrix is an identity, and to conclude that the factor analysis is an appropriate method
ERIC Educational Resources Information Center
Anuar, Azad Athahiri; Rozubi, Norsayyidatina Che; Abdullah, Haslee Sharil
2015-01-01
The aims of this study were to develop and validate an MCC training module for trainee counselors based on the MCC matrix model by Sue et al. (1992). This module encompassed five sub-modules and 11 activities developed along the concepts and components of the MCC matrix model developed by Sue, Arredondo and McDavis (1992). The design method used in this…
Ateshian, Gerard A.; Albro, Michael B.; Maas, Steve; Weiss, Jeffrey A.
2011-01-01
Biological soft tissues and cells may be subjected to mechanical as well as chemical (osmotic) loading under their natural physiological environment or various experimental conditions. The interaction of mechanical and chemical effects may be very significant under some of these conditions, yet the highly nonlinear nature of the set of governing equations describing these mechanisms poses a challenge for the modeling of such phenomena. This study formulated and implemented a finite element algorithm for analyzing mechanochemical events in neutral deformable porous media under finite deformation. The algorithm employed the framework of mixture theory to model the porous permeable solid matrix and interstitial fluid, where the fluid consists of a mixture of solvent and solute. A special emphasis was placed on solute-solid matrix interactions, such as solute exclusion from a fraction of the matrix pore space (solubility) and frictional momentum exchange that produces solute hindrance and pumping under certain dynamic loading conditions. The finite element formulation implemented full coupling of mechanical and chemical effects, providing a framework where material properties and response functions may depend on solid matrix strain as well as solute concentration. The implementation was validated using selected canonical problems for which analytical or alternative numerical solutions exist. This finite element code includes a number of unique features that enhance the modeling of mechanochemical phenomena in biological tissues. The code is available in the public domain, open source finite element program FEBio (http://mrl.sci.utah.edu/software). PMID:21950898
Pham, T. Anh; Nguyen, Huy -Viet; Rocca, Dario; ...
2013-04-26
In a recent paper we presented an approach to evaluate quasiparticle energies based on the spectral decomposition of the static dielectric matrix. This method does not require the calculation of unoccupied electronic states or the direct diagonalization of large dielectric matrices, and it avoids the use of plasmon-pole models. The numerical accuracy of the approach is controlled by a single parameter, i.e., the number of eigenvectors used in the spectral decomposition of the dielectric matrix. Here we present a comprehensive validation of the method, encompassing calculations of ionization potentials and electron affinities of various molecules and of band gaps for several crystalline and disordered semiconductors. Lastly, we demonstrate the efficiency of our approach by carrying out GW calculations for systems with several hundred valence electrons.
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Lilienfeld, O. Anatole; Ramakrishnan, Raghunathan; Rupp, Matthias
We introduce a fingerprint representation of molecules based on a Fourier series of atomic radial distribution functions. This fingerprint is unique (except for chirality), continuous, and differentiable with respect to atomic coordinates and nuclear charges. It is invariant with respect to translation, rotation, and nuclear permutation, and requires no preconceived knowledge about chemical bonding, topology, or electronic orbitals. As such, it meets many important criteria for a good molecular representation, suggesting its usefulness for machine learning models of molecular properties trained across chemical compound space. To assess the performance of this new descriptor, we have trained machine learning models of molecular enthalpies of atomization for training sets with up to 10 k organic molecules, drawn at random from a published set of 134 k organic molecules with an average atomization enthalpy of over 1770 kcal/mol. We validate the descriptor on all remaining molecules of the 134 k set. For a training set of 10 k molecules, the fingerprint descriptor achieves a mean absolute error of 8.0 kcal/mol. This is slightly worse than the performance attained using the Coulomb matrix, another popular alternative, reaching 6.2 kcal/mol for the same training and test sets. © 2015 Wiley Periodicals, Inc.
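For context, the Coulomb matrix used above as the comparison descriptor has a simple standard form: diagonal entries 0.5·Z_i^2.4 and off-diagonal entries Z_i·Z_j/|R_i − R_j|. A minimal sketch, using an invented water-like toy geometry rather than any molecule from the 134 k set:

```python
import numpy as np

def coulomb_matrix(Z, R):
    """Coulomb matrix descriptor: M_ii = 0.5 * Z_i**2.4 and
    M_ij = Z_i * Z_j / |R_i - R_j| for i != j."""
    Z = np.asarray(Z, dtype=float)
    R = np.asarray(R, dtype=float)
    n = len(Z)
    M = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                M[i, j] = 0.5 * Z[i] ** 2.4   # self-interaction term
            else:
                M[i, j] = Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])
    return M

# Invented water-like geometry (nuclear charges; coordinates in bohr).
Z = [8, 1, 1]
R = [[0.0, 0.0, 0.0], [1.8, 0.0, 0.0], [-0.45, 1.75, 0.0]]
M = coulomb_matrix(Z, R)
```

Unlike the Fourier-series fingerprint, the raw Coulomb matrix is not permutation-invariant; in practice it is symmetrized, e.g. by sorting rows by norm, before being fed to a learning model.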
Kalid, Naser; Zaidan, A A; Zaidan, B B; Salman, Omar H; Hashim, M; Albahri, O S; Albahri, A S
2018-03-02
This paper presents a new approach to prioritizing large-scale data of patients with chronic heart diseases by using body sensors and communication technology during disasters and peak seasons. An evaluation matrix is used for emergency evaluation and large-scale data scoring of patients with chronic heart diseases in a telemedicine environment. However, one major problem in the emergency evaluation of these patients is establishing a reasonable threshold for patients with the most and least critical conditions. This threshold can be used to detect the highest and lowest priority levels when all the scores of patients are identical during disasters and peak seasons. A practical study was performed on 500 patients with chronic heart diseases and different symptoms, and their emergency levels were evaluated based on four main measurements: electrocardiogram, oxygen saturation sensor, blood pressure monitoring, and a non-sensory measurement tool, namely, text frame. Data alignment was conducted for the raw data and the decision-making matrix by converting each extracted feature into an integer. This integer represents the patient's state at the corresponding triage level, based on medical guidelines, so that features from different sources can be combined in one platform. The patients were then scored based on a decision matrix by using multi-criteria decision-making techniques, namely, integrated multi-layer analytic hierarchy process (MLAHP) and technique for order performance by similarity to ideal solution (TOPSIS). For subjective validation, cardiologists were consulted to confirm the ranking results. For objective validation, mean ± standard deviation was computed to check the accuracy of the systematic ranking. This study provides scenarios and checklist benchmarking to evaluate the proposed and existing prioritization methods. Experimental results revealed the following.
(1) The integration of TOPSIS and MLAHP effectively and systematically solved the patient settings on triage and prioritization problems. (2) In subjective validation, the first five patients assigned to the doctors were the most urgent cases that required the highest priority, whereas the last five patients were the least urgent cases and were given the lowest priority. In objective validation, scores significantly differed between the groups, indicating that the ranking results were identical. (3) For the first, second, and third scenarios, the proposed method exhibited an advantage over the benchmark method with percentages of 40%, 60%, and 100%, respectively. In conclusion, patients with the most and least urgent cases received the highest and lowest priority levels, respectively.
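The TOPSIS ranking step described above can be sketched compactly; the decision matrix, weights, and criteria below are hypothetical stand-ins, not the paper's patient data or its MLAHP-derived weights:

```python
import numpy as np

def topsis(D, w, benefit):
    """Rank alternatives by closeness to the ideal solution (TOPSIS).
    D: m x n decision matrix; w: criterion weights; benefit[j]: True if
    criterion j should be maximized, False if minimized."""
    D = np.asarray(D, dtype=float)
    w = np.asarray(w, dtype=float) / np.sum(w)
    V = w * D / np.linalg.norm(D, axis=0)   # vector-normalized, weighted
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)          # closeness coefficient in [0, 1]

# Hypothetical urgency scores for four patients on three criteria
# (ECG, SpO2, blood pressure), all scored "higher = more urgent".
D = [[3, 2, 1], [1, 1, 1], [3, 3, 2], [2, 1, 3]]
c = topsis(D, w=[0.5, 0.3, 0.2], benefit=[True, True, True])
ranking = np.argsort(-c)                    # most urgent patient first
```

In the paper, weights for such a matrix come from the AHP layer rather than being fixed by hand, which is the point of integrating MLAHP with TOPSIS.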
NASA Astrophysics Data System (ADS)
Xianqiang, He; Delu, Pan; Yan, Bai; Qiankun, Zhu
2005-10-01
A numerical model of vector radiative transfer in the coupled ocean-atmosphere system, named PCOART, is developed based on the matrix-operator method. In PCOART, using Fourier analysis, the vector radiative transfer equation (VRTE) is split into a set of independent equations with zenith angle as the only angular coordinate. Using the Gaussian quadrature method, the VRTE is then transformed into a matrix equation, which is solved using the adding-doubling method. According to the reflective and refractive properties of the ocean-atmosphere interface, the vector radiative transfer models of the ocean and the atmosphere are coupled in PCOART. Comparison with the exact Rayleigh scattering look-up table of MODIS (Moderate-resolution Imaging Spectroradiometer) shows that PCOART is an exact numerical calculation model and that its treatments of multiple scattering and polarization are correct. Validation against standard problems of radiative transfer in water also shows that PCOART can be used to calculate underwater radiative transfer problems. PCOART is therefore a useful tool for exactly calculating the vector radiative transfer of the coupled ocean-atmosphere system, which can be used to study the polarization properties of radiance in the whole ocean-atmosphere system and the remote sensing of the atmosphere and ocean.
Randles, Michael J.; Woolf, Adrian S.; Huang, Jennifer L.; Byron, Adam; Humphries, Jonathan D.; Price, Karen L.; Kolatsi-Joannou, Maria; Collinson, Sophie; Denny, Thomas; Knight, David; Mironov, Aleksandr; Starborg, Toby; Korstanje, Ron; Humphries, Martin J.; Long, David A.
2015-01-01
Glomerular disease often features altered histologic patterns of extracellular matrix (ECM). Despite this, the potential complexities of the glomerular ECM in both health and disease are poorly understood. To explore whether genetic background and sex determine glomerular ECM composition, we investigated two mouse strains, FVB and B6, using RNA microarrays of isolated glomeruli combined with proteomic glomerular ECM analyses. These studies, undertaken in healthy young adult animals, revealed unique strain- and sex-dependent glomerular ECM signatures, which correlated with variations in levels of albuminuria and known predisposition to progressive nephropathy. Among the variation, we observed changes in netrin 4, fibroblast growth factor 2, tenascin C, collagen 1, meprin 1-α, and meprin 1-β. Differences in protein abundance were validated by quantitative immunohistochemistry and Western blot analysis, and the collective differences were not explained by mutations in known ECM or glomerular disease genes. Within the distinct signatures, we discovered a core set of structural ECM proteins that form multiple protein–protein interactions and are conserved from mouse to man. Furthermore, we found striking ultrastructural changes in glomerular basement membranes in FVB mice. Pathway analysis of merged transcriptomic and proteomic datasets identified potential ECM regulatory pathways involving inhibition of matrix metalloproteases, liver X receptor/retinoid X receptor, nuclear factor erythroid 2-related factor 2, notch, and cyclin-dependent kinase 5. These pathways may therefore alter ECM and confer susceptibility to disease. PMID:25896609
Cloke, Jonathan; Crowley, Erin; Bird, Patrick; Bastin, Ben; Flannery, Jonathan; Agin, James; Goins, David; Clark, Dorn; Radcliff, Roy; Wickstrand, Nina; Kauppinen, Mikko
2015-01-01
The Thermo Scientific™ SureTect™ Escherichia coli O157:H7 Assay is a new real-time PCR assay which has been validated through the AOAC Research Institute (RI) Performance Tested Methods(SM) program for raw beef and produce matrixes. This validation study specifically validated the assay with 375 g 1:4 and 1:5 ratios of raw ground beef and raw beef trim in comparison to the U.S. Department of Agriculture, Food Safety and Inspection Service, Microbiology Laboratory Guidebook (USDA-FSIS/MLG) reference method, and with 25 g bagged spinach and fresh apple juice at a ratio of 1:10 in comparison to the reference method detailed in International Organization for Standardization 16654:2001. For raw beef matrixes, the validation of both 1:4 and 1:5 allows user flexibility with the enrichment protocol, although the choice between these two ratios should be based on the laboratory's specific test requirements. All matrixes were analyzed by Thermo Fisher Scientific, Microbiology Division, Vantaa, Finland, and Q Laboratories Inc, Cincinnati, Ohio, in the method developer study. Two of the matrixes (raw ground beef at both 1:4 and 1:5 ratios) and bagged spinach were additionally analyzed in the AOAC-RI controlled independent laboratory study, which was conducted by Marshfield Food Safety, Marshfield, Wisconsin. Using probability of detection statistical analysis, no significant difference was demonstrated by the SureTect kit in comparison to the USDA-FSIS reference method for raw beef matrixes, or with the ISO reference method for matrixes of bagged spinach and apple juice. Inclusivity and exclusivity testing was conducted with 58 E. coli O157:H7 and 54 non-E. coli O157:H7 isolates, respectively, which demonstrated that the SureTect assay was able to detect all isolates of E. coli O157:H7 analyzed. In addition, all but one of the nontarget isolates were correctly interpreted as negative by the SureTect Software; the single isolate giving a positive result was an E. coli O157:NM isolate. Nonmotile isolates of E. coli O157 have been demonstrated to still contain the H7 gene; therefore, this result is not unexpected. Robustness testing was conducted to evaluate the performance of the SureTect assay with specific deviations from the assay protocol, outside the recommended parameters, which are open to variation. This study demonstrated that the SureTect assay gave reliable performance. A final study was also conducted to verify the shelf life of the product under accelerated conditions.
Follow-up of the fate of imazalil from post-harvest lemon surface treatment to a baking experiment.
Vass, Andrea; Korpics, Evelin; Dernovics, Mihály
2015-01-01
Imazalil is one of the most widespread fungicides used for the post-harvest treatment of citrus species. The separate use of peel during food preparation and processing may thus concentrate most of the imazalil in food products for which specific maximum residue limits hardly exist for this fungicide. In order to monitor the path of imazalil comprehensively, our study covered the efficiency of several washing treatments; the comparison of operative and related sample preparation methods for the lemon samples; the validation of a sample preparation technique for a fatty cake matrix; the preparation of a model cake sample made either with imazalil-containing lemon peel or with imazalil spiking; the monitoring of imazalil degradation into α-(2,4-dichlorophenyl)-1H-imidazole-1-ethanol during the baking process; and finally the mass balance of imazalil throughout the washing experiments and the baking process. Quantification of imazalil was carried out with an LC-ESI-MS/MS set-up, while LC-QTOF was used to monitor imazalil degradation. Concerning washing, none of the five washing protocols addressed could remove more than 30% of the imazalil from the surface of the lemon samples. The study revealed a significant difference between the extraction efficiencies of the EN 15662:2008 and AOAC 2007.1 methods, in favour of the former. The model cake sample was used to validate a modified version of the EN 15662:2008 method that included a freeze-out step to efficiently recover imazalil (>90%) from the fatty cake matrix. The degradation of imazalil during baking was significantly higher when the analyte was spiked into the cake matrix than when the cake was prepared with imazalil-containing lemon peel (52% vs. 22%). This observation calls attention to the careful evaluation of pesticide stability data that are based on solution spiking experiments.
A high performance porous flat-plate solar collector
NASA Technical Reports Server (NTRS)
Lansing, F. L.; Clarke, V.; Reynolds, R.
1979-01-01
A solar collector employing a porous matrix as a solar absorber and heat exchanger is presented and its application in solar air heaters is discussed. The collector is composed of a metallic matrix with a porous surface which acts as a large set of cavity radiators; cold air flows through the matrix plate and exchanges heat with the thermally stratified layers of the matrix. A steady-state thermal analysis of the collector is used to determine collector temperature distributions for the cases of an opaque surface matrix with total absorption of solar energy at the surface, and a diathermanous matrix with successive solar energy absorption at each depth. The theoretical performance of the porous flat-plate collector is shown to greatly exceed that of a solid flat-plate collector using air as the working medium for any given set of operational conditions. An experimental collector constructed using commercially available, low-cost steel wool as the matrix has been found to have thermal efficiencies from 73 to 86%.
STANDARDIZATION AND VALIDATION OF MICROBIOLOGICAL METHODS FOR EXAMINATION OF BIOSOLIDS
The objective of this presentation is to discuss pathogens of concern in biosolids, the analytical techniques used to evaluate microorganisms in biosolids, and to discuss standardization and validation of analytical protocols for microbes within a complex matrix. Implications of ...
MICROORGANISMS IN BIOSOLIDS: ANALYTICAL METHODS DEVELOPMENT, STANDARDIZATION, AND VALIDATION
The objective of this presentation is to discuss pathogens of concern in biosolids, the analytical techniques used to evaluate microorganisms in biosolids, and to discuss standardization and validation of analytical protocols for microbes within such a complex matrix. Implicatio...
Generalized Reduction Formula for Discrete Wigner Functions of Multiqubit Systems
NASA Astrophysics Data System (ADS)
Srinivasan, K.; Raghavan, G.
2018-03-01
Density matrices and discrete Wigner functions are equally valid representations of multiqubit quantum states. For density matrices, the partial trace operation is used to obtain the quantum state of subsystems, but an analogous prescription is not available for discrete Wigner functions. Further, the discrete Wigner function corresponding to a density matrix is not unique; it depends on the choice of the quantum net used for its reconstruction. In the present work, we derive a reduction formula for discrete Wigner functions of a general multiqubit state which works for arbitrary quantum nets. These results should be useful for the analysis and classification of entangled states and the study of decoherence purely in a discrete phase-space setting, as well as in applications to quantum computing.
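For contrast with the missing Wigner-function prescription, the density-matrix partial trace the abstract refers to can be written compactly. A minimal NumPy sketch (not the paper's reduction formula), illustrated on a two-qubit Bell state:

```python
import numpy as np

def partial_trace(rho, keep, dims):
    """Reduce a density matrix `rho` of a composite system with
    subsystem dimensions `dims` to the subsystems listed in `keep`,
    tracing out the rest."""
    n = len(dims)
    r = rho.reshape(tuple(dims) * 2)   # row axes 0..n-1, col axes n..2n-1
    nrow = n
    # Trace out unwanted subsystems, highest index first so that the
    # axis positions of the remaining subsystems stay predictable.
    for i in sorted(set(range(n)) - set(keep), reverse=True):
        r = np.trace(r, axis1=i, axis2=i + nrow)
        nrow -= 1
    d = int(np.prod([dims[i] for i in keep]))
    return r.reshape(d, d)

# Two-qubit Bell state: tracing out either qubit leaves the maximally
# mixed single-qubit state I/2.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(bell, bell)
reduced = partial_trace(rho, keep=[0], dims=[2, 2])
```

The paper's contribution is precisely a formula playing this role directly on the discrete Wigner function, without detouring through the density matrix.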
Neutron Reference Benchmark Field Specification: ACRR Free-Field Environment (ACRR-FF-CC-32-CL).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.
2015-07-01
This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community's ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity free-field reference neutron benchmark field. The field is described and an "a priori" calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this "a priori" spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.
A Flight Dynamics Model for a Multi-Actuated Flexible Rocket Vehicle
NASA Technical Reports Server (NTRS)
Orr, Jeb S.
2011-01-01
A comprehensive set of motion equations for a multi-actuated flight vehicle is presented. The dynamics are derived from a vector approach that generalizes the classical linear perturbation equations for flexible launch vehicles into a coupled three-dimensional model. The effects of nozzle and aerosurface inertial coupling, sloshing propellant, and elasticity are incorporated without restrictions on the position, orientation, or number of model elements. The present formulation is well suited to matrix implementation for large-scale linear stability and sensitivity analysis and is also shown to be extensible to nonlinear time-domain simulation through the application of a special form of Lagrange's equations in quasi-coordinates. The model is validated through frequency-domain response comparison with a high-fidelity planar implementation.
Method Development in Forensic Toxicology.
Peters, Frank T; Wissenbach, Dirk K; Busardo, Francesco Paolo; Marchei, Emilia; Pichini, Simona
2017-01-01
In the field of forensic toxicology, the quality of analytical methods is of great importance to ensure the reliability of results and to avoid unjustified legal consequences. A key to high-quality analytical methods is thorough method development. This article provides an overview of the process of developing methods for forensic applications. This includes defining the method's purpose (e.g., qualitative vs. quantitative) and the analytes to be included, choosing an appropriate sample matrix, setting up separation and detection systems, and establishing a versatile sample preparation. Method development concludes with an optimization process, after which the new method is subject to method validation.
Wang, Jinglu; Qu, Susu; Wang, Weixiao; Guo, Liyuan; Zhang, Kunlin; Chang, Suhua; Wang, Jing
2016-11-01
A number of gene expression profiling studies of bipolar disorder have been published. Differences in array chips, tissues, and data-processing pipelines across cohorts have aggravated the inconsistency of results among these genome-wide expression profiling studies. By searching gene expression databases, we obtained six data sets for the prefrontal cortex (PFC) in bipolar disorder with raw data and combinable platforms. We used standardized pre-processing and quality control procedures to analyze each data set separately and then combined them into a large gene expression matrix with 101 bipolar disorder subjects and 106 controls. A standard linear mixed-effects model was used to identify the differentially expressed genes (DEGs). Multiple levels of sensitivity analyses and cross-validation with genetic data were conducted. Functional and network analyses were carried out on the basis of the DEGs. As a result, we identified 198 unique differentially expressed genes in the PFC between bipolar disorder subjects and controls. Among them, 115 DEGs were robust to at least three leave-one-out tests or different pre-processing methods; 51 DEGs were validated with genetic association signals. Pathway enrichment analysis showed these DEGs were related to regulation of the neurological system, cell death and apoptosis, and several basic binding processes. A protein-protein interaction network further identified one key hub gene. This is the most comprehensive integrated analysis of bipolar disorder expression profiling studies in the PFC to date. The DEGs, especially those with multiple validations, may denote a common signature of bipolar disorder and contribute to the pathogenesis of the disease.
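The mixed-effects step can be sketched on simulated data; this is a schematic only (assuming pandas and statsmodels; the column names `expr`, `diagnosis`, `dataset` and the random-intercept structure are illustrative, not the authors' exact model):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "expr": rng.normal(size=n),                    # one gene's expression
    "diagnosis": np.tile([0, 1], n // 2),          # control vs. bipolar
    "dataset": np.repeat(["d1", "d2", "d3"], n // 3),
})
# A random intercept per source data set absorbs cohort-level batch shifts,
# so the diagnosis effect is tested against within-cohort variation.
model = smf.mixedlm("expr ~ diagnosis", df, groups=df["dataset"])
result = model.fit()
pval = result.pvalues["diagnosis"]
```

In a full analysis this fit would be repeated per gene and the p-values corrected for multiple testing before declaring DEGs.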
Salgueiro, Ana Rita; Pereira, Henrique Garcia; Rico, Maria-Teresa; Benito, Gerardo; Díez-Herrero, Andrés
2008-02-01
A new statistical approach for preliminary risk evaluation of breakage in tailings dams is presented and illustrated by a case study regarding the Mediterranean region. The objective of the proposed method is to establish an empirical scale of risk, from which guidelines for prioritizing the collection of further specific information can be derived. The method relies on a historical database containing, in essence, two sets of qualitative data: the first set concerns the variables that are observable before the disaster (e.g., type and size of the dam, its location, and state of activity), and the second refers to the consequences of the disaster (e.g., failure type, sludge characteristics, fatalities categorization, and downstream range of damage). Based on a modified form of correspondence analysis, where the second set of attributes is projected as "supplementary variables" onto the axes provided by the eigenvalue decomposition of the matrix referring to the first set, a "qualitative regression" is performed, relating the variables to be predicted (contained in the second set) with the "predictors" (the observable variables). On the grounds of the previously derived relationship, the risk of breakage in a new case can be evaluated from its observable variables. The method was applied in a case study regarding a set of 13 test sites, where the ranking of risk obtained was validated by expert knowledge. Once validated, the procedure was included in the final output of the e-EcoRisk EU project (A Regional Enterprise Network Decision-Support System for Environmental Risk and Disaster Management of Large-Scale Industrial Spills), allowing for dynamic updating of the historical database and providing a prompt rough risk evaluation for a new case. The aim of this part of the global project is to provide a quantified context in which failure cases occurred in the past, supporting analogue reasoning in preventing similar situations.
The development and validation of the Closed-set Mandarin Sentence (CMS) test.
Tao, Duo-Duo; Fu, Qian-Jie; Galvin, John J; Yu, Ya-Feng
2017-09-01
Matrix-styled sentence tests offer a closed-set paradigm that may be useful when evaluating speech intelligibility. Ideally, sentence test materials should reflect the distribution of phonemes within the target language. We developed and validated the Closed-set Mandarin Sentence (CMS) test to assess Mandarin speech intelligibility in noise. CMS test materials were selected to be familiar words and to represent the natural distribution of vowels, consonants, and lexical tones found in Mandarin Chinese. Ten key words in each of five categories (Name, Verb, Number, Color, and Fruit) were produced by a native Mandarin talker, resulting in a total of 50 words that could be combined to produce 100,000 unique sentences. Normative data were collected in 10 normal-hearing, adult Mandarin-speaking Chinese listeners using a closed-set test paradigm. Two test runs were conducted for each subject, and 20 sentences per run were randomly generated while ensuring that each word was presented only twice in each run. First, the levels of the words in each category were adjusted to produce equal intelligibility in noise. Test-retest reliability for word-in-sentence recognition was excellent according to Cronbach's alpha (0.952). After the category level adjustments, speech reception thresholds (SRTs) for sentences in noise, defined as the signal-to-noise ratio (SNR) that produced 50% correct whole-sentence recognition, were adaptively measured by adjusting the SNR according to the correctness of the response. The mean SRT was -7.9 (SE=0.41) and -8.1 (SE=0.34) dB for runs 1 and 2, respectively. The mean standard deviation across runs was 0.93 dB, and paired t-tests showed no significant difference between runs 1 and 2 (p=0.74) despite random sentences being generated for each run and each subject. The results suggest that the CMS provides a large stimulus set with which to repeatedly and reliably measure Mandarin-speaking listeners' speech understanding in noise using a closed-set paradigm.
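Adaptive SRT tracking of the kind described can be illustrated with a simple simulated 1-down/1-up staircase (a sketch, not the CMS procedure's exact rule; the listener model, step size, and trial count are illustrative):

```python
import numpy as np

def measure_srt(true_srt, slope=1.0, step=2.0, trials=40, seed=1):
    """1-down/1-up adaptive track: lower the SNR after a correct response,
    raise it after an error; the track hovers near 50% correct."""
    rng = np.random.default_rng(seed)
    snr, last_dir, reversals = 0.0, 0, []
    for _ in range(trials):
        # Logistic psychometric function for the simulated listener.
        p_correct = 1.0 / (1.0 + np.exp(-slope * (snr - true_srt)))
        direction = -1 if rng.random() < p_correct else +1
        if last_dir and direction != last_dir:
            reversals.append(snr)   # record SNR at each direction change
        last_dir = direction
        snr += direction * step
    # Estimate the SRT as the mean SNR over the reversal points.
    return float(np.mean(reversals)) if reversals else snr

srt = measure_srt(true_srt=-8.0)
```

The staircase converges to the SNR where correct and incorrect responses are equally likely, i.e., the 50%-correct point that defines the SRT.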
16 CFR 303.10 - Fiber content of special types of products.
Code of Federal Regulations, 2010 CFR
2010-01-01
... percentages of such components by weight. (2) If the components of such fibers are of a matrix-fibril configuration, the term matrix-fibril fiber or matrix fiber may be used in setting forth the information...% Biconstituent Fiber (65% Nylon, 35% Polyester) 80% Matrix Fiber (60% Nylon, 40% Polyester) 15% Polyester 5...
State-Space System Realization with Input- and Output-Data Correlation
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan
1997-01-01
This paper introduces a general version of the information matrix consisting of the autocorrelation and cross-correlation matrices of the shifted input and output data. Based on the concept of data correlation, a new system realization algorithm is developed to create a model directly from input and output data. The algorithm starts by computing a special type of correlation matrix derived from the information matrix. The special correlation matrix provides information on the system-observability matrix and the state-vector correlation. A system model is then developed from the observability matrix in conjunction with other algebraic manipulations. This approach leads to several different algorithms for computing system matrices for use in representing the system model. The relationship of the new algorithms with other realization algorithms in the time and frequency domains is established with matrix factorization of the information matrix. Several examples are given to illustrate the validity and usefulness of these new algorithms.
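Realization from data in this spirit can be sketched with a classical eigensystem-realization-style factorization (a simplified Hankel/SVD sketch for illustration, not the paper's information-matrix algorithm):

```python
import numpy as np

def era(markov, n, p=5, q=5):
    """Factor a Hankel matrix of impulse-response (Markov) parameters
    via SVD to recover a state-space realization (A, B, C)."""
    H0 = np.array([[markov[i + j] for j in range(q)] for i in range(p)])
    H1 = np.array([[markov[i + j + 1] for j in range(q)] for i in range(p)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :n], s[:n], Vt[:n]
    S = np.diag(np.sqrt(s))
    O, R = U @ S, S @ Vt          # observability / controllability factors
    A = np.linalg.pinv(O) @ H1 @ np.linalg.pinv(R)
    B, C = R[:, :1], O[:1, :]
    return A, B, C

# True first-order system with markov[k] = C A^k B, A = 0.5, B = C = 1.
true_markov = [0.5 ** k for k in range(12)]
A, B, C = era(true_markov, n=1)
```

The shifted Hankel matrix H1 = O·A·R is what isolates the state matrix A, mirroring how the paper's shifted input/output correlations expose the observability matrix.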
Structured decomposition design of partial Mueller matrix polarimeters.
Alenin, Andrey S; Scott Tyo, J
2015-07-01
Partial Mueller matrix polarimeters (pMMPs) are active sensing instruments that probe a scattering process with a set of polarization states and analyze the scattered light with a second set of polarization states. Unlike conventional Mueller matrix polarimeters, pMMPs do not attempt to reconstruct the entire Mueller matrix. With proper choice of generator and analyzer states, a subset of the Mueller matrix space can be reconstructed with fewer measurements than that of the full Mueller matrix polarimeter. In this paper we consider the structure of the Mueller matrix and our ability to probe it using a reduced number of measurements. We develop analysis tools that allow us to relate the particular choice of generator and analyzer polarization states to the portion of Mueller matrix space that the instrument measures, as well as develop an optimization method that is based on balancing the signal-to-noise ratio of the resulting instrument with the ability of that instrument to accurately measure a particular set of desired polarization components with as few measurements as possible. In the process, we identify 10 classes of pMMP systems, for which the space coverage is immediately known. We demonstrate the theory with a numerical example that designs partial polarimeters for the task of monitoring the damage state of a material as presented earlier by Hoover and Tyo [Appl. Opt. 46, 8364 (2007)]. We show that we can reduce the polarimeter to making eight measurements while still covering the Mueller matrix subspace spanned by the objects.
Encoding the structure of many-body localization with matrix product operators
NASA Astrophysics Data System (ADS)
Pekker, David; Clark, Bryan K.
2017-01-01
Anderson insulators are noninteracting disordered systems which have localized single-particle eigenstates. The interacting analog of Anderson insulators are the many-body localized (MBL) phases. The spectrum of the many-body eigenstates of an Anderson insulator is efficiently represented as a set of product states over the single-particle modes. We show that product states over matrix product operators of small bond dimension are the corresponding efficient description of the spectrum of an MBL insulator. In this language all of the many-body eigenstates are encoded by matrix product states (i.e., density matrix renormalization group wave functions) consisting of only two sets of low-bond-dimension matrices per site: the Gi matrices corresponding to the local ground state on site i and the Ei matrices corresponding to the local excited state. All 2^n eigenstates can be generated from all possible combinations of these sets of matrices.
NLTE steady-state response matrix method.
NASA Astrophysics Data System (ADS)
Faussurier, G.; More, R. M.
2000-05-01
A connection between atomic kinetics and non-equilibrium thermodynamics has recently been established by using a collisional-radiative model modified to include line absorption. The calculated net emission can be expressed as a non-local thermodynamic equilibrium (NLTE) symmetric response matrix. In this paper, the connection is extended to both the average-atom model and Busquet's model (RAdiative-Dependent IOnization Model, RADIOM). The main properties of the response matrix remain valid. The RADIOM source function found in the literature leads to a diagonal response matrix, stressing the absence of any frequency redistribution among the frequency groups at this order of calculation.
Psotta, Rudolf; Abdollahipour, Reza
2017-12-01
The Movement Assessment Battery for Children-2nd Edition (MABC-2) is a test of motor development, widely used in clinical and research settings. To address which motor abilities are actually captured by the motor tasks in the two age versions of the MABC-2, the AB2 for 7-10-year-olds and the AB3 for 11-16-year-olds, we examined the factorial validity of the AB2 and AB3. We conducted confirmatory factor analysis (SPSS AMOS 22.0) on data from the test's standardization samples of children aged 7-10 (n = 483) and 11-16 (n = 674) in order to find the best-fitting models. The covariance matrices of the AB2 and AB3 fit a three-factor model that included tasks of manual dexterity, aiming and catching, and balance. However, the factor-analytic models fitting the AB2 and AB3 did not involve the dynamic balance tasks of hopping with the better leg and hopping with the other leg, and the drawing trail showed very low factor validity. In sum, both the AB2 and AB3 of the MABC-2 test are able to discriminate between the three specific motor abilities; but due to questionable psychometric quality, the drawing trail and hopping tasks should be modified to improve the construct validity of both age versions of the MABC-2.
Shende, Ravindra; Patel, Ganesh
2017-01-01
The objective of the present study is to determine the optimum value of the dosimetric leaf gap (DLG) and to validate it prior to its incorporation in the treatment planning system (TPS) for the Varian TrueBeam™ Millennium 120-leaf MLC. Partial transmission through the rounded leaf ends of the multileaf collimator (MLC) causes a mismatch between the edges of the light field and the radiation field. The parameter accounting for this partial transmission is called the dosimetric leaf gap (DLG). Complex high-precision techniques such as intensity-modulated radiation therapy (IMRT) entail modeling an optimum DLG value inside the Eclipse TPS for precise dose calculation. Synchronized, uniformly extended sweeping dynamic MLC leaf-gap fields created with the Varian MLC Shaper software were used to determine the DLG. DLG measurements performed with both a 0.13 cc semiflex ionization chamber and a 2D-Array I-Matrix were used to validate the DLG; similarly, DLG values were estimated from the TPS-predicted dose. The same mathematical approach was employed to determine the DLG from the delivered and the TPS-predicted dose. The DLG determined from the delivered dose measured with the ionization chamber (DLG_ion) and the I-Matrix (DLG_I-Matrix) was compared with the DLG estimated from the TPS-predicted dose (DLG_TPS). Measurements were carried out for all available beam energies: 6 MV, 10 MV, 15 MV, 6 MV FFF, and 10 MV FFF. The maximum and minimum deviations between measured and TPS-calculated DLG were found to be 0.2 mm and 0.1 mm, respectively. Both measured DLGs (DLG_ion and DLG_I-Matrix) were found to be in very good agreement with the DLG estimated from the TPS (DLG_TPS). The proposed method proved helpful in verifying and validating the DLG value prior to its clinical implementation in the TPS.
Evaluation of the Thermo Scientific™ SureTect™ Listeria species Assay.
Cloke, Jonathan; Evans, Katharine; Crabtree, David; Hughes, Annette; Simpson, Helen; Holopainen, Jani; Wickstrand, Nina; Kauppinen, Mikko; Leon-Velarde, Carlos; Larson, Nathan; Dave, Keron
2014-01-01
The Thermo Scientific SureTect Listeria species Assay is a new real-time PCR assay for the detection of all species of Listeria in food and environmental samples. This validation study was conducted using the AOAC Research Institute (RI) Performance Tested Methods program to validate the SureTect Listeria species Assay in comparison to the reference method detailed in International Organization for Standardization 11290-1:1996 including amendment 1:2004 in a variety of foods plus plastic and stainless steel. The food matrixes validated were smoked salmon, processed cheese, fresh bagged spinach, cantaloupe, cooked prawns, cooked sliced turkey meat, cooked sliced ham, salami, pork frankfurters, and raw ground beef. All matrixes were tested by Thermo Fisher Scientific, Microbiology Division, Basingstoke, UK. In addition, three matrixes (pork frankfurters, fresh bagged spinach, and stainless steel surface samples) were analyzed independently as part of the AOAC-RI-controlled independent laboratory study by the University of Guelph, Canada. Using probability of detection statistical analysis, a significant difference in favour of the SureTect assay was demonstrated between the SureTect and reference method for high-level spiked samples of pork frankfurters, smoked salmon, cooked prawns, stainless steel, and low-spiked samples of salami. For all other matrixes, no significant difference was seen between the two methods during the study. Inclusivity testing was conducted with 68 different isolates of Listeria species, all of which were detected by the SureTect Listeria species Assay. None of the 33 exclusivity isolates were detected by the SureTect Listeria species Assay. Ruggedness testing was conducted to evaluate the performance of the assay with specific method deviations outside of the recommended parameters open to variation, which demonstrated that the assay gave reliable performance.
Accelerated stability testing was additionally conducted, validating the assay shelf life.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falco, Maria Daniela, E-mail: mdanielafalco@hotmail.co; Fontanarosa, Davide; Miceli, Roberto
2011-04-01
Cone-beam X-ray volumetric imaging in the treatment room allows online correction of set-up errors and offline assessment of residual set-up errors and organ motion. In this study the registration algorithm of the X-ray volume imaging software (XVI, Elekta, Crawley, United Kingdom), which manages a commercial cone-beam computed tomography (CBCT)-based positioning system, was tested using a homemade and an anthropomorphic phantom to: (1) assess its performance in detecting known translational and rotational set-up errors and (2) transfer the transformation matrix of its registrations into a commercial treatment planning system (TPS) for offline organ motion analysis. Furthermore, the CBCT dose index has been measured for a particular site (prostate: 120 kV, 1028.8 mAs, approximately 640 frames) using a standard Perspex cylindrical body phantom (diameter 32 cm, length 15 cm) and a 10-cm-long pencil ionization chamber. We found that known displacements were correctly calculated by the registration software to within 1.3 mm and 0.4°. For the anthropomorphic phantom, only translational displacements were considered. Both studies showed errors within the intrinsic uncertainty of our system for translational displacements (estimated as 0.87 mm) and rotational displacements (estimated as 0.22°). The resulting table translations proposed by the system to correct the displacements were also checked with portal images and found to place the isocenter of the plan on the linac isocenter within an error of 1 mm, which is the dimension of the spherical lead marker inserted at the center of the homemade phantom. The registration matrix translated into the TPS image fusion module correctly reproduced the alignment between planning CT scans and CBCT scans. Finally, measurements of the CBCT dose index indicate that CBCT acquisition delivers less dose than conventional CT scans and electronic portal imaging device portals.
The registration software was found to be accurate, its registration matrix can be easily translated into the TPS, and a low dose is delivered to the patient during image acquisition. These results can help in designing imaging protocols for offline evaluations.
45 CFR 162.1011 - Valid code sets.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 45 Public Welfare 1 2012-10-01 2012-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...
45 CFR 162.1011 - Valid code sets.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 45 Public Welfare 1 2013-10-01 2013-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...
45 CFR 162.1011 - Valid code sets.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 45 Public Welfare 1 2010-10-01 2010-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...
45 CFR 162.1011 - Valid code sets.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 1 2014-10-01 2014-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare Department of Health and Human Services ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...
45 CFR 162.1011 - Valid code sets.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 1 2011-10-01 2011-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...
Inverse consistent non-rigid image registration based on robust point set matching
2014-01-01
Background: Robust point matching (RPM) has been extensively used in non-rigid registration of images to robustly register two sets of image points. However, except at the control points, RPM cannot estimate a consistent correspondence between two images because RPM is a unidirectional image matching approach. It is therefore an important issue to improve image registration based on RPM. Methods: In our work, a consistent image registration approach based on point set matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of estimating only the forward transformation between the source and target point sets, as in state-of-the-art RPM algorithms, the forward and backward transformations between the two point sets are estimated concurrently in our algorithm. Inverse consistency constraints are introduced into the cost function of RPM, and the fuzzy correspondences between the two point sets are estimated from both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into point matching in order to improve image matching. Results: Synthetic data sets and medical images are employed to demonstrate and validate the performance of our approach. The inverse consistency errors of our algorithm are smaller than those of RPM. In particular, the topology of the transformations is well preserved by our algorithm for large deformations between point sets. Moreover, the distance errors of our algorithm are similar to those of RPM and maintain a downward trend overall, which demonstrates the convergence of our algorithm. The registration errors for image registration are also evaluated. Our algorithm achieves lower registration errors in the same number of iterations. The determinant of the Jacobian matrix of the deformation field is used to analyse the smoothness of the forward and backward transformations. The forward and backward transformations estimated by our algorithm are smooth for small deformations. For registration of lung slices and individual brain slices, large or small determinants of the Jacobian matrix of the deformation fields are observed. Conclusions: The results indicate the improvement of the proposed algorithm in bi-directional image registration and the decrease of the inverse consistency errors of the forward and the reverse transformations between two images. PMID:25559889
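The inverse-consistency error can be made concrete with a toy metric: compose the forward and backward transforms and measure how far points move (an illustrative sketch with an affine pair, not the thin-plate spline machinery of the paper):

```python
import numpy as np

def inverse_consistency_error(fwd, bwd, points):
    """Mean distance between each point and its image under the
    backward transform applied after the forward transform."""
    round_trip = np.array([bwd(fwd(p)) for p in points])
    return float(np.mean(np.linalg.norm(round_trip - points, axis=1)))

# An exactly inverse affine pair: x -> A x + t and y -> A^-1 (y - t).
A = np.array([[1.1, 0.2], [0.0, 0.9]])
t = np.array([3.0, -1.0])
Ainv = np.linalg.inv(A)
pts = np.random.default_rng(2).normal(size=(100, 2))
ice = inverse_consistency_error(lambda p: A @ p + t,
                                lambda q: Ainv @ (q - t), pts)
```

For perfectly inverse transforms the error is zero; a unidirectional method that estimates each direction independently generally leaves a nonzero residual, which is what the consistency constraint in the cost function penalizes.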
Tian, Kuang-da; Qiu, Kai-xian; Li, Zu-hong; Lü, Ya-qiong; Zhang, Qiu-ju; Xiong, Yan-mei; Min, Shun-geng
2014-12-01
The purpose of the present paper is to determine calcium and magnesium in tobacco using NIR spectroscopy combined with least squares-support vector machine (LS-SVM). Five hundred ground and dried tobacco samples from Qujing city, Yunnan province, China, were scanned with a MATRIX-I spectrometer (Bruker Optics, Bremen, Germany). At the beginning of data processing, outlier samples were eliminated for stability of the model. The remaining 487 samples were divided into several calibration sets and validation sets according to a hybrid modeling strategy. Monte Carlo cross-validation was used to choose the best spectral preprocessing method from multiplicative scatter correction (MSC), standard normal variate transformation (SNV), S-G smoothing, 1st derivative, etc., and their combinations. To optimize the parameters of the LS-SVM model, multilayer grid search and 10-fold cross-validation were applied. The final LS-SVM models with the optimized parameters were trained on the calibration set and assessed with 287 validation samples picked by the Kennard-Stone method. For the quantitative model of calcium in tobacco, Savitzky-Golay FIR smoothing with frame size 21 showed the best performance. The regularization parameter λ of the LS-SVM was e^16.11, while the bandwidth of the RBF kernel σ² was e^8.42. The determination coefficient for calibration (Rc²) was 0.9755 and the determination coefficient for prediction (Rp²) was 0.9422, better than the performance of the PLS model (Rc²=0.9593, Rp²=0.9344). For the quantitative analysis of magnesium, SNV made the regression model more precise than other preprocessing methods. The optimized λ was e^15.25 and σ² was e^6.32. Rc² and Rp² were 0.9961 and 0.9301, respectively, better than the PLS model (Rc²=0.9716, Rp²=0.8924). After modeling, the whole process of NIR scanning and data analysis for one sample took only tens of seconds.
The overall results show that NIR spectroscopy combined with LS-SVM can be efficiently utilized for rapid and accurate analysis of calcium and magnesium in tobacco.
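The LS-SVM regression at the core of such a model reduces to a single linear solve rather than a quadratic program; a minimal sketch (illustrative data and hyperparameters, not the paper's optimized λ and σ²):

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0, sigma2=1.0):
    """Least-squares SVM regression with an RBF kernel: solve
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma2))
    n = len(y)
    M = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / gamma]])
    sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]

    def predict(Xq):
        dq = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-dq / (2 * sigma2)) @ alpha + b

    return predict

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0])
predict = lssvm_fit(X, y, gamma=100.0, sigma2=0.5)
```

Replacing the SVM's inequality constraints with equality constraints is what turns training into this dense linear system, which is why grid search over (λ, σ²) is cheap enough to nest inside cross-validation.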
The Impact of Goal Setting and Empowerment on Governmental Matrix Organizations
1993-09-01
shared. In a study of matrix management, Eduardo Vasconcellos further describes various matrix structures in the Galbraith model. In a functional...Technology/LAR, Wright-Patterson AFB OH, 1992. Vasconcellos, Eduardo. "A Model For a Better Understanding of the Matrix Structure," IEEE Transactions on...project matrix, the project manager maintains more influence and the structure lies to the right of center (Vasconcellos, 1979:58). Different Types of
General linear codes for fault-tolerant matrix operations on processor arrays
NASA Technical Reports Server (NTRS)
Nair, V. S. S.; Abraham, J. A.
1988-01-01
Various checksum codes have been suggested for fault-tolerant matrix computations on processor arrays. Use of these codes is limited due to potential roundoff and overflow errors. Numerical errors may also be misconstrued as errors due to physical faults in the system. In this paper, a set of linear codes is identified which can be used for fault-tolerant matrix operations such as matrix addition, multiplication, transposition, and LU decomposition with minimum numerical error. Encoding schemes are given for some of the example codes which fall under the general set of codes. With the help of experiments, a rule of thumb for the selection of a particular code for a given application is derived.
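The checksum idea can be illustrated with the classical unweighted [1 1 ... 1] column/row checksum encoding (a sketch of the basic scheme; the general linear codes of the paper amount to choosing other weight vectors):

```python
import numpy as np

def column_checksum(A):
    """Append a row holding each column's sum (the [1 1 ... 1] code)."""
    return np.vstack([A, A.sum(axis=0)])

def row_checksum(B):
    """Append a column holding each row's sum."""
    return np.hstack([B, B.sum(axis=1, keepdims=True)])

def check_full(C):
    """A full-checksum matrix is consistent iff its last row and column
    still equal the sums of the data part; a fault breaks this."""
    data = C[:-1, :-1]
    ok_rows = np.allclose(C[:-1, -1], data.sum(axis=1))
    ok_cols = np.allclose(C[-1, :-1], data.sum(axis=0))
    return ok_rows and ok_cols

A = np.arange(9.0).reshape(3, 3)
B = np.arange(9.0, 18.0).reshape(3, 3)
# Multiplying the encoded factors yields a full-checksum product:
# checksums are preserved by the matrix multiplication itself.
C = column_checksum(A) @ row_checksum(B)
fault = C.copy()
fault[1, 2] += 5.0   # inject a single processor fault
```

The intersection of the failing row check and column check even locates the faulty element, which is what makes the encoding attractive on processor arrays.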
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moester, Martiene J.C.; Schoeman, Monique A.E.; Oudshoorn, Ineke B.
2014-01-03
Highlights: •We validate a simple and fast method for quantification of in vitro mineralization. •Fluorescently labeled agents can detect calcium deposits in the mineralized matrix of cell cultures. •Fluorescent signals of the probes correlated with Alizarin Red S staining. -- Abstract: Alizarin Red S staining is the standard method to indicate and quantify matrix mineralization during differentiation of osteoblast cultures. KS483 cells are multipotent mouse mesenchymal progenitor cells that can differentiate into chondrocytes, adipocytes, and osteoblasts and are a well-characterized model for the study of bone formation. Matrix mineralization is the last step of differentiation of bone cells and is therefore a very important outcome measure in bone research. Fluorescently labeled calcium-chelating agents, e.g., BoneTag and OsteoSense, are currently used for in vivo imaging of bone. The aim of the present study was to validate these probes for fast and simple detection and quantification of in vitro matrix mineralization by KS483 cells, thus enabling high-throughput screening experiments. KS483 cells were cultured under osteogenic conditions in the presence of compounds that either stimulate or inhibit osteoblast differentiation and thereby matrix mineralization. After 21 days of differentiation, fluorescence of stained cultures was quantified with a near-infrared imager and compared to Alizarin Red S quantification. Fluorescence of both probes closely correlated with Alizarin Red S staining in both inhibiting and stimulating conditions. In addition, both compounds displayed specificity for mineralized nodules. We therefore conclude that this method of quantification of bone mineralization using fluorescent compounds is a good alternative to Alizarin Red S staining.
NASA Astrophysics Data System (ADS)
Adhikari, Nilanjan; Amin, Sk. Abdul; Saha, Achintya; Jha, Tarun
2018-03-01
Matrix metalloproteinase-2 (MMP-2) is a promising pharmacological target for designing potential anticancer drugs. MMP-2 plays critical functions in apoptosis by cleaving the DNA repair enzyme poly (ADP-ribose) polymerase (PARP). Moreover, MMP-2 expression triggers the vascular endothelial growth factor (VEGF), which has a positive influence on tumor size, invasion, and angiogenesis. Therefore, there is an urgent need to develop potential MMP-2 inhibitors with low toxicity and better pharmacokinetic properties. In this article, robust validated multi-quantitative structure-activity relationship (QSAR) modeling approaches were applied to a dataset of 222 MMP-2 inhibitors to explore the important structural and pharmacophoric requirements for higher MMP-2 inhibition. Different validated regression and classification-based QSARs, pharmacophore mapping and 3D-QSAR techniques were performed. These results were challenged and subjected to further validation against 24 in-house MMP-2 inhibitors to judge the reliability of the models. All models were individually validated internally as well as externally, and were supported and validated by each other. The results were further justified by molecular docking analysis. The modeling techniques adopted here help not only to explore the necessary structural and pharmacophoric requirements but also to provide overall validation and refinement tools for designing potential MMP-2 inhibitors.
Using Experiential Methods To Teach about Measurement Validity.
ERIC Educational Resources Information Center
Alderfer, Clayton P.
2003-01-01
Indirectly, instructor behavior provided two models for using a multitrait-multimethod matrix. Students who formulated their own concept, created empirical indicators, and assessed convergent and discriminant validity had better results than those who, influenced by classroom authority dynamics, followed a poorly formulated concept with a…
The Lehmer Matrix and Its Recursive Analogue
2010-01-01
LU factorization of matrix A by considering det A = det U = ∏_{i=1}^{n} (2i−1)/i². The nth Catalan number is given in terms of binomial coefficients by Cn…
CMC Research at NASA Glenn in 2016: Recent Progress and Plans
NASA Technical Reports Server (NTRS)
Grady, Joseph E.
2016-01-01
As part of NASA's Aeronautical Sciences project, Glenn Research Center has developed advanced fiber and matrix constituents for a 2700 degrees Fahrenheit CMC (Ceramic Matrix Composite) for turbine engine applications. Fiber and matrix development and characterization will be reviewed. Resulting improvements in CMC mechanical properties and durability will be summarized. Plans for 2015 will be described, including development and validation of models predicting effects of the engine environment on durability of SiC/SiC composites with Environmental Barrier Coatings (EBCs).
Zhang, Dan; Wang, Xiaolin; Liu, Man; Zhang, Lina; Deng, Ming; Liu, Huichen
2015-01-01
A rapid, sensitive and accurate ICP-MS method using an alternate analyte-free matrix for calibration standards preparation and a rapid direct dilution procedure for sample preparation was developed and validated for the quantification of exogenous strontium (Sr) from the drug in human serum. Serum was prepared by direct dilution (1:29, v/v) in an acidic solution consisting of nitric acid (0.1%) and germanium (Ge) added as internal standard (IS), to obtain a simple and high-throughput preparation procedure with minimized matrix effect and good repeatability. ICP-MS analysis was performed using collision cell technology (CCT) mode. An alternate matrix method, using distilled water as an analyte-free matrix for the preparation of calibration standards (CS), was used to avoid the influence of endogenous Sr in serum on the quantification. The method was validated in terms of selectivity, carry-over, matrix effects, lower limit of quantification (LLOQ), linearity, precision and accuracy, and stability. Instrumental linearity was verified in the range of 1.00-500 ng/mL, corresponding to a concentration range of 0.0300-15.0 μg/mL in a 50 μL sample of serum matrix and alternate matrix. Intra- and inter-day precision as relative standard deviation (RSD) were less than 8.0% and accuracy as relative error (RE) was within ±3.0%. The method allowed a high sample throughput, and was sensitive and accurate enough for a pilot bioequivalence study in healthy male Chinese subjects following single oral administration of two strontium ranelate formulations containing 2 g strontium ranelate. Copyright © 2014 Elsevier GmbH. All rights reserved.
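The precision and accuracy criteria quoted in this abstract (RSD below 8.0%, RE within ±3.0%) are standard validation statistics; the sketch below shows how they are computed, with helper names and example numbers that are purely illustrative.

```python
# Sketch of the two validation statistics: precision as relative standard
# deviation (RSD) and accuracy as relative error (RE), both in percent.
from statistics import mean, stdev

def rsd_percent(values):
    # Sample standard deviation expressed as a percentage of the mean.
    return 100 * stdev(values) / mean(values)

def re_percent(measured_mean, nominal):
    # Signed deviation of the measured mean from the nominal concentration.
    return 100 * (measured_mean - nominal) / nominal
```

For example, replicate measurements of [9, 10, 11] ng/mL give an RSD of 10%, and a measured mean of 102 against a nominal 100 gives an RE of +2%.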
Fukui, Atsuko; Fujii, Ryuta; Yonezawa, Yorinobu; Sunada, Hisakazu
2004-03-01
The release properties of phenylpropanolamine hydrochloride (PPA) from ethylcellulose (EC) matrix granules prepared by an extrusion granulation method were examined. The release process could be divided into two parts; the first and second stages were analyzed by applying square-root time law and cube-root law equations, respectively. The validity of the treatments was confirmed by the fitness of a simulation curve with the measured curve. In the first stage, PPA was released from the gel layer of swollen EC in the matrix granules. In the second stage, the drug existing below the gel layer dissolved and was released through the gel layer. The effect of the binder solution on the release from EC matrix granules was also examined. The binder solutions were prepared from various EC and ethanol (EtOH) concentrations. The media changed from a good solvent to a poor solvent with decreasing EtOH concentration. The matrix structure changed from loose to compact with increasing EC concentration. The preferable EtOH concentration region was observed when the release process was easily predictable. The time and release ratio at the connection point of the simulation curves were also examined to determine the validity of the analysis.
Shrinkage covariance matrix approach based on robust trimmed mean in gene sets detection
NASA Astrophysics Data System (ADS)
Karjanto, Suryaefiza; Ramli, Norazan Mohamed; Ghani, Nor Azura Md; Aripin, Rasimah; Yusop, Noorezatty Mohd
2015-02-01
Microarray technology involves placing an orderly arrangement of thousands of gene sequences in a grid on a suitable surface. The technology has enabled novel discoveries since its development and has attracted increasing attention among researchers. The widespread adoption of microarray technology is largely due to its ability to perform simultaneous analysis of thousands of genes in a massively parallel manner in one experiment. Hence, it provides valuable knowledge on gene interaction and function. A microarray data set typically consists of tens of thousands of genes (variables) from just dozens of samples due to various constraints. Therefore, the sample covariance matrix in Hotelling's T2 statistic is not positive definite and becomes singular, so it cannot be inverted. In this research, Hotelling's T2 statistic is combined with a shrinkage approach as an alternative way to estimate the covariance matrix for detecting significant gene sets. The use of a shrinkage covariance matrix overcomes the singularity problem by converting an unbiased estimator into an improved biased estimator of the covariance matrix. A robust trimmed mean is integrated into the shrinkage matrix to reduce the influence of outliers and consequently increase its efficiency. The performance of the proposed method is measured using several simulation designs. The results are expected to outperform existing techniques in many tested conditions.
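The combination described above can be sketched as follows: a trimmed mean replaces the ordinary mean, and the off-diagonal covariances are shrunk toward a diagonal target so the estimate stays invertible when variables outnumber samples. The shrinkage intensity here is a fixed illustrative constant, not the estimated intensity a real implementation would use.

```python
# Sketch of a shrinkage covariance estimate built around a trimmed mean.
def trimmed_mean(x, trim=0.1):
    # Drop the lowest and highest `trim` fraction before averaging.
    xs = sorted(x)
    k = int(len(xs) * trim)
    kept = xs[k:len(xs) - k] if k else xs
    return sum(kept) / len(kept)

def shrinkage_cov(X, shrink=0.2, trim=0.1):
    n, p = len(X), len(X[0])
    mu = [trimmed_mean([row[j] for row in X], trim) for j in range(p)]
    # Sample covariance computed about the trimmed means.
    S = [[sum((X[i][a] - mu[a]) * (X[i][b] - mu[b]) for i in range(n)) / (n - 1)
          for b in range(p)] for a in range(p)]
    # Shrink off-diagonal entries toward zero; the diagonal is kept, which
    # regularizes the matrix toward an invertible estimate even when p > n.
    return [[S[a][b] if a == b else (1 - shrink) * S[a][b]
             for b in range(p)] for a in range(p)]
```

The regularized matrix can then be inverted inside Hotelling's T2 even in the p >> n microarray regime where the plain sample covariance is singular.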
NASA Astrophysics Data System (ADS)
Ward, Logan; Liu, Ruoqian; Krishna, Amar; Hegde, Vinay I.; Agrawal, Ankit; Choudhary, Alok; Wolverton, Chris
2017-07-01
While high-throughput density functional theory (DFT) has become a prevalent tool for materials discovery, it is limited by the relatively large computational cost. In this paper, we explore using DFT data from high-throughput calculations to create faster, surrogate models with machine learning (ML) that can be used to guide new searches. Our method works by using decision tree models to map DFT-calculated formation enthalpies to a set of attributes consisting of two distinct types: (i) composition-dependent attributes of elemental properties (as have been used in previous ML models of DFT formation energies), combined with (ii) attributes derived from the Voronoi tessellation of the compound's crystal structure. The ML models created using this method have half the cross-validation error and similar training and evaluation speeds to models created with the Coulomb matrix and partial radial distribution function methods. For a dataset of 435 000 formation energies taken from the Open Quantum Materials Database (OQMD), our model achieves a mean absolute error of 80 meV/atom in cross validation, which is lower than the approximate error between DFT-computed and experimentally measured formation enthalpies and below 15% of the mean absolute deviation of the training set. We also demonstrate that our method can accurately estimate the formation energy of materials outside of the training set and be used to identify materials with especially large formation enthalpies. We propose that our models can be used to accelerate the discovery of new materials by identifying the most promising materials to study with DFT at little additional computational cost.
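The cross-validation error reported in the abstract above is the standard k-fold mean absolute error. The sketch below computes it in pure Python, with a 1-nearest-neighbour regressor standing in for the decision-tree ensemble and invented (attribute, energy) pairs; none of this is the paper's actual model or data.

```python
# Sketch of k-fold cross-validated mean absolute error (MAE).
def knn1_predict(train, x):
    # Predict the target of the closest training point (squared Euclidean
    # distance over the attribute vector).
    return min(train, key=lambda t: sum((a - b) ** 2
                                        for a, b in zip(t[0], x)))[1]

def cv_mae(data, k=5):
    # data: list of (attribute_tuple, target) pairs.
    folds = [data[i::k] for i in range(k)]
    errs = []
    for i, test in enumerate(folds):
        train = [d for j, f in enumerate(folds) if j != i for d in f]
        errs += [abs(knn1_predict(train, x) - y) for x, y in test]
    return sum(errs) / len(errs)
```

Replacing the stand-in predictor with a tree ensemble over composition and Voronoi-tessellation attributes would give the kind of surrogate model the paper evaluates this way.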
NASA Astrophysics Data System (ADS)
Schaumann, Ina; Malzer, Wolfgang; Mantouvalou, Ioanna; Lühl, Lars; Kanngießer, Birgit; Dargel, Rainer; Giese, Ulrich; Vogt, Carla
2009-04-01
For the validation of the quantification of the newly-developed method of 3D Micro X-ray fluorescence spectroscopy (3D Micro-XRF) samples with a low average Z matrix and minor high Z elements are best suited. In a light matrix the interferences by matrix effects are minimized so that organic polymers are appropriate as basis for analytes which are more easily detected by X-ray fluorescence spectroscopy. Polymer layer systems were assembled from single layers of ethylene-propylene-diene rubber (EPDM) filled with changing concentrations of silica and zinc oxide as inorganic additives. Layer thicknesses were in the range of 30-150 μm. Before the analysis with 3D Micro-XRF all layers have been characterized by scanning micro-XRF with regard to filler dispersion, by infrared microscopy and light microscopy in order to determine the layer thicknesses and by ICP-OES to verify the concentration of the X-ray sensitive elements in the layers. With the results obtained for stacked polymer systems the validity of the analytical quantification model for the determination of stratified materials by 3D Micro-XRF could be demonstrated.
Noegrohati, Sri; Hernadi, Elan; Asviastuti, Syanti
2018-06-01
Production of red flesh dragon fruit (Hylocereus polyrhizus) is hampered by Colletotrichum sp. Pre-harvest application of an azoxystrobin and difenoconazole mixture is recommended; therefore, a selective and sensitive multi-residue analytical method is required for monitoring and evaluating the commodity's safety. LC-MS/MS is a well-established analytical technique for qualitative and quantitative determination in complex matrices. However, the method is hampered by interference from co-eluting coextractives. This work evaluated the effect of pH in acetate-buffered and citrate-buffered QuEChERS sample preparation on their effectiveness in reducing matrix effects. Citrate-buffered QuEChERS proved to produce a clean final extract with a relative matrix effect of 0.4%-0.7%. Method validation of the selected sample preparation followed by LC-MS/MS for whole dragon fruit, flesh and peel matrices fortified at 0.005, 0.01, 0.1 and 1 μg/g showed recoveries of 75%-119% and intermediate repeatability of 2%-14%. The expanded uncertainties were 7%-48%. Based on the international acceptance criteria, this method is valid.
Developmental Validation of a novel 5 dye Y-STR System comprising the 27 YfilerPlus loci
Bai, Rufeng; Liu, Yaju; Li, Zheng; Jin, Haiying; Tian, Qinghua; Shi, Meisen; Ma, Shuhua
2016-01-01
In this study, a new STRtyper-27Y system, including the same Yfiler Plus loci (DYS19, DYS389I, DYS389II, DYS390, DYS391, DYS392, DYS393, DYS385a/b, DYS437, DYS438, DYS439, DYS448, DYS456, DYS458, DYS635, Y-GATA H4, DYS449, DYS460, DYS481, DYS518, DYS533, DYS570, DYS576, DYS627 and DYF387S1a/b), was established using a set of 5 fluorescent dye labels. Primers, internal size standard, allelic ladders and matrix standard set were designed and created in-house for this multiplex system. This paper describes the validation studies conducted with the STRtyper-27Y system using a 3130XL genetic analyzer for fragment length detection, including analysis of the following parameters and aspects: sensitivity, species specificity, inhibition, haplotype concordance, precision, stutter, DNA mixtures, and stability studies with crime scene samples. The studies demonstrated that the STRtyper-27Y system provided overall performance comparable to the latest Yfiler Plus kit, but with enhanced compatibility in terms of instrument platforms and software, allowing forensic laboratories to conduct forensic applications and evaluate performance in their own 5 dye Y-STR chemistry system/environment without software or instrument upgrades. PMID:27406339
Niedhammer, Isabelle; Chastang, Jean-François; Levy, David; David, Simone; Degioanni, Stéphanie; Theorell, Töres
2008-10-01
To construct and evaluate the validity of a job-exposure matrix (JEM) for psychosocial work factors defined by Karasek's model using national representative data of the French working population. A national sample of 24,486 men and women filled in the Job Content Questionnaire (JCQ) by Karasek, measuring the scores of psychological demands, decision latitude, and social support (individual scores) in 2003 (response rate 96.5%). Median values of the three scores in the total sample of men and women were used to define high demands, low latitude, and low support (individual binary exposures). Job title was defined by both occupation and economic activity, coded using detailed national classifications (PCS and NAF/NACE). Two JEM measures were calculated from the individual scores of demands, latitude and support for each job title: JEM scores (mean of the individual scores) and JEM binary exposures (JEM score dichotomized at the median). Validity was assessed through the variance of the individual scores of demands, latitude, and support explained by occupations and economic activities; the correlation and agreement between individual measures and JEM measures; the sensitivity and specificity of JEM exposures; and the associations with self-reported health. These analyses showed a low validity of JEM measures for psychological demands and social support, and a relatively higher validity for decision latitude, compared with individual measures. The job-exposure matrix measure for decision latitude might be used as a complementary exposure assessment. Further research is needed to evaluate the validity of JEMs for psychosocial work factors.
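The JEM construction described in this abstract (mean of individual scores per job title, then dichotomization at the median) can be sketched directly; the job titles and scores below are invented for illustration.

```python
# Sketch of building a job-exposure matrix (JEM) from individual scores:
# average scores within each job title, then flag jobs above the median
# JEM score as "exposed", following the construction in the abstract.
from statistics import mean, median

def build_jem(records):
    # records: list of (job_title, individual_score) pairs.
    by_job = {}
    for job, score in records:
        by_job.setdefault(job, []).append(score)
    jem_scores = {job: mean(scores) for job, scores in by_job.items()}
    cut = median(jem_scores.values())
    # Each job title maps to (JEM score, JEM binary exposure).
    return {job: (s, s > cut) for job, s in jem_scores.items()}
```

One such matrix would be built per dimension (demands, latitude, support); validating it then amounts to comparing these job-level assignments against the individual-level scores and exposures.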
Nielsen, Morten; Lundegaard, Claus; Lund, Ole
2007-01-01
Background Antigen presenting cells (APCs) sample the extracellular space and present peptides from it to T helper cells, which can be activated if the peptides are of foreign origin. The peptides are presented on the surface of the cells in complex with major histocompatibility class II (MHC II) molecules. Identification of peptides that bind MHC II molecules is thus a key step in rational vaccine design, and methods for accurate prediction of the peptide:MHC interactions play a central role in epitope discovery. The MHC class II binding groove is open at both ends, making the correct alignment of a peptide in the binding groove a crucial part of identifying the core of an MHC class II binding motif. Here, we present a novel stabilization matrix alignment method, SMM-align, that allows for direct prediction of peptide:MHC binding affinities. The predictive performance of the method is validated on a large MHC class II benchmark data set covering 14 HLA-DR (human MHC) and three mouse H2-IA alleles. Results The predictive performance of the SMM-align method was demonstrated to be superior to that of the Gibbs sampler, TEPITOPE, SVRMHC, and MHCpred methods. Cross validation between peptide data sets obtained from different sources demonstrated that direct incorporation of peptide length potentially results in over-fitting of the binding prediction method. Focusing on amino terminal peptide flanking residues (PFR), we demonstrate a consistent gain in predictive performance by favoring binding registers with a minimum PFR length of two amino acids. Visualizing the binding motif as obtained by the SMM-align and TEPITOPE methods highlights a series of fundamental discrepancies between the two predicted motifs. For the DRB1*1302 allele for instance, the TEPITOPE method favors basic amino acids at most anchor positions, whereas the SMM-align method identifies a preference for hydrophobic or neutral amino acids at the anchors.
Conclusion The SMM-align method was shown to outperform other state-of-the-art MHC class II prediction methods. The method predicts quantitative peptide:MHC binding affinity values, making it ideally suited for rational epitope discovery. The method has been trained and evaluated on, to our knowledge, the largest benchmark data set publicly available, covering the nine suggested HLA-DR supertypes as well as three mouse H2-IA alleles. Both the peptide benchmark data set and the SMM-align prediction method (NetMHCII) are made publicly available. PMID:17608956
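The alignment problem this record describes, finding the binding core in an open-ended groove, reduces to sliding a fixed-length core along the peptide and scoring each register with a position-specific matrix. The toy sketch below shows only that register search; the scoring matrix is invented and the real SMM-align method additionally learns the matrix and weights flanking residues.

```python
# Toy sketch of matrix-based core alignment: score every 9-mer register of
# a peptide against a position-specific scoring matrix and keep the best.
def best_core(peptide, pssm, core_len=9):
    # pssm: list of core_len dicts mapping amino acid -> score.
    best = None
    for i in range(len(peptide) - core_len + 1):
        score = sum(pssm[pos].get(aa, 0.0)
                    for pos, aa in enumerate(peptide[i:i + core_len]))
        if best is None or score > best[1]:
            best = (i, score)
    return best  # (offset of the best-scoring core, its score)
```

The PFR preference the abstract reports would enter here as a bonus for registers that leave at least two residues flanking the core.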
ERIC Educational Resources Information Center
Sousa, Joao Carlos; Costa, Manuel Joao; Palha, Joana Almeida
2010-01-01
The biochemistry and molecular biology of the extracellular matrix (ECM) is difficult to convey to students in a classroom setting in ways that capture their interest. Understanding of the matrix's roles in physiological and pathological conditions will presumably be hampered by insufficient knowledge of its molecular structure…
Development and Preliminary Validation of the Strategic Thinking Mindset Test (STMT)
2017-06-01
reliability. The test's three subscales (intellectual flexibility, inclusiveness, and humility) each correlated significantly with alternative measures of… [Report front-matter residue: Stage 4 sample demographics; interitem correlation matrix (all items); item-scale and validity correlations (all items).]
The Matrix Analogies Test: A Validity Study with the K-ABC.
ERIC Educational Resources Information Center
Smith, Douglas K.
The Matrix Analogies Test-Expanded Form (MAT-EF) and Kaufman Assessment Battery for Children (K-ABC) were administered in counterbalanced order to two randomly selected samples of students in grades 2 through 5. The MAT-EF was recently developed to measure non-verbal reasoning. The samples included 26 non-handicapped second graders in a rural…
Cook, Sarah F; King, Amber D; van den Anker, John N; Wilkins, Diana G
2015-12-15
Drug metabolism plays a key role in acetaminophen (paracetamol)-induced hepatotoxicity, and quantification of acetaminophen metabolites provides critical information about factors influencing susceptibility to acetaminophen-induced hepatotoxicity in clinical and experimental settings. The aims of this study were to develop, validate, and apply high-performance liquid chromatography-electrospray ionization-tandem mass spectrometry (HPLC-ESI-MS/MS) methods for simultaneous quantification of acetaminophen, acetaminophen-glucuronide, acetaminophen-sulfate, acetaminophen-glutathione, acetaminophen-cysteine, and acetaminophen-N-acetylcysteine in small volumes of human plasma and urine. In the reported procedures, acetaminophen-d4 and acetaminophen-d3-sulfate were utilized as internal standards (IS). Analytes and IS were recovered from human plasma (10 μL) by protein precipitation with acetonitrile. Human urine (10 μL) was prepared by fortification with IS followed only by sample dilution. Calibration concentration ranges were tailored to literature values for each analyte in each biological matrix. Prepared samples from plasma and urine were analyzed under the same HPLC-ESI-MS/MS conditions, and chromatographic separation was achieved through use of an Agilent Poroshell 120 EC-C18 column with a 20-min run time per injected sample. The analytes could be accurately and precisely quantified over 2.0-3.5 orders of magnitude. Across both matrices, mean intra- and inter-assay accuracies ranged from 85% to 112%, and intra- and inter-assay imprecision did not exceed 15%. Validation experiments included tests for specificity, recovery and ionization efficiency, inter-individual variability in matrix effects, stock solution stability, and sample stability under a variety of storage and handling conditions (room temperature, freezer, freeze-thaw, and post-preparative).
The utility and suitability of the reported procedures were illustrated by analysis of pharmacokinetic samples collected from neonates receiving intravenous acetaminophen. Copyright © 2015 Elsevier B.V. All rights reserved.
Matrix form of Legendre polynomials for solving linear integro-differential equations of high order
NASA Astrophysics Data System (ADS)
Kammuji, M.; Eshkuvatov, Z. K.; Yunus, Arif A. M.
2017-04-01
This paper presents an effective approximate solution of high-order Fredholm-Volterra integro-differential equations (FVIDEs) with boundary conditions. A truncated Legendre series is used as the basis functions to estimate the unknown function. Matrix operations on Legendre polynomials are used to transform the FVIDEs with boundary conditions into a matrix equation of Fredholm-Volterra type. The Gauss-Legendre quadrature formula and the collocation method are applied to transfer the matrix equation into a system of linear algebraic equations, which is then solved by the Gauss elimination method. The accuracy and validity of this method are discussed by solving two numerical examples and through comparisons with wavelet and other existing methods.
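The basis-expansion, quadrature, and collocation pipeline described above can be illustrated on a simpler cousin of the FVIDE: a Fredholm integral equation of the second kind, u(x) - ∫ K(x,t) u(t) dt = f(x) on [-1, 1]. The sketch below omits the derivative and Volterra terms and the boundary conditions of the paper; it only shows the Legendre-basis collocation step and the final linear solve.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Collocation sketch: expand u in Legendre polynomials P_0..P_{n-1}, enforce
# the equation at n collocation points, evaluate the integral with a
# Gauss-Legendre rule, and solve the resulting linear system.
def solve_fredholm(K, f, n_basis=4):
    xs = np.cos(np.pi * (np.arange(n_basis) + 0.5) / n_basis)  # collocation pts
    t, w = leggauss(8)                                         # quadrature rule
    A = np.empty((n_basis, n_basis))
    for j in range(n_basis):
        c = np.zeros(n_basis)
        c[j] = 1.0                                             # selects P_j
        # Row entries: P_j(x_i) - integral of K(x_i, t) P_j(t) dt.
        A[:, j] = legval(xs, c) - np.array(
            [np.sum(w * K(x, t) * legval(t, c)) for x in xs])
    coeffs = np.linalg.solve(A, f(xs))                         # Gauss elimination
    return lambda x: legval(x, coeffs)
```

With the separable kernel K(x,t) = xt/2 and f(x) = 2x/3, the exact solution is u(x) = x, which the sketch recovers to machine precision.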
Chen, Mei; Collins, Erin M; Tao, Lin; Lu, Chensheng
2013-11-01
The neonicotinoids have recently been identified as a potential contributing factor to the sudden decline in adult honeybee population, commonly known as colony collapse disorder (CCD). To protect the health of honeybees and other pollinators, a new, simple, and sensitive liquid chromatography-electrospray ionization mass spectrometry method was developed and validated for simultaneous determination of eight neonicotinoids, including acetamiprid, clothianidin, dinotefuran, flonicamid, imidacloprid, nitenpyram, thiacloprid, and thiamethoxam, in pollen and high-fructose corn syrup (HFCS). In this method, eight neonicotinoids, along with their isotope-labeled internal standards, were extracted from 2 g of pollen or 5 g of HFCS using an optimized quick, easy, cheap, effective, rugged, and safe extraction procedure. The method limits of detection in pollen and HFCS matrices were 0.03 ng/g for acetamiprid, clothianidin, dinotefuran, imidacloprid, thiacloprid, and thiamethoxam and ranged between 0.03 and 0.1 ng/g for nitenpyram and flonicamid. The precision and accuracy were well within the acceptable 20% range. Selectivity, linearity, lower limit of quantitation, matrix effect, recovery, and stability in autosampler were also evaluated during validation. This validated method has been used successfully in analyzing a set of pollen and HFCS samples collected for evaluating potential honeybee exposure to neonicotinoids.
Koppel, Ross; Kuziemsky, Craig
2017-01-01
Usability of health information technology (HIT), if considered at all, is usually focused on individual providers, settings and vendors. However, in light of transformative models of healthcare delivery, such as collaborative care delivery that crosses providers and settings, we need to think of usability as a collective and constantly emerging process. To address this new reality, we develop a matrix of usability that spans several dimensions and contexts, incorporating differing vendors, users, settings, disciplines, and display configurations. The matrix, while conceptual, extends existing work by providing the means for discussing usability issues and needs beyond one setting and one user type.
3D Texture Analysis in Renal Cell Carcinoma Tissue Image Grading
Cho, Nam-Hoon; Choi, Heung-Kook
2014-01-01
One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level cooccurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system. PMID:25371701
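The gray level co-occurrence matrix the study extends to 3D can be shown compactly in 2D: count how often gray level i occurs at a fixed offset from gray level j, then derive Haralick-style features from the normalized counts. This is a minimal sketch for a single offset; the paper's 3D version adds a third image axis and more offset directions.

```python
# Minimal 2D gray-level co-occurrence matrix (GLCM) for one offset (dx, dy).
def glcm(img, levels, dx=1, dy=0):
    M = [[0] * levels for _ in range(levels)]
    for y in range(len(img) - dy):
        for x in range(len(img[0]) - dx):
            # Count the pair (gray level here, gray level at the offset).
            M[img[y][x]][img[y + dy][x + dx]] += 1
    return M

def contrast(M):
    # One Haralick texture feature: sum of (i-j)^2 * p(i,j) over the
    # normalized co-occurrence matrix.
    total = sum(sum(row) for row in M)
    return sum((i - j) ** 2 * M[i][j] / total
               for i in range(len(M)) for j in range(len(M)))
```

A grading pipeline like the one described would compute several such features per volume and feed them to the statistical classifiers.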
Hao, Ming; Wang, Yanli; Bryant, Stephen H
2016-02-25
Identification of drug-target interactions (DTI) is a central task in drug discovery processes. In this work, a simple but effective regularized least squares algorithm integrating nonlinear kernel fusion (RLS-KF) is proposed to perform DTI predictions. Using benchmark DTI datasets, our proposed algorithm achieves state-of-the-art results with areas under the precision-recall curve (AUPR) of 0.915, 0.925, 0.853 and 0.909 for enzymes, ion channels (IC), G protein-coupled receptors (GPCR) and nuclear receptors (NR) based on 10-fold cross-validation. The performance can be further improved by using a recalculated kernel matrix, especially for the small set of nuclear receptors with an AUPR of 0.945. Importantly, most of the top-ranked interaction predictions can be validated by experimental data reported in the literature, bioassay results in the PubChem BioAssay database, as well as other previous studies. Our analysis suggests that the proposed RLS-KF is helpful for studying DTI, drug repositioning as well as polypharmacology, and may help to accelerate drug discovery by identifying novel drug targets. Published by Elsevier B.V.
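The regularized-least-squares core of a method like RLS-KF has a one-line closed form once a kernel matrix is fixed. The sketch below uses a plain average of two RBF kernels as the "fused" kernel, a deliberate simplification of the paper's nonlinear kernel fusion, with invented data.

```python
import numpy as np

# Sketch of kernel regularized least squares (RLS) over a fused kernel.
def rls_fit(K, y, lam=0.1):
    # Closed-form RLS solution: alpha = (K + lam*I)^-1 y.
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def rbf_kernel(X, gamma=1.0):
    # Gaussian (RBF) kernel matrix over the rows of X.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fuse(K1, K2):
    # Naive linear fusion (average); the paper's fusion step is nonlinear.
    return 0.5 * (K1 + K2)
```

Predicted interaction scores are then K @ alpha; ranking drug-target pairs by these scores is what the AUPR figures in the abstract evaluate.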
Valid statistical inference methods for a case-control study with missing data.
Tian, Guo-Liang; Zhang, Chi; Jiang, Xuejun
2018-04-01
The main objective of this paper is to derive the valid sampling distribution of the observed counts in a case-control study with missing data under the assumption of missing at random by employing the conditional sampling method and the mechanism augmentation method. The proposed sampling distribution, called the case-control sampling distribution, can be used to calculate the standard errors of the maximum likelihood estimates of parameters via the Fisher information matrix and to generate independent samples for constructing small-sample bootstrap confidence intervals. Theoretical comparisons of the new case-control sampling distribution with two existing sampling distributions exhibit a large difference. Simulations are conducted to investigate the influence of the three different sampling distributions on statistical inferences. One finding is that the conclusion by the Wald test for testing independency under the two existing sampling distributions could be completely different (even contradictory) from the Wald test for testing the equality of the success probabilities in control/case groups under the proposed distribution. A real cervical cancer data set is used to illustrate the proposed statistical methods.
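The Wald test for equality of success probabilities in case and control groups, the hypothesis contrasted in the abstract above, can be sketched with the usual two-proportion z statistic; the counts in the example are invented.

```python
import math

# Sketch of a Wald test for equality of success probabilities in two groups
# (x successes out of n trials in each).
def wald_equal_props(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    # Unpooled standard error of the difference in proportions.
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) / se  # compare |z| to 1.96 for a 5% two-sided test
```

The paper's point is that the standard error (and hence this statistic) depends on which sampling distribution is assumed for the observed counts; the sketch uses only the naive binomial form.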
Mothers in "incest families": a critique of blame and its destructive sequels.
Green, J
1996-09-01
This paper critically reviewed the blame-oriented explanations of mothers' roles in father-daughter incest, which were contrasted with feminist reassessments in a sociopolitical context. The concept of the dysfunctional family forms the matrix in which views of blaming the mother take life. The mother is characterized as the "cornerstone" of the family dynamics that create and maintain the incestuous behavior of the spouse. Several categories of maternal behavior were reported to set up conditions in the family for father-daughter incest, including the contention that the mother colludes in the abuse either by unconscious passivity or by active conscious involvement in arranging the act. In addition, the mother's alleged inadequacies in the areas of intimacy and sexuality are often viewed as central factors in the dynamics of father-daughter incest. Identified as additional victims in the complex matrix of family and community, mothers are revictimized by the clinical establishment that upholds the unconscious patriarchal ideology underlying violence against women. Clinicians need to validate and support mothers in their "disenfranchised grief" so they can help their daughters to heal, and to design and lobby for programs that will promote social changes that are necessary for a more egalitarian society.
Study of the retardance of a birefringent waveplate at tilt incidence by Mueller matrix ellipsometer
NASA Astrophysics Data System (ADS)
Gu, Honggang; Chen, Xiuguo; Zhang, Chuanwei; Jiang, Hao; Liu, Shiyuan
2018-01-01
Birefringent waveplates are indispensable optical elements for polarization state modification in various optical systems. The retardance of a birefringent waveplate will change significantly when the incident angle of the light varies. Therefore, it is of great importance to study such field-of-view errors on the polarization properties, especially the retardance of a birefringent waveplate, for the performance improvement of the system. In this paper, we propose a generalized retardance formula at arbitrary incidence and azimuth for a general plane-parallel composite waveplate consisting of multiple aligned single waveplates. An efficient method and corresponding experimental set-up have been developed to characterize the retardance versus the field-of-view angle based on a constructed spectroscopic Mueller matrix ellipsometer. Both simulations and experiments on an MgF2 biplate over an incident angle of 0°-8° and an azimuthal angle of 0°-360° are presented as an example, and the dominant experimental errors are discussed and corrected. The experimental results strongly agree with the simulations with a maximum difference of 0.15° over the entire field of view, which indicates the validity and great potential of the presented method for birefringent waveplate characterization at tilt incidence.
Nonlocal low-rank and sparse matrix decomposition for spectral CT reconstruction
NASA Astrophysics Data System (ADS)
Niu, Shanzhou; Yu, Gaohang; Ma, Jianhua; Wang, Jing
2018-02-01
Spectral computed tomography (CT) has been a promising technique in research and clinics because of its ability to produce improved energy resolution images with narrow energy bins. However, the narrow energy bin image is often affected by serious quantum noise because of the limited number of photons used in the corresponding energy bin. To address this problem, we present an iterative reconstruction method for spectral CT using nonlocal low-rank and sparse matrix decomposition (NLSMD), which exploits the self-similarity of patches that are collected in multi-energy images. Specifically, each set of patches can be decomposed into a low-rank component and a sparse component, and the low-rank component represents the stationary background over different energy bins, while the sparse component represents the rest of the different spectral features in individual energy bins. Subsequently, an effective alternating optimization algorithm was developed to minimize the associated objective function. To validate and evaluate the NLSMD method, qualitative and quantitative studies were conducted by using simulated and real spectral CT data. Experimental results show that the NLSMD method improves spectral CT images in terms of noise reduction, artifact suppression and resolution preservation.
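The low-rank plus sparse split at the heart of NLSMD can be illustrated with a generic alternating-thresholding scheme in the spirit of robust PCA. This is a sketch only: the paper applies such a split to groups of nonlocal similar patches across energy bins inside an iterative reconstruction, which is not reproduced here, and the threshold choice below is a heuristic assumption.

```python
import numpy as np

def lowrank_sparse_split(M, lam=0.1, n_iter=50):
    """Split M ~= L + S with L low-rank (singular-value thresholding)
    and S sparse (entrywise soft-thresholding).

    A generic alternating scheme; lam scales the threshold tau relative
    to the spectral norm of M (heuristic, not the paper's parameter).
    """
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    tau = lam * np.linalg.norm(M, 2)  # common threshold for both updates
    for _ in range(n_iter):
        # low-rank update: soft-threshold singular values of M - S
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = U @ np.diag(np.maximum(s - tau, 0)) @ Vt
        # sparse update: soft-threshold entries of M - L
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - tau, 0)
    return L, S

# Rank-1 "stationary background" plus a large outlier (hypothetical patch matrix).
rng = np.random.default_rng(0)
M = np.outer(rng.random(20), rng.random(15))
M[3, 4] += 5.0
L, S = lowrank_sparse_split(M)
```

In the spectral CT setting, L would model the background shared across energy bins and S the bin-specific features and noise.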
Texture zeros and hierarchical masses from flavour (mis)alignment
NASA Astrophysics Data System (ADS)
Hollik, W. G.; Saldana-Salazar, U. J.
2018-03-01
We introduce an unconventional interpretation of the fermion mass matrix elements. As the full rotational freedom of the gauge-kinetic terms generates an infinite set of bases, called weak bases, basis-dependent structures such as mass matrices are unphysical. Matrix invariants, on the other hand, provide a set of basis-independent objects which are of more relevance. We employ one of these invariants to give a new parametrisation of the mass matrices. By virtue of it, one gains control over its implicit implications for several mass matrix structures. The key element is the trace invariant, which resembles the equation of a hypersphere with a radius equal to the Frobenius norm of the mass matrix. With the concepts of alignment or misalignment we can identify texture zeros with certain alignments, whereas Froggatt-Nielsen structures in the matrix elements are governed by misalignment. This method allows further insight into traditional approaches to the underlying flavour geometry.
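The trace invariant referred to above can be written out explicitly; for a mass matrix $M$ with elements $M_{ij}$,

```latex
\operatorname{tr}\!\left(M M^{\dagger}\right)
  \;=\; \sum_{i,j} \lvert M_{ij} \rvert^{2}
  \;=\; \lVert M \rVert_{F}^{2} \;\equiv\; R^{2},
```

which is the equation of a hypersphere in the space of matrix elements with radius $R$ equal to the Frobenius norm, as the abstract states. Parametrising the elements by angles on this sphere is what allows alignments (texture zeros) and misalignments (hierarchical Froggatt-Nielsen-like structures) to be distinguished geometrically.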
Gerdes, Lars; Iwobi, Azuka; Busch, Ulrich; Pecoraro, Sven
2016-03-01
Digital PCR in droplets (ddPCR) is an emerging method used in a growing number of applications in DNA (and RNA) analysis. Special requirements when establishing ddPCR for analysis of genetically modified organisms (GMO) in a laboratory include the choice between validated official qPCR methods and the optimization of these assays for a ddPCR format. Differentiation between droplets with positive reaction and negative droplets, that is, setting an appropriate threshold, can be crucial for a correct measurement. This holds true in particular when independent transgene and plant-specific reference gene copy numbers have to be combined to determine the content of GM material in a sample. Droplets which show fluorescent units ranging between those of explicit positive and negative droplets are called 'rain'. Signals of such droplets can hinder analysis and the correct setting of a threshold. In this manuscript, a computer-based algorithm has been carefully designed to evaluate assay performance and facilitate objective criteria for assay optimization. Optimized assays in return minimize the impact of rain on ddPCR analysis. We developed an Excel-based 'experience matrix' that reflects the assay parameters of GMO ddPCR tests performed in our laboratory. Parameters considered include singleplex/duplex ddPCR, assay volume, thermal cycler, probe manufacturer, oligonucleotide concentration, annealing/elongation temperature, and a droplet separation evaluation. We additionally propose an objective droplet separation value which is based on both the absolute fluorescence signal distance between positive and negative droplet populations and the variation within these droplet populations. The proposed performance classification in the experience matrix can be used for a rating of different assays for the same GMO target, thus enabling employment of the best suited assay parameters. Main optimization parameters include annealing/extension temperature and oligonucleotide concentrations.
The droplet separation value allows for easy and reproducible assay performance evaluation. The combination of separation value with the experience matrix simplifies the choice of adequate assay parameters for a given GMO event.
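A separation score of the kind described, distance between the positive and negative fluorescence populations relative to their spread, could be sketched as below. The authors' exact formula is not given in the abstract; the ratio used here is one plausible realisation of the idea, and the amplitudes are hypothetical.

```python
from statistics import mean, pstdev

def separation_value(neg, pos):
    """Separation of positive vs negative droplet populations:
    difference of population means divided by the sum of their
    standard deviations (an assumed form of the published metric).
    """
    return (mean(pos) - mean(neg)) / (pstdev(pos) + pstdev(neg))

# Hypothetical fluorescence amplitudes: a clean assay vs one with 'rain'.
clean = separation_value(neg=[980, 1015, 1002, 990],
                         pos=[9100, 9050, 8990, 9120])
rainy = separation_value(neg=[980, 1015, 1002, 990],
                         pos=[3300, 5200, 7100, 9050])
assert clean > rainy  # the rain-free assay scores markedly higher
```

Such a score supports the reproducible ranking of assay parameter sets that the experience matrix is meant to enable.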
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, M; Craft, D
Purpose: To develop an efficient, pathway-based classification system using network biology statistics to assist in patient-specific response predictions to radiation and drug therapies across multiple cancer types. Methods: We developed PICS (Pathway Informed Classification System), a novel two-step cancer classification algorithm. In PICS, a matrix m of mRNA expression values for a patient cohort is collapsed into a matrix p of biological pathways. The entries of p, which we term pathway scores, are obtained from either principal component analysis (PCA), normal tissue centroid (NTC), or gene expression deviation (GED). The pathway score matrix is clustered using both k-means and hierarchical clustering, and a clustering is judged by how well it groups patients into distinct survival classes. The most effective pathway scoring/clustering combination, as judged by clustering p-value, thus generates various 'signatures' for conventional and functional cancer classification. Results: PICS successfully regularized large dimension gene data, separated normal and cancerous tissues, and clustered a large patient cohort spanning six cancer types. Furthermore, PICS clustered patient cohorts into distinct, statistically-significant survival groups. For a suboptimally-debulked ovarian cancer set, the pathway-classified Kaplan-Meier survival curve (p = .00127) showed significant improvement over that of a prior gene expression-classified study (p = .0179). For a pancreatic cancer set, the pathway-classified Kaplan-Meier survival curve (p = .00141) showed significant improvement over that of a prior gene expression-classified study (p = .04). Pathway-based classification confirmed biomarkers for the pyrimidine, WNT-signaling, glycerophosphoglycerol, beta-alanine, and panthothenic acid pathways for ovarian cancer. Despite its robust nature, PICS requires significantly less run time than current pathway scoring methods.
Conclusion: This work validates the PICS method to improve cancer classification using biological pathways. Patients are classified with greater specificity and physiological relevance as compared to current gene-specific approaches. Focus now moves to utilizing PICS for pan-cancer patient-specific treatment response prediction.
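The first PICS step, collapsing gene expression into pathway scores, can be sketched for the normal-tissue-centroid (NTC) option. The scoring rule below (mean absolute deviation of member genes from a normal centroid) is a guess at what NTC scoring means; the abstract does not define it, and the gene/pathway memberships shown are hypothetical.

```python
from statistics import mean

def ntc_pathway_scores(expr, normal_centroid, pathways):
    """Collapse one patient's gene-expression vector into pathway scores.

    Each pathway is scored as the mean absolute deviation of its member
    genes from a normal-tissue centroid -- an assumed form of the 'NTC'
    scoring named in the abstract. PICS also supports PCA- and
    deviation-based (GED) scores, not shown here.
    """
    return {
        pw: mean(abs(expr[g] - normal_centroid[g]) for g in genes)
        for pw, genes in pathways.items()
    }

# Hypothetical two-pathway example for a single patient.
expr = {"TP53": 2.0, "MDM2": 1.5, "WNT1": 0.2}
centroid = {"TP53": 1.0, "MDM2": 1.0, "WNT1": 0.1}
scores = ntc_pathway_scores(expr, centroid,
                            {"p53": ["TP53", "MDM2"], "WNT": ["WNT1"]})
```

Applied across a cohort, such scores form the pathway matrix p that is then clustered by k-means or hierarchical clustering.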
This work develops a novel validation approach for studying how non-volatile aerosol matrices of considerably different chemical composition potentially affect the thermal extraction (TE)/GC/MS quantification of a wide range of trace semivolatile organic markers. The non-volatil...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.
2015-07-01
This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the 44 inch Lead-Boron (LB44) bucket, reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.
2015-07-01
This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the Polyethylene-Lead-Graphite (PLG) bucket, reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 37 integral dosimetry measurements in the neutron field are reported.
A CLT on the SNR of Diagonally Loaded MVDR Filters
NASA Astrophysics Data System (ADS)
Rubio, Francisco; Mestre, Xavier; Hachem, Walid
2012-08-01
This paper studies the fluctuations of the signal-to-noise ratio (SNR) of minimum variance distortionless response (MVDR) filters implementing diagonal loading in the estimation of the covariance matrix. Previous results in the signal processing literature are generalized and extended by considering both spatially as well as temporally correlated samples. Specifically, a central limit theorem (CLT) is established for the fluctuations of the SNR of the diagonally loaded MVDR filter, under both supervised and unsupervised training settings in adaptive filtering applications. Our second-order analysis is based on the Nash-Poincaré inequality and the integration by parts formula for Gaussian functionals, as well as classical tools from statistical asymptotic theory. Numerical evaluations validating the accuracy of the CLT confirm the asymptotic Gaussianity of the fluctuations of the SNR of the MVDR filter.
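The diagonally loaded MVDR filter whose SNR fluctuations are analysed has a standard closed form, sketched below on a toy example. The loading factor and covariance used are illustrative only; the paper's contribution is the CLT for the SNR, not this construction.

```python
import numpy as np

def mvdr_weights(R_hat, s, delta):
    """Diagonally loaded MVDR beamformer weights:
    w = (R_hat + delta*I)^{-1} s / (s^H (R_hat + delta*I)^{-1} s),
    where R_hat is a sample covariance estimate and delta > 0 is the
    loading factor regularising the inversion. Satisfies w^H s = 1
    (the distortionless constraint)."""
    Ri = np.linalg.inv(R_hat + delta * np.eye(len(s)))
    num = Ri @ s
    return num / (s.conj() @ num)

def output_snr(w, R_true, s, sigma_s2=1.0):
    """Output SNR of filter w for steering vector s with signal power
    sigma_s2, evaluated against the true interference covariance."""
    return sigma_s2 * abs(w.conj() @ s) ** 2 / np.real(w.conj() @ R_true @ w)

# Toy 3-sensor example with white noise (hypothetical setup).
s = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
R = np.eye(3)
w = mvdr_weights(R, s, delta=0.1)
```

When R_hat is estimated from few snapshots, the SNR of such a filter is random; the paper characterises its Gaussian fluctuations around the asymptotic value.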
Functional and evolutionary insights from the Ciona notochord transcriptome.
Reeves, Wendy M; Wu, Yuye; Harder, Matthew J; Veeman, Michael T
2017-09-15
The notochord of the ascidian Ciona consists of only 40 cells, and is a longstanding model for studying organogenesis in a small, simple embryo. Here, we perform RNAseq on flow-sorted notochord cells from multiple stages to define a comprehensive Ciona notochord transcriptome. We identify 1364 genes with enriched expression and extensively validate the results by in situ hybridization. These genes are highly enriched for Gene Ontology terms related to the extracellular matrix, cell adhesion and cytoskeleton. Orthologs of 112 of the Ciona notochord genes have known notochord expression in vertebrates, more than twice as many as predicted by chance alone. This set of putative effector genes with notochord expression conserved from tunicates to vertebrates will be invaluable for testing hypotheses about notochord evolution. The full set of Ciona notochord genes provides a foundation for systems-level studies of notochord gene regulation and morphogenesis. We find only modest overlap between this set of notochord-enriched transcripts and the genes upregulated by ectopic expression of the key notochord transcription factor Brachyury, indicating that Brachyury is not a notochord master regulator gene as strictly defined. © 2017. Published by The Company of Biologists Ltd.
Clustering Multivariate Time Series Using Hidden Markov Models
Ghassempour, Shima; Girosi, Federico; Maeder, Anthony
2014-01-01
In this paper we describe an algorithm for clustering multivariate time series with variables taking both categorical and continuous values. Time series of this type are frequent in health care, where they represent the health trajectories of individuals. The problem is challenging because categorical variables make it difficult to define a meaningful distance between trajectories. We propose an approach based on Hidden Markov Models (HMMs), where we first map each trajectory into an HMM, then define a suitable distance between HMMs and finally proceed to cluster the HMMs with a method based on a distance matrix. We test our approach on a simulated, but realistic, data set of 1,255 trajectories of individuals of age 45 and over, on a synthetic validation set with known clustering structure, and on a smaller set of 268 trajectories extracted from the longitudinal Health and Retirement Survey. The proposed method can be implemented quite simply using standard packages in R and Matlab and may be a good candidate for solving the difficult problem of clustering multivariate time series with categorical variables using tools that do not require advanced statistical knowledge, and therefore are accessible to a wide range of researchers. PMID:24662996
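The final step of the pipeline, clustering from a precomputed distance matrix, can be sketched with a minimal single-linkage agglomeration. The HMM fitting and the HMM-to-HMM distance are omitted here (they would need a dedicated library such as hmmlearn); the matrix below simply stands in for those distances with hypothetical values.

```python
def single_linkage(D, k):
    """Agglomerative single-linkage clustering from a symmetric
    distance matrix D (list of lists). Repeatedly merges the pair of
    clusters with the smallest minimum inter-item distance until k
    clusters remain. Returns a list of index sets."""
    clusters = [{i} for i in range(len(D))]
    while len(clusters) > k:
        a, b = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: min(D[p][q] for p in clusters[ij[0]]
                                       for q in clusters[ij[1]]),
        )
        clusters[a] |= clusters[b]  # merge the closest pair
        del clusters[b]
    return clusters

# Stand-in HMM distances: trajectories 0-1 similar, 2-3 similar.
D = [[0, 1, 9, 9],
     [1, 0, 9, 9],
     [9, 9, 0, 1],
     [9, 9, 1, 0]]
groups = single_linkage(D, k=2)
```

Any distance-matrix clustering method (e.g. partitioning around medoids, as is common in R) could be substituted at this step.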
Normal form decomposition for Gaussian-to-Gaussian superoperators
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Palma, Giacomo; INFN, Pisa; Mari, Andrea
2015-05-15
In this paper, we explore the set of linear maps sending the set of quantum Gaussian states into itself. These maps are in general not positive, a feature which can be exploited as a test to check whether a given quantum state belongs to the convex hull of Gaussian states (if one of the considered maps sends it into a non-positive operator, the above state is certified not to belong to the set). Generalizing a result known to be valid under the assumption of complete positivity, we provide a characterization of these Gaussian-to-Gaussian (not necessarily positive) superoperators in terms of their action on the characteristic function of the inputs. For the special case of one-mode mappings, we also show that any Gaussian-to-Gaussian superoperator can be expressed as a concatenation of a phase-space dilatation, followed by the action of a completely positive Gaussian channel, possibly composed with a transposition. While a similar decomposition is shown to fail in the multi-mode scenario, we prove that it still holds at least under the further hypothesis of homogeneous action on the covariance matrix.
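For context, the completely positive case that the paper generalizes acts on characteristic functions as (in one common convention)

```latex
\chi_{\mathrm{out}}(\xi) \;=\; \chi_{\mathrm{in}}\!\left(X^{T}\xi\right)\,
  \exp\!\left(-\tfrac{1}{4}\,\xi^{T} Y\,\xi\right),
```

where $X$ and $Y$ are real matrices and complete positivity constrains $Y$ against the symplectic form; conventions for the prefactor of $Y$ vary between references. The paper's characterization extends this form to Gaussian-to-Gaussian maps that need not be positive, relaxing that constraint.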
Finding Imaging Patterns of Structural Covariance via Non-Negative Matrix Factorization
Sotiras, Aristeidis; Resnick, Susan M.; Davatzikos, Christos
2015-01-01
In this paper, we investigate the use of Non-Negative Matrix Factorization (NNMF) for the analysis of structural neuroimaging data. The goal is to identify the brain regions that co-vary across individuals in a consistent way, hence potentially being part of underlying brain networks or otherwise influenced by underlying common mechanisms such as genetics and pathologies. NNMF offers a directly data-driven way of extracting relatively localized co-varying structural regions, thereby transcending limitations of Principal Component Analysis (PCA), Independent Component Analysis (ICA) and other related methods that tend to produce dispersed components of positive and negative loadings. In particular, leveraging upon the well known ability of NNMF to produce parts-based representations of image data, we derive decompositions that partition the brain into regions that vary in consistent ways across individuals. Importantly, these decompositions achieve dimensionality reduction via highly interpretable ways and generalize well to new data as shown via split-sample experiments. We empirically validate NNMF in two data sets: i) a Diffusion Tensor (DT) mouse brain development study, and ii) a structural Magnetic Resonance (sMR) study of human brain aging. We demonstrate the ability of NNMF to produce sparse parts-based representations of the data at various resolutions. These representations seem to follow what we know about the underlying functional organization of the brain and also capture some pathological processes. Moreover, we show that these low dimensional representations favorably compare to descriptions obtained with more commonly used matrix factorization methods like PCA and ICA. PMID:25497684
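The factorization underlying the approach can be illustrated with the classic Lee-Seung multiplicative updates. This is a minimal generic NMF under a Frobenius objective, not the specific (e.g. projective or sparsity-constrained) variant a given neuroimaging study may use; the data matrix below is hypothetical.

```python
import numpy as np

def nmf(X, r, n_iter=200, eps=1e-9):
    """Non-negative matrix factorization X ~= W @ H via Lee-Seung
    multiplicative updates. X must be entrywise non-negative; the
    updates preserve non-negativity of W and H by construction,
    which is what yields parts-based (non-cancelling) components."""
    rng = np.random.default_rng(0)
    n, m = X.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)  # update H, stays >= 0
        W *= (X @ H.T) / (W @ H @ H.T + eps)  # update W, stays >= 0
    return W, H

# Hypothetical "voxels x subjects" matrix built from two additive parts.
parts = np.array([[1, 0], [1, 0], [0, 1], [0, 1.0]])
X = parts @ np.array([[1, 2, 0.5], [0.5, 0, 2.0]])
W, H = nmf(X, r=2)
```

Because W and H cannot contain negative loadings, each column of W tends to describe a localized, additive region, in contrast to the signed, dispersed components PCA and ICA produce.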
Randles, Michael J; Woolf, Adrian S; Huang, Jennifer L; Byron, Adam; Humphries, Jonathan D; Price, Karen L; Kolatsi-Joannou, Maria; Collinson, Sophie; Denny, Thomas; Knight, David; Mironov, Aleksandr; Starborg, Toby; Korstanje, Ron; Humphries, Martin J; Long, David A; Lennon, Rachel
2015-12-01
Glomerular disease often features altered histologic patterns of extracellular matrix (ECM). Despite this, the potential complexities of the glomerular ECM in both health and disease are poorly understood. To explore whether genetic background and sex determine glomerular ECM composition, we investigated two mouse strains, FVB and B6, using RNA microarrays of isolated glomeruli combined with proteomic glomerular ECM analyses. These studies, undertaken in healthy young adult animals, revealed unique strain- and sex-dependent glomerular ECM signatures, which correlated with variations in levels of albuminuria and known predisposition to progressive nephropathy. Among the variation, we observed changes in netrin 4, fibroblast growth factor 2, tenascin C, collagen 1, meprin 1-α, and meprin 1-β. Differences in protein abundance were validated by quantitative immunohistochemistry and Western blot analysis, and the collective differences were not explained by mutations in known ECM or glomerular disease genes. Within the distinct signatures, we discovered a core set of structural ECM proteins that form multiple protein-protein interactions and are conserved from mouse to man. Furthermore, we found striking ultrastructural changes in glomerular basement membranes in FVB mice. Pathway analysis of merged transcriptomic and proteomic datasets identified potential ECM regulatory pathways involving inhibition of matrix metalloproteases, liver X receptor/retinoid X receptor, nuclear factor erythroid 2-related factor 2, notch, and cyclin-dependent kinase 5. These pathways may therefore alter ECM and confer susceptibility to disease. Copyright © 2015 by the American Society of Nephrology.
Acceleration of GPU-based Krylov solvers via data transfer reduction
Anzt, Hartwig; Tomov, Stanimire; Luszczek, Piotr; ...
2015-04-08
Krylov subspace iterative solvers are often the method of choice when solving large sparse linear systems. At the same time, hardware accelerators such as graphics processing units continue to offer significant floating point performance gains for matrix and vector computations through easy-to-use libraries of computational kernels. However, as these libraries are usually composed of a well optimized but limited set of linear algebra operations, applications that use them often fail to reduce certain data communications, and hence fail to leverage the full potential of the accelerator. In this study, we target the acceleration of Krylov subspace iterative methods for graphics processing units, and in particular the Biconjugate Gradient Stabilized solver. We show that significant improvement can be achieved by reformulating the method to reduce data communications through application-specific kernels instead of using the generic BLAS kernels, e.g. as provided by NVIDIA’s cuBLAS library, and by designing a graphics processing unit specific sparse matrix-vector product kernel that is able to more efficiently use the graphics processing unit’s computing power. Furthermore, we derive a model estimating the performance improvement, and use experimental data to validate the expected runtime savings. Finally, considering that the derived implementation achieves significantly higher performance, we assert that similar optimizations addressing algorithm structure, as well as sparse matrix-vector products, are crucial for the subsequent development of high-performance GPU-accelerated Krylov subspace iterative methods.
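The data-movement saving from kernel fusion can be illustrated abstractly: two generic BLAS calls make two passes over the vector, whereas a fused application-specific kernel makes one. The sketch below is plain Python rather than CUDA, so it shows only the access-pattern idea, not the actual GPU implementation.

```python
def axpy_then_dot(a, x, y):
    """Unfused: two passes over the data, like two separate BLAS calls
    (first y = a*x + y, then dot(y, y))."""
    y = [a * xi + yi for xi, yi in zip(x, y)]   # pass 1: AXPY
    return y, sum(yi * yi for yi in y)          # pass 2: DOT

def fused_axpy_dot(a, x, y):
    """Fused: a single pass both updates the vector and accumulates the
    dot product -- the kind of reformulation an application-specific
    GPU kernel uses to halve memory traffic for this step."""
    out, acc = [], 0.0
    for xi, yi in zip(x, y):
        v = a * xi + yi
        out.append(v)
        acc += v * v
    return out, acc

y1, d1 = axpy_then_dot(2.0, [1.0, 2.0], [3.0, 4.0])
y2, d2 = fused_axpy_dot(2.0, [1.0, 2.0], [3.0, 4.0])
assert y1 == y2 == [5.0, 8.0] and d1 == d2 == 89.0
```

For memory-bound kernels such as those inside BiCGStab, halving the number of passes over the vectors translates almost directly into runtime savings, which is what the paper's performance model quantifies.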
Bertran, M J; Viñarás, M; Salamero, M; Garcia, F; Graham, C; McCulloch, A; Escarrabill, J
To develop and test a culturally adapted core set of questions to measure patients' experience after in-patient care. Following the methodology recommended by international guidelines, a basic set of patient experience questions, selected from Picker Institute Europe questionnaires (originally in English), was translated into Spanish and Catalan. Acceptability, construct validity and reliability of the adapted questionnaire were assessed via a cross-sectional validation study. The inclusion criteria were patients aged >18 years, discharged between one week and one month before the questionnaire was sent, and whose email address was available. Day cases, emergency department patients and deaths were excluded. Invitations were sent by email (N=876) and the questionnaire was completed through an online platform. An automatic reminder was sent 5 days later to non-respondents. A questionnaire, in Spanish and Catalan, with adequate conceptual and linguistic equivalence was obtained. The response rate was 44.4% (389 responses). The correlation matrix was factorable. Four factors were extracted with Parallel Analysis, which explained 43% of the total variance. First factor: information and communication received during discharge. Second factor: low-sensitivity attitudes of professionals. Third factor: assessment of communication of medical and nursing staff. Fourth factor: global items. The Cronbach alpha was 0.84, indicating high internal consistency. The resulting patient experience questionnaire, in Spanish and Catalan, shows good results in the psychometric properties evaluated and could be a useful tool to identify opportunities for health care improvement in our context. Email could become a feasible tool for greater patient participation in matters concerning their health. Copyright © 2018 SECA. Publicado por Elsevier España, S.L.U. All rights reserved.
Siwy, Justyna; Schanstra, Joost P.; Argiles, Angel; Bakker, Stephan J.L.; Beige, Joachim; Boucek, Petr; Brand, Korbinian; Delles, Christian; Duranton, Flore; Fernandez-Fernandez, Beatriz; Jankowski, Marie-Luise; Al Khatib, Mohammad; Kunt, Thomas; Lajer, Maria; Lichtinghagen, Ralf; Lindhardt, Morten; Maahs, David M; Mischak, Harald; Mullen, William; Navis, Gerjan; Noutsou, Marina; Ortiz, Alberto; Persson, Frederik; Petrie, John R.; Roob, Johannes M.; Rossing, Peter; Ruggenenti, Piero; Rychlik, Ivan; Serra, Andreas L.; Snell-Bergeon, Janet; Spasovski, Goce; Stojceva-Taneva, Olivera; Trillini, Matias; von der Leyen, Heiko; Winklhofer-Roob, Brigitte M.; Zürbig, Petra; Jankowski, Joachim
2014-01-01
Background Diabetic nephropathy (DN) is one of the major late complications of diabetes. Treatment aimed at slowing down the progression of DN is available but methods for early and definitive detection of DN progression are currently lacking. The ‘Proteomic prediction and Renin angiotensin aldosterone system Inhibition prevention Of early diabetic nephRopathy In TYpe 2 diabetic patients with normoalbuminuria trial’ (PRIORITY) aims to evaluate the early detection of DN in patients with type 2 diabetes (T2D) using a urinary proteome-based classifier (CKD273). Methods In this ancillary study of the recently initiated PRIORITY trial we aimed to validate for the first time the CKD273 classifier in a multicentre (9 different institutions providing samples from 165 T2D patients) prospective setting. In addition we also investigated the influence of sample containers, age and gender on the CKD273 classifier. Results We observed a high consistency of the CKD273 classification scores across the different centres with areas under the curves ranging from 0.95 to 1.00. The classifier was independent of age (range tested 16–89 years) and gender. Furthermore, the use of different urine storage containers did not affect the classification scores. Analysis of the distribution of the individual peptides of the classifier over the nine different centres showed that fragments of blood-derived and extracellular matrix proteins were the most consistently found. Conclusion We provide for the first time validation of this urinary proteome-based classifier in a multicentre prospective setting and show the suitability of the CKD273 classifier to be used in the PRIORITY trial. PMID:24589724
Danish translation and validation of Kessler's 10-item psychological distress scale - K10.
Thelin, Camilla; Mikkelsen, Benjamin; Laier, Gunnar; Turgut, Louise; Henriksen, Bente; Olsen, Lis Raabaek; Larsen, Jens Knud; Arnfred, Sidse
2017-08-01
Psychological distress is a trans-diagnostic feature of mental suffering closely associated with mental disorders. Kessler's 10-item Psychological Distress Scale (K10), a scale with sound psychometric properties, is widely used in epidemiological studies. To translate and investigate whether K10 is a reliable and valid rating scale for the measurement of psychological distress in a Danish population. The translation was carried out according to official WHO translation guidelines. A sample of 100 subjects was included, 54 patients from the regional Mental Health Service (MHS) and 46 subjects with no psychiatric history. All participants were assessed with a psychiatric diagnostic interview (MINI) and given the K10. Concurrent validity was assessed with the WHO Well-being Index (WHO-5). Correlation matrix analysis was conducted for the full sample, and receiver operating characteristic (ROC) curves for discriminating mental health service affiliation. Mean K10 scores differed, at decreasing levels, between inpatients and outpatients in the MHS and the subjects with no psychiatric history. Factor analysis confirmed a unidimensional structure, and Cronbach's alpha and Omega showed excellent internal reliability. The AUC for the K10 ROC curves showed excellent sensitivity (0.947 [0.900-0.995]), accurately differentiating mental health from non-mental health patients. The Danish K10 has the same strong internal reliability as the original English version, and scores differ between psychiatric patients in outpatient and emergency ward settings. The Danish K10 translation is authorized and freely available for download at https://www.hcp.med.harvard.edu/ncs/k6_scales.php . The utility as an instrument for clinical screening in a mental healthcare setting is supported.
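The reported AUC has a simple rank-based interpretation: the probability that a randomly chosen patient scores above a randomly chosen non-patient. A minimal sketch, with hypothetical K10 totals standing in for the study's data:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney rank identity: the fraction of
    (case, non-case) pairs in which the case scores higher,
    counting ties as 1/2."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical K10 totals for MHS patients vs subjects with no history.
auc = roc_auc([32, 28, 40, 25], [14, 19, 25, 12])
print(auc)  # 0.96875: near-perfect separation of the two groups
```

An AUC near 0.95, as in the study, means almost every patient/non-patient pair is ranked correctly by the scale.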
Transcriptional response to hypoxic stress in melanoma and prognostic potential of GBE1 and BNIP3.
Buart, Stéphanie; Terry, Stéphane; Noman, Muhammad Z; Lanoy, Emilie; Boutros, Céline; Fogel, Paul; Dessen, Philippe; Meurice, Guillaume; Gaston-Mathé, Yann; Vielh, Philippe; Roy, Séverine; Routier, Emilie; Marty, Virginie; Ferlicot, Sophie; Legrès, Luc; Bouchtaoui, Morad El; Kamsu-Kom, Nyam; Muret, Jane; Deutsch, Eric; Eggermont, Alexander; Soria, Jean-Charles; Robert, Caroline; Chouaib, Salem
2017-12-12
Gradients of hypoxia occur in most solid tumors, and cells found in hypoxic regions are associated with the most aggressive and therapy-resistant fractions of the tumor. Despite the ubiquity and importance of hypoxia responses, little is known about the variation in the global transcriptional response to hypoxia in melanoma. Using microarray technology, whole genome gene expression profiling was first performed on established melanoma cell lines. From gene set enrichment analyses, we derived a robust 35-probe signature (hypomel, for HYPOxia MELanoma) associated with hypoxia-response pathways, comprising 26 upregulated and 9 downregulated genes. The microarray data were validated by RT-qPCR for the 35 transcripts. We then validated the signature in hypoxic zones from 8 patient specimens using laser microdissection or macrodissection of formalin-fixed, paraffin-embedded (FFPE) material, followed by RT-qPCR. Moreover, a similar hypoxia-associated gene expression profile was observed using NanoString technology to analyze RNAs from FFPE melanoma tissues of a cohort of 19 patients treated with anti-PD1. Analysis of NanoString data from validation sets using Non-Negative Matrix Factorization (NMF) analysis (26 genes upregulated in hypoxia) and dual clustering (samples and genes) further revealed that an increased level of the BNIP3 (Bcl-2 adenovirus E1B 19 kDa-interacting protein 3)/GBE1 (glycogen branching enzyme 1) differential pair correlates with the lack of response of melanoma patients to anti-PD1 (pembrolizumab) immunotherapy. These studies suggest that through elevated glycogenic flux and induction of autophagy, hypoxia is a critical molecular program that could be considered as a prognostic factor for melanoma.
Classification based upon gene expression data: bias and precision of error rates.
Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L
2007-06-01
Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
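The permutation-mean check proposed above can be sketched directly: on permuted (hence uninformative) labels, an unbiased error estimator should average the baseline error of a trivial classifier, so a mean clearly below baseline flags optimization or selection bias. The "estimator" below is a deliberately biased stand-in that always reports an optimistic error of zero; real usage would plug in the study's cross-validated estimator.

```python
import random
from statistics import mean

def permutation_mean_error(estimate_error, X, y, n_perm=200, seed=0):
    """Mean estimated error rate over random label permutations.
    For an unbiased estimator this should match the baseline error;
    a markedly lower value indicates bias in the estimation pipeline."""
    rng = random.Random(seed)
    errs = []
    for _ in range(n_perm):
        yp = y[:]
        rng.shuffle(yp)           # destroy any label information
        errs.append(estimate_error(X, yp))
    return mean(errs)

def majority_baseline_error(y):
    """Error of the trivial classifier that always predicts the
    most frequent class."""
    top = max(set(y), key=y.count)
    return sum(lab != top for lab in y) / len(y)

# A hypothetical, deliberately biased estimator (always reports 0).
biased = lambda X, y: 0.0
y = [0, 0, 0, 1, 1, 1]
assert permutation_mean_error(biased, None, y) < majority_baseline_error(y)
```

With balanced classes the baseline here is 0.5, so any permutation mean well below 0.5 would signal the kind of 3-5% downward bias the paper documents.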
Feldsine, Philip; Kaur, Mandeep; Shah, Khyati; Immerman, Amy; Jucker, Markus; Lienau, Andrew
2015-01-01
Assurance GDS™ for Salmonella Tq has been validated according to the AOAC INTERNATIONAL Methods Committee Guidelines for Validation of Microbiological Methods for Food and Environmental Surfaces for the detection of Salmonella in selected foods and on environmental surfaces (Official Method of Analysis℠ 2009.03, Performance Tested Method℠ No. 050602). The method also completed AFNOR validation (following the ISO 16140 standard) against the reference method EN ISO 6579. For AFNOR, GDS was given a scope covering all human food, animal feedstuffs, and environmental surfaces (Certificate No. TRA02/12-01/09). Results showed that Assurance GDS for Salmonella (GDS) has high sensitivity and is equivalent to the reference culture methods for the detection of motile and non-motile Salmonella. As part of the aforementioned validations, inclusivity and exclusivity studies, stability studies, and ruggedness studies were also conducted. Assurance GDS showed 100% inclusivity and exclusivity among the 100 Salmonella serovars and 35 non-Salmonella organisms analyzed. To extend the scope of the Assurance GDS for Salmonella method, a matrix extension study was conducted, following the AOAC guidelines, to validate the application of the method to selected spices, specifically curry powder, cumin powder, and chili powder, for the detection of Salmonella.
Kalinowski, Jarosław A.; Makal, Anna; Coppens, Philip
2011-01-01
A new method for determination of the orientation matrix of Laue X-ray data is presented. The method is based on matching of the experimental patterns of central reciprocal lattice rows projected on a unit sphere centered on the origin of the reciprocal lattice with the corresponding pattern of a monochromatic data set on the same material. This technique is applied to the complete data set and thus eliminates problems often encountered when single frames with a limited number of peaks are to be used for orientation matrix determination. Application of the method to a series of Laue data sets on organometallic crystals is described. The corresponding program is available under a Mozilla Public License-like open-source license. PMID:22199400
ERIC Educational Resources Information Center
Romero, Sonia J.; Ordoñez, Xavier G.; Ponsoda, Vincente; Revuelta, Javier
2014-01-01
Cognitive Diagnostic Models (CDMs) aim to provide information about the degree to which individuals have mastered specific attributes that underlie the success of these individuals on test items. The Q-matrix is a key element in the application of CDMs because it contains the item-attribute links representing the cognitive structure proposed for solving…
Constructing and Validating a Q-Matrix for Cognitive Diagnostic Analyses of a Reading Test
ERIC Educational Resources Information Center
Li, Hongli; Suen, Hoi K.
2013-01-01
Cognitive diagnostic analyses have been advocated as methods that allow an assessment to function as a formative assessment to inform instruction. To use this approach, it is necessary to first identify the skills required for each item in the test, known as a Q-matrix. However, because the construct being tested and the underlying cognitive…
Butler, G S; Overall, C M
2007-01-01
We illustrate the use of quantitative proteomics, namely isotope-coded affinity tag labelling and tandem mass spectrometry, to assess the targets and effects of the blockade of matrix metalloproteinases by an inhibitor drug in a breast cancer cell culture system. Treatment of MT1-MMP-transfected MDA-MB-231 cells with AG3340 (Prinomastat) directly affected the processing of a multitude of matrix metalloproteinase substrates, and indirectly altered the expression of an array of other proteins with diverse functions. Therefore, broad-spectrum blockade of MMPs has wide-ranging biological consequences. In this human breast cancer cell line, secreted substrates accumulated uncleaved in the conditioned medium, and plasma membrane protein substrates were retained on the cell surface, due to reduced processing and shedding of these proteins (cell surface receptors, growth factors and bioactive molecules) to the medium in the presence of the matrix metalloproteinase inhibitor. Hence, proteomic investigation of drug-perturbed cellular proteomes can identify new protease substrates and at the same time provides valuable information for target validation, drug efficacy and potential side effects prior to commitment to clinical trials.
An ambiguity of information content and error in an ill-posed satellite inversion
NASA Astrophysics Data System (ADS)
Koner, Prabhat
According to Rodgers (2000, stochastic approach), the averaging kernel (AK) is the representational matrix used to understand the information content of a stochastic inversion. In the deterministic approach, the same object is referred to as the model resolution matrix (MRM; Menke, 1989). Analysis of the AK/MRM can only give some understanding of how much regularization is imposed on the inverse problem. The trace of the AK/MRM matrix is the so-called degree of freedom from signal (DFS; stochastic) or degree of freedom in retrieval (DFR; deterministic). There is no physical/mathematical explanation in the literature of why the trace of the matrix is a valid way to compute this quantity. We will present an ambiguity between information and error using a real-life problem of SST retrieval from GOES-13. The stochastic information-content calculation rests on a linearity assumption; the validity of such mathematics in satellite inversion will be questioned because the underlying radiative transfer is nonlinear and the inverse problem ill-conditioned. References: Menke, W., 1989: Geophysical Data Analysis: Discrete Inverse Theory. San Diego: Academic Press. Rodgers, C. D., 2000: Inverse Methods for Atmospheric Sounding: Theory and Practice. Singapore: World Scientific.
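The DFS the abstract questions can be computed in a few lines for a toy linear retrieval. This sketch follows the standard Rodgers formulation; the Jacobian and covariances are invented numbers, not GOES-13 values.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy linearized retrieval: m = 8 channels, n = 4 state elements (values invented).
K = rng.normal(size=(8, 4))    # Jacobian of the forward model
Se_inv = np.eye(8) / 0.25      # inverse measurement-error covariance
Sa_inv = np.eye(4)             # inverse a-priori covariance (the regularization)

# Averaging kernel (Rodgers 2000): A = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 K
G = np.linalg.solve(K.T @ Se_inv @ K + Sa_inv, K.T @ Se_inv)  # gain matrix
A = G @ K

dfs = np.trace(A)  # conventional "degrees of freedom for signal"
print(round(dfs, 3))
```

Each eigenvalue of A lies between 0 and 1, so the trace sums per-mode fractions of information retrieved from the measurement rather than the prior; whether that sum is a meaningful scalar summary for a nonlinear, ill-conditioned problem is exactly what the abstract disputes.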
Beam-tracing model for predicting sound fields in rooms with multilayer bounding surfaces
NASA Astrophysics Data System (ADS)
Wareing, Andrew; Hodgson, Murray
2005-10-01
This paper presents the development of a wave-based room-prediction model for predicting steady-state sound fields in empty rooms with specularly reflecting, multilayer surfaces. The model combines a triangular beam-tracing algorithm with phase and a transfer-matrix approach to model the surfaces. Room surfaces were modeled as multilayers of fluid, solid, or porous materials. Biot theory was used in the transfer-matrix formulation of the porous layer. The new model consisted of the transfer-matrix model integrated into the beam-tracing algorithm. The transfer-matrix model was validated by comparing predictions with those by theory, and with experiment. The test surfaces were a glass plate, double drywall panels, double steel panels, a carpeted floor, and a suspended acoustical ceiling. The beam-tracing model was validated in the cases of three idealized room configurations (a small office, a corridor, and a small industrial workroom) with simple boundary conditions. The number of beams, the reflection order, and the frequency resolution required to obtain accurate results were investigated. Beam-tracing predictions were compared with those by a method-of-images model with phase. The model will be used to study sound fields in rooms with local- or extended-reaction multilayer surfaces.
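The transfer-matrix idea can be sketched for the simplest case: one equivalent-fluid layer at normal incidence with a rigid backing. This is an illustrative sketch only (the paper's model handles oblique incidence, solid and Biot porous layers, and chains several such matrices); all material values are invented, and the positive imaginary part of the sound speed is simply the sign that makes the layer dissipative under the convention written here.

```python
import numpy as np

# One fluid layer; a multilayer would multiply several such matrices together.
rho0, c0 = 1.21, 343.0                  # air density [kg/m^3], speed of sound [m/s]
rho1, c1 = 40.0, 200.0 * (1 + 0.1j)     # lossy equivalent-fluid layer (invented)
d = 0.05                                # layer thickness [m]
f = 1000.0                              # frequency [Hz]

k = 2 * np.pi * f / c1                  # complex wavenumber in the layer
Zc = rho1 * c1                          # characteristic impedance of the layer

# Transfer matrix relating (pressure, normal velocity) across the layer.
T = np.array([[np.cos(k * d), 1j * Zc * np.sin(k * d)],
              [1j * np.sin(k * d) / Zc, np.cos(k * d)]])

# Rigid backing: velocity vanishes behind the layer -> surface impedance T00/T10.
Zs = T[0, 0] / T[1, 0]
R = (Zs - rho0 * c0) / (Zs + rho0 * c0)   # pressure reflection coefficient
alpha = 1 - abs(R) ** 2                   # normal-incidence absorption coefficient
print(round(alpha, 3))
```

In the paper's scheme, the reflection coefficient obtained this way is what each specular beam picks up at every surface hit.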
Validating Analytical Protocols to Determine Selected Pesticides and PCBs Using Routine Samples.
Pindado Jiménez, Oscar; García Alonso, Susana; Pérez Pastor, Rosa María
2017-01-01
This study aims at providing recommendations concerning the validation of analytical protocols by using routine samples. It is intended as a case study on how to validate analytical methods in different environmental matrices. In order to analyze the selected compounds (pesticides and polychlorinated biphenyls) in two different environmental matrices, the current work developed and validated two analytical procedures by GC-MS. A description is given of the validation of the two protocols through the analysis of more than 30 samples of water and sediments collected over nine months. The present work also addresses the uncertainty associated with both analytical protocols. The uncertainty for the water matrix was estimated through a conventional approach, whereas for the sediment matrix the estimation of proportional/constant bias was also included because of its inhomogeneity. Results for the sediment matrix are reliable, showing 25-35% analytical variability under intermediate conditions. The analytical methodology for the water matrix determines the selected compounds with acceptable recoveries, and the combined uncertainty ranges between 20 and 30%. Analyzing routine samples is rarely applied to assess the trueness of novel analytical methods, and until now this approach had not been applied to organochlorine compounds in environmental matrices.
Thermal Expansion Behavior of Hot-Pressed Engineered Matrices
NASA Technical Reports Server (NTRS)
Raj, S. V.
2016-01-01
Advanced engineered matrix composites (EMCs) require that the coefficient of thermal expansion (CTE) of the engineered matrix (EM) matches those of the fiber reinforcements as closely as possible in order to reduce thermal compatibility strains during heating and cooling of the composites. The present paper proposes a general concept for designing suitable matrices for long fiber reinforced composites using a rule of mixtures (ROM) approach to minimize the global differences in the thermal expansion mismatches between the fibers and the engineered matrix. Proof-of-concept studies were conducted to demonstrate the validity of the concept.
Lozano, Ana; Rajski, Łukasz; Belmonte-Valles, Noelia; Uclés, Ana; Uclés, Samanta; Mezcua, Milagros; Fernández-Alba, Amadeo R
2012-12-14
This paper presents the validation of a modified QuEChERS method in four matrices: green tea, red tea, black tea and chamomile. The experiments were carried out using blank samples spiked with a solution of 86 pesticides (insecticides, fungicides and herbicides) at four levels (10, 25, 50 and 100 μg/kg). The samples were extracted according to the citrate QuEChERS protocol; however, to reduce the amount of coextracted matrix compounds, calcium chloride was employed instead of magnesium sulphate in the clean-up step. The samples were analysed by LC-MS/MS and GC-MS/MS. Included in the scope of validation were recovery, linearity, matrix effects, limits of detection and quantitation, as well as intra-day and inter-day precision. The validated method was used in a real-sample survey carried out on 75 samples purchased in ten different countries. In all matrices, recoveries of the majority of compounds were in the 70-120% range and were characterised by precision lower than 20%. In 85% of pesticide/matrix combinations the analytes can be detected quantitatively by the proposed method at the European Union Maximum Residue Level. The analysis of the real samples revealed that a large number of teas and chamomiles sold in the European Union contain pesticides whose usage is not approved, as well as pesticides in concentrations above the EU MRLs. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Hamaker, J. P.
2006-09-01
Context: This is Paper V in a series on polarimetric aperture synthesis based on the algebra of 2×2 matrices. Aims: It validates the matrix self-calibration theory of the preceding Paper IV and outlines the algorithmic methods that had to be developed for its application. Methods: New avenues of polarimetric self-calibration opened up in Paper IV are explored by processing a simulated observation. To focus on the polarimetric issues, it is set up so as to sidestep some of the common complications of aperture synthesis, yet properly represent physical conditions. In addition to a representative collection of observing errors, the simulated instrument includes strongly varying Faraday rotation and antennas with unequal feeds. The selfcal procedure is described in detail, including aspects in which it differs from the scalar case, and its effects are demonstrated with a number of intermediate image results. Results: The simulation's outcome is in full agreement with the theory. The nonlinear matrix equations for instrumental parameters are readily solved by iteration; a convergence problem is easily remedied with a new ancillary algorithm. Instrumental effects are cleanly separated from source properties without reference to changes in parallactic rotation during the observation. Polarimetric images of high purity and dynamic range result. As theory predicts, polarimetric errors that are common to all sources inevitably remain; prior knowledge of the statistics of linear and circular polarization in a typical observed field can be applied to eliminate most of them. Conclusions: The paper conclusively demonstrates that matrix selfcal per se is a viable method that may foster substantial advancement in the art of radio polarimetry. For its application in real observations, a number of issues must be resolved that matrix selfcal has in common with its scalar sibling, such as the treatment of extended sources and the familiar sampling and aliasing problems.
The close analogy between scalar interferometry and its matrix-based generalisation suggests that one may apply well-developed methods of scalar interferometry. Marrying these methods to those of this paper will require a significant investment in new software. Two such developments are known to be foreseen or underway.
von Korff, Modest; Fink, Tobias; Sander, Thomas
2017-01-01
A new computational method is presented to extract disease patterns from heterogeneous and text-based data. For this study, 22 million PubMed records were mined for co-occurrences of gene name synonyms and disease MeSH terms. The resulting publication counts were transferred into a matrix Mdata. In this matrix, a disease was represented by a row and a gene by a column. Each field in the matrix represented the publication count for a co-occurring disease-gene pair. A second matrix with identical dimensions Mrelevance was derived from Mdata. To create Mrelevance the values from Mdata were normalized. The normalized values were multiplied by the column-wise calculated Gini coefficient. This multiplication resulted in a relevance estimator for every gene in relation to a disease. From Mrelevance the similarities between all row vectors were calculated. The resulting similarity matrix Srelevance related 5,000 diseases by the relevance estimators calculated for 15,000 genes. Three diseases were analyzed in detail for the validation of the disease patterns and the relevant genes. Cytoscape was used to visualize and to analyze Mrelevance and Srelevance together with the genes and diseases. Summarizing the results, it can be stated that the relevance estimator introduced here was able to detect valid disease patterns and to identify genes that encoded key proteins and potential targets for drug discovery projects.
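The relevance-estimator construction described above (column-wise normalization of the count matrix, weighted by each column's Gini coefficient, followed by row-vector similarities) can be sketched on a toy matrix. The counts below are invented stand-ins for the PubMed co-occurrence data, and cosine similarity is assumed as the row-similarity measure.

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative 1-D array (0 = uniform, near 1 = concentrated)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    if cum[-1] == 0:
        return 0.0
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

# Toy publication-count matrix Mdata: rows = diseases, columns = genes (invented).
Mdata = np.array([[40.0, 2.0, 0.0],
                  [38.0, 1.0, 1.0],
                  [ 0.0, 0.0, 25.0]])

# Normalize each column, then weight by its Gini coefficient: genes whose mentions
# concentrate on few diseases (high Gini) become stronger relevance markers.
col_gini = np.array([gini(col) for col in Mdata.T])
Mrelevance = (Mdata / Mdata.sum(axis=0)) * col_gini

# Disease-disease similarity Srelevance: cosine similarity between relevance rows.
unit = Mrelevance / np.linalg.norm(Mrelevance, axis=1, keepdims=True)
Srelevance = unit @ unit.T
print(np.round(Srelevance, 2))
```

Diseases 0 and 1 share a gene profile and come out similar, while disease 2, linked to a different gene, does not, which is the pattern-detection behavior the study exploits at the scale of 5,000 diseases and 15,000 genes.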
A path-oriented matrix-based knowledge representation system
NASA Technical Reports Server (NTRS)
Feyock, Stefan; Karamouzis, Stamos T.
1993-01-01
Experience has shown that designing a good representation is often the key to turning hard problems into simple ones. Most AI (Artificial Intelligence) search/representation techniques are oriented toward an infinite domain of objects and arbitrary relations among them. In reality much of what needs to be represented in AI can be expressed using a finite domain and unary or binary predicates. Well-known vector- and matrix-based representations can efficiently represent finite domains and unary/binary predicates, and allow effective extraction of path information by generalized transitive closure/path matrix computations. In order to avoid space limitations a set of abstract sparse matrix data types was developed along with a set of operations on them. This representation forms the basis of an intelligent information system for representing and manipulating relational data.
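The core representation described above (a finite domain, a binary relation as a Boolean matrix, and path extraction by generalized transitive closure) can be sketched with a dense matrix; the abstract's sparse data types change the storage, not the algebra. The domain and relation below are invented examples.

```python
import numpy as np

# Finite domain of objects and a binary relation stored as a Boolean adjacency matrix.
objects = ["valve", "pump", "sensor", "controller"]
feeds = np.zeros((4, 4), dtype=bool)
feeds[0, 1] = feeds[1, 2] = feeds[2, 3] = True   # valve->pump->sensor->controller

def transitive_closure(adj):
    """Path matrix: closure[i, j] is True iff a directed path leads from i to j."""
    paths = adj.copy()
    for _ in range(len(adj)):
        # Boolean "matrix product": a two-step path exists via some intermediate node.
        step = (paths.astype(np.uint8) @ paths.astype(np.uint8)) > 0
        new = paths | step
        if (new == paths).all():
            break
        paths = new
    return paths

closure = transitive_closure(feeds)
print(closure[0, 3])   # valve reaches controller via pump and sensor
```

Swapping the dense array for a sparse Boolean matrix type with the same `|` and `@` operations gives the space behavior the abstract is after.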
Matrix effect and recovery terminology issues in regulated drug bioanalysis.
Huang, Yong; Shi, Robert; Gee, Winnie; Bonderud, Richard
2012-02-01
Understanding the meaning of the terms used in the bioanalytical method validation guidance is essential for practitioners to implement best practice. However, terms that have several meanings or that have different interpretations exist within bioanalysis, and this may give rise to differing practices. In this perspective we discuss an important but often confusing term - 'matrix effect (ME)' - in regulated drug bioanalysis. The ME can be interpreted as either the ionization change or the measurement bias of the method caused by the nonanalyte matrix. The ME definition dilemma makes its evaluation challenging. The matrix factor is currently used as a standard method for evaluation of ionization changes caused by the matrix in MS-based methods. Standard additions to pre-extraction samples have been suggested to evaluate the overall effects of a matrix from different sources on the analytical system, because it covers ionization variation and extraction recovery variation. We also provide our personal views on the term 'recovery'.
Jelsch, C
2001-09-01
The normal matrix in the least-squares refinement of macromolecules is very sparse when the resolution reaches atomic and subatomic levels. The elements of the normal matrix, related to coordinates, thermal motion and charge-density parameters, have a global tendency to decrease rapidly with the interatomic distance between the atoms concerned. For instance, in the case of the protein crambin at 0.54 Å resolution, the elements are reduced by two orders of magnitude for distances above 1.5 Å. The neglect a priori of most of the normal-matrix elements according to a distance criterion represents an approximation in the refinement of macromolecules, which is particularly valid at very high resolution. The analytical expressions of the normal-matrix elements, which have been derived for the coordinates and the thermal parameters, show that the degree of matrix sparsity increases with the diffraction resolution and the size of the asymmetric unit.
Panuwet, Parinya; Hunter, Ronald E.; D’Souza, Priya E.; Chen, Xianyu; Radford, Samantha A.; Cohen, Jordan R.; Marder, M. Elizabeth; Kartavenka, Kostya; Ryan, P. Barry; Barr, Dana Boyd
2015-01-01
The ability to quantify levels of target analytes in biological samples accurately and precisely, in biomonitoring, involves the use of highly sensitive and selective instrumentation such as tandem mass spectrometers and a thorough understanding of highly variable matrix effects. Typically, matrix effects are caused by co-eluting matrix components that alter the ionization of target analytes as well as the chromatographic response of target analytes, leading to reduced or increased sensitivity of the analysis. Thus, before the desired accuracy and precision standards of laboratory data are achieved, these effects must be characterized and controlled. Here we present our review and observations of matrix effects encountered during the validation and implementation of tandem mass spectrometry-based analytical methods. We also provide systematic, comprehensive laboratory strategies needed to control challenges posed by matrix effects in order to ensure delivery of the most accurate data for biomonitoring studies assessing exposure to environmental toxicants. PMID:25562585
Algorithms for Solvents and Spectral Factors of Matrix Polynomials
1981-01-01
Leang S. Shieh, Yih T. Tsay and Norman P. Coleman. A generalized Newton method, based on the contracted gradient of a matrix polynomial, is derived for solving the right (left) solvents and spectral factors of matrix polynomials. Two methods of selecting initial estimates for rapid convergence of the newly developed numerical method are proposed. Also, new algorithms for solving complete sets of the right…
Matrix isolation studies of hydrogen bonding - An historical perspective
NASA Astrophysics Data System (ADS)
Barnes, Austin J.
2018-07-01
An historical introduction sets matrix isolation in perspective with other spectroscopic techniques for studying hydrogen-bonded complexes. This is followed by detailed accounts of various aspects of hydrogen-bonded complexes that have been studied using matrix isolation spectroscopy: matrix effects (stabilisation of complexes); strongly hydrogen-bonded molecular complexes (the vibrational correlation diagram); anomalous spectra (the Ratajczak-Yaremko model); metastable complexes; and C-H hydrogen bonding and blue-shifting hydrogen bonds.
Fong, Jiunn N C; Yildiz, Fitnat H
2015-04-01
Proteinaceous components of the biofilm matrix include secreted extracellular proteins, cell surface adhesins, and protein subunits of cell appendages such as flagella and pili. Biofilm matrix proteins play diverse roles in biofilm formation and dissolution. They are involved in attaching cells to surfaces, stabilizing the biofilm matrix via interactions with exopolysaccharide and nucleic acid components, developing three-dimensional biofilm architectures, and dissolving biofilm matrix via enzymatic degradation of polysaccharides, proteins, and nucleic acids. In this article, we will review functions of matrix proteins in a selected set of microorganisms, studies of the matrix proteomes of Vibrio cholerae and Pseudomonas aeruginosa, and roles of outer membrane vesicles and of nucleoid-binding proteins in biofilm formation.
Nurse staffing levels and outcomes - mining the UK national data sets for insight.
Leary, Alison; Tomai, Barbara; Swift, Adrian; Woodward, Andrew; Hurst, Keith
2017-04-18
Purpose: Despite the generation of mass data by the nursing workforce, determining the impact of the contribution to patient safety remains challenging. Several cross-sectional studies have indicated a relationship between staffing and safety. The purpose of this paper is to uncover possible associations and explore if a deeper understanding of relationships between staffing and other factors such as safety could be revealed within routinely collected national data sets. Design/methodology/approach: Two longitudinal routinely collected data sets consisting of 30 years of UK nurse staffing data and seven years of National Health Service (NHS) benchmark data such as survey results, safety and other indicators were used. A correlation matrix was built and a linear correlation operation was applied (Pearson product-moment correlation coefficient). Findings: A number of associations were revealed within both the UK staffing data set and the NHS benchmarking data set. However, the challenges of using these data sets soon became apparent. Practical implications: Staff time and effort are required to collect these data. The limitations of these data sets include inconsistent data collection and quality. The mode of data collection and the itemset collected should be reviewed to generate a data set with robust clinical application. Originality/value: This paper revealed that relationships are likely to be complex and non-linear; however, the main contribution of the paper is the identification of the limitations of routinely collected data. Much time and effort is expended in collecting this data; however, its validity, usefulness and method of routine national data collection appear to require re-examination.
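The correlation-matrix step described in the abstract amounts to a Pearson product-moment matrix over the collected indicators. A minimal sketch, with fabricated stand-in indicators (one association built in, one absent), might look like this:

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-ins for routinely collected monthly indicators (all values invented):
n = 84
staffing = rng.normal(5.0, 0.5, n)                    # nurses per occupied bed
falls = 8.0 - 0.9 * staffing + rng.normal(0, 0.4, n)  # built-in negative association
satisfaction = rng.normal(70, 5, n)                   # unrelated by construction

data = np.vstack([staffing, falls, satisfaction])
corr = np.corrcoef(data)   # Pearson product-moment correlation matrix
print(np.round(corr, 2))
```

The off-diagonal entries are what such a study would scan for associations; as the paper notes, a linear coefficient like this can miss the complex, non-linear relationships likely present in real staffing data.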
Comparative test on several forms of background error covariance in 3DVar
NASA Astrophysics Data System (ADS)
Shao, Aimei
2013-04-01
The background error covariance matrix (hereinafter the B matrix) plays an important role in the three-dimensional variational (3DVar) data assimilation method. However, it is difficult to obtain the B matrix accurately because the true atmospheric state is unknown. Therefore, several methods have been developed to estimate it (e.g. the NMC method, the innovation analysis method, recursive filters, and ensemble methods such as the EnKF). Prior to further development and application of these methods, the performance in 3DVar of the B matrices they produce is worth studying and evaluating. For this reason, NCEP reanalysis and forecast data are used to test the effectiveness of several B matrices with the VAF method (Huang, 1999). Here the NCEP analysis is treated as the truth, so the forecast error is known. Data from 2006 to 2007 are used as the samples to estimate the B matrix, and data from 2008 are used to verify the assimilation effects. The 48-h and 24-h forecasts valid at the same time are used to estimate the B matrix with the NMC method. The B matrix can be represented by a correlation part (a non-diagonal matrix) and a variance part (a diagonal matrix of variances). In numerous 3DVar systems, a Gaussian filter function is used as an approximation to represent the variation of correlation coefficients with distance.
On this basis, the following forms of the B matrix are designed and tested with VAF in comparative experiments: (1) the error variance and characteristic lengths are fixed, set to their mean values averaged over the analysis domain; (2) as in (1), but the mean characteristic lengths are reduced to 50% of the original for height and 60% for temperature; (3) as in (2), but the error variance, calculated directly from the historical data, is space-dependent; (4) the error variance and characteristic lengths are all calculated directly from the historical data; (5) the B matrix is estimated directly from the historical data; (6) as in (5), but with a localization step; (7) the B matrix is estimated by the NMC method but the error variance is reduced by a factor of 1.7 so that its value is close to that calculated from the true forecast-error samples; (8) as in (7), but with the localization of (6). Experimental results with the different B matrices show that, for the Gaussian-type B matrix, the characteristic lengths calculated from the true error samples do not yield good analysis results, whereas reduced characteristic lengths (about half of the original) do. If the B matrix estimated directly from the historical data is used in 3DVar, the assimilation does not reach its best; better results are obtained with reduced characteristic lengths and localization. Even so, this approach has no obvious advantage over a Gaussian-type B matrix with the optimal characteristic length. This implies that the Gaussian-type B matrix, widely used in operational 3DVar systems, can produce a good analysis with appropriate characteristic lengths; the crucial problem is how to determine them.
(This work is supported by the National Natural Science Foundation of China (41275102, 40875063), and the Fundamental Research Funds for the Central Universities (lzujbky-2010-9) )
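The Gaussian-type B matrix at the center of these experiments, split into its variance and correlation parts, can be sketched on a 1-D grid. All numbers below are illustrative; the halved characteristic length mimics the reduction that the experiments found beneficial.

```python
import numpy as np

# Gaussian-type background-error covariance on a 1-D grid:
# B = D^(1/2) C D^(1/2), with C a Gaussian correlation function of separation.
n = 50
dx = 100.0                 # grid spacing [km] (illustrative)
L = 300.0                  # characteristic length [km] (the tunable quantity)
sigma = np.full(n, 1.5)    # background-error std dev (could be space-dependent)

x = np.arange(n) * dx
r = np.abs(x[:, None] - x[None, :])          # pairwise separations
C = np.exp(-0.5 * (r / L) ** 2)              # correlation part (non-diagonal)
B = np.diag(sigma) @ C @ np.diag(sigma)      # variance part times correlation part

# Halving L sharpens the correlations, i.e. the "reduced characteristic length"
# configuration of experiments (2) and onward.
C_half = np.exp(-0.5 * (r / (L / 2)) ** 2)
print(C[0, 3], C_half[0, 3])
```

Choosing L is exactly the open problem the abstract ends on: too long spreads observation increments too far, too short wastes them.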
Aguado, Brian A.; Caffe, Jordan R.; Nanavati, Dhaval; Rao, Shreyas S.; Bushnell, Grace G.; Azarin, Samira M.; Shea, Lonnie D.
2016-01-01
Metastatic tumor cells colonize the pre-metastatic niche, which is a complex microenvironment consisting partially of extracellular matrix (ECM) proteins. We sought to identify and validate novel contributors to tumor cell colonization using ECM coated poly(ε-caprolactone) (PCL) scaffolds as mimics of the pre-metastatic niche. Utilizing orthotopic breast cancer mouse models, fibronectin and collagen IV-coated scaffolds implanted in the subcutaneous space captured colonizing tumor cells, showing a greater than 2-fold increase in tumor cell accumulation at the implant site compared to uncoated scaffolds. As a strategy to identify additional ECM colonization contributors, decellularized matrix (DCM) from lungs and livers containing metastatic tumors were characterized. In vitro, metastatic cell adhesion was increased on DCM coatings from diseased organs relative to healthy DCM. Furthermore, in vivo implantations of diseased DCM-coated scaffolds had increased tumor cell colonization relative to healthy DCM coatings. Mass-spectrometry proteomics was performed on healthy and diseased DCM to identify candidates associated with colonization. Myeloperoxidase was identified as abundantly present in diseased organs and validated as a contributor to colonization using myeloperoxidase-coated scaffold implants. This work identified novel ECM proteins associated with colonization using decellularization and proteomics techniques and validated candidates using a scaffold to mimic the pre-metastatic niche. PMID:26844426
Knapen, Lotte M; Beer, Yvo de; Brüggemann, Roger J M; Stolk, Leo M; Vries, Frank de; Tjan-Heijnen, Vivianne C G; Erp, Nielka P van; Croes, Sander
2018-02-05
While the therapeutic drug monitoring (TDM) of everolimus has been routinely performed for over 10 years in solid organ transplantation medicine, in order to optimize the balance between effectiveness and toxicity, it is as yet uncommon in the treatment of malignancies. The aim of this study was to develop and validate a bioanalytical method to quantify everolimus in dried blood spots (DBS) to facilitate TDM in the oncology outpatient setting. The hematocrit effect on everolimus was investigated. A 7.5 mm disk from the central part of the DBS was punched, followed by the extraction of everolimus from the DBS by methanol/acetonitrile (80/20%) spiked with deuterium-labelled everolimus as internal standard. Subsequently, everolimus was separated and analyzed using ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS). The UPLC-MS/MS method was validated according to the European Medicines Agency (EMA) guideline. Everolimus concentrations could be quantified over the range of 3-75 μg/L. The intra- and inter-assay precision and accuracy of the method were shown to be acceptable (coefficient of variation ≤10.7% and relative error ≤4.4%, respectively). The matrix effects appeared to be influenced by the hematocrit effect. The hematocrit effect was tested over a range of 0.20-0.50 L/L; accuracy and precision were satisfactory at hematocrit values ≥0.25 L/L. However, at 0.20 L/L hematocrit in combination with high everolimus concentrations of 20 and 40 μg/L, the precision was adequate (≤7.4%) but the accuracy was >15% of the nominal concentration. Everolimus was stable in DBS for at least 80 days at 2-8 °C. Given these results, the everolimus DBS method has been successfully developed and validated. Special attention is necessary for cancer patients with both a 0.20 L/L hematocrit and everolimus concentrations ≥20 μg/L. A clinical validation of the use of everolimus DBS in cancer patients is currently being undertaken.
Copyright © 2017 Elsevier B.V. All rights reserved.
Pohl, Rüdiger F; Michalkiewicz, Martha; Erdfelder, Edgar; Hilbig, Benjamin E
2017-07-01
According to the recognition-heuristic theory, decision makers solve paired comparisons in which one object is recognized and the other not by recognition alone, inferring that recognized objects have higher criterion values than unrecognized ones. However, the success, and thus usefulness, of this heuristic depends on the validity of recognition as a cue, and adaptive decision making, in turn, requires that decision makers are sensitive to it. To this end, decision makers could base their evaluation of the recognition validity either on the selected set of objects (the set's recognition validity), or on the underlying domain from which the objects were drawn (the domain's recognition validity). In two experiments, we manipulated the recognition validity both in the selected set of objects and between domains from which the sets were drawn. The results clearly show that use of the recognition heuristic depends on the domain's recognition validity, not on the set's recognition validity. In other words, participants treat all sets as roughly representative of the underlying domain and adjust their decision strategy adaptively (only) with respect to the more general environment rather than the specific items they are faced with.
A Construct Validity Study of Clinical Competence: A Multitrait Multimethod Matrix Approach
ERIC Educational Resources Information Center
Baig, Lubna; Violato, Claudio; Crutcher, Rodney
2010-01-01
Introduction: The purpose of the study was to adduce evidence for estimating the construct validity of clinical competence measured through assessment instruments used for high-stakes examinations. Methods: Thirty-nine international physicians (mean age = 41 ± 6.5 y) participated in a high-stakes examination and 3-month supervised clinical practice…
The Construct Validation of Tests of Communicative Competence.
ERIC Educational Resources Information Center
Palmer, Adrian S., Ed.; And Others
This collection, comprising the proceedings of a colloquium at TESOL 1979, includes the following papers: (1) "Classification of Oral Proficiency Tests," by H. Madsen and R. Jones; (2) "A Theoretical Framework for Communicative Competence," by M. Canale and M. Swain; (3) "Beyond Faith and Face Validity: The Multitrait-Multimethod Matrix and the…
Trivedi, Hari; Mesterhazy, Joseph; Laguna, Benjamin; Vu, Thienkhai; Sohn, Jae Ho
2018-04-01
Magnetic resonance imaging (MRI) protocoling can be time- and resource-intensive, and protocols can often be suboptimal depending on the expertise or preferences of the protocoling radiologist. Providing a best-practice recommendation for an MRI protocol has the potential to improve efficiency and decrease the likelihood of a suboptimal or erroneous study. The goal of this study was to develop and validate a machine learning-based natural language classifier that can automatically assign the use of intravenous contrast for musculoskeletal MRI protocols based upon the free-text clinical indication of the study, thereby improving the efficiency of the protocoling radiologist and potentially decreasing errors. We utilized a deep learning-based natural language classification system from IBM Watson, a question-answering supercomputer that gained fame after challenging the best human players on Jeopardy! in 2011. We compared this solution to a series of traditional machine learning-based natural language processing techniques that utilize a term-document frequency matrix. Each classifier was trained with 1240 MRI protocols plus their respective clinical indications and validated with a test set of 280. Ground truth for contrast assignment was obtained from the clinical record. For evaluation of inter-reader agreement, a blinded second radiologist analyzed all cases and determined contrast assignment based only on the free-text clinical indication. In the test set, Watson demonstrated an overall accuracy of 83.2% when compared to the original protocol. This was similar to the overall accuracy of 80.2% achieved by an ensemble of eight traditional machine learning algorithms based on a term-document matrix.
When compared to the second reader's contrast assignment, Watson achieved 88.6% agreement. When evaluating only the subset of cases where the original protocol and second reader were concordant (n = 251), agreement climbed further to 90.0%.
The classifier was relatively robust to spelling and grammatical errors, which were frequent. Implementation of this automated MR contrast determination system as a clinical decision support tool may save considerable time and effort of the radiologist while potentially decreasing error rates, and require no change in order entry or workflow.
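The "traditional" baseline the study compares against represents each free-text indication as counts in a term-document frequency matrix and trains a classifier on those counts. A minimal sketch of that idea, using a bag-of-words naive Bayes classifier on invented toy indications (the study's 1240-protocol training set and IBM Watson's pipeline are not reproduced here):

```python
# Minimal sketch of a term-frequency (bag-of-words) classifier for contrast
# assignment from a free-text indication. Training texts and labels are
# invented for illustration.
from collections import Counter, defaultdict
import math

train = [
    ("rule out osteomyelitis with possible abscess", "contrast"),
    ("evaluate soft tissue mass of the thigh", "contrast"),
    ("acl tear suspected after skiing injury", "no_contrast"),
    ("chronic knee pain evaluate meniscus", "no_contrast"),
]

# Build class-conditional term counts (a term-document frequency view of the corpus)
counts = defaultdict(Counter)
class_totals = Counter()
vocab = set()
for text, label in train:
    words = text.split()
    counts[label].update(words)
    class_totals[label] += 1
    vocab.update(words)

def predict(text):
    """Multinomial naive Bayes with add-one smoothing over the toy vocabulary."""
    best_label, best_score = None, -math.inf
    for label in class_totals:
        score = math.log(class_totals[label] / sum(class_totals.values()))
        total = sum(counts[label].values())
        for w in text.split():
            score += math.log((counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

A real system would add TF-IDF weighting, stemming, and an ensemble of classifiers, as the paper describes for its eight-algorithm baseline.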
Andersen, Wendy C; Casey, Christine R; Schneider, Marilyn J; Turnipseed, Sherri B
2015-01-01
Prior to conducting a collaborative study of the AOAC First Action 2012.25 LC-MS/MS analytical method for the determination of residues of three triphenylmethane dyes (malachite green, crystal violet, and brilliant green) and their metabolites (leucomalachite green and leucocrystal violet) in seafood, a single-laboratory validation of method 2012.25 was performed to expand the scope of the method to other seafood matrixes including salmon, catfish, tilapia, and shrimp. The validation included the analysis of fortified and incurred residues over multiple weeks to assess analyte stability in matrix at -80°C, a comparison of calibration methods over the range 0.25 to 4 μg/kg, a study of matrix effects on analyte quantification, and qualitative identification of targeted analytes. Method accuracy ranged from 88 to 112% with 13% RSD or less for samples fortified at 0.5, 1.0, and 2.0 μg/kg. Analyte identification and determination limits were determined by procedures recommended both by the U.S. Food and Drug Administration and the European Commission. Method detection limits and decision limits ranged from 0.05 to 0.24 μg/kg and 0.08 to 0.54 μg/kg, respectively. AOAC First Action Method 2012.25 with an extracted-matrix calibration curve and internal standard correction is suitable for the determination of triphenylmethane dyes and leuco metabolites in salmon, catfish, tilapia, and shrimp by LC-MS/MS at a residue determination level of 0.5 μg/kg or below.
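The quantification approach named above (an extracted-matrix calibration curve with internal-standard correction) amounts to fitting the analyte/internal-standard response ratio against concentration and inverting the line for unknowns. A sketch with hypothetical calibration points over the method's 0.25-4 μg/kg range:

```python
# Sketch of matrix-matched calibration with internal-standard correction.
# Calibration points below are hypothetical, not the study's data.
def fit_line(xs, ys):
    """Ordinary least-squares slope/intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical calibration: concentration (ug/kg) vs analyte/IS peak-area ratio
conc = [0.25, 0.5, 1.0, 2.0, 4.0]
ratio = [0.13, 0.25, 0.52, 1.01, 2.02]
slope, intercept = fit_line(conc, ratio)

def quantify(sample_ratio):
    """Invert the calibration line to get a concentration from a measured ratio."""
    return (sample_ratio - intercept) / slope
```

Taking the ratio to an isotope-labelled internal standard before fitting is what compensates for the per-matrix extraction and ionization losses the study validates.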
Three Interpretations of the Matrix Equation Ax = b
ERIC Educational Resources Information Center
Larson, Christine; Zandieh, Michelle
2013-01-01
Many of the central ideas in an introductory undergraduate linear algebra course are closely tied to a set of interpretations of the matrix equation Ax = b (A is a matrix, x and b are vectors): linear combination interpretations, systems interpretations, and transformation interpretations. We consider graphic and symbolic representations for each,…
Evidence-Based Practice: A Matrix for Predicting Phonological Generalization
ERIC Educational Resources Information Center
Gierut, Judith A.; Hulse, Lauren E.
2010-01-01
This paper describes a matrix for clinical use in the selection of phonological treatment targets to induce generalization, and in the identification of probe sounds to monitor during the course of intervention. The matrix appeals to a set of factors that have been shown to promote phonological generalization in the research literature, including…
Sparse Gaussian elimination with controlled fill-in on a shared memory multiprocessor
NASA Technical Reports Server (NTRS)
Alaghband, Gita; Jordan, Harry F.
1989-01-01
It is shown that in sparse matrices arising from electronic circuits, it is possible to do computations on many diagonal elements simultaneously. A technique for obtaining an ordered compatible set directly from the ordered incompatible table is given. The ordering is based on the Markowitz number of the pivot candidates. This technique generates a set of compatible pivots with the property of generating few fills. A novel heuristic algorithm is presented that combines the idea of an order-compatible set with a limited binary tree search to generate several sets of compatible pivots in linear time. An elimination set for reducing the matrix is generated and selected on the basis of a minimum Markowitz sum number. The parallel pivoting technique presented is a stepwise algorithm and can be applied to any submatrix of the original matrix. Thus, it is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds. Parameters are suggested to obtain a balance between parallelism and fill-ins. Results of applying the proposed algorithms on several large application matrices using the HEP multiprocessor (Kowalik, 1985) are presented and analyzed.
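The pivot-selection idea above can be illustrated on a toy sparsity pattern: candidates are ranked by Markowitz number, and two diagonal pivots are compatible (eliminable in parallel) when neither off-diagonal entry linking them is nonzero. This is only a sketch of the ordering-plus-compatibility idea, not the paper's limited binary-tree search:

```python
# Sketch of Markowitz-ordered compatible pivot selection on a toy sparsity
# pattern. Pivots i and j are "compatible" when a[i][j] == a[j][i] == 0, so
# eliminating one cannot update the other's pivot row/column.
pattern = [  # 1 = structural nonzero
    [1, 1, 0, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 0, 0, 1, 0],
    [0, 0, 1, 0, 1],
]
n = len(pattern)
row_nnz = [sum(row) for row in pattern]
col_nnz = [sum(pattern[i][j] for i in range(n)) for j in range(n)]

# Diagonal candidates ordered by Markowitz number: fewest expected fills first
order = sorted(range(n), key=lambda i: (row_nnz[i] - 1) * (col_nnz[i] - 1))

compatible = []  # greedily grow one compatible set in candidate order
for i in order:
    if all(pattern[i][j] == 0 and pattern[j][i] == 0 for j in compatible):
        compatible.append(i)
```

In a real solver this selection is re-run dynamically on each reduced submatrix, as the abstract notes, rather than computed once up front.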
Evaluation of the Thermo Scientific™ SureTect™ Salmonella species Assay.
Cloke, Jonathan; Clark, Dorn; Radcliff, Roy; Leon-Velarde, Carlos; Larson, Nathan; Dave, Keron; Evans, Katharine; Crabtree, David; Hughes, Annette; Simpson, Helen; Holopainen, Jani; Wickstrand, Nina; Kauppinen, Mikko
2014-03-01
The Thermo Scientific™ SureTect™ Salmonella species Assay is a new real-time PCR assay for the detection of Salmonellae in food and environmental samples. This validation study was conducted using the AOAC Research Institute (RI) Performance Tested Methods℠ program to validate the SureTect Salmonella species Assay in comparison to the reference method detailed in International Organization for Standardization 6579:2002 in a variety of food matrixes, namely, raw ground beef, raw chicken breast, raw ground pork, fresh bagged lettuce, pork frankfurters, nonfat dried milk powder, cooked peeled shrimp, pasteurized liquid whole egg, ready-to-eat meal containing beef, and stainless steel surface samples. With the exception of liquid whole egg and fresh bagged lettuce, which were tested in-house, all matrixes were tested by Marshfield Food Safety, Marshfield, WI, on behalf of Thermo Fisher Scientific. In addition, three matrixes (pork frankfurters, lettuce, and stainless steel surface samples) were analyzed independently as part of the AOAC-RI-controlled laboratory study by the University of Guelph, Canada. No significant difference by probability of detection or McNemar's chi-squared statistical analysis was found between the candidate and reference methods for any of the food matrixes or environmental surface samples tested during the validation study. Inclusivity and exclusivity testing was conducted with 117 and 36 isolates, respectively, which demonstrated that the SureTect Salmonella species Assay was able to detect all the major groups of Salmonella enterica subspecies enterica (e.g., Typhimurium), the less common subspecies of S. enterica (e.g., arizonae), and the rarely encountered S. bongori. None of the exclusivity isolates analyzed were detected by the SureTect Salmonella species Assay.
Ruggedness testing was conducted to evaluate the performance of the assay with specific method deviations outside of the recommended parameters open to variation (enrichment time and temperature, and lysis temperature), which demonstrated that the assay gave reliable performance. Accelerated stability testing was additionally conducted, validating the assay shelf life.
Introducing Explorer of Taxon Concepts with a case study on spider measurement matrix building.
Cui, Hong; Xu, Dongfang; Chong, Steven S; Ramirez, Martin; Rodenhausen, Thomas; Macklin, James A; Ludäscher, Bertram; Morris, Robert A; Soto, Eduardo M; Koch, Nicolás Mongiardino
2016-11-17
Taxonomic descriptions are traditionally composed in natural language and published in a format that cannot be directly used by computers. The Exploring Taxon Concepts (ETC) project has been developing a set of web-based software tools that convert morphological descriptions published in telegraphic style into character data that can be reused and repurposed. This paper introduces the first semi-automated pipeline, to our knowledge, that converts morphological descriptions into taxon-character matrices to support systematics and evolutionary biology research. We then demonstrate and evaluate the use of the ETC Input Creation - Text Capture - Matrix Generation pipeline to generate body part measurement matrices from a set of 188 spider morphological descriptions and report the findings. From the given set of spider taxonomic publications, two versions of input (original and normalized) were generated and used by the ETC Text Capture and ETC Matrix Generation tools. The tools produced two corresponding spider body part measurement matrices, and the matrix from the normalized input was found to be much more similar to a gold standard matrix hand-curated by the scientist co-authors. The lower performance on the original input was attributed to special conventions used in the original descriptions (e.g., the omission of measurement units). The results show that simple normalization of the description text greatly increased the quality of the machine-generated matrix and reduced edit effort. The machine-generated matrix also helped identify issues in the gold standard matrix. ETC Text Capture and ETC Matrix Generation are low-barrier and effective tools for extracting measurement values from spider taxonomic descriptions and are more effective when the descriptions are self-contained.
Special conventions that make the description text less self-contained challenge automated extraction of data from biodiversity descriptions and hinder the automated reuse of the published knowledge. The tools will be updated to support new requirements revealed in this case study.
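The core text-to-matrix step the ETC pipeline performs can be sketched as pattern extraction from a telegraphic description into a character-matrix row. The description and character names below are invented, and the real pipeline handles far more variation (including the omitted units the paper discusses):

```python
# Sketch: extracting body-part measurements from a telegraphic description
# into one row of a taxon-character matrix. Description text is invented.
import re

description = "Carapace length 2.3 mm. Carapace width 1.8 mm. Femur I length 1.1 mm."

# part (optionally with a Roman-numeral leg index) + dimension + value + unit
pat = re.compile(r"([A-Z][a-z]+(?: [IVX]+)?) (length|width) ([\d.]+) (mm)")
row = {f"{part} {dim} ({unit})": float(val)
       for part, dim, val, unit in pat.findall(description)}
```

Descriptions that are "self-contained" in the paper's sense (explicit part names and units on every measurement) are exactly the ones a pattern like this can capture reliably.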
NASA Astrophysics Data System (ADS)
Grujicic, Mica; Galgalikar, R.; Snipes, J. S.; Ramaswami, S.
2016-05-01
Material constitutive models for creep deformation and creep rupture of the SiC/SiC ceramic-matrix composites (CMCs) under general three-dimensional stress states have been developed and parameterized using one set of available experimental data for the effect of stress magnitude and temperature on the time-dependent creep deformation and rupture. To validate the models developed, another set of available experimental data was utilized for each model. The models were subsequently implemented in a user-material subroutine and coupled with a commercial finite element package in order to enable computational analysis of the performance and durability of CMC components used in high-temperature high-stress applications, such as those encountered in gas-turbine engines. In the last portion of the work, the problem of creep-controlled contact of a gas-turbine engine blade with the shroud is investigated computationally. It is assumed that the blade is made of the SiC/SiC CMC, and that the creep behavior of this material can be accounted for using the material constitutive models developed in the present work. The results clearly show that the blade-tip/shroud clearance decreases and ultimately becomes zero (the condition which must be avoided) as a function of time. In addition, the analysis revealed that if the blade is trimmed at its tip to enable additional creep deformation before blade-tip/shroud contact, creep-rupture conditions can develop in the region of the blade adjacent to its attachment to the high-rotational-speed hub.
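The blade-tip/shroud clearance problem described above can be illustrated numerically with a simple power-law (Norton) creep model, strain rate = A·σⁿ, as a stand-in for the paper's constitutive model. All parameter values below are invented for illustration and are not SiC/SiC material data:

```python
# Hedged numeric sketch: creep elongation of a blade closing the tip/shroud
# clearance under constant stress, using Norton power-law creep. Parameters
# are hypothetical, not the paper's SiC/SiC model.
def clearance_history(A, n, sigma, blade_len, clearance0, dt, steps):
    """Integrate creep strain and return the per-step tip clearance (floored at 0)."""
    strain = 0.0
    history = []
    for _ in range(steps):
        strain += A * sigma ** n * dt          # Norton creep rate, constant stress
        history.append(max(clearance0 - blade_len * strain, 0.0))
    return history

# Hypothetical values: 100 MPa stress, 100 mm blade, 0.5 mm initial clearance
hist = clearance_history(A=1e-10, n=2.0, sigma=100.0, blade_len=100.0,
                         clearance0=0.5, dt=1.0, steps=10000)
```

The monotonically shrinking clearance, reaching zero partway through the simulation, is the qualitative behavior the paper's finite element analysis predicts (and which trimming the blade tip only postpones at the cost of more accumulated creep).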
Development and Validation of a Shear Punch Test Fixture
2013-08-01
composites (MMC) manufactured by friction stir processing (FSP) that are being developed as part of a Technology Investment Fund (TIF) project, as the...leading a team of government departments and academics to develop a friction stir processing (FSP) based procedure to create metal matrix composite... friction stir process to fabricate surface metal matrix composites in aluminum alloys for potential application in light armoured vehicles. The
Estimation of Dynamic Sparse Connectivity Patterns From Resting State fMRI.
Cai, Biao; Zille, Pascal; Stephen, Julia M; Wilson, Tony W; Calhoun, Vince D; Wang, Yu Ping
2018-05-01
Functional connectivity (FC) estimated from functional magnetic resonance imaging (fMRI) time series, especially during resting-state periods, provides a powerful tool to assess human brain functional architecture in health, disease, and developmental states. Recently, the focus of connectivity analysis has shifted toward the subnetworks of the brain, which reveal co-activating patterns over time. Most prior works produced a dense set of high-dimensional vectors, which are hard to interpret. In addition, their estimates were largely based on an implicit assumption of spatial and temporal stationarity throughout the fMRI scanning session. In this paper, we propose an approach called dynamic sparse connectivity patterns (dSCPs), which takes advantage of both matrix factorization and time-varying fMRI time series to improve the estimation power of FC. The feasibility of analyzing dynamic FC with our model is first validated through simulated experiments. Then, we use our framework to measure the difference between young adults and children with a real fMRI data set from the Philadelphia Neurodevelopmental Cohort (PNC). The results from the PNC data set showed significant FC differences between young adults and children in four different states. For instance, young adults had reduced connectivity between the default mode network and other subnetworks, as well as hyperconnectivity within the visual system in states 1 and 3, and hypoconnectivity in state 2. Meanwhile, they exhibited temporal correlation patterns that changed over time within functional subnetworks. In addition, the dSCPs model indicated that older people tend to spend more time within a relatively connected FC pattern. Overall, the proposed method provides a valid means to assess dynamic FC, which could facilitate the study of brain networks.
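The time-varying ingredient of dynamic FC models like the one above is usually a sliding-window correlation between regional time series; the dSCP model then factorizes the resulting window-wise connectivity into sparse patterns. A minimal sketch of the windowing step, with synthetic signals rather than fMRI data:

```python
# Sketch of sliding-window functional connectivity: windowed Pearson
# correlations between two synthetic "BOLD" series. The sparse matrix
# factorization the dSCP model adds on top is not reproduced here.
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def windowed_fc(ts1, ts2, width, step):
    """Correlation inside each sliding window -> one dynamic FC trace."""
    return [pearson(ts1[s:s + width], ts2[s:s + width])
            for s in range(0, len(ts1) - width + 1, step)]

# Synthetic series: coupled in the first half of the scan, decoupled after
t = list(range(100))
a = [math.sin(0.3 * i) for i in t]
b = [math.sin(0.3 * i) if i < 50 else math.cos(0.9 * i) for i in t]
trace = windowed_fc(a, b, width=30, step=10)
```

The trace starts near 1 while the regions are coupled and drops once they decouple, which is the non-stationarity a static FC estimate would average away.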
Proteomic study of benign and malignant pleural effusion.
Li, Hongqing; Tang, Zhonghao; Zhu, Huili; Ge, Haiyan; Cui, Shilei; Jiang, Weiping
2016-06-01
Lung adenocarcinoma frequently causes malignant pleural effusion, which is difficult to distinguish from benign pleural effusion. At present there is no biomarker with high sensitivity and specificity for malignant pleural effusion. This study used proteomics technology to acquire and analyze the protein profiles of benign and malignant pleural effusion, to seek useful protein biomarkers with diagnostic value, and to establish a diagnostic model. We chose the weak cationic-exchanger magnetic bead (WCX-MB) to purify peptides in the pleural effusion, used matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) to obtain peptide expression profiles from the benign and malignant pleural effusion samples, established and validated the diagnostic model through a genetic algorithm (GA), and finally identified the most promising protein biomarker. A GA diagnostic model was established with peaks at 3930.9 and 2942.8 m/z in the training set, which included 25 malignant and 26 benign pleural effusion samples, yielding both 100% sensitivity and 100% specificity. The accuracy of diagnostic prediction was validated in an independent testing set of 58 malignant and 34 benign pleural effusion samples. Blind evaluation was as follows: sensitivity 89.6%, specificity 88.2%, PPV 92.8%, NPV 83.3%, and accuracy 89.1% in the independent testing set. The most promising peptide biomarker was identified successfully: isoform 1 of caspase recruitment domain-containing protein 9 (CARD9), at 3930.9 m/z, was decreased in the malignant pleural effusion. This model is suitable for discriminating benign from malignant pleural effusion, and CARD9 can be used as a new peptide biomarker.
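A genetic algorithm of the kind the study uses searches over subsets of spectral peaks for the combination that best separates the classes. A compact, heavily simplified sketch with synthetic peak intensities (the study's MALDI-TOF profiles and its actual fitness function are not reproduced; the separation score below is an invented stand-in):

```python
# Minimal genetic-algorithm sketch for peak selection on synthetic "spectra".
# Six peaks per sample; only peaks 1 and 4 actually separate the two classes.
import random

random.seed(0)

def make_sample(label):
    peaks = [random.gauss(10, 1) for _ in range(6)]
    if label == "malignant":
        peaks[1] -= 4   # discriminative peaks shift in the malignant class
        peaks[4] += 4
    return peaks

train = [(make_sample(l), l) for l in ["benign", "malignant"] * 20]

def fitness(mask):
    """Class separation (|difference of class means|) summed over selected
    peaks, lightly penalized per selected peak."""
    sep = 0.0
    for j, use in enumerate(mask):
        if use:
            b = [s[j] for s, l in train if l == "benign"]
            m = [s[j] for s, l in train if l == "malignant"]
            sep += abs(sum(b) / len(b) - sum(m) / len(m))
    return sep - 0.5 * sum(mask)

def evolve(pop_size=20, generations=30):
    pop = [[random.randint(0, 1) for _ in range(6)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        keep = pop[: pop_size // 2]            # elitist selection
        children = []
        for _ in range(pop_size - len(keep)):
            a, b = random.sample(keep, 2)
            cut = random.randrange(1, 6)
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < 0.2:          # point mutation
                k = random.randrange(6)
                child[k] = 1 - child[k]
            children.append(child)
        pop = keep + children
    return max(pop, key=fitness)

best = evolve()
```

The GA reliably converges on masks containing the discriminative peaks, mirroring how the study's GA settled on the 3930.9 and 2942.8 m/z peaks.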
Seifert, Gabriel J; Seifert, Michael; Kulemann, Birte; Holzner, Philipp A; Glatz, Torben; Timme, Sylvia; Sick, Olivia; Höppner, Jens; Hopt, Ulrich T; Marjanovic, Goran
2014-01-01
This investigation focuses on the physiological characteristics of gene transcription of intestinal tissue following anastomosis formation. In eight rats, end-to-end ileo-ileal anastomoses were performed (n = 2/group). The healthy intestinal tissue resected for this operation was used as a control. On days 0, 2, 4 and 8, 10-mm perianastomotic segments were resected. Control and perianastomotic segments were examined with an Affymetrix microarray chip to assess changes in gene regulation. Microarray findings were validated using real-time PCR for selected genes. In addition to screening global gene expression, we identified genes intensely regulated during healing and also subjected our data sets to an overrepresentation analysis using the Gene Ontology (GO) and Kyoto Encyclopedia for Genes and Genomes (KEGG). Compared to the control group, we observed that the number of differentially regulated genes peaked on day 2 with a total of 2,238 genes, decreasing by day 4 to 1,687 genes and to 1,407 genes by day 8. PCR validation for matrix metalloproteinases-3 and -13 showed not only identical transcription patterns but also analogous regulation intensity. When setting the cutoff of upregulation at 10-fold to identify genes likely to be relevant, the total gene count was significantly lower with 55, 45 and 37 genes on days 2, 4 and 8, respectively. A total of 947 GO subcategories were significantly overrepresented during anastomotic healing. Furthermore, 23 overrepresented KEGG pathways were identified. This study is the first of its kind that focuses explicitly on gene transcription during intestinal anastomotic healing under standardized conditions. Our work sets a foundation for further studies toward a more profound understanding of the physiology of anastomotic healing.
Discovering semantic features in the literature: a foundation for building functional associations
Chagoyen, Monica; Carmona-Saez, Pedro; Shatkay, Hagit; Carazo, Jose M; Pascual-Montano, Alberto
2006-01-01
Background: Experimental techniques such as DNA microarray, serial analysis of gene expression (SAGE) and mass spectrometry proteomics, among others, are generating large amounts of data related to genes and proteins at different levels. As in any other experimental approach, it is necessary to analyze these data in the context of previously known information about the biological entities under study. The literature is a particularly valuable source of information for experiment validation and interpretation. Therefore, the development of automated text mining tools to assist in such interpretation is one of the main challenges in current bioinformatics research.
Results: We present a method to create literature profiles for large sets of genes or proteins based on common semantic features extracted from a corpus of relevant documents. These profiles can be used to establish pair-wise similarities among genes, utilized in gene/protein classification, or even combined with experimental measurements. Semantic features can be used by researchers to facilitate the understanding of the commonalities indicated by experimental results. Our approach is based on non-negative matrix factorization (NMF), a machine-learning algorithm for data analysis capable of identifying local patterns that characterize a subset of the data. The literature is thus used to establish putative relationships among subsets of genes or proteins and to provide coherent justification for this clustering into subsets. We demonstrate the utility of the method by applying it to two independent and vastly different sets of genes.
Conclusion: The presented method can create literature profiles from documents relevant to sets of genes. The representation of genes as additive linear combinations of semantic features allows for the exploration of functional associations as well as for clustering, suggesting a valuable methodology for the validation and interpretation of high-throughput experimental data. PMID:16438716
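The NMF at the heart of the method above factors a nonnegative term-document-style matrix V into V ≈ W·H, so each document (or gene profile) becomes an additive combination of a few "semantic feature" columns. A minimal sketch using the classic Lee-Seung multiplicative updates on a tiny synthetic matrix, not a real gene-literature corpus:

```python
# Minimal NMF sketch: factor a nonnegative matrix V (terms x documents) as
# V ~ W @ H via Lee-Seung multiplicative updates. The matrix is synthetic.
import numpy as np

rng = np.random.default_rng(0)

def nmf(V, k, iters=500, eps=1e-9):
    """Factor V (m x n, nonnegative) into W (m x k) and H (k x n)."""
    m, n = V.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update for H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for W
    return W, H

# Rank-2 toy corpus: rows 0-1 share one term pattern, rows 2-3 share another
V = np.array([[3.0, 2.0, 0.0, 0.0],
              [6.0, 4.0, 0.0, 0.0],
              [0.0, 0.0, 3.0, 2.0],
              [0.0, 0.0, 6.0, 4.0]])
W, H = nmf(V, k=2)
error = np.linalg.norm(V - W @ H)
```

The multiplicative form keeps W and H nonnegative throughout, which is what makes the learned features additive and interpretable in the way the paper exploits.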
Sherrit, Stewart; Masys, Tony J; Wiederick, Harvey D; Mukherjee, Binu K
2011-09-01
We present a procedure for determining the reduced piezoelectric, dielectric, and elastic coefficients for a C∞ material, including losses, from a single disk sample. Measurements have been made on a Navy III lead zirconate titanate (PZT) ceramic sample, and the reduced matrix of coefficients for this material is presented. In addition, we present the transform equations, in reduced matrix form, to other consistent material constant sets. We discuss the propagation of errors in going from one material data set to another and look at the limitations inherent in direct calculations of other useful coefficients from the data.
Binz, Tina M; Braun, Ueli; Baumgartner, Markus R; Kraemer, Thomas
2016-10-15
Hair cortisol levels are increasingly used as a measure of stress in humans and other mammals. Cortisol is an endogenous compound and is always present within the hair matrix. Therefore, a "cortisol-free hair matrix" is a critical point for any analytical method aiming to accurately quantify especially low cortisol levels. The aim of this project was to modify current methods used for hair cortisol analysis to more accurately determine low endogenous cortisol concentrations in hair. For that purpose, ¹³C₃-labeled cortisol, which is not naturally present in hair (above ¹³C natural abundance levels), was used for calibration and comparative validation applying cortisol versus ¹³C₃-labeled cortisol. Cortisol was extracted from 20 mg hair (standard sample amount) applying an optimized single-step extraction protocol. An LC-MS/MS method was developed for the quantitative analysis of cortisol using either cortisol or ¹³C₃-cortisol as calibrators and D7-cortisone as internal standard (IS). The two methods (cortisol/¹³C₃-labeled cortisol) were validated in a concentration range up to 500 pg/mg and showed good linearity for both analytes (cortisol: R² = 0.9995; ¹³C₃-cortisol: R² = 0.9992). Slight differences were observed for the limit of detection (LOD; 0.2 vs. 0.1 pg/mg) and the limit of quantification (LOQ; 1 vs. 0.5 pg/mg). Precision was good, with a maximum deviation of 8.8% and 10% for cortisol and ¹³C₃-cortisol, respectively. Accuracy and matrix effects were good for both analytes except for the low-cortisol quality control (QC). The low QC (2.5 pg/mg) showed matrix effects (126.5%, RSD 35.5%), and accuracy showed a deviation of 26% when spiking with cortisol. These effects are likely caused by the unknown amount of endogenous cortisol in the different hair samples used to determine validation parameters such as matrix effect, LOQ, and accuracy. No matrix effects were observed for the high QC (400 pg/mg) samples.
Recovery was good, with 92.7%/87.3% (RSD 9.9%/6.2%) for the low QC and 102.3%/82.1% (RSD 5.8%/11.4%) for the high QC. After successful validation, the applicability of the method was proven. The study shows that the method is especially useful for determining low endogenous cortisol concentrations such as those occurring in cow hair. Copyright © 2016 Elsevier B.V. All rights reserved.
Use of Taguchi design of experiments to optimize and increase robustness of preliminary designs
NASA Technical Reports Server (NTRS)
Carrasco, Hector R.
1992-01-01
The research performed this summer includes the completion of work begun last summer in support of the Air Launched Personnel Launch System parametric study, support for the development of the test matrices for the plume experiments in the Plume Model Investigation Team project, and aid in the conceptual design of a lunar habitat. After the conclusion of last year's Summer Program, the Systems Definition Branch continued the Air Launched Personnel Launch System (ALPLS) study by running three experiments defined by L27 orthogonal arrays. Although the data were evaluated during the academic year, the analysis of variance and the final project review were completed this summer. The Plume Model Investigation Team (PLUMMIT) was formed by the Engineering Directorate to develop a consensus position on plume impingement loads and to validate plume flowfield models. In order to obtain a large number of individual correlated data sets for model validation, a series of plume experiments was planned. A preliminary 'full factorial' test matrix indicated that 73,024 jet firings would be necessary to obtain all of the information requested. As this was approximately 100 times more firings than the scheduled use of Vacuum Chamber A would permit, considerable effort was needed to reduce the test matrix and optimize it with respect to the specific objectives of the program. Part of the First Lunar Outpost project deals with the Lunar Habitat. Requirements for the habitat include radiation protection, a safe haven for occasional solar flare storms, an airlock module, and consumables to support 34 extravehicular activities during a 45-day mission. The objective of the proposed work was to collaborate with the Habitat Team on the development and reusability of the Logistics Modules.
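The reason orthogonal arrays like the L27 used above shrink a test matrix is that they cover factor-level combinations in a balanced way, so main effects can still be estimated from far fewer runs than a full factorial. A small sketch with the standard L4(2³) array and invented response values:

```python
# Sketch: an L4(2^3) orthogonal array covers three 2-level factors in 4 runs
# (a full factorial needs 8) while keeping main effects estimable. The
# response values are hypothetical.
L4 = [  # standard L4 orthogonal array, factor levels coded 0/1
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]
response = [20.0, 24.0, 30.0, 34.0]  # hypothetical measurement per run

def main_effect(factor):
    """Mean response at level 1 minus mean response at level 0."""
    hi = [r for run, r in zip(L4, response) if run[factor] == 1]
    lo = [r for run, r in zip(L4, response) if run[factor] == 0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = [main_effect(f) for f in range(3)]
```

The same balancing at larger scale (an L27 handles up to 13 three-level factors in 27 runs) is what let the PLUMMIT test matrix be cut far below the 73,024-firing full factorial.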
Bodero, Marcia; Bovee, Toine F H; Wang, Si; Hoogenboom, Ron L A P; Klijnstra, Mirjam D; Portier, Liza; Hendriksen, Peter J M; Gerssen, Arjen
2018-02-01
The neuro-2a bioassay is considered one of the most promising cell-based in vitro bioassays for the broad screening of seafood products for the presence of marine biotoxins. The neuro-2a assay has been shown to detect a wide array of toxins, such as paralytic shellfish poisons (PSPs), ciguatoxins, and lipophilic marine biotoxins (LMBs). However, the neuro-2a assay is rarely used for routine testing of samples due to matrix effects that, for example, lead to false positives when testing for LMBs. As a result, there are only limited data on validation and evaluation of its performance on real samples. In the present study, the standard extraction procedure for LMBs was adjusted by introducing an additional clean-up step with n-hexane. Recovery losses due to this extra step were less than 10%. This wash step was a crucial addition to eliminate false-positive outcomes due to matrix effects. Next, the applicability of the assay was assessed by testing a broad range of shellfish samples contaminated with various LMBs, including diarrhetic shellfish toxins/poisons (DSPs). For comparison, the samples were also analysed by LC-MS/MS. Standards of all regulated LMBs were tested, including analogues of some of these toxins. The neuro-2a cells showed good sensitivity towards all compounds. Extracts of 87 samples, both blank and contaminated with various toxins, were tested. The neuro-2a outcomes were in line with those of LC-MS/MS analysis and support the applicability of this assay for the screening of samples for LMBs. However, for use in a daily routine setting, the test might be further improved, and we discuss several recommended modifications that should be considered before a full validation is carried out.
Ansari, Mohammad Azam; Khan, Haris Manzoor; Khan, Aijaz Ahmed; Cameotra, Swaranjit Singh; Saquib, Quaiser; Musarrat, Javed
2014-07-01
Clinical isolates (n = 55) of Pseudomonas aeruginosa were screened for extended-spectrum β-lactamase and metallo-β-lactamase activities and biofilm-forming capability. The aim of the study was to demonstrate the antibiofilm efficacy of gum arabic-capped silver nanoparticles (GA-AgNPs) against multi-drug resistant (MDR) biofilm-forming P. aeruginosa. The GA-AgNPs were characterized by UV spectroscopy, X-ray diffraction, and high-resolution transmission electron microscopy analysis. The isolates were screened for their biofilm-forming ability using the Congo red agar, tube method, and tissue culture plate assays. The biofilm-forming ability was further validated, and its inhibition by GA-AgNPs demonstrated, by performing scanning electron microscopy (SEM) and confocal laser scanning microscopy (CLSM). SEM analysis of GA-AgNPs-treated bacteria revealed severely deformed and damaged cells. Double fluorescent staining with propidium iodide and concanavalin A-fluorescein isothiocyanate concurrently detected the bacterial cells and the exopolysaccharide (EPS) matrix. The CLSM results exhibited GA-AgNPs concentration-dependent inhibition of bacterial growth and of the EPS matrix of the biofilm colonizers on the surface of plastic catheters. Treatment of catheters with GA-AgNPs at 50 µg ml(-1) resulted in 95% inhibition of bacterial colonization. This study elucidated the significance of GA-AgNPs, as next-generation antimicrobials, in protection against biofilm-mediated infections caused by MDR P. aeruginosa. It is suggested that application of GA-AgNPs as a surface-coating material for dispensing antibacterial attributes to surgical implants and implements could be a viable approach for controlling MDR pathogens after adequate validation in clinical settings. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Stubbe, Dirk; De Cremer, Koen; Piérard, Denis; Normand, Anne-Cécile; Piarroux, Renaud; Detandt, Monique; Hendrickx, Marijke
2014-01-01
The rates of infection with Fusarium molds are increasing, and a diverse number of Fusarium spp. belonging to different species complexes can cause infection. Conventional species identification in the clinical laboratory is time-consuming and prone to errors. We therefore evaluated whether matrix-assisted laser desorption ionization–time of flight mass spectrometry (MALDI-TOF MS) is a useful alternative. The 289 Fusarium strains from the Belgian Coordinated Collections of Microorganisms (BCCM)/Institute of Hygiene and Epidemiology Mycology (IHEM) culture collection with validated sequence-based identities and comprising 40 species were used in this study. An identification strategy was developed, applying a standardized MALDI-TOF MS assay and an in-house reference spectrum database. In vitro antifungal testing was performed to assess important differences in susceptibility between clinically relevant species/species complexes. We observed that no incorrect species complex identifications were made by MALDI-TOF MS, and 82.8% of the identifications were correct to the species level. This success rate was increased to 91% by lowering the cutoff for identification. Although the identification of the correct species complex member was not always guaranteed, antifungal susceptibility testing showed that discriminating between Fusarium species complexes can be important for treatment but is not necessarily required between members of a species complex. With this perspective, some Fusarium species complexes with closely related members can be considered as a whole, increasing the success rate of correct identifications to 97%. The application of our user-friendly MALDI-TOF MS identification approach resulted in a dramatic improvement in both time and accuracy compared to identification with the conventional method. A proof of principle of our MALDI-TOF MS approach in the clinical setting using recently isolated Fusarium strains demonstrated its validity. PMID:25411180
Corrected score estimation in the proportional hazards model with misclassified discrete covariates
Zucker, David M.; Spiegelman, Donna
2013-01-01
We consider Cox proportional hazards regression when the covariate vector includes error-prone discrete covariates along with error-free covariates, which may be discrete or continuous. The misclassification in the discrete error-prone covariates is allowed to be of any specified form. Building on the work of Nakamura and his colleagues, we present a corrected score method for this setting. The method can handle all three major study designs (internal validation design, external validation design, and replicate measures design), both functional and structural error models, and time-dependent covariates satisfying a certain ‘localized error’ condition. We derive the asymptotic properties of the method and indicate how to adjust the covariance matrix of the regression coefficient estimates to account for estimation of the misclassification matrix. We present the results of a finite-sample simulation study under Weibull survival with a single binary covariate having known misclassification rates. The performance of the method described here was similar to that of related methods we have examined in previous works. Specifically, our new estimator performed as well as or, in a few cases, better than the full Weibull maximum likelihood estimator. We also present simulation results for our method for the case where the misclassification probabilities are estimated from an external replicate measures study. Our method generally performed well in these simulations. The new estimator has a broader range of applicability than many other estimators proposed in the literature, including those described in our own earlier work, in that it can handle time-dependent covariates with an arbitrary misclassification structure. We illustrate the method on data from a study of the relationship between dietary calcium intake and distal colon cancer. PMID:18219700
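The corrected-score estimator itself is too involved for a short sketch, but the role a known misclassification matrix plays can be illustrated on a simpler task: recovering the true distribution of a binary covariate from its error-prone observations. The matrix and frequencies below are purely illustrative values, not data from the study.

```python
import numpy as np

# Hypothetical 2x2 misclassification matrix for a binary covariate:
# M[i, j] = P(observed = i | true = j); rows index the observed value,
# columns the true value. Columns sum to 1.
M = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# Observed marginal frequencies of the error-prone covariate.
observed = np.array([0.62, 0.38])

# Since observed = M @ true, the true distribution is recovered by
# solving the linear system (equivalently, applying M^{-1}).
true_freq = np.linalg.solve(M, observed)
print(true_freq)  # -> [0.6 0.4]
```

With estimated (rather than known) misclassification rates, the uncertainty in M must additionally be propagated into the covariance of the final estimates, which is what the covariance adjustment in the abstract refers to.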
Continuous-Time Bilinear System Identification
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan
2003-01-01
The objective of this paper is to describe a new method for identification of a continuous-time multi-input and multi-output bilinear system. The approach is to make judicious use of the linear-model properties of the bilinear system when subjected to a constant input. Two steps are required in the identification process. The first step is to use a set of pulse responses resulting from a constant input of one sample period to identify the state matrix, the output matrix, and the direct transmission matrix. The second step is to use another set of pulse responses with the same constant input over multiple sample periods to identify the input matrix and the coefficient matrices associated with the coupling terms between the state and the inputs. Numerical examples are given to illustrate the concept and the computational algorithm for the identification method.
NASA Astrophysics Data System (ADS)
Pezelier, Baptiste
2018-02-01
In this proceeding, we recall the notion of quantum integrable systems on a lattice and then introduce Sklyanin's Separation of Variables method. We sum up the main results for the transfer matrix spectral problem for the cyclic representations of the trigonometric 6-vertex reflection algebra associated to the Bazhanov-Stroganov Lax operator. These results apply as well to the spectral analysis of the lattice sine-Gordon model with open boundary conditions. The transfer matrix spectrum (both eigenvalues and eigenstates) is completely characterized in terms of the set of solutions to a discrete system of polynomial equations. We state an equivalent characterization as the set of solutions to a Baxter-like T-Q functional equation, allowing us to rewrite the transfer matrix eigenstates in an algebraic Bethe ansatz form.
A Revised Set of Dendroclimatic Reconstructions of Summer Drought over the Conterminous U.S.
NASA Astrophysics Data System (ADS)
Zhang, Z.; Mann, M. E.; Cook, E. R.
2002-12-01
We describe a revised set of dendroclimatic reconstructions of drought patterns over the conterminous U.S. back to 1700. These reconstructions are based on a set of 483 drought-sensitive tree ring chronologies available across the continental U.S. used previously by Cook et al [Cook, E.R., D.M. Meko, D.W. Stahle, and M.K. Cleaveland, Drought Reconstructions for the Continental United States, Journal of Climate, 12, 1145-1162, 1999]. In contrast with the "Point by Point" (PPR) local regression technique used by Cook et al (1999), the tree ring data were calibrated against the instrumental record of summer drought [June-August Palmer Drought Severity Index (PDSI)] based on application of the "Regularized Expectation Maximization" (RegEM) algorithm to relate proxy and instrumental data over a common (20th century) interval. This approach calibrates the proxy data set against the instrumental record by treating the reconstruction as initially missing data in the combined proxy/instrumental data matrix, and optimally estimating the mean and covariances of the combined data matrix through an iterative procedure which yields a reconstruction of the PDSI field with minimal error variance [Schneider, T., Analysis of Incomplete Climate Data: Estimation of Mean Values and Covariance Matrices and Imputation of Missing Values, Journal of Climate, 14, 853-871, 2001; Mann, M.E., Rutherford, S., Climate Reconstruction Using 'Pseudoproxies', Geophysical Research Letters, 29, 139-1-139-4, 2002; Rutherford, S., Mann, M.E., Delworth, T.L., Stouffer, R., The Performance of Covariance-Based Methods of Climate Field Reconstruction Under Stationary and Nonstationary Forcing, J. Climate, accepted, 2002].
As in Cook et al (1999), a screening procedure was first used to select an optimal subset of candidate tree-ring drought predictors, and the predictors (tree ring data) and predictand (instrumental PDSI) were pre-whitened prior to calibration (with serial correlation added back into the reconstruction at the end of the procedure). The PDSI field was separated into 8 relatively homogeneous regions of summer drought through a cluster analysis, and three distinct calibration schemes were investigated: (i) 'global' (i.e., entire conterminous U.S. domain) proxy data calibrated against 'global' PDSI; (ii) regional proxy data calibrated against regional PDSI, and (iii) global proxy data calibrated against regional PDSI. The greatest cross-validated skill was evident for case (iii), suggesting the existence of useful non-local information in the tree ring predictor set. The resulting reconstructions of drought were compared against the previous reconstructions of Cook et al (1999) back to 1700, with very similar results found for the domain mean and regional mean time series. Cross-validation results based on withheld late 19th/early 20th century instrumental data [and a regionally-limited extension of cross-validation results back to mid 19th century based on long available instrumental series] both suggest a modest improvement in reconstructive skill over the PPR approach. Differences at the regional scale are evident for particular years and for decadal drought episodes. At the continental scale, the 1930s "Dust Bowl" remains the most severe drought event since 1700 within the context of the estimated uncertainties, but more severe episodes may have occurred at regional scales in past centuries.
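The core RegEM idea — treat the reconstruction as missing entries in the combined proxy/instrumental matrix and iteratively re-estimate means and covariances — can be sketched in a much-simplified imputation loop. This toy omits the ridge regularization that gives RegEM its name, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data matrix: 200 "years", 3 correlated columns.
A = rng.standard_normal((3, 3))
X = rng.standard_normal((200, 3)) @ A.T

# Hide column 0 for the last 80 rows (the "pre-instrumental" period).
X_miss = X.copy()
X_miss[120:, 0] = np.nan

# EM-style imputation: initialize missing entries with the column mean,
# then alternate between (1) estimating mean/covariance of the completed
# matrix and (2) re-imputing the missing column from the observed
# columns via its conditional expectation (a linear regression).
Xc = X_miss.copy()
Xc[120:, 0] = np.nanmean(X_miss[:, 0])
for _ in range(50):
    mu = Xc.mean(axis=0)
    S = np.cov(Xc, rowvar=False)
    beta = np.linalg.solve(S[1:, 1:], S[1:, 0])     # regress col 0 on cols 1,2
    Xc[120:, 0] = mu[0] + (Xc[120:, 1:] - mu[1:]) @ beta

# The covariance-based imputation should beat naive mean-filling.
err_imputed = np.mean((Xc[120:, 0] - X[120:, 0]) ** 2)
err_mean = np.mean((np.nanmean(X_miss[:, 0]) - X[120:, 0]) ** 2)
print(err_imputed, err_mean)
```

The real algorithm (Schneider, 2001) additionally regularizes the conditional regression, which matters when the covariance estimate is rank-deficient, as it typically is for large proxy networks.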
Pai, Priyadarshini P; Mondal, Sukanta
2016-10-01
Proteins interact with carbohydrates to perform various cellular interactions. Of the many carbohydrate ligands that proteins bind, mannose constitutes an important class, playing key roles in host defense mechanisms. Accurate identification of mannose-interacting residues (MIR) may provide important clues to decipher the underlying mechanisms of protein-mannose interactions during infections. This study proposes an approach using an ensemble of base classifiers for prediction of MIR using their evolutionary information in the form of a position-specific scoring matrix. The base classifiers are random forests trained on different subsets of the training data set Dset128 using 10-fold cross-validation. The optimized ensemble of base classifiers, MOWGLI, is then used to predict MIR on protein chains of the test data set Dtestset29, where it showed promising performance with 92.0% prediction accuracy. An overall improvement of 26.6% in precision was observed upon comparison with the state of the art. It is hoped that this approach, yielding enhanced predictions, could eventually be used for applications in drug design and vaccine development.
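The ensemble-of-base-classifiers scheme — train each base learner on a different subset of the training data, then combine by majority vote — can be sketched as follows. This toy substitutes 1-nearest-neighbour bases and random features for the random forests and PSSM inputs of the actual MOWGLI method; all values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for residue features, with a simple binary label.
X = rng.standard_normal((600, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, y_train = X[:500], y[:500]
X_test, y_test = X[500:], y[500:]

def nn_predict(Xtr, ytr, Xte):
    # 1-nearest-neighbour base classifier (stand-in for a random forest).
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(axis=-1)
    return ytr[d.argmin(axis=1)]

# Each base classifier sees a different random subset of the training
# data; the ensemble decision is the majority vote.
preds = []
for _ in range(5):
    idx = rng.choice(len(X_train), size=300, replace=False)
    preds.append(nn_predict(X_train[idx], y_train[idx], X_test))

majority = (np.mean(preds, axis=0) >= 0.5).astype(int)
acc = np.mean(majority == y_test)
print(f"ensemble accuracy: {acc:.2f}")
```

An odd number of base classifiers avoids vote ties; subset training decorrelates the bases, which is what makes the vote more accurate than any single learner.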
Engineering of layered, lipid-encapsulated drug nanoparticles through spray-drying.
Sapra, Mahak; Mayya, Y S; Venkataraman, Chandra
2017-06-01
Drug-containing nanoparticles have been synthesized through the spray-drying of submicron droplet aerosols by using matrix materials such as lipids and biopolymers. Understanding layer formation in composite nanoparticles is essential for the appropriate engineering of particle substructures. The present study developed a droplet-shrinkage model for predicting the solid-phase formation of two non-volatile solutes, stearic acid lipid and a set of drugs, by considering molecular volume and solubility. Nanoparticle formation was simulated to define the parameter space of material properties and process conditions for the formation of a layered structure with the preferential accumulation of the lipid in the outer layer. Moreover, lipid-drug demarcation diagrams representing a set of critical values of ratios of solute properties at which the two solutes precipitate simultaneously were developed. The model was validated through the preparation of stearic acid-isoniazid nanoparticles under controlled processing conditions. The developed model can guide the selection of solvents, lipids, and processing conditions such that drug loading and lipid encapsulation in composite nanoparticles are optimized. Copyright © 2017 Elsevier B.V. All rights reserved.
Bayesian Factor Analysis When Only a Sample Covariance Matrix Is Available
ERIC Educational Resources Information Center
Hayashi, Kentaro; Arav, Marina
2006-01-01
In traditional factor analysis, the variance-covariance matrix or the correlation matrix has often served as the input data. In contrast, in Bayesian factor analysis, the entire data set is typically required to compute the posterior estimates, such as Bayes factor loadings and Bayes unique variances. We propose a simple method for computing…
Mazzotti, M; Bartoli, I; Castellazzi, G; Marzani, A
2014-09-01
The paper aims at validating a recently proposed Semi Analytical Finite Element (SAFE) formulation coupled with a 2.5D Boundary Element Method (2.5D BEM) for the extraction of dispersion data in immersed waveguides of generic cross-section. To this end, three-dimensional vibroacoustic analyses are carried out on two waveguides of square and rectangular cross-section immersed in water using the commercial Finite Element software Abaqus/Explicit. Real wavenumber and attenuation dispersive data are extracted by means of a modified Matrix Pencil Method. It is demonstrated that the results obtained using the two techniques are in very good agreement. Copyright © 2014 Elsevier B.V. All rights reserved.
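The standard (unmodified) Matrix Pencil Method that the authors build on extracts complex poles — and hence wavenumber and attenuation — from sampled response data. A minimal sketch on a synthetic damped exponential follows; the pencil parameter, time step, and pole value are illustrative choices.

```python
import numpy as np

# Synthetic signal: one damped exponential y[k] = exp(s*k*dt), with
# s = -0.1 + 2j combining attenuation (real part) and oscillation.
dt = 0.05
s_true = -0.1 + 2.0j
k = np.arange(40)
y = np.exp(s_true * k * dt)

# Build the Hankel data matrix with pencil parameter L, then form the
# two shifted sub-matrices of the pencil.
L = 10
Y = np.array([y[i:i + L + 1] for i in range(len(y) - L)])
Y1, Y2 = Y[:, :-1], Y[:, 1:]

# Generalized eigenvalues of the pencil (Y2 - z*Y1) give z = exp(s*dt);
# for this rank-1 data only one eigenvalue is nonzero.
z = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
s_est = np.log(z[np.argmax(np.abs(z))]) / dt
print(s_est)  # ~ -0.1 + 2j
```

In practice the data are noisy and multi-modal, so a truncated SVD of Y is used to filter the pencil before the eigenvalue step; that (plus the modifications the abstract alludes to) is what separates production implementations from this sketch.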
The Construct Validation of a Questionnaire of Social and Cultural Capital
ERIC Educational Resources Information Center
Pishghadam, Reza; Noghani, Mohsen; Zabihi, Reza
2011-01-01
The present study was conducted to construct and validate a questionnaire of social and cultural capital in the foreign language context of Iran. To this end, a questionnaire was designed by picking the most frequently used indicators of social and cultural capital. The factorability of the intercorrelation matrix was measured by two tests:…
Extracting physicochemical features to predict protein secondary structure.
Huang, Yin-Fu; Chen, Shu-Ying
2013-01-01
We propose a protein secondary structure prediction method based on position-specific scoring matrix (PSSM) profiles and four physicochemical features: conformation parameters, net charges, hydrophobicity, and side chain mass. First, the SVM with the optimal window size and the optimal parameters of the kernel function is found. Then, we train the SVM using the PSSM profiles generated from PSI-BLAST and the physicochemical features extracted from the CB513 data set. Finally, we use the filter to refine the predicted results from the trained SVM. For all the performance measures of our method, Q3 reaches 79.52, SOV94 reaches 86.10, and SOV99 reaches 74.60; all the measures are higher than those of the SVMpsi and SVMfreq methods. This validates that considering these physicochemical features in predicting protein secondary structure exhibits better performance. PMID:23766688
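The sliding-window PSSM encoding that such SVM predictors rely on can be sketched as follows; the window size and the random stand-in PSSM are illustrative choices, not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy PSSM: one row of 20 substitution scores per residue
# for a 30-residue chain (random stand-in for PSI-BLAST output).
L_chain, n_scores = 30, 20
pssm = rng.standard_normal((L_chain, n_scores))

# Sliding-window encoding: each residue is represented by the PSSM rows
# of a window centred on it, with zero-padding at the chain ends so
# every residue gets a fixed-length feature vector.
w = 13                         # window size (illustrative)
half = w // 2
padded = np.vstack([np.zeros((half, n_scores)),
                    pssm,
                    np.zeros((half, n_scores))])
features = np.array([padded[i:i + w].ravel() for i in range(L_chain)])
print(features.shape)  # (30, 260): one 13x20 window per residue
```

In the actual method the four physicochemical features are appended to each window vector before the SVM is trained; the windowing itself is unchanged.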
NASA Technical Reports Server (NTRS)
Generazio, Edward R. (Inventor)
2012-01-01
A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
Mackay, Michael M
2016-09-01
This article offers a correlation matrix of meta-analytic estimates between various employee job attitudes (i.e., employee engagement, job satisfaction, job involvement, and organizational commitment) and indicators of employee effectiveness (i.e., focal performance, contextual performance, turnover intention, and absenteeism). The meta-analytic correlations in the matrix are based on over 1100 individual studies representing over 340,000 employees. Data were collected worldwide via employee self-report surveys. Structural path analyses based on the matrix, and the interpretation of the data, can be found in "Investigating the incremental validity of employee engagement in the prediction of employee effectiveness: a meta-analytic path analysis" (Mackay et al., 2016) [1].
Matrix cracking in laminated composites under monotonic and cyclic loadings
NASA Technical Reports Server (NTRS)
Allen, David H.; Lee, Jong-Won
1991-01-01
An analytical model based on the internal state variable (ISV) concept and the strain energy method is proposed for characterizing the monotonic and cyclic response of laminated composites containing matrix cracks. A modified constitutive relation is formulated for angle-ply laminates under general in-plane mechanical loading and constant temperature change. A monotonic matrix cracking criterion is developed for predicting the crack density in cross-ply laminates as a function of the applied laminate axial stress. An initial formulation for a cyclic matrix cracking criterion for cross-ply laminates is also discussed. For the monotonic loading case, a number of experimental data sets and well-known models are compared with the present study for validating the practical applicability of the ISV approach.
MRL and SuperFine+MRL: new supertree methods
2012-01-01
Background Supertree methods combine trees on subsets of the full taxon set together to produce a tree on the entire set of taxa. Of the many supertree methods, the most popular is MRP (Matrix Representation with Parsimony), a method that operates by first encoding the input set of source trees by a large matrix (the "MRP matrix") over {0,1, ?}, and then running maximum parsimony heuristics on the MRP matrix. Experimental studies evaluating MRP in comparison to other supertree methods have established that for large datasets, MRP generally produces trees of equal or greater accuracy than other methods, and can run on larger datasets. A recent development in supertree methods is SuperFine+MRP, a method that combines MRP with a divide-and-conquer approach, and produces more accurate trees in less time than MRP. In this paper we consider a new approach for supertree estimation, called MRL (Matrix Representation with Likelihood). MRL begins with the same MRP matrix, but then analyzes the MRP matrix using heuristics (such as RAxML) for 2-state Maximum Likelihood. Results We compared MRP and SuperFine+MRP with MRL and SuperFine+MRL on simulated and biological datasets. We examined the MRP and MRL scores of each method on a wide range of datasets, as well as the resulting topological accuracy of the trees. Our experimental results show that MRL, coupled with a very good ML heuristic such as RAxML, produced more accurate trees than MRP, and MRL scores were more strongly correlated with topological accuracy than MRP scores. Conclusions SuperFine+MRP, when based upon a good MP heuristic, such as TNT, produces among the best scores for both MRP and MRL, and is generally faster and more topologically accurate than other supertree methods we tested. PMID:22280525
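The MRP encoding step — each clade of each source tree becomes one column over {0, 1, ?} — can be sketched on a toy pair of source trees. The taxon names and clades below are invented for illustration.

```python
# Each source tree is given by its taxon set and its non-trivial clades.
# Encoding per column: 1 = taxon inside the clade, 0 = outside the clade
# but present in that tree, ? = taxon absent from that source tree.
taxa = ["A", "B", "C", "D", "E"]

source_trees = [
    {"taxa": {"A", "B", "C", "D"}, "clades": [{"A", "B"}, {"A", "B", "C"}]},
    {"taxa": {"B", "C", "D", "E"}, "clades": [{"D", "E"}]},
]

matrix = {t: [] for t in taxa}
for tree in source_trees:
    for clade in tree["clades"]:
        for t in taxa:
            if t not in tree["taxa"]:
                matrix[t].append("?")
            elif t in clade:
                matrix[t].append("1")
            else:
                matrix[t].append("0")

for t in taxa:
    print(t, "".join(matrix[t]))
```

The resulting character matrix is what MRP hands to a maximum-parsimony heuristic and what MRL instead analyzes with a 2-state maximum-likelihood heuristic such as RAxML; the encoding itself is identical in both methods.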
NASA Astrophysics Data System (ADS)
Navarro Pérez, R.; Schunck, N.; Dyhdalo, A.; Furnstahl, R. J.; Bogner, S. K.
2018-05-01
Background: Energy density functional methods provide a generic framework to compute properties of atomic nuclei starting from models of nuclear potentials and the rules of quantum mechanics. Until now, the overwhelming majority of functionals have been constructed either from empirical nuclear potentials such as the Skyrme or Gogny forces, or from systematic gradient-like expansions in the spirit of the density functional theory for atoms. Purpose: We seek to obtain a usable form of the nuclear energy density functional that is rooted in the modern theory of nuclear forces. We thus consider a functional obtained from the density matrix expansion of local nuclear potentials from chiral effective field theory. We propose a parametrization of this functional carefully calibrated and validated on selected ground-state properties that is suitable for large-scale calculations of nuclear properties. Methods: Our energy functional comprises two main components. The first component is a non-local functional of the density and corresponds to the direct part (Hartree term) of the expectation value of local chiral potentials on a Slater determinant. Contributions to the mean field and the energy of this term are computed by expanding the spatial, finite-range components of the chiral potential onto Gaussian functions. The second component is a local functional of the density and is obtained by applying the density matrix expansion to the exchange part (Fock term) of the expectation value of the local chiral potential. We apply the UNEDF2 optimization protocol to determine the coupling constants of this energy functional. Results: We obtain a set of microscopically constrained functionals for local chiral potentials from leading order up to next-to-next-to-leading order with and without three-body forces and contributions from Δ excitations. 
These functionals are validated on the calculation of nuclear and neutron matter, nuclear mass tables, single-particle shell structure in closed-shell nuclei, and the fission barrier of 240Pu. Quantitatively, they perform noticeably better than the more phenomenological Skyrme functionals. Conclusions: The inclusion of higher-order terms in the chiral perturbation expansion seems to produce a systematic improvement in predicting nuclear binding energies while the impact on other observables is not really significant. This result is especially promising since all the fits have been performed at the single-reference level of the energy density functional approach, where important collective correlations such as center-of-mass correction, rotational correction, or zero-point vibrational energies have not been taken into account yet.
Assessing Discriminative Performance at External Validation of Clinical Prediction Models
Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.
2016-01-01
Introduction External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population.
To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients. PMID:26881753
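The c-statistic itself, and the kind of permutation reference it is tested against, can be sketched on synthetic validation data; the logistic slope, sample size, and permutation count below are illustrative choices, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy validation set: linear predictor lp and a binary outcome drawn
# from a logistic model with slope 2 (illustrative).
n = 400
lp = rng.standard_normal(n)
y = (rng.random(n) < 1 / (1 + np.exp(-2 * lp))).astype(int)

def c_statistic(lp, y):
    # Probability that a randomly chosen event has a higher linear
    # predictor than a randomly chosen non-event (ties count 1/2).
    pos, neg = lp[y == 1], lp[y == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

c_obs = c_statistic(lp, y)

# Naive permutation reference: shuffling outcomes approximates the
# c-statistic distribution under "no discrimination" (c = 0.5).
c_perm = [c_statistic(lp, rng.permutation(y)) for _ in range(200)]
print(c_obs, np.mean(c_perm))
```

The abstract's warning applies one level up: a drop in c at external validation can reflect a narrower case-mix (less spread in lp) rather than wrong coefficients, so the observed c must be compared against case-mix-adjusted benchmarks, not only against 0.5.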
IRAC Full-Scale Flight Testbed Capabilities
NASA Technical Reports Server (NTRS)
Lee, James A.; Pahle, Joseph; Cogan, Bruce R.; Hanson, Curtis E.; Bosworth, John T.
2009-01-01
Overview: Provide validation of adaptive control law concepts through full-scale flight evaluation in a representative avionics architecture. Develop an understanding of the aircraft dynamics of current vehicles in damaged and upset conditions. Real-world conditions include: (a) turbulence, sensor noise, and feedback biases; and (b) coupling between the pilot and the adaptive system. Simulated damage includes: (1) "B" matrix (surface) failures; and (2) "A" matrix failures. Evaluate the robustness of control systems to anticipated and unanticipated failures.
Transport phenomena in solidification processing of functionally graded materials
NASA Astrophysics Data System (ADS)
Gao, Juwen
A combined numerical and experimental study of the transport phenomena during solidification processing of metal matrix composite functionally graded materials (FGMs) is conducted in this work. A multiphase transport model for the solidification of metal-matrix composite FGMs has been developed that accounts for macroscopic particle segregation due to liquid-particle flow and particle-solid interactions. An experimental study has also been conducted to gain physical insight as well as to validate the model. A novel method to measure the particle volume fraction in situ using fiber optic probes is developed for transparent analogue solidification systems. The model is first applied to one-dimensional pure matrix FGM solidification under gravity or a centrifugal field and is extensively validated against the experimental results. The mechanisms for the formation of the particle concentration gradient are identified. Two-dimensional solidification of pure matrix FGM with convection is then studied using the model as well as experiments. The interaction among convection flow, the solidification process, and particle transport is demonstrated. The results show the importance of convection in the formation of the particle concentration gradient. Then, simulations for alloy FGM solidification are carried out for unidirectional solidification as well as two-dimensional solidification with convection. The interplay among heat and species transport, convection, and particle motion is investigated. Finally, future theoretical and experimental work is outlined.
An Uncertainty Structure Matrix for Models and Simulations
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Blattnig, Steve R.; Hemsch, Michael J.; Luckring, James M.; Tripathi, Ram K.
2008-01-01
Software that is used for aerospace flight control and to display information to pilots and crew is expected to be correct and credible at all times. This type of software is typically developed under strict management processes, which are intended to reduce defects in the software product. However, modeling and simulation (M&S) software may exhibit varying degrees of correctness and credibility, depending on a large and complex set of factors. These factors include its intended use, the known physics and numerical approximations within the M&S, and the referent data set against which the M&S correctness is compared. The correctness and credibility of an M&S effort is closely correlated to the uncertainty management (UM) practices that are applied to the M&S effort. This paper describes an uncertainty structure matrix for M&S, which provides a set of objective descriptions for the possible states of UM practices within a given M&S effort. The columns in the uncertainty structure matrix contain UM elements or practices that are common across most M&S efforts, and the rows describe the potential levels of achievement in each of the elements. A practitioner can quickly look at the matrix to determine where an M&S effort falls based on a common set of UM practices that are described in absolute terms that can be applied to virtually any M&S effort. The matrix can also be used to plan those steps and resources that would be needed to improve the UM practices for a given M&S effort.
Robust and intelligent bearing estimation
Claassen, John P.
2000-01-01
A method of bearing estimation comprising quadrature digital filtering of event observations, constructing a plurality of observation matrices each centered on a time-frequency interval, determining for each observation matrix a parameter such as degree of polarization, linearity of particle motion, degree of dyadicy, or signal-to-noise ratio, choosing observation matrices most likely to produce a set of best available bearing estimates, and estimating a bearing for each observation matrix of the chosen set.
Xie, Dan; Li, Ao; Wang, Minghui; Fan, Zhewen; Feng, Huanqing
2005-01-01
Subcellular location of a protein is one of its key functional characteristics, as proteins must be localized correctly at the subcellular level to have normal biological function. In this paper, a novel method named LOCSVMPSI is introduced, which is based on the support vector machine (SVM) and the position-specific scoring matrix generated from profiles of PSI-BLAST. With a jackknife test on the RH2427 data set, LOCSVMPSI achieved a high overall prediction accuracy of 90.2%, which is higher than the prediction results of SubLoc and ESLpred on this data set. In addition, the prediction performance of LOCSVMPSI was evaluated with a 5-fold cross-validation test on the PK7579 data set, and the prediction results were consistently better than those of the previous method based on several SVMs using the composition of both amino acids and amino acid pairs. A further test on the SWISSPROT new-unique data set showed that LOCSVMPSI also performed better than some widely used prediction methods, such as PSORTII, TargetP and LOCnet. All these results indicate that LOCSVMPSI is a powerful tool for the prediction of eukaryotic protein subcellular localization. An online web server (current version is 1.3) based on this method has been developed and is freely available to both academic and commercial users, and can be accessed at . PMID:15980436
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Hopkins, D. A.
1985-01-01
A set of thermoviscoplastic nonlinear constitutive relationships (TVP-NCR) is presented. The set was developed for application to high temperature metal matrix composites (HT-MMC) and is applicable to thermal and mechanical properties. Formulation of the TVP-NCR is based at the micromechanics level. The TVP-NCR are of simple form and readily integrated into nonlinear composite structural analysis. It is shown that the set of TVP-NCR is computationally effective. The set directly predicts complex material behavior at all levels of the composite simulation, from the constituent materials, through the several levels of composite mechanics, and up to the global response of complex HT-MMC structural components.
A review of the matrix-exponential formalism in radiative transfer
NASA Astrophysics Data System (ADS)
Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian
2017-07-01
This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
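Two of the evaluation routes named in this abstract, eigendecomposition and the truncated Taylor series, can be illustrated generically. A minimal sketch in Python with NumPy, using a small symmetric stand-in matrix rather than an actual discrete-ordinate radiative-transfer operator:

```python
import numpy as np

# Toy 2x2 matrix standing in for a radiative-transfer layer operator.
A = np.array([[-2.0, 1.0],
              [1.0, -3.0]])

# Route 1: eigendecomposition, exp(A) = V diag(exp(lambda)) V^{-1}.
lam, V = np.linalg.eig(A)
expA_eig = V @ np.diag(np.exp(lam)) @ np.linalg.inv(V)

# Route 2: truncated Taylor series, exp(A) ~ sum_{k=0..N} A^k / k!,
# suitable (as the abstract notes) when the layer is optically thin.
expA_taylor = np.eye(2)
term = np.eye(2)
for k in range(1, 30):
    term = term @ A / k
    expA_taylor += term
```

For well-conditioned matrices the two routes agree to machine precision; production codes typically prefer Padé approximation with scaling and squaring for robustness.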
System for solving diagnosis and hitting set problems
NASA Technical Reports Server (NTRS)
Vatan, Farrokh (Inventor); Fijany, Amir (Inventor)
2007-01-01
The diagnosis problem arises when a system's actual behavior contradicts the expected behavior, thereby exhibiting symptoms (a collection of conflict sets). System diagnosis is then the task of identifying faulty components that are responsible for anomalous behavior. To solve the diagnosis problem, the present invention describes a method for finding the minimal set of faulty components (minimal diagnosis set) that explain the conflict sets. The method includes acts of creating a matrix of the collection of conflict sets, and then creating nodes from the matrix such that each node is a node in a search tree. A determination is made as to whether each node is a leaf node or has any children nodes. If any given node has children nodes, then the node is split until all nodes are leaf nodes. Information gathered from the leaf nodes is used to determine the minimal diagnosis set.
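The hitting-set formulation described above can be sketched generically; this brute-force Python version illustrates the problem being solved (minimum-cardinality sets that intersect every conflict set), not the patented matrix/search-tree algorithm, and the conflict sets below are hypothetical:

```python
from itertools import combinations

def minimal_hitting_sets(conflict_sets):
    """Return all minimum-cardinality sets of components that 'hit'
    (intersect) every conflict set. Exponential; small instances only."""
    universe = sorted(set().union(*conflict_sets))
    for size in range(1, len(universe) + 1):
        hits = [set(c) for c in combinations(universe, size)
                if all(set(c) & s for s in conflict_sets)]
        if hits:
            return hits  # smallest candidate diagnoses
    return []

# Example: three conflict sets over components 1..4.
print(minimal_hitting_sets([{1, 2}, {2, 3}, {1, 4}]))  # [{1, 2}, {1, 3}, {2, 4}]
```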
Reliability, validity and feasibility of nail ultrasonography in psoriatic arthritis.
Arbault, Anaïs; Devilliers, Hervé; Laroche, Davy; Cayot, Audrey; Vabres, Pierre; Maillefert, Jean-Francis; Ornetti, Paul
2016-10-01
To determine the feasibility, reliability and validity of nail ultrasonography in psoriatic arthritis as an outcome measure. Pilot prospective single-centre study of eight ultrasonography parameters in B mode and power Doppler concerning the distal interphalangeal (DIP) joint, the nail matrix, the nail bed and the nail plate. Intra-observer and inter-observer reliability was evaluated for the seven quantitative parameters (ICC and kappa). Correlations between ultrasonography and clinical variables were sought to assess external validity. Feasibility was assessed by the time to carry out the examination and the percentage of missing data. Twenty-seven patients with psoriatic arthritis (age 55.0±16.2 years, disease duration 13.4±9.4 years) were included. Of these, 67% presented nail involvement on ultrasonography vs 37% on physical examination (P<0.05). Reliability was good (ICC and weighted kappa>0.75) for the seven quantitative parameters, except for synovitis of the DIP joint in B mode. Synovitis of the DIP joint revealed by ultrasonography correlated with the total number of clinical synovitis and with power Doppler of the nail (matrix and bed). Power Doppler of the matrix correlated with VAS pain but not with the ASDAS-CRP or with clinical enthesitis. No significant correlation was found with US nail thickness. The feasibility and reliability of ultrasonography of the nail in psoriatic arthritis appear to be satisfactory. Among the eight parameters evaluated, power Doppler of the matrix, which correlated with local inflammation (DIP joint and nail bed) and with VAS pain, could become an interesting outcome measure, provided that it is also sensitive to change. Copyright © 2015 Société française de rhumatologie. Published by Elsevier SAS. All rights reserved.
Evans, Douglas W; Rajagopalan, Padma; Devita, Raffaella; Sparks, Jessica L
2011-01-01
Liver sinusoidal endothelial cells (LSECs) are the primary site of numerous transport and exchange processes essential for liver function. LSECs rest on a sparse extracellular matrix layer housed in the space of Disse, a 0.5-1 μm space separating LSECs from hepatocytes. To develop bioengineered liver tissue constructs, it is important to understand the mechanical interactions among LSECs, hepatocytes, and the extracellular matrix in the space of Disse. Currently the mechanical properties of the space of Disse matrix are not well understood. The objective of this study was to develop and validate a device for performing mechanical tests at the meso-scale (100 nm-100 μm), to enable novel matrix characterization within the space of Disse. The device utilizes a glass micro-spherical indentor attached to a cantilever made from a fiber optic cable. A 3-axis translation table is used to bring the specimen into contact with the indentor and deform the cantilever. A position detector monitors the location of a laser passing through the cantilever and allows for the calculation of subsequent tissue deformation. The design allows micro-newton and nano-newton stress-strain tissue behavior to be quantified. To validate the device accuracy, 11 samples of silicone rubber in two formulations were tested to experimentally confirm their Young's moduli. Prior macroscopic unconfined compression tests determined the formulations EcoFlex030 (n=6) and EcoFlex010 (n=5) to possess Young's moduli of 92.67±6.22 and 43.10±3.29 kPa, respectively. Optical measurements taken utilizing CITE's position control and fiber optic cantilever found the moduli to be 106.4 kPa and 47.82 kPa.
Restricted Closed Shell Hartree Fock Roothaan Matrix Method Applied to Helium Atom Using Mathematica
ERIC Educational Resources Information Center
Acosta, César R.; Tapia, J. Alejandro; Cab, César
2014-01-01
Slater type orbitals were used to construct the overlap and the Hamiltonian core matrices; we also found the values of the bi-electron repulsion integrals. The Hartree Fock Roothaan approximation process starts with setting an initial guess value for the elements of the density matrix; with these matrices we constructed the initial Fock matrix.…
Penny-shaped crack in a fiber-reinforced matrix. [elastostatics
NASA Technical Reports Server (NTRS)
Narayanan, T. V.; Erdogan, F.
1974-01-01
Using a slender inclusion model developed earlier, the elastostatic interaction problem between a penny-shaped crack and elastic fibers in an elastic matrix is formulated. For a single set and for multiple sets of fibers oriented perpendicularly to the plane of the crack and distributed symmetrically on concentric circles, the problem is reduced to a system of singular integral equations. Techniques for the regularization and for the numerical solution of the system are outlined. For various fiber geometries numerical examples are given, and the distribution of the stress intensity factor along the crack border is obtained. Sample results showing the distribution of the fiber stress and a measure of the fiber-matrix interface shear are also included.
Penny-shaped crack in a fiber-reinforced matrix
NASA Technical Reports Server (NTRS)
Narayanan, T. V.; Erdogan, F.
1975-01-01
Using the slender inclusion model developed earlier the elastostatic interaction problem between a penny-shaped crack and elastic fibers in an elastic matrix is formulated. For a single set and for multiple sets of fibers oriented perpendicularly to the plane of the crack and distributed symmetrically on concentric circles the problem is reduced to a system of singular integral equations. Techniques for the regularization and for the numerical solution of the system are outlined. For various fiber geometries numerical examples are given and distribution of the stress intensity factor along the crack border is obtained. Sample results showing the distribution of the fiber stress and a measure of the fiber-matrix interface shear are also included.
Distance learning in discriminative vector quantization.
Schneider, Petra; Biehl, Michael; Hammer, Barbara
2009-10-01
Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods to more general metric structures have been proposed, such as relevance adaptation in generalized LVQ (GLVQ) and matrix learning in GLVQ. In these approaches, metric parameters are learned based on the given classification task such that a data-driven distance measure is found. In this letter, we consider full matrix adaptation in advanced LVQ schemes. In particular, we introduce matrix learning to a recent statistical formalization of LVQ, robust soft LVQ, and we compare the results on several artificial and real-life data sets to matrix learning in GLVQ, a derivation of LVQ-like learning based on a (heuristic) cost function. In all cases, matrix adaptation allows a significant improvement of the classification accuracy. Interestingly, however, the principled behavior of the models with respect to prototype locations and extracted matrix dimensions shows several characteristic differences depending on the data sets.
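The adaptive quadratic distance at the heart of matrix learning in LVQ-type schemes, d(x, w) = (x - w)ᵀ Λ (x - w) with Λ = Ωᵀ Ω kept positive semi-definite by construction, can be sketched as follows; the values of Ω and the prototypes below are illustrative, not learned as in the letter:

```python
import numpy as np

rng = np.random.default_rng(0)
Omega = rng.normal(size=(2, 2))   # stand-in for a learned transformation
Lambda = Omega.T @ Omega          # induced metric tensor, PSD by construction

def matrix_distance(x, w):
    """Generalized quadratic distance replacing the Euclidean metric."""
    d = x - w
    return float(d @ Lambda @ d)

# Classification assigns a sample to its closest prototype under this metric.
x = np.array([1.0, 2.0])
prototypes = [np.array([0.0, 0.0]), np.array([1.5, 1.8])]
winner = min(range(len(prototypes)),
             key=lambda i: matrix_distance(x, prototypes[i]))
```

In the actual algorithms, Ω (and hence Λ) is updated alongside the prototypes by gradient steps on the respective cost function, yielding the data-driven metric the abstract describes.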
Methodology for extracting local constants from petroleum cracking flows
Chang, Shen-Lin; Lottes, Steven A.; Zhou, Chenn Q.
2000-01-01
A methodology provides for the extraction of local chemical kinetic model constants for use in a reacting flow computational fluid dynamics (CFD) computer code with chemical kinetic computations to optimize the operating conditions or design of the system, including retrofit design improvements to existing systems. The coupled CFD and kinetics computer code is used in combination with data obtained from a matrix of experimental tests to extract the kinetic constants. Local fluid dynamic effects are implicitly included in the extracted local kinetic constants for each particular application system to which the methodology is applied. The extracted local kinetic model constants work well over a fairly broad range of operating conditions for specific and complex reaction sets in specific and complex reactor systems. While disclosed in terms of use in a Fluid Catalytic Cracking (FCC) riser, the inventive methodology has application in virtually any reaction set to extract constants for any particular application and reaction set formulation. The methodology includes the steps of: (1) selecting the test data sets for various conditions; (2) establishing the general trend of the parametric effect on the measured product yields; (3) calculating product yields for the selected test conditions using coupled computational fluid dynamics and chemical kinetics; (4) adjusting the local kinetic constants to match calculated product yields with experimental data; and (5) validating the determined set of local kinetic constants by comparing the calculated results with experimental data from additional test runs at different operating conditions.
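Step (4) above, adjusting a kinetic constant until calculated yields match measurements, can be sketched generically. The first-order conversion model and synthetic "measurements" below are stand-ins, not the FCC riser kinetics of the patent:

```python
import numpy as np

def yield_model(k, t):
    """Toy first-order conversion at residence time t (stand-in model)."""
    return 1.0 - np.exp(-k * t)

# Synthetic measured yields generated with a "true" constant k = 0.7.
t_data = np.array([1.0, 2.0, 4.0])
y_data = 1.0 - np.exp(-0.7 * t_data)

# Adjust k over a grid to minimize the sum of squared yield errors.
ks = np.linspace(0.1, 2.0, 1901)
sse = [np.sum((yield_model(k, t_data) - y_data) ** 2) for k in ks]
k_fit = ks[int(np.argmin(sse))]
```

In practice each yield evaluation would be a full coupled CFD/kinetics run, so a gradient-free optimizer or a coarse-to-fine search replaces the dense grid.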
Finding imaging patterns of structural covariance via Non-Negative Matrix Factorization.
Sotiras, Aristeidis; Resnick, Susan M; Davatzikos, Christos
2015-03-01
In this paper, we investigate the use of Non-Negative Matrix Factorization (NNMF) for the analysis of structural neuroimaging data. The goal is to identify the brain regions that co-vary across individuals in a consistent way, hence potentially being part of underlying brain networks or otherwise influenced by underlying common mechanisms such as genetics and pathologies. NNMF offers a directly data-driven way of extracting relatively localized co-varying structural regions, thereby transcending limitations of Principal Component Analysis (PCA), Independent Component Analysis (ICA) and other related methods that tend to produce dispersed components of positive and negative loadings. In particular, leveraging upon the well known ability of NNMF to produce parts-based representations of image data, we derive decompositions that partition the brain into regions that vary in consistent ways across individuals. Importantly, these decompositions achieve dimensionality reduction via highly interpretable ways and generalize well to new data as shown via split-sample experiments. We empirically validate NNMF in two data sets: i) a Diffusion Tensor (DT) mouse brain development study, and ii) a structural Magnetic Resonance (sMR) study of human brain aging. We demonstrate the ability of NNMF to produce sparse parts-based representations of the data at various resolutions. These representations seem to follow what we know about the underlying functional organization of the brain and also capture some pathological processes. Moreover, we show that these low dimensional representations favorably compare to descriptions obtained with more commonly used matrix factorization methods like PCA and ICA. Copyright © 2014 Elsevier Inc. All rights reserved.
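The parts-based factorization NNMF performs can be sketched with the classic Lee-Seung multiplicative updates; this is generic NNMF on toy data, not the specific neuroimaging pipeline of the paper:

```python
import numpy as np

# Approximate a non-negative data matrix X (e.g., voxels x subjects)
# as W @ H with W, H >= 0, minimizing the Frobenius reconstruction error.
rng = np.random.default_rng(1)
X = rng.random((20, 10))   # toy non-negative data
k = 3                      # number of components (brain "parts")
W = rng.random((20, k))
H = rng.random((k, 10))
eps = 1e-9                 # guards against division by zero

for _ in range(200):
    # Multiplicative updates preserve non-negativity at every step.
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The columns of W are the non-negative "parts" (here arbitrary, but localized co-varying regions in the neuroimaging setting), and H holds per-subject loadings.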
Spatial and thematic assessment of object-based forest stand delineation using an OFA-matrix
NASA Astrophysics Data System (ADS)
Hernando, A.; Tiede, D.; Albrecht, F.; Lang, S.
2012-10-01
The delineation and classification of forest stands is a crucial aspect of forest management. Object-based image analysis (OBIA) can be used to produce detailed maps of forest stands from either orthophotos or very high resolution satellite imagery. However, measures are then required for evaluating and quantifying both the spatial and thematic accuracy of the OBIA output. In this paper we present an approach for delineating forest stands and a new Object Fate Analysis (OFA) matrix for accuracy assessment. A two-level object-based orthophoto analysis was first carried out to delineate stands on the Dehesa Boyal public land in central Spain (Avila Province). Two structural features were created for use in class modelling, enabling good differentiation between stands: a relational tree cover cluster feature, and an arithmetic ratio shadow/tree feature. We then extended the OFA comparison approach with an OFA-matrix to enable concurrent validation of thematic and spatial accuracies. Its diagonal shows the proportion of spatial and thematic coincidence between the reference data and the corresponding classification. New parameters for Spatial Thematic Loyalty (STL), Spatial Thematic Loyalty Overall (STLOVERALL) and Maximal Interfering Object (MIO) are introduced to summarise the OFA-matrix accuracy assessment. A stands map generated by OBIA (classification data) was compared with a map of the same area produced from photo interpretation and field data (reference data). In our example the OFA-matrix results indicate good spatial and thematic accuracies (>65%) for all stand classes except the shrub stands (31.8%), and a good STLOVERALL (69.8%). The OFA-matrix has therefore been shown to be a valid tool for OBIA accuracy assessment.
NASA Astrophysics Data System (ADS)
Gillam, Thomas P. S.; Lester, Christopher G.
2014-11-01
We consider current and alternative approaches to setting limits on new physics signals having backgrounds from misidentified objects; for example jets misidentified as leptons, b-jets or photons. Many ATLAS and CMS analyses have used a heuristic "matrix method" for estimating the background contribution from such sources. We demonstrate that the matrix method suffers from statistical shortcomings that can adversely affect its ability to set robust limits. A rigorous alternative method is discussed, and is seen to produce fake rate estimates and limits with better qualities, but is found to be too costly to use. Having investigated the nature of the approximations used to derive the matrix method, we propose a third strategy that is seen to marry the speed of the matrix method to the performance and physicality of the more rigorous approach.
Local matrix learning in clustering and applications for manifold visualization.
Arnonkijpanich, Banchar; Hasenfuss, Alexander; Hammer, Barbara
2010-05-01
Electronic data sets are increasing rapidly with respect to both the size of the data sets and the data resolution, i.e. dimensionality, such that adequate data inspection and data visualization have become central issues of data mining. In this article, we present an extension of classical clustering schemes by local matrix adaptation, which allows a better representation of data by means of clusters with an arbitrary ellipsoidal shape. Unlike previous proposals, the method is derived from a global cost function. The focus of this article is to demonstrate the applicability of this matrix clustering scheme to low-dimensional data embedding for data inspection. The proposed method is based on matrix learning for neural gas and manifold charting. This provides an explicit mapping of a given high-dimensional data space to low dimensionality. We demonstrate the usefulness of this method for data inspection and manifold visualization. Copyright © 2009 Elsevier Ltd. All rights reserved.
A rough set approach for determining weights of decision makers in group decision making.
Yang, Qiang; Du, Ping-An; Wang, Yong; Liang, Bin
2017-01-01
This study aims to present a novel approach for determining the weights of decision makers (DMs) based on rough group decision in multiple attribute group decision-making (MAGDM) problems. First, we construct a rough group decision matrix from all DMs' decision matrixes on the basis of rough set theory. After that, we derive a positive ideal solution (PIS) founded on the average matrix of the rough group decision, and negative ideal solutions (NISs) founded on the lower and upper limit matrixes of the rough group decision. Then, we obtain the weight of each group member and the priority order of alternatives by using the relative closeness method, which depends on the distances from each individual group member's decision to the PIS and NISs. Through comparisons with existing methods and an on-line business manager selection example, we show that the proposed method can provide more insight into the subjectivity and vagueness of DMs' evaluations and selections.
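The relative-closeness step described above can be sketched generically: each decision maker's matrix is scored by its distances to the PIS and to the nearest NIS, closeness = d_NIS / (d_PIS + d_NIS), and the scores are normalized into weights. All matrices below are illustrative stand-ins, not rough-set-derived values from the paper:

```python
import numpy as np

pis = np.array([[0.8, 0.7], [0.6, 0.9]])        # average rough group matrix (PIS)
nis_lo = np.array([[0.5, 0.4], [0.3, 0.6]])     # lower-limit matrix (NIS)
nis_hi = np.array([[1.0, 1.0], [0.9, 1.0]])     # upper-limit matrix (NIS)
dms = [np.array([[0.75, 0.7], [0.55, 0.85]]),   # DM 1: close to the PIS
       np.array([[0.55, 0.45], [0.35, 0.65]])]  # DM 2: close to a NIS

def closeness(m):
    """Relative closeness of one DM's decision matrix to the ideal."""
    d_pos = np.linalg.norm(m - pis)
    d_neg = min(np.linalg.norm(m - nis_lo), np.linalg.norm(m - nis_hi))
    return d_neg / (d_pos + d_neg)

scores = np.array([closeness(m) for m in dms])
weights = scores / scores.sum()   # DM weights: closer to the PIS -> heavier
```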
Exploring and Making Sense of Large Graphs
2015-08-01
Matrices (upper-case and bold) are n × n; vectors (lower-case bold) are n × 1 column vectors; and scalars (in lower-case plain font) typically correspond to strength of … Nodes: the number of nodes in a graph is often denoted as |V| or n. Edges or links: a finite set E of lines between objects in a graph. The edges represent relationships between the … Adjacency matrix: the adjacency matrix of a graph G is an n × n matrix A, whose element aij … (Figure: adjacency matrix of a simple, unweighted and undirected graph.)
A satellite relative motion model including J_2 and J_3 via Vinti's intermediary
NASA Astrophysics Data System (ADS)
Biria, Ashley D.; Russell, Ryan P.
2018-03-01
Vinti's potential is revisited for analytical propagation of the main satellite problem, this time in the context of relative motion. A particular version of Vinti's spheroidal method is chosen that is valid for arbitrary elliptical orbits, encapsulating J_2, J_3, and generally a partial J_4 in an orbit propagation theory without recourse to perturbation methods. As a child of Vinti's solution, the proposed relative motion model inherits these properties. Furthermore, the problem is solved in oblate spheroidal elements, leading to large regions of validity for the linearization approximation. After offering several enhancements to Vinti's solution, including boosts in accuracy and removal of some singularities, the proposed model is derived and subsequently reformulated so that Vinti's solution is piecewise differentiable. While the model is valid for the critical inclination and nonsingular in the element space, singularities remain in the linear transformation from Earth-centered inertial coordinates to spheroidal elements when the eccentricity is zero or for nearly equatorial orbits. The new state transition matrix is evaluated against numerical solutions including the J_2 through J_5 terms for a wide range of chief orbits and separation distances. The solution is also compared with side-by-side simulations of the original Gim-Alfriend state transition matrix, which considers the J_2 perturbation. Code for computing the resulting state transition matrix and associated reference frame and coordinate transformations is provided online as supplementary material.
Neuroanatomy-based matrix-guided trimming protocol for the rat brain.
Defazio, Rossella; Criado, Ana; Zantedeschi, Valentina; Scanziani, Eugenio
2015-02-01
Brain trimming through defined neuroanatomical landmarks is recommended to obtain consistent sections in rat toxicity studies. In this article, we describe a matrix-guided trimming protocol that uses channels to reproduce the coronal levels of anatomical landmarks. Both the setup phase and the validation study were performed on Han Wistar male rats (Crl:WI(Han)), 10 weeks old, with a bodyweight of 298 ± 29 (SD) g, using a matrix (ASI-Instruments(®), Houston, TX) fitted for brains of rats with 200 to 400 g bodyweight. In the setup phase, we identified eight channels, that is, 6, 8, 10, 12, 14, 16, 19, and 21, matching the recommended landmarks midway to the optic chiasm, frontal pole, optic chiasm, infundibulum, mamillary bodies, midbrain, middle cerebellum, and posterior cerebellum, respectively. In the validation study, we trimmed the immersion-fixed brains of 60 rats using the selected channels to determine how consistently the channels reproduced the anatomical landmarks. The percentage of success (i.e., presence of the expected targets for each level) ranged from 89 to 100%. Where 100% success was not achieved, it was noted that the shift in brain trimming was toward the caudal pole. In conclusion, we developed and validated a trimming protocol for the rat brain that allows extensiveness, homology, and relevance of coronal sections comparable to landmark-guided trimming, with the advantage of being quickly learned by technicians. © 2014 by The Author(s).
Due diligence in the characterization of matrix effects in a total IL-13 Singulex™ method.
Fraser, Stephanie; Soderstrom, Catherine
2014-04-01
After obtaining her PhD in Cellular and Molecular biology from the University of Nevada, Reno, Stephanie has spent the last 15 years in the field of bioanalysis. She has held positions in academia, biotech, contract research and large pharma where she has managed ligand binding assay (discovery to Phase IIb clinical) and flow cytometry (preclinical) laboratories as well as taken the lead on implementing new/emergent technologies. Currently Stephanie leads Pfizer's Regulated Bioanalysis Ligand Binding Assay group, focusing on early clinical biomarker support. Interleukin (IL)-13, a Th2 cytokine, drives a range of physiological responses associated with the induction of allergic airway diseases and inflammatory bowel diseases. Analysis of IL-13 as a biomarker has provided insight into its role in disease mechanisms and progression. Serum IL-13 concentrations are often too low to be measured by standard enzyme-linked immunosorbent assay techniques, necessitating the implementation of a highly sensitive assay. Previously, the validation of a Singulex™ Erenna(®) assay for the quantitation of IL-13 was reported. Herein we describe refinement of this validation; defining the impact of matrix interference on the lower limit of quantification, adding spiked matrix QC samples, and extending endogenous IL-13 stability. A fit-for-purpose validation was conducted and the assay was used to support a Phase II clinical trial.
NASA Astrophysics Data System (ADS)
Yang, Jianwen
2012-04-01
A general analytical solution is derived by using the Laplace transformation to describe transient reactive silica transport in a conceptualized 2-D system involving a set of parallel fractures embedded in an impermeable host rock matrix, taking into account hydrodynamic dispersion and advection of silica transport along the fractures, molecular diffusion from each fracture to the intervening rock matrix, and dissolution of quartz. A special analytical solution is also developed by ignoring the longitudinal hydrodynamic dispersion term while keeping the other conditions the same. The general and special solutions are in the form of a double infinite integral and a single infinite integral, respectively, and can be evaluated using the Gauss-Legendre quadrature technique. A simple criterion is developed to determine under what conditions the general analytical solution can be approximated by the special analytical solution. It is proved analytically that the general solution always lags behind the special solution, unless a dimensionless parameter is less than a critical value. Several illustrative calculations are undertaken to demonstrate the effect of fracture spacing, fracture aperture and fluid flow rate on silica transport. The analytical solutions developed here can serve as a benchmark to validate numerical models that simulate reactive mass transport in fractured porous media.
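The Gauss-Legendre quadrature machinery used to evaluate these solutions can be sketched generically; the integrand below is a known closed-form test case, not the silica-transport integrand itself:

```python
import numpy as np

def gauss_legendre(f, a, b, n=32):
    """Integrate f over [a, b] with an n-point Gauss-Legendre rule."""
    x, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)      # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

# Check against a closed form: integral of exp(-t) over [0, 1] = 1 - e^{-1}.
approx = gauss_legendre(lambda t: np.exp(-t), 0.0, 1.0)
```

An infinite integral like those in the paper would first be truncated or transformed to a finite interval (possibly panel by panel) before applying the same rule.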
Factors modulating social influence on spatial choice in rats.
Bisbing, Teagan A; Saxon, Marie; Sayde, Justin M; Brown, Michael F
2015-07-01
Three experiments examined the conditions under which the spatial choices of rats searching for food are influenced by the choices made by other rats. Model rats learned a consistent set of baited locations in a 5 × 5 matrix of locations, some of which contained food. In Experiment 1, subject rats could determine the baited locations after choosing 1 location because all of the baited locations were on the same side of the matrix during each trial (the baited side varied over trials). Under these conditions, the social cues provided by the model rats had little or no effect on the choices made by the subject rats. The lack of social influence on choices occurred despite a simultaneous social influence on rats' location in the testing arena (Experiment 2). When the outcome of the subject rats' own choices provided no information about the positions of other baited locations, on the other hand, social cues strongly controlled spatial choices (Experiment 3). These results indicate that social information about the location of food influences spatial choices only when those cues provide valid information that is not redundant with the information provided by other cues. This suggests that social information is learned about, processed, and controls behavior via the same mechanisms as other kinds of stimuli. (c) 2015 APA, all rights reserved.
Gkretsi, Vasiliki; Stylianou, Andreas; Louca, Maria; Stylianopoulos, Triantafyllos
2017-04-18
Breast cancer (BC) is the most common malignant disease in women, with most patients dying from metastasis to distant organs, making the discovery of novel metastasis biomarkers and therapeutic targets imperative. Extracellular matrix (ECM)-related adhesion proteins as well as tumor matrix stiffness are important determinants for metastasis. As traditional two-dimensional culture does not take into account ECM stiffness, we employed 3-dimensional collagen I gels of increasing concentration and stiffness to embed BC cells of different invasiveness (MCF-7, MDA-MB-231 and MDA-MB-231-LM2) or tumor spheroids. We tested the expression of cell-ECM adhesion proteins and found that Ras Suppressor-1 (RSU-1) is significantly upregulated in increased stiffness conditions. Interestingly, RSU-1 siRNA-mediated silencing inhibited urokinase plasminogen activator and metalloproteinase-13, whereas tumor spheroids formed from RSU-1-depleted cells lost their invasive capacity in all cell lines and stiffness conditions. Kaplan-Meier survival plot analysis corroborated our findings, showing that high RSU-1 expression is associated with poor prognosis for distant metastasis-free and remission-free survival in BC patients. Taken together, our results indicate the important role of RSU-1 in BC metastasis and set the foundations for its validation as a potential BC metastasis marker.
Kolecka, Anna; Khayhan, Kantarawee; Groenewald, Marizeth; Theelen, Bart; Arabatzis, Michael; Velegraki, Aristea; Kostrzewa, Markus; Mares, Mihai; Taj-Aldeen, Saad J.
2013-01-01
Matrix-assisted laser desorption ionization–time of flight mass spectrometry (MALDI-TOF MS) was used for an extensive identification study of arthroconidial yeasts, using 85 reference strains from the CBS-KNAW yeast collection and 134 clinical isolates collected from medical centers in Qatar, Greece, and Romania. The test set included 72 strains of ascomycetous yeasts (Galactomyces, Geotrichum, Saprochaete, and Magnusiomyces spp.) and 147 strains of basidiomycetous yeasts (Trichosporon and Guehomyces spp.). With minimal preparation time, MALDI-TOF MS proved to be an excellent diagnostic tool that provided reliable identification of most (98%) of the tested strains to the species level, with good discriminatory power. The majority of strains were correctly identified at the species level with good scores (>2.0) and seven of the tested strains with log score values between 1.7 and 2.0. The MALDI-TOF MS results obtained were consistent with validated internal transcribed spacer (ITS) and/or large subunit (LSU) ribosomal DNA sequencing results. Expanding the mass spectrum database by increasing the number of reference strains for closely related species, including those of nonclinical origin, should enhance the usefulness of MALDI-TOF MS-based diagnostic analysis of these arthroconidial fungi in medical and other laboratories. PMID:23678074
Mangin, B; Siberchicot, A; Nicolas, S; Doligez, A; This, P; Cierco-Ayrolles, C
2012-03-01
Among the several linkage disequilibrium measures known to capture different features of the non-independence between alleles at different loci, the most commonly used for diallelic loci is the r² measure. In the present study, we tackled the problem of the bias of the r² estimate, which results from the sample structure and/or the relatedness between genotyped individuals. We derived two novel linkage disequilibrium measures for diallelic loci that are both extensions of the usual r² measure. The first one, rS², uses the population structure matrix, which consists of information about the origins of each individual and the admixture proportions of each individual genome. The second one, rV², includes the kinship matrix into the calculation. These two corrections can be applied together in order to correct for both biases and are defined either on phased or unphased genotypes. We proved that these novel measures are linked to the power of association tests under the mixed linear model including structure and kinship corrections. We validated them on simulated data and applied them to real data sets collected on Vitis vinifera plants. Our results clearly showed the usefulness of the two corrected r² measures, which actually captured 'true' linkage disequilibrium unlike the usual r² measure.
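The usual r² measure that both corrections extend can be computed directly from phased haplotype data. A minimal sketch (assuming 0/1-coded alleles at two diallelic loci; the corrected rS² and rV² measures additionally require the structure and kinship matrices, which are not shown here):

```python
import numpy as np

def ld_r2(haplotypes):
    """Usual r-squared LD measure between two diallelic loci.

    haplotypes: (n, 2) array of 0/1 alleles per phased haplotype.
    r^2 = D^2 / (pA(1-pA) pB(1-pB)), with D = pAB - pA*pB.
    """
    h = np.asarray(haplotypes, dtype=float)
    p_a = h[:, 0].mean()                # allele frequency at locus A
    p_b = h[:, 1].mean()                # allele frequency at locus B
    p_ab = (h[:, 0] * h[:, 1]).mean()   # frequency of the 1/1 haplotype
    d = p_ab - p_a * p_b                # linkage disequilibrium coefficient D
    return d**2 / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Perfectly correlated loci give r^2 = 1
haps = np.array([[0, 0], [0, 0], [1, 1], [1, 1]])
print(ld_r2(haps))  # 1.0
```

For unphased genotypes, D is usually estimated by an EM step instead of the direct haplotype count above.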
Pu, Hongbin; Sun, Da-Wen; Ma, Ji; Cheng, Jun-Hu
2015-01-01
The potential of visible and near infrared hyperspectral imaging was investigated as a rapid and nondestructive technique for classifying fresh and frozen-thawed meats by integrating critical spectral and image features extracted from hyperspectral images in the region of 400-1000 nm. Six feature wavelengths (400, 446, 477, 516, 592 and 686 nm) were identified using uninformative variable elimination and the successive projections algorithm. Image textural features of the principal component images from hyperspectral images were obtained using histogram statistics (HS), the gray level co-occurrence matrix (GLCM) and the gray level-gradient co-occurrence matrix (GLGCM). Using these spectral and textural features, probabilistic neural network (PNN) models for classification of fresh and frozen-thawed pork meats were established. Compared with the models using the optimum wavelengths only, optimum wavelengths with HS image features, and optimum wavelengths with GLCM image features, the model integrating optimum wavelengths with GLGCM gave the highest classification rates of 93.14% and 90.91% for the calibration and validation sets, respectively. Results indicated that classification accuracy can be improved by combining spectral features with textural features and that the fusion of critical spectral and textural features had better potential than spectral extraction alone in classifying fresh and frozen-thawed pork meat. Copyright © 2014 Elsevier Ltd. All rights reserved.
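A gray level co-occurrence matrix of the kind used for the textural features above can be built in a few lines. A minimal sketch (the pixel offset, number of gray levels, and the contrast statistic are illustrative choices, not details from the abstract; the GLGCM, which pairs gray levels with gradient levels, is not shown):

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one offset (dx, dy)."""
    img = np.asarray(image)
    m = np.zeros((levels, levels), dtype=float)
    rows, cols = img.shape
    for r in range(rows - dy):
        for c in range(cols - dx):
            m[img[r, c], img[r + dy, c + dx]] += 1
    m /= m.sum()  # convert counts to joint probabilities
    return m

def contrast(p):
    """Haralick contrast: expected squared gray-level difference."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 2]])
p = glcm(img, levels=3)
print(round(contrast(p), 3))  # 0.333
```

Other Haralick statistics (energy, homogeneity, correlation) are computed from the same matrix p.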
PLS-LS-SVM based modeling of ATR-IR as a robust method in detection and qualification of alprazolam
NASA Astrophysics Data System (ADS)
Parhizkar, Elahehnaz; Ghazali, Mohammad; Ahmadi, Fatemeh; Sakhteman, Amirhossein
2017-02-01
According to the United States Pharmacopeia (USP), the gold standard technique for alprazolam determination in dosage forms is HPLC, an expensive and time-consuming method that is not easy to approach. In this study, chemometrics-assisted ATR-IR was introduced as an alternative method that produces similar results while consuming less time and energy. Fifty-eight samples containing different concentrations of commercial alprazolam were evaluated by the HPLC and ATR-IR methods. A preprocessing approach was applied to convert the raw data obtained from ATR-IR spectra to a normal matrix. Finally, a relationship between alprazolam concentrations obtained by HPLC and ATR-IR data was established using PLS-LS-SVM (partial least squares least squares support vector machines). Consequently, the validity of the method was verified, yielding a model with low error values (root mean square error of cross-validation equal to 0.98). The model was able to predict about 99% of the samples according to the R² of the prediction set. A response permutation test was also applied to affirm that the model was not assessed by chance correlations. In conclusion, ATR-IR can be a reliable method in the manufacturing process for the detection and qualification of alprazolam content.
Adaptive model reduction for continuous systems via recursive rational interpolation
NASA Technical Reports Server (NTRS)
Lilly, John H.
1994-01-01
A method for adaptive identification of reduced-order models for continuous stable SISO and MIMO plants is presented. The method recursively finds a model whose transfer function (matrix) matches that of the plant on a set of frequencies chosen by the designer. The algorithm utilizes the Moving Discrete Fourier Transform (MDFT) to continuously monitor the frequency-domain profile of the system input and output signals. The MDFT is an efficient method of monitoring discrete points in the frequency domain of an evolving function of time. The model parameters are estimated from MDFT data using standard recursive parameter estimation techniques. The algorithm has been shown in simulations to be quite robust to additive noise in the inputs and outputs. A significant advantage of the method is that it enables a type of on-line model validation. This is accomplished by simultaneously identifying a number of models and comparing each with the plant in the frequency domain. Simulations of the method applied to an 8th-order SISO plant and a 10-state 2-input 2-output plant are presented. An example of on-line model validation applied to the SISO plant is also presented.
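The MDFT idea of continuously monitoring selected frequency-domain points of an evolving signal can be illustrated with the standard sliding-DFT recurrence, which updates one bin in O(1) per sample. A sketch under the assumption of a rectangular length-n window (the paper's exact MDFT formulation may differ):

```python
import cmath
import math

def sliding_dft(samples, k, n):
    """Track DFT bin k of a length-n sliding window over an incoming signal.

    Each new sample updates the bin in O(1) via the recurrence
    X_new = (X_old + x_in - x_out) * exp(2*pi*j*k/n).
    """
    w = cmath.exp(2j * cmath.pi * k / n)
    window = [0.0] * n            # circular buffer of the last n samples
    x = 0.0 + 0.0j
    out = []
    for i, s in enumerate(samples):
        x_out = window[i % n]     # sample leaving the window
        window[i % n] = s         # sample entering the window
        x = (x + s - x_out) * w
        out.append(x)
    return out

# Demo: a unit-amplitude cosine sitting exactly on bin 1 of an 8-sample window
sig = [math.cos(2 * math.pi * i / 8) for i in range(16)]
mag = abs(sliding_dft(sig, k=1, n=8)[-1])
print(round(mag, 3))  # 4.0, i.e. n/2, once the window is full
```

Running several such recurrences in parallel, one per designer-chosen frequency, gives the continuous frequency-domain profile the algorithm feeds to the parameter estimator.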
Grünhut, Marcos; Garrido, Mariano; Centurión, Maria E; Fernández Band, Beatriz S
2010-07-12
A combination of kinetic spectroscopic monitoring and multivariate curve resolution-alternating least squares (MCR-ALS) was proposed for the enzymatic determination of levodopa (LVD) and carbidopa (CBD) in pharmaceuticals. The enzymatic reaction process was carried out in a reverse stopped-flow injection system and monitored by UV-vis spectroscopy. The spectra (292-600 nm) were recorded throughout the reaction and analyzed by multivariate curve resolution-alternating least squares. A small calibration matrix containing nine mixtures was used in the model construction. Additionally, to evaluate the prediction ability of the model, a set of six validation mixtures was used. The lack of fit obtained was 4.3%, the explained variance 99.8% and the overall prediction error 5.5%. Tablets of commercial samples were analyzed and the results were validated by the pharmacopeia method (high performance liquid chromatography). No significant differences were found (alpha=0.05) between the reference values and the ones obtained with the proposed method. It is important to note that a single chemometric model made it possible to determine both analytes simultaneously. Copyright 2010 Elsevier B.V. All rights reserved.
Hybrid Soft Soil Tire Model (HSSTM). Part 1: Tire Material and Structure Modeling
2015-04-28
commercially available vehicle simulation packages. Model parameters are obtained using a validated finite element tire model, modal analysis, and other...design of experiment matrix. These data, in addition to modal analysis data, were used to validate the tire model. Furthermore, to study the validity... (78) The applied forces to the rim center consist of the axle forces and suspension forces.
MISR Level 2 TOA/Cloud Versioning
Atmospheric Science Data Center
2017-10-11
... public release. Add trap singular matrix condition. Add test for invalid look vectors. Use different metadata to test for validity of time tags. Fix incorrectly addressed array. Introduced bug ...
An Automatic and Robust Algorithm of Reestablishment of Digital Dental Occlusion
Chang, Yu-Bing; Xia, James J.; Gateno, Jaime; Xiong, Zixiang; Zhou, Xiaobo; Wong, Stephen T. C.
2017-01-01
In the field of craniomaxillofacial (CMF) surgery, surgical planning can be performed on composite 3-D models that are generated by merging a computerized tomography scan with digital dental models. Digital dental models can be generated by scanning the surfaces of plaster dental models or dental impressions with a high-resolution laser scanner. During the planning process, one of the essential steps is to reestablish the dental occlusion. Unfortunately, this task is time-consuming and often inaccurate. This paper presents a new approach to automatically and efficiently reestablish dental occlusion. It includes two steps. The first step is to initially position the models based on dental curves and a point matching technique. The second step is to reposition the models to the final desired occlusion based on iterative surface-based minimum distance mapping with collision constraints. By linearizing the rotation matrix, the alignment is formulated as a quadratic programming problem. The simulation was completed on 12 sets of digital dental models. Two sets of dental models were partially edentulous, and another two sets had first premolar extractions for orthodontic treatment. Two validation methods were applied to the articulated models. The results show that using our method, the dental models can be successfully articulated with a small degree of deviation from the occlusion achieved with the gold-standard method. PMID:20529735
FastSKAT: Sequence kernel association tests for very large sets of markers.
Lumley, Thomas; Brody, Jennifer; Peloso, Gina; Morrison, Alanna; Rice, Kenneth
2018-06-22
The sequence kernel association test (SKAT) is widely used to test for associations between a phenotype and a set of genetic variants that are usually rare. Evaluating tail probabilities or quantiles of the null distribution for SKAT requires computing the eigenvalues of a matrix related to the genotype covariance between markers. Extracting the full set of eigenvalues of this matrix (an n×n matrix, for n subjects) has computational complexity proportional to n³. As SKAT is often used when n > 10⁴, this step becomes a major bottleneck in its use in practice. We therefore propose fastSKAT, a new computationally inexpensive but accurate approximation to the tail probabilities, in which the k largest eigenvalues of a weighted genotype covariance matrix or the largest singular values of a weighted genotype matrix are extracted, and a single term based on the Satterthwaite approximation is used for the remaining eigenvalues. While the method is not particularly sensitive to the choice of k, we also describe how to choose its value, and show how fastSKAT can automatically alert users to the rare cases where the choice may affect results. As well as providing a faster implementation of SKAT, the new method also enables entirely new applications of SKAT that were not possible before; we give examples grouping variants by topologically associating domains, and comparing chromosome-wide association by class of histone marker. © 2018 WILEY PERIODICALS, INC.
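The core approximation can be sketched as follows, assuming the eigenvalues are already available (in practice the k largest would come from a partial solver such as Lanczos iteration rather than a full decomposition). The tail probability here is estimated by Monte Carlo for simplicity, not by the exact methods fastSKAT itself uses:

```python
import numpy as np

def fastskat_tail(eigvals_full, k, q, n_sim=100_000, seed=0):
    """Approximate P(Q > q) for Q = sum_i lambda_i * chi2_1.

    The k largest eigenvalues are kept exactly; the remainder is replaced
    by a single scaled chi-square matched via the Satterthwaite method.
    """
    lam = np.sort(np.asarray(eigvals_full, dtype=float))[::-1]
    top, rest = lam[:k], lam[k:]
    rng = np.random.default_rng(seed)
    # Exact contribution of the k leading eigenvalues
    q_sim = rng.chisquare(1, size=(n_sim, k)) @ top
    s1, s2 = rest.sum(), (rest ** 2).sum()
    if s2 > 0:
        a, df = s2 / s1, s1 ** 2 / s2   # Satterthwaite scale and dof
        q_sim += a * rng.chisquare(df, size=n_sim)
    return float((q_sim > q).mean())

lam = [4.0, 2.0, 1.0, 0.5, 0.25]
p = fastskat_tail(lam, k=2, q=15.0)
print(0.0 < p < 1.0)  # True: a proper survival probability
```

The hypothetical `fastskat_tail` name and the Monte Carlo step are illustrative; the published method evaluates the mixture distribution directly.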
Data on a Laves phase intermetallic matrix composite in situ toughened by ductile precipitates.
Knowles, Alexander J; Bhowmik, Ayan; Purkayastha, Surajit; Jones, Nicholas G; Giuliani, Finn; Clegg, William J; Dye, David; Stone, Howard J
2017-10-01
The data presented in this article are related to the research article entitled "Laves phase intermetallic matrix composite in situ toughened by ductile precipitates" (Knowles et al.) [1]. The composite comprised an Fe₂(Mo, Ti) matrix with bcc (Mo, Ti) precipitated laths produced in situ by an aging heat treatment, which was shown to confer a toughening effect (Knowles et al.) [1]. Here, details are given on a focused ion beam (FIB) slice-and-view experiment performed on the composite so as to determine that the 3D morphology of the bcc (Mo, Ti) precipitates was that of laths rather than needles. Scanning transmission electron microscopy (STEM) micrographs of the microstructure as well as energy dispersive X-ray spectroscopy (EDX) maps are presented that identify the elemental partitioning between the C14 Laves matrix and the bcc laths, with Mo rejected from the matrix into the laths. A TEM selected area diffraction pattern (SADP) and key are provided that were used to validate the orientation relation between the matrix and laths identified in (Knowles et al.) [1], along with details of the transformation matrix determined.
ERIC Educational Resources Information Center
Palmieri, Patrick A.; Smith, Gregory C.
2007-01-01
The authors examined the structural validity of the parent informant version of the Strengths and Difficulties Questionnaire (SDQ) with a sample of 733 custodial grandparents. Three models of the SDQ's factor structure were evaluated with confirmatory factor analysis based on the item covariance matrix. Although indices of fit were good across all…
ERIC Educational Resources Information Center
Khattab, Ali-Maher; And Others
1982-01-01
A causal modeling system, using confirmatory maximum likelihood factor analysis with the LISREL IV computer program, evaluated the construct validity underlying the higher order factor structure of a given correlation matrix of 46 structure-of-intellect tests emphasizing the product of transformations. (Author/PN)
Position Error Covariance Matrix Validation and Correction
NASA Technical Reports Server (NTRS)
Frisbee, Joe, Jr.
2016-01-01
In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
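One common consistency check for a claimed Gaussian position-error covariance (a standard technique, not necessarily the method of this presentation) is the normalized squared error: if the errors truly have covariance P, then eᵀP⁻¹e follows a chi-square distribution with dimension degrees of freedom, so its sample mean divided by the dimension should be near 1:

```python
import numpy as np

def covariance_consistency_ratio(errors, P):
    """Mean normalized squared error, E[e^T P^-1 e] / dim.

    For Gaussian errors with true covariance P this ratio is ~1.
    A ratio > 1 means P is optimistic (too small); < 1, pessimistic.
    """
    e = np.atleast_2d(np.asarray(errors, dtype=float))
    d2 = np.einsum('ij,jk,ik->i', e, np.linalg.inv(P), e)
    return float(d2.mean() / e.shape[1])

# Demo: samples drawn from the claimed covariance give a ratio near 1
rng = np.random.default_rng(1)
P = np.diag([4.0, 1.0, 0.25])
samples = rng.multivariate_normal(np.zeros(3), P, size=20_000)
ratio = covariance_consistency_ratio(samples, P)
print(round(ratio, 2))  # ≈ 1.0
```

A correction step could then rescale P by this ratio so that predicted uncertainties match observed miss distances.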
Mansilha, C; Melo, A; Rebelo, H; Ferreira, I M P L V O; Pinho, O; Domingues, V; Pinho, C; Gameiro, P
2010-10-22
A multi-residue methodology based on a solid phase extraction followed by gas chromatography-tandem mass spectrometry was developed for trace analysis of 32 compounds in water matrices, including estrogens and several pesticides from different chemical families, some of them with endocrine disrupting properties. Matrix standard calibration solutions were prepared by adding known amounts of the analytes to a residue-free sample to compensate matrix-induced chromatographic response enhancement observed for certain pesticides. Validation was done mainly according to the International Conference on Harmonisation recommendations, as well as some European and American validation guidelines with specifications for pesticides analysis and/or GC-MS methodology. As the assumption of homoscedasticity was not met for analytical data, weighted least squares linear regression procedure was applied as a simple and effective way to counteract the greater influence of the greater concentrations on the fitted regression line, improving accuracy at the lower end of the calibration curve. The method was considered validated for 31 compounds after consistent evaluation of the key analytical parameters: specificity, linearity, limit of detection and quantification, range, precision, accuracy, extraction efficiency, stability and robustness. Copyright © 2010 Elsevier B.V. All rights reserved.
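The weighted least-squares correction for heteroscedastic calibration data can be sketched as follows. The 1/x² weighting shown is a common choice for chromatographic calibration, an assumption here rather than a detail stated in the abstract:

```python
import numpy as np

def wls_line(x, y, w):
    """Weighted least-squares fit of y = a + b*x minimizing sum w*(residual)^2."""
    W = np.asarray(w, dtype=float)
    X = np.column_stack([np.ones_like(x, dtype=float), x])
    # Solve the weighted normal equations (X^T W X) beta = X^T W y
    A = X.T @ (W[:, None] * X)
    b = X.T @ (W * y)
    return np.linalg.solve(A, b)

# Noise-free demo: 1/x^2 weights down-weight the high-concentration points,
# improving accuracy at the low end of a real (noisy) calibration curve.
x = np.array([1.0, 2.0, 5.0, 10.0, 50.0])
y = 0.5 + 2.0 * x
a, b = wls_line(x, y, w=1.0 / x**2)
print(a, b)  # ≈ 0.5, 2.0
```

With homoscedastic noise the weights reduce to a constant and the fit coincides with ordinary least squares.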
NASA Technical Reports Server (NTRS)
Gray, Carl E., Jr.
1988-01-01
Using the Newtonian method, the equations of motion are developed for the coupled bending-torsion steady-state response of beams rotating at constant angular velocity in a fixed plane. The resulting equations are valid to first order strain-displacement relationships for a long beam with all other nonlinear terms retained. In addition, the equations are valid for beams with the mass centroidal axis offset (eccentric) from the elastic axis, nonuniform mass and section properties, and variable twist. The solution of these coupled, nonlinear, nonhomogeneous, differential equations is obtained by modifying a Hunter linear second-order transfer-matrix solution procedure to solve the nonlinear differential equations and programming the solution for a desk-top personal computer. The modified transfer-matrix method was verified by comparing the solution for a rotating beam with a geometric, nonlinear, finite-element computer code solution; and for a simple rotating beam problem, the modified method demonstrated a significant advantage over the finite-element solution in accuracy, ease of solution, and actual computer processing time required to effect a solution.
Rapid Quantitative Determination of Squalene in Shark Liver Oils by Raman and IR Spectroscopy.
Hall, David W; Marshall, Susan N; Gordon, Keith C; Killeen, Daniel P
2016-01-01
Squalene is sourced predominantly from shark liver oils and to a lesser extent from plants such as olives. It is used for the production of surfactants, dyes, sunscreen, and cosmetics. The economic value of shark liver oil is directly related to the squalene content, which in turn is highly variable and species-dependent. Presented here is a validated gas chromatography-mass spectrometry analysis method for the quantitation of squalene in shark liver oils, with an accuracy of 99.0%, precision of 0.23% (standard deviation), and linearity of >0.999. The method has been used to measure the squalene concentration of 16 commercial shark liver oils. These reference squalene concentrations were related to infrared (IR) and Raman spectra of the same oils using partial least squares regression. The resultant models were suitable for the rapid quantitation of squalene in shark liver oils, with cross-validation r² values of >0.98 and root mean square errors of validation of ≤4.3% w/w. Independent test set validation of these models found mean absolute deviations of 4.9 and 1.0% w/w for the IR and Raman models, respectively. Both techniques were more accurate than results obtained by an industrial refractive index analysis method, which is used for rapid, cheap quantitation of squalene in shark liver oils. In particular, the Raman partial least squares regression was suited to quantitative squalene analysis. The intense and highly characteristic Raman bands of squalene made quantitative analysis possible irrespective of the lipid matrix.
The provisional matrix: setting the stage for tissue repair outcomes.
Barker, Thomas H; Engler, Adam J
2017-07-01
Since its conceptualization in the 1980s, the provisional matrix has often been characterized as a simple fibrin-containing scaffold for wound healing that supports the nascent blood clot and is functionally distinct from the basement membrane. However, subsequent advances have shown that this matrix is far from passive, with distinct compositional differences as the wound matures, and an active role in wound remodeling. Here we review the stages of this matrix, provide an update on the state of our understanding of the provisional matrix, and present some of the outstanding issues related to the provisional matrix, its components, and their assembly and use in vivo. Copyright © 2017. Published by Elsevier B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fachruddin, Imam, E-mail: imam.fachruddin@sci.ui.ac.id; Salam, Agus
2016-03-11
A new momentum-space formulation for scattering of two spin-half particles, either identical or unidentical, is formulated. As basis states, the free linear-momentum states are not expanded into the angular-momentum states; the system's spin states are described by the product of the spin states of the two particles, and the system's isospin states by the total isospin states of the two particles. We evaluate the Lippmann-Schwinger equations for the T-matrix elements in these basis states. The azimuthal behavior of the potential and of the T-matrix elements leads to a set of coupled integral equations for the T-matrix elements in two variables only, which are the magnitude of the relative momentum and the scattering angle. Some symmetry relations for the potential and the T-matrix elements reduce the number of the integral equations to be solved. A set of six spin operators to express any interaction of two spin-half particles is introduced. We show the spin-averaged differential cross section as calculated in terms of the solution of the set of integral equations.
φq-field theory for portfolio optimization: “fat tails” and nonlinear correlations
NASA Astrophysics Data System (ADS)
Sornette, D.; Simonetti, P.; Andersen, J. V.
2000-08-01
Physics and finance are both fundamentally based on the theory of random walks (and their generalizations to higher dimensions) and on the collective behavior of large numbers of correlated variables. The archetype exemplifying this situation in finance is the portfolio optimization problem, in which one desires to diversify on a set of possibly dependent assets to optimize the return and minimize the risks. The standard mean-variance solution introduced by Markowitz and its subsequent developments is basically a mean-field Gaussian solution. It has severe limitations for practical applications due to the strongly non-Gaussian structure of distributions and the nonlinear dependence between assets. Here, we present in detail a general analytical characterization of the distribution of returns for a portfolio constituted of assets whose returns are described by an arbitrary joint multivariate distribution. To this end, we introduce a nonlinear transformation that maps the returns onto Gaussian variables whose covariance matrix provides a new measure of dependence between the non-normal returns, generalizing the covariance matrix into a nonlinear covariance matrix. This nonlinear covariance matrix is chiseled to the specific fat-tail structure of the underlying marginal distributions, thus ensuring stability and good conditioning. The portfolio distribution is then obtained as the solution of a mapping to a so-called φq field theory in particle physics, of which we offer an extensive treatment using Feynman diagrammatic techniques and large deviation theory, illustrated in detail for multivariate Weibull distributions. The interaction (non-mean-field) structure in this field theory is a direct consequence of the non-Gaussian nature of the distribution of asset price returns. We find that minimizing the portfolio variance (i.e., the relatively “small” risks) may often increase the large risks, as measured by higher normalized cumulants.
Extensive empirical tests are presented on the foreign exchange market that validate satisfactorily the theory. For “fat tail” distributions, we show that an adequate prediction of the risks of a portfolio relies much more on the correct description of the tail structure rather than on their correlations. For the case of asymmetric return distributions, our theory allows us to generalize the return-risk efficient frontier concept to incorporate the dimensions of large risks embedded in the tail of the asset distributions. We demonstrate that it is often possible to increase the portfolio return while decreasing the large risks as quantified by the fourth and higher-order cumulants. Exact theoretical formulas are validated by empirical tests.
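An empirical, rank-based version of the Gaussian mapping can be sketched as follows. Note the paper derives the transform analytically from the marginal distributions; this normal-scores variant is an illustrative assumption, and `gaussianize`/`nonlinear_covariance` are hypothetical names:

```python
import numpy as np
from statistics import NormalDist

def gaussianize(returns):
    """Map each column of returns to standard normal scores by rank."""
    nd = NormalDist()
    x = np.asarray(returns, dtype=float)
    n = x.shape[0]
    out = np.empty_like(x)
    for j in range(x.shape[1]):
        ranks = x[:, j].argsort().argsort() + 1   # ranks 1..n
        u = (ranks - 0.5) / n                     # mid-rank empirical CDF
        out[:, j] = [nd.inv_cdf(v) for v in u]    # normal quantile transform
    return out

def nonlinear_covariance(returns):
    """Covariance of the Gaussianized returns: a dependence measure
    insensitive to fat-tailed marginal distributions."""
    return np.cov(gaussianize(returns), rowvar=False)

# Demo: two columns with identical ranks but very different tails
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
ret = np.column_stack([x, x**3])
C = nonlinear_covariance(ret)
print(np.allclose(C[0, 1], C[0, 0]))  # True: the tail distortion is removed
```

The resulting matrix is well conditioned even when the raw sample covariance is dominated by a few extreme returns.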
Divya, O; Mishra, Ashok K
2007-05-29
Quantitative determination of the kerosene fraction present in diesel has been carried out based on excitation emission matrix fluorescence (EEMF) along with parallel factor analysis (PARAFAC) and N-way partial least squares regression (N-PLS). EEMF is a simple, sensitive and nondestructive method suitable for the analysis of multifluorophoric mixtures. Calibration models consisting of varying compositions of diesel and kerosene were constructed and their validation was carried out using the leave-one-out cross-validation method. The accuracy of the model was evaluated through the root mean square error of prediction (RMSEP) for the PARAFAC, N-PLS and unfold-PLS methods. N-PLS was found to be a better method compared to PARAFAC and unfold-PLS because of its low RMSEP values.
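Leave-one-out cross-validation of a calibration model follows a generic pattern regardless of the model family. A sketch with ordinary least squares standing in for PARAFAC/N-PLS (which would require multiway data and dedicated solvers):

```python
import numpy as np

def loo_rmsep(X, y, fit, predict):
    """Leave-one-out cross-validation RMSEP: each sample is predicted
    by a model trained on all the other samples."""
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i          # hold out sample i
        model = fit(X[mask], y[mask])
        errs.append(predict(model, X[i:i + 1])[0] - y[i])
    return float(np.sqrt(np.mean(np.square(errs))))

# Ordinary least squares as a stand-in calibration model
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
predict = lambda beta, X: X @ beta

X = np.column_stack([np.ones(6), np.arange(6.0)])
y = 1.0 + 2.0 * np.arange(6.0)
rmsep = loo_rmsep(X, y, fit, predict)
print(rmsep)  # ~0 for noise-free linear data
```

Comparing this RMSEP across model families is exactly how the abstract ranks N-PLS above PARAFAC and unfold-PLS.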
Multiple Versus Single Set Validation of Multivariate Models to Avoid Mistakes.
Harrington, Peter de Boves
2018-01-02
Validation of multivariate models is of current importance for a wide range of chemical applications. Although important, it is neglected. The common practice is to use a single external validation set for evaluation. This approach is deficient and may mislead investigators with results that are specific to the single validation set of data. In addition, no statistics are available regarding the precision of a derived figure of merit (FOM). A statistical approach using bootstrapped Latin partitions is advocated. This validation method makes efficient use of the data because each object is used once for validation. It was reviewed a decade earlier, but primarily for the optimization of chemometric models; this review presents the reasons it should be used for generalized statistical validation. Average FOMs with confidence intervals are reported, and powerful matched-sample statistics may be applied for comparing models and methods. Examples demonstrate the problems with single validation sets.
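The bootstrapped Latin-partition scheme can be sketched as follows. This simplified version ignores the class stratification that proper Latin partitions preserve, and the model and figure of merit are placeholders:

```python
import numpy as np

def latin_partitions(n, n_parts, rng):
    """Random split of n objects into n_parts near-equal validation folds;
    each object appears in exactly one fold, so it is validated exactly once."""
    return np.array_split(rng.permutation(n), n_parts)

def bootstrapped_fom(X, y, evaluate, n_parts=4, n_boot=10, seed=0):
    """Repeat the Latin-partition split n_boot times and collect the
    figure of merit from every validation fold, with a 95% CI on the mean."""
    rng = np.random.default_rng(seed)
    foms = []
    for _ in range(n_boot):
        for fold in latin_partitions(len(y), n_parts, rng):
            train = np.setdiff1d(np.arange(len(y)), fold)
            foms.append(evaluate(X[train], y[train], X[fold], y[fold]))
    foms = np.asarray(foms)
    half = 1.96 * foms.std(ddof=1) / np.sqrt(len(foms))
    return foms.mean(), (foms.mean() - half, foms.mean() + half)

# Placeholder FOM: RMSE of predicting each validation fold by the training mean
evaluate = lambda Xtr, ytr, Xv, yv: float(np.sqrt(np.mean((yv - ytr.mean())**2)))
X, y = np.zeros((20, 1)), np.arange(20.0)
mean_fom, (lo, hi) = bootstrapped_fom(X, y, evaluate)
print(lo < mean_fom < hi)  # True
```

Because every bootstrap yields a matched set of fold-level FOMs, paired statistics can compare two models on exactly the same partitions.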
Umar, Arzu; Kang, Hyuk; Timmermans, Annemieke M; Look, Maxime P; Meijer-van Gelder, Marion E; den Bakker, Michael A; Jaitly, Navdeep; Martens, John W M; Luider, Theo M; Foekens, John A; Pasa-Tolić, Ljiljana
2009-06-01
Tamoxifen resistance is a major cause of death in patients with recurrent breast cancer. Current clinical factors can correctly predict therapy response in only half of the treated patients. Identification of proteins that are associated with tamoxifen resistance is a first step toward better response prediction and tailored treatment of patients. In the present study we intended to identify putative protein biomarkers indicative of tamoxifen therapy resistance in breast cancer using nano-LC coupled with FTICR MS. Comparative proteome analysis was performed on approximately 5,500 pooled tumor cells (corresponding to approximately 550 ng of protein lysate/analysis) obtained through laser capture microdissection (LCM) from two independently processed data sets (n = 24 and n = 27) containing both tamoxifen therapy-sensitive and therapy-resistant tumors. Peptides and proteins were identified by matching mass and elution time of newly acquired LC-MS features to information in previously generated accurate mass and time tag reference databases. A total of 17,263 unique peptides were identified that corresponded to 2,556 non-redundant proteins identified with > or = 2 peptides. 1,713 overlapping proteins between the two data sets were used for further analysis. Comparative proteome analysis revealed 100 putatively differentially abundant proteins between tamoxifen-sensitive and tamoxifen-resistant tumors. The presence and relative abundance for 47 differentially abundant proteins were verified by targeted nano-LC-MS/MS in a selection of unpooled, non-microdissected discovery set tumor tissue extracts. ENPP1, EIF3E, and GNB4 were significantly associated with progression-free survival upon tamoxifen treatment for recurrent disease. Differential abundance of our top discriminating protein, extracellular matrix metalloproteinase inducer, was validated by tissue microarray in an independent patient cohort (n = 156). 
Extracellular matrix metalloproteinase inducer levels were higher in therapy-resistant tumors and significantly associated with an earlier tumor progression following first line tamoxifen treatment (hazard ratio, 1.87; 95% confidence interval, 1.25-2.80; p = 0.002). In summary, comparative proteomics performed on laser capture microdissection-derived breast tumor cells using nano-LC-FTICR MS technology revealed a set of putative biomarkers associated with tamoxifen therapy resistance in recurrent breast cancer.
Henning, John A; Coggins, Jamie; Peterson, Matthew
2015-10-06
Hop is an economically important crop for the Pacific Northwest USA as well as other regions of the world. It is a perennial crop with a rhizomatous or clonal propagation system for varietal distribution. A big concern for growers as well as brewers is variety purity, and questions are regularly posed to public agencies concerning the availability of genotype testing. Current means for genotyping are based upon 25 microsatellites that provide relatively accurate genotyping but cannot always differentiate sister lines. In addition, numerous PCR runs (25) are required to complete this process, and only a few laboratories exist that perform this service. A genotyping protocol based upon SNPs would enable rapid, accurate genotyping that can be assayed at any laboratory facility set up for SNP-based genotyping. The results of this study arose from a larger project designed for whole genome association studies upon the USDA-ARS hop germplasm collection consisting of approximately 116 distinct hop varieties and germplasm (female lines) from around the world. The original dataset that arose from partial sequencing of 121 genotypes resulted in the identification of 374,829 SNPs using the TASSEL-UNEAK pipeline. After filtering out genotypes with more than 50% missing data (5 genotypes) and SNP markers with more than 20% missing data, 32,206 highly filtered SNP markers across 116 genotypes were identified and considered for this study. Minor allele frequency (MAF) was calculated for each SNP and the SNPs were ranked from most informative to least informative. Only those markers without missing data across genotypes as well as 60% or fewer heterozygous gamete calls were considered for further analysis. Genetic distances among individuals in the study were calculated using the marker with the highest MAF value, then by using a combination of the two markers with the highest MAF values, and so on.
This process was reiterated until a set of markers was identified that allowed all genotypes in the study to be genetically differentiated from each other. Next, we compared genetic matrices calculated from the minimal marker sets (Table 2; 6-, 7-, 8-, 10- and 12-marker set matrices) with a matrix calculated from a set of markers with no missing data across all 116 samples (1006 SNP markers). The minimum number of markers required to meet both specifications was a set of 7 markers (Table 3). These seven SNPs were then aligned with a genome assembly, and DNA sequence both upstream and downstream was used to identify primer sequences that can be used to develop seven amplicons for high resolution melting curve PCR detection or other SNP-based PCR detection methods. This study identifies a set of 7 SNP markers that may prove useful for the identification and validation of hop varieties and accessions. Variety validation of unknown samples assumes that the variety in question has been included a priori in a discovery panel. These results are based upon in silico studies, and the markers need to be validated using different SNP marker technology upon a differential set of hop genotypes. The marker sequence data and suggested primer sets provide a potential means to fingerprint hop varieties in most genetic laboratories utilizing SNP-marker technology.
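The greedy marker-selection procedure described above can be sketched as follows (toy data; the study's additional filters on missing data and heterozygous calls, and its genetic-distance comparison against the full-marker matrix, are omitted):

```python
import numpy as np

def minimal_marker_set(genotypes, maf):
    """Greedily add markers in decreasing-MAF order until every genotype
    (row) has a unique profile across the selected markers (columns)."""
    order = np.argsort(maf)[::-1]        # most informative marker first
    chosen = []
    for m in order:
        chosen.append(int(m))
        profiles = {tuple(row) for row in genotypes[:, chosen]}
        if len(profiles) == genotypes.shape[0]:
            return chosen                # all genotypes now distinguishable
    return chosen  # full set may still leave some genotypes identical

# Toy example: 4 genotypes scored at 3 biallelic markers coded 0/1/2
g = np.array([[0, 0, 2],
              [0, 1, 2],
              [1, 0, 0],
              [1, 1, 0]])
maf = np.array([0.5, 0.4, 0.1])
print(minimal_marker_set(g, maf))  # [0, 1]
```

The study's extra step, checking that the distance matrix from the minimal set tracks the one from all 1006 clean markers, guards against a small set that separates genotypes for spurious reasons.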
Le Châtelier reciprocal relations and the mechanical analog
NASA Astrophysics Data System (ADS)
Gilmore, Robert
1983-08-01
Le Châtelier's principle is discussed carefully in terms of two sets of simple thermodynamic examples. The principle is then formulated quantitatively for general thermodynamic systems. The formulation is in terms of a perturbation-response matrix, the Le Châtelier matrix [L]. Le Châtelier's principle is contained in the diagonal elements of this matrix, all of which exceed one. These matrix elements describe the response of a system to a perturbation of either its extensive or intensive variables. These response ratios are inverses of each other. The Le Châtelier matrix is symmetric, so that a new set of thermodynamic reciprocal relations is derived. This quantitative formulation is illustrated by a single simple example which includes the original examples and shows the reciprocities among them. The assumptions underlying this new quantitative formulation of Le Châtelier's principle are general and applicable to a wide variety of nonthermodynamic systems. Le Châtelier's principle is formulated quantitatively for mechanical systems in static equilibrium, and mechanical examples of this formulation are given.
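The quantitative statement sketched in the abstract can be written schematically as follows. The notation is illustrative (our labels, not necessarily the paper's own symbols): the diagonal elements of the Le Châtelier matrix exceed one, and its symmetry yields the reciprocal relations.

```latex
% Schematic only; X_i denote responses, f_j perturbations (illustrative labels).
L_{ij} \;=\; \frac{\left(\partial X_i/\partial f_j\right)_{\text{constrained}}}
                  {\left(\partial X_i/\partial f_j\right)_{\text{relaxed}}},
\qquad L_{ii} \;\ge\; 1,
\qquad L_{ij} \;=\; L_{ji}.
```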
By Stuart G. Baker. The program requires Mathematica 7.01.0. The key function is Classify[datalist, options], where datalist = {data, genename, dataname}; data = {matrix for class 0, matrix for class 1}, with each matrix giving gene expression by specimen; genename is a list of names of genes; and dataname = {name of data set, name of class 0, name of class 1}.
Disruption of Methicillin-resistant Staphylococcus aureus Biofilms with Enzymatic Therapeutics
2015-04-29
polysaccharide matrix and bacteria from the growth surface. α-Amylase, bromelain, and papain caused removal of most of the polysaccharide matrix...biofilm EPS matrix, including polysaccharides, proteins, and bacterial/host DNA [21]. While these enzymes have been utilized clinically since the 1940s...clinically or can easily transition to the clinical setting. These enzymes included an anti-polysaccharide agent, α-amylase, an anti-peptidoglycan agent
Different Treatment Stages in Medical Diagnosis using Fuzzy Membership Matrix
NASA Astrophysics Data System (ADS)
Sundaresan, T.; Sheeja, G.; Govindarajan, A.
2018-04-01
The field of medicine is among the most important and rapidly developing areas of application of fuzzy set theory. The nature of medical documentation and the uncertain information it contains lend themselves to the use of fuzzy triangular matrices. In this paper, procedures are presented for medical diagnosis and treatment stages, in which patient and drug relationships are represented by fuzzy membership matrices. Examples are given to verify the proposed approach.
Universal shocks in the Wishart random-matrix ensemble.
Blaizot, Jean-Paul; Nowak, Maciej A; Warchoł, Piotr
2013-05-01
We show that the derivative of the logarithm of the average characteristic polynomial of a diffusing Wishart matrix obeys an exact partial differential equation valid for an arbitrary value of N, the size of the matrix. In the large N limit, this equation generalizes the simple inviscid Burgers equation that has been obtained earlier for Hermitian or unitary matrices. The solution, through the method of characteristics, presents singularities that we relate to the precursors of shock formation in the Burgers equation. The finite N effects appear as a viscosity term in the Burgers equation. Using a scaling analysis of the complete equation for the characteristic polynomial, in the vicinity of the shocks, we recover in a simple way the universal Bessel oscillations (so-called hard-edge singularities) familiar in random-matrix theory.
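For orientation, the Burgers structure referred to above can be sketched as follows. This is schematic notation only; the paper's exact equation for the Wishart case differs in detail.

```latex
% Schematic: f is the derivative of the log of the averaged characteristic
% polynomial; the finite-N correction plays the role of a viscosity ~ 1/N.
\partial_t f + f\,\partial_z f \;=\; \nu_N\,\partial_z^2 f,
\qquad \nu_N \sim \tfrac{1}{N} \;\longrightarrow\; 0 \quad (N \to \infty).
```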
Local Geostatistical Models and Big Data in Hydrological and Ecological Applications
NASA Astrophysics Data System (ADS)
Hristopulos, Dionissios
2015-04-01
The advent of the big data era creates new opportunities for environmental and ecological modelling but also presents significant challenges. The availability of remote sensing images and low-cost wireless sensor networks means that spatiotemporal environmental data now cover larger spatial domains at higher spatial and temporal resolution and for longer time windows. Handling such voluminous data presents several technical and scientific challenges. In particular, the geostatistical methods used to process spatiotemporal data need to overcome the dimensionality curse associated with the need to store and invert large covariance matrices. There are various mathematical approaches for addressing the dimensionality problem, including change of basis, dimensionality reduction, hierarchical schemes, and local approximations. We present a Stochastic Local Interaction (SLI) model that can be used to model local correlations in spatial data. SLI is a random field model suitable for data on discrete supports (i.e., regular lattices or irregular sampling grids). The degree of localization is determined by means of kernel functions and appropriate bandwidths. The strength of the correlations is determined by means of coefficients. In the "plain vanilla" version the parameter set involves scale and rigidity coefficients as well as a characteristic length. The latter, in combination with the rigidity coefficient, determines the correlation length of the random field. The SLI model is based on statistical field theory and extends previous research on Spartan spatial random fields [2,3] from continuum spaces to explicitly discrete supports. The SLI kernel functions employ adaptive bandwidths learned from the sampling spatial distribution [1]. The SLI precision matrix is expressed explicitly in terms of the model parameters and the kernel function. Hence, covariance matrix inversion is not necessary for parameter inference, which is based on leave-one-out cross validation.
This property helps to overcome a significant computational bottleneck of geostatistical models due to the poor scaling of matrix inversion [4,5]. We present applications to real and simulated data sets, including the Walker Lake data, and we investigate the SLI performance using various statistical cross-validation measures. References: [1] T. Hofmann, B. Schölkopf, A. J. Smola, Annals of Statistics, 36, 1171-1220 (2008). [2] D. T. Hristopulos, SIAM Journal on Scientific Computing, 24(6): 2125-2162 (2003). [3] D. T. Hristopulos and S. N. Elogne, IEEE Transactions on Signal Processing, 57(9): 3475-3487 (2009). [4] G. Jona Lasinio, G. Mastrantonio, and A. Pollice, Statistical Methods and Applications, 22(1): 97-112 (2013). [5] Y. Sun, B. Li, and M. G. Genton, Geostatistics for large datasets. In: Advances and Challenges in Space-time Modelling of Natural Events, Lecture Notes in Statistics, pp. 55-77. Springer, Berlin-Heidelberg (2012).
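The key computational point, that a local precision (inverse covariance) matrix permits leave-one-out prediction without inverting a covariance matrix, can be sketched as follows. This is an illustrative toy construction, not the exact SLI parametrization of the paper; the kernel, bandwidth, and coefficient names are our own.

```python
import numpy as np

def sli_precision(coords, scale, rigidity, bandwidth):
    """Toy local-interaction precision matrix: off-diagonal entries come
    from a compact Gaussian kernel; the diagonal is balanced so that the
    matrix is symmetric and diagonally dominant (hence positive definite)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = np.exp(-(d / bandwidth) ** 2) * (d > 0)      # local kernel weights
    J = -rigidity * w
    np.fill_diagonal(J, rigidity * w.sum(axis=1) + scale)
    return J

def loo_predict(J, x):
    """Leave-one-out conditional means of a Gaussian field with precision J:
    E[x_i | x_-i] = -(1/J_ii) * sum_{j != i} J_ij x_j  -- no inversion needed."""
    return (np.diag(J) * x - J @ x) / np.diag(J)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 1, size=(50, 2))
x = np.sin(4 * coords[:, 0]) + 0.1 * rng.standard_normal(50)
J = sli_precision(coords, scale=1.0, rigidity=2.0, bandwidth=0.2)
print("LOO RMSE:", np.sqrt(np.mean((loo_predict(J, x) - x) ** 2)))
```

Because the conditional mean of a Gaussian Markov random field depends only on rows of the precision matrix, the cross-validation score is obtained with matrix-vector products alone.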
Getzenberg, R H; Coffey, D S
1990-09-01
The DNA of interphase nuclei has a very specific three-dimensional organization that differs between cell types, and it is possible that this varying DNA organization is responsible for the tissue specificity of gene expression. The nuclear matrix organizes the three-dimensional structure of the DNA and is believed to be involved in the control of gene expression. This study compares the nuclear structural proteins between two sex accessory tissues in the same animal responding to the same androgen stimulation by the differential expression of major tissue-specific secretory proteins. We demonstrate here that the nuclear matrix is tissue specific in the rat ventral prostate and seminal vesicle, and undergoes characteristic alterations in its protein composition upon androgen withdrawal. Three types of nuclear matrix proteins were observed: 1) nuclear matrix proteins that are different and tissue specific in the rat ventral prostate and seminal vesicle, 2) a set of nuclear matrix proteins that either appear or disappear upon androgen withdrawal, and 3) a set of proteins that are common to both the ventral prostate and seminal vesicle and do not change with the hormonal state of the animal. Since the nuclear matrix is known to bind androgen receptors in a tissue- and steroid-specific manner, we propose that the tissue specificity of the nuclear matrix arranges the DNA in a unique conformation, which may be involved in the specific interaction of transcription factors with DNA sequences, resulting in tissue-specific patterns of secretory protein expression.
An Efficient Distributed Compressed Sensing Algorithm for Decentralized Sensor Network.
Liu, Jing; Huang, Kaiyu; Zhang, Guoxian
2017-04-20
We consider the joint sparsity model 1 (JSM-1) in a decentralized scenario, where a number of sensors are connected through a network and there is no fusion center. A novel algorithm, named distributed compact sensing matrix pursuit (DCSMP), is proposed to exploit the computational and communication capabilities of the sensor nodes. In contrast to the conventional distributed compressed sensing algorithms adopting a random sensing matrix, the proposed algorithm focuses on the deterministic sensing matrices built directly on the real acquisition systems. The proposed DCSMP algorithm can be divided into two independent parts: the common and innovation support set estimation processes. The goal of the common support set estimation process is to obtain an estimated common support set by fusing the candidate support set information from an individual node and its neighboring nodes. In the following innovation support set estimation process, the measurement vector is projected into a subspace that is perpendicular to the subspace spanned by the columns indexed by the estimated common support set, to remove the impact of the estimated common support set. We can then search the innovation support set using an orthogonal matching pursuit (OMP) algorithm based on the projected measurement vector and projected sensing matrix. In the proposed DCSMP algorithm, the process of estimating the common component/support set is decoupled from that of estimating the innovation component/support set. Thus, an inaccurately estimated common support set will have no impact on estimating the innovation support set. It is proven that, provided the estimated common support set contains the true common support set, the proposed algorithm can find the true innovation set correctly.
Moreover, since the innovation support set estimation process is independent of the common support set estimation process, no prior knowledge of the cardinality of either set is required; thus, the proposed DCSMP algorithm is capable of tackling the unknown sparsity problem successfully.
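The two-stage structure of the innovation step, projecting out the estimated common support and then running OMP on the projected system, can be sketched as follows. Synthetic data are used, and the distributed fusion of candidate supports across nodes is omitted.

```python
import numpy as np

def omp(A, y, k):
    """Standard orthogonal matching pursuit: greedily pick k atoms."""
    support, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    return sorted(support)

def innovation_support(A, y, common, k_innov):
    """DCSMP-style innovation step (sketch): project out the subspace
    spanned by the columns in the estimated common support, then run OMP
    on the projected measurement vector and projected sensing matrix."""
    Ac = A[:, common]
    P = np.eye(len(y)) - Ac @ np.linalg.pinv(Ac)   # orthogonal projector
    return omp(P @ A, P @ y, k_innov)

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 80))   # deterministic in practice; random here
x = np.zeros(80)
x[[3, 17]] = 1.5                    # common component
x[55] = 2.0                         # innovation component
y = A @ x
print(innovation_support(A, y, common=[3, 17], k_innov=1))   # → [55]
```

After the projection, the columns of the common support contribute nothing to the residual, so OMP recovers the innovation index regardless of errors in the common coefficients.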
Inverse eigenproblem for R-symmetric matrices and their approximation
NASA Astrophysics Data System (ADS)
Yuan, Yongxin
2009-11-01
Let R be a nontrivial involution, i.e., R = R^{-1} ≠ ±I_n. A matrix G is said to be R-symmetric if RGR = G. In this paper, we first give the solvability condition for the following inverse eigenproblem (IEP): given a set of vectors {x_i} and a set of complex numbers {λ_i}, find an R-symmetric matrix A such that the λ_i and x_i are, respectively, eigenvalues and eigenvectors of A. We then consider the following approximation problem: given an n×n matrix Ã, find a matrix Â in the solution set of the IEP that minimizes ‖Ã − Â‖, where ‖·‖ is the Frobenius norm. We provide an explicit formula for the best approximation solution by means of the canonical correlation decomposition.
Two-way learning with one-way supervision for gene expression data.
Wong, Monica H T; Mutch, David M; McNicholas, Paul D
2017-03-04
A family of parsimonious Gaussian mixture models for the biclustering of gene expression data is introduced. Biclustering is accommodated by adopting a mixture of factor analyzers model with a binary, row-stochastic factor loadings matrix. This particular form of factor loadings matrix results in a block-diagonal covariance matrix, which is a useful property in gene expression analyses, specifically in biomarker discovery scenarios where blood can potentially act as a surrogate tissue for other, less accessible tissues. Prior knowledge of the factor loadings matrix is useful in this application and is reflected in the one-way supervised nature of the algorithm. Additionally, the factor loadings matrix can be assumed to be constant across all components because of the relationship desired between the various types of tissue samples. Parameter estimates are obtained through a variant of the expectation-maximization algorithm, and the best-fitting model is selected using the Bayesian information criterion. The family of models is demonstrated using simulated data and two real microarray data sets. The first real data set is from a rat study that investigated the influence of diabetes on gene expression in different tissues. The second real data set is from a human transcriptomics study that focused on blood and immune tissues. The microarray data sets illustrate the biclustering family's performance in biomarker discovery involving peripheral blood as surrogate biopsy material. The simulation studies indicate that the algorithm identifies the correct biclusters, performing best when the number of observation clusters is known. Moreover, the biclustering algorithm identified biclusters composed of biologically meaningful data related to insulin resistance and immune function in the rat and human real data sets, respectively.
Initial results using real data show that this biclustering technique provides a novel approach for biomarker discovery by enabling blood to be used as a surrogate for hard-to-obtain tissues.
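The central structural claim, that a binary, row-stochastic loadings matrix yields a block-diagonal covariance, is easy to verify numerically. Below is a toy assignment of six genes to two biclusters; the numbers are illustrative only.

```python
import numpy as np

# Binary, row-stochastic loadings: each of 6 genes belongs to exactly one
# of 2 latent factors (a toy bicluster assignment).
Lambda = np.array([[1, 0],
                   [1, 0],
                   [1, 0],
                   [0, 1],
                   [0, 1],
                   [0, 1]], dtype=float)
Psi = 0.2 * np.eye(6)                 # diagonal noise ("uniquenesses")
Sigma = Lambda @ Lambda.T + Psi       # factor-analysis covariance

# Genes loading on different factors are uncorrelated, so Sigma is
# block-diagonal: two 3x3 blocks of ones (+0.2 on the diagonal).
print(Sigma)
```

Any genes sharing a row pattern in Lambda fall in the same covariance block, which is what makes the biclusters identifiable from the fitted model.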
Hanousek, Ondrej; Berger, Torsten W; Prohaska, Thomas
2016-01-01
Analysis of the (34)S/(32)S ratio of sulfate in rainwater and soil solutions is a powerful tool for the study of the sulfur cycle. It is therefore considered a useful means, e.g., for the refinement and calibration of ecological or biogeochemical models. Due to several analytical limitations, mainly caused by the low sulfate concentration in rainwater, the complex matrix of soil solutions, limited sample volume, and the high number of samples in ecosystem studies, a straightforward analytical protocol is required to provide accurate S isotopic data on a large set of diverse samples. Therefore, sulfate separation by anion exchange membrane was combined with precise isotopic measurement by multicollector inductively coupled plasma mass spectrometry (MC ICP-MS). The separation method proved able to quantitatively separate sulfate from matrix cations (Ca, K, Na, or Li), which is a precondition for avoiding matrix-induced analytical bias in the mass spectrometer. Moreover, sulfate exchange on the resin is capable of preconcentrating sulfate from low-concentration solutions (by a factor of 3 in our protocol). No significant sulfur isotope fractionation was observed during separation and preconcentration. MC ICP-MS operated at edge mass resolution enabled the direct (34)S/(32)S analysis of sulfate eluted from the membrane, with an expanded uncertainty U (k = 2) down to 0.3 ‰ (single measurement). The protocol was optimized and validated using different sulfate solutions and different matrix compositions. The optimized method was applied in a study on solute samples retrieved in a beech (Fagus sylvatica) forest in the Vienna Woods. Both rainwater (precipitation and tree throughfall) and soil solution δ(34)SVCDT values ranged between 4 and 6 ‰, with the ratio in soil solution being slightly lower. The lower ratio indicates that a considerable portion of the atmospherically deposited sulfate is cycled through the organic S pool before being released to the soil solution.
Nearly the same trends and variations were observed in soil solution and rainwater δ (34)SVCDT values showing that sulfate adsorption/desorption are not important processes in the studied soil.
Alahmad, Shoeb; Elfatatry, Hamed M; Mabrouk, Mokhtar M; Hammad, Sherin F; Mansour, Fotouh R
2018-01-01
The development and introduction of combination therapies represent a challenge for analysis, due to severe overlapping of the components' UV spectra in the case of spectroscopy, or the requirement of a long, tedious, and high-cost separation technique in the case of chromatography. Quality control laboratories have to develop and validate suitable analytical procedures in order to assay such multi-component preparations. New spectrophotometric methods for the simultaneous determination of simvastatin (SIM) and nicotinic acid (NIA) in binary combinations were developed. These methods are based on chemometric treatment of data; the applied chemometric techniques are multivariate methods, including classical least squares (CLS), principal component regression (PCR), and partial least squares (PLS). In these techniques, the concentration data matrix was prepared using synthetic mixtures containing SIM and NIA dissolved in ethanol. The absorbance data matrix corresponding to the concentration data matrix was obtained by measuring the absorbance at 12 wavelengths in the range 216-240 nm at 2 nm intervals in the zero-order spectra. The spectrophotometric procedures do not require any separation step. The accuracy, precision, and linearity ranges of the methods were determined and validated by analyzing synthetic mixtures containing the studied drugs. Chemometric spectrophotometric methods have been developed in the present study for the simultaneous determination of simvastatin and nicotinic acid in their synthetic binary mixtures and in their mixtures with possible excipients present in the tablet dosage form. The validation was performed successfully. The developed methods have been shown to be accurate, linear, precise, and simple, and can be used routinely for the determination of both drugs in their dosage form. Copyright © Bentham Science Publishers.
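The CLS step, the simplest of the three multivariate calibrations, can be sketched as follows. The spectra and concentrations below are synthetic placeholders; the real study used measured absorbances of SIM/NIA mixtures at 12 wavelengths.

```python
import numpy as np

# Classical least squares (CLS) calibration sketch: A = C @ K, where A is
# the absorbance matrix (mixtures x wavelengths) and C the concentration
# matrix (mixtures x components), assuming Beer-Lambert additivity.
rng = np.random.default_rng(2)
k_sim = rng.uniform(0.1, 1.0, 12)        # pure-component "spectrum" (made up)
k_nia = rng.uniform(0.1, 1.0, 12)        # pure-component "spectrum" (made up)
K_true = np.vstack([k_sim, k_nia])       # 2 components x 12 wavelengths

C_cal = rng.uniform(1.0, 10.0, (8, 2))   # 8 synthetic calibration mixtures
A_cal = C_cal @ K_true                   # noiseless calibration absorbances

K_hat, *_ = np.linalg.lstsq(C_cal, A_cal, rcond=None)   # calibration step

c_unknown = np.array([[4.0, 7.0]])                       # "unknown" sample
a_unknown = c_unknown @ K_true
c_pred, *_ = np.linalg.lstsq(K_hat.T, a_unknown.T, rcond=None)  # prediction
print(np.round(c_pred.ravel(), 3))       # → [4. 7.]
```

With noiseless synthetic data the estimated pure-component matrix equals the true one and both concentrations are recovered exactly; PCR and PLS replace the direct least-squares step with projections onto latent variables.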
Jones, Dustin P; Hanna, William; El-Hamidi, Hamid; Celli, Jonathan P
2014-06-10
The mechanical microenvironment has been shown to act as a crucial regulator of tumor growth behavior and signaling, and is itself remodeled and modified as part of a set of complex, two-way mechanosensitive interactions. While the development of biologically relevant 3D tumor models has facilitated mechanistic studies on the impact of matrix rheology on tumor growth, the inverse problem of mapping changes in the mechanical environment induced by tumors remains challenging. Here, we describe the implementation of particle-tracking microrheology (PTM) in conjunction with 3D models of pancreatic cancer as part of a robust and viable approach for longitudinally monitoring physical changes in the tumor microenvironment, in situ. The methodology described here integrates a system of preparing in vitro 3D models embedded in a model extracellular matrix (ECM) scaffold of Type I collagen with fluorescently labeled probes uniformly distributed for position- and time-dependent microrheology measurements throughout the specimen. In vitro tumors are plated and probed in parallel conditions using multiwell imaging plates. Drawing on established methods, videos of tracer probe movements are transformed via the Generalized Stokes Einstein Relation (GSER) to report the complex frequency-dependent viscoelastic shear modulus, G*(ω). Because this approach is imaging-based, mechanical characterization is also mapped onto large transmitted-light spatial fields to simultaneously report qualitative changes in 3D tumor size and phenotype. Representative results showing contrasting mechanical response in sub-regions associated with localized invasion-induced matrix degradation are presented, along with system calibration and validation data. Undesirable outcomes from common experimental errors and troubleshooting of these issues are also presented.
The 96-well 3D culture plating format implemented in this protocol is conducive to correlation of microrheology measurements with therapeutic screening assays or molecular imaging to gain new insights into impact of treatments or biochemical stimuli on the mechanical microenvironment.
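The analysis chain, tracking probes, computing the mean-squared displacement (MSD), and converting to rheological quantities, can be sketched in its purely viscous limit, where the MSD is linear in lag time and the Stokes-Einstein relation recovers the viscosity. The tracks below are simulated 2-D Brownian motion; the probe radius and viscosity are made-up values, and the full GSER analysis of G*(ω) is not reproduced here.

```python
import numpy as np

kB, T = 1.380649e-23, 298.0        # SI units
a = 0.5e-6                         # tracer radius, 0.5 um (assumed)
dt, n_steps, n_probes = 0.03, 500, 50
eta_true = 0.01                    # Pa*s, ~10x water (illustrative)

D = kB * T / (6 * np.pi * eta_true * a)        # Stokes-Einstein diffusivity
rng = np.random.default_rng(3)
steps = rng.normal(0, np.sqrt(2 * D * dt), (n_probes, n_steps, 2))
tracks = np.cumsum(steps, axis=1)              # simulated 2-D trajectories

def msd(tracks, lag):
    """Time- and ensemble-averaged mean-squared displacement at one lag."""
    d = tracks[:, lag:, :] - tracks[:, :-lag, :]
    return np.mean(np.sum(d ** 2, axis=-1))

lags = np.arange(1, 20)
msds = np.array([msd(tracks, l) for l in lags])
D_hat = np.polyfit(lags * dt, msds, 1)[0] / 4   # <dr^2> = 4*D*tau in 2-D
eta_hat = kB * T / (6 * np.pi * D_hat * a)
print(f"recovered viscosity: {eta_hat:.4f} Pa*s")
```

For a viscoelastic gel the MSD is sub-linear in lag time, and the same MSD curve would instead be fed through the GSER to obtain the frequency-dependent modulus.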
NASA Astrophysics Data System (ADS)
Nischkauer, Winfried; Vanhaecke, Frank; Bernacchi, Sébastien; Herwig, Christoph; Limbeck, Andreas
2014-11-01
Nebulising liquid samples and using the aerosol thus obtained for further analysis is the standard method in many current analytical techniques, including inductively coupled plasma (ICP)-based devices. With such a set-up, quantification via external calibration is usually straightforward for samples with aqueous or close-to-aqueous matrix composition. However, there is a variety of more complex samples. Such samples can be found in medical, biological, technological, and industrial contexts and can range from body fluids, like blood or urine, to fuel additives or fermentation broths. Specialized nebulizer systems or careful digestion and dilution are required to tackle such demanding sample matrices. One alternative approach is to convert the liquid into a dried solid and to use laser ablation for sample introduction. Up to now, this approach required the application of internal standards or matrix-adjusted calibration due to matrix effects. In this contribution, we show a way to circumvent these matrix effects while using simple external calibration for quantification. The principle of representative sampling that we propose uses radial line-scans across the dried residue. This compensates for the centro-symmetric inhomogeneities typically observed in dried spots. The effectiveness of the proposed sampling strategy is exemplified via the determination of phosphorus in biochemical fermentation media; however, the universal viability of the presented measurement protocol is postulated. Detection limits using laser ablation-ICP-optical emission spectrometry were on the order of 40 μg mL⁻¹, with a reproducibility of 10% relative standard deviation (n = 4, at a concentration of 10 times the quantification limit). The reported sensitivity is fit-for-purpose in the biochemical context described here, but could be improved using ICP-mass spectrometry, should future analytical tasks require it.
Trueness of the proposed method was investigated by cross-validation with conventional liquid measurements, and by analyzing IAEA-153 reference material (Trace Elements in Milk Powder); a good agreement with the certified value for phosphorus was obtained.
NASA Astrophysics Data System (ADS)
Reinisch, E. C.; Ali, S. T.; Cardiff, M. A.; Morency, C.; Kreemer, C.; Feigl, K. L.; Team, P.
2016-12-01
Time-dependent deformation has been observed at Brady Hot Springs using interferometric synthetic aperture radar (InSAR) [Ali et al. 2016, http://dx.doi.org/10.1016/j.geothermics.2016.01.008]. Our goal is to evaluate multiple competing hypotheses to explain the observed deformation at Brady. To do so requires statistical tests that account for uncertainty. Graph theory is useful for such an analysis of InSAR data [Reinisch, et al. 2016, http://dx.doi.org/10.1007/s00190-016-0934-5]. In particular, the normalized edge Laplacian matrix calculated from the edge-vertex incidence matrix of the graph of the pair-wise data set represents its correlation and leads to a full data covariance matrix in the weighted least squares problem. This formulation also leads to the covariance matrix of the epoch-wise measurements, representing their relative uncertainties. While the formulation in terms of incidence graphs applies to any quantity derived from pair-wise differences, the modulo-2π ambiguity of wrapped phase renders the problem non-linear. The conventional practice is to unwrap InSAR phase before modeling, which can introduce mistakes without increasing the corresponding measurement uncertainty. To address this issue, we are applying Bayesian inference. To build the likelihood, we use three different observables: (a) wrapped phase [e.g., Feigl and Thurber 2009, http://dx.doi.org/10.1111/j.1365-246X.2008.03881.x]; (b) range gradients, as defined by Ali and Feigl [2012, http://dx.doi.org/10.1029/2012GC004112]; and (c) unwrapped phase, i.e. range change in mm, which we validate using GPS data. We apply our method to InSAR data taken over Brady Hot Springs geothermal field in Nevada as part of a project entitled "Poroelastic Tomography by Adjoint Inverse Modeling of Data from Seismology, Geodesy, and Hydrology" (PoroTomo) [ http://geoscience.wisc.edu/feigl/porotomo].
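The incidence-graph formulation can be sketched for a toy network of four epochs and four interferometric pairs: the edge-vertex incidence matrix maps epoch-wise values to pairwise data, and, after fixing a reference epoch, weighted least squares yields both the epoch-wise estimates and their relative covariance. Unit data weights are assumed here for simplicity; the paper's normalized edge Laplacian enters through a non-trivial data covariance.

```python
import numpy as np

# Epochs (acquisition dates) and pairs (interferograms): an illustrative graph
pairs = [(0, 1), (1, 2), (0, 2), (2, 3)]
n_epochs = 4

# Edge-vertex incidence matrix: each row is one pair, -1/+1 at its epochs
B = np.zeros((len(pairs), n_epochs))
for k, (i, j) in enumerate(pairs):
    B[k, i], B[k, j] = -1.0, 1.0

# Synthetic epoch-wise displacements and the pairwise data they imply
m_true = np.array([0.0, 2.0, 3.0, 7.0])     # mm; epoch 0 is the reference
d = B @ m_true

# Least squares with epoch 0 fixed (the incidence matrix is rank-deficient by 1)
Br = B[:, 1:]
m_hat, *_ = np.linalg.lstsq(Br, d, rcond=None)
cov = np.linalg.inv(Br.T @ Br)              # relative epoch-wise uncertainties
print(np.round(m_hat, 6))
print(np.round(np.diag(cov), 3))
```

Epoch 3 appears in only one pair, so its variance (5/3) exceeds that of the better-connected epochs (2/3): graph connectivity directly controls the epoch-wise uncertainty.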
Inductive matrix completion for predicting gene-disease associations.
Natarajan, Nagarajan; Dhillon, Inderjit S
2014-06-15
Most existing methods for predicting causal disease genes rely on specific types of evidence, and are therefore limited in terms of applicability. More often than not, the type of evidence available for diseases varies; for example, we may know linked genes, keywords associated with the disease obtained by mining text, or co-occurrence of disease symptoms in patients. Similarly, the type of evidence available for genes varies; for example, specific microarray probes convey information only for certain sets of genes. In this article, we apply a novel matrix-completion method called Inductive Matrix Completion to the problem of predicting gene-disease associations; it combines multiple types of evidence (features) for diseases and genes to learn latent factors that explain the observed gene-disease associations. We construct features from different biological sources such as microarray expression data and disease-related textual data. A crucial advantage of the method is that it is inductive; it can be applied to diseases not seen at training time, unlike traditional matrix-completion approaches and network-based inference methods that are transductive. Comparison with state-of-the-art methods on diseases from the Online Mendelian Inheritance in Man (OMIM) database shows that the proposed approach is substantially better: it has close to a one-in-four chance of recovering a true association in the top 100 predictions, compared to the recently proposed Catapult method (second best) that has <15% chance. We demonstrate that the inductive method is particularly effective for a query disease with no previously known gene associations, and for predicting novel genes, i.e. genes that are previously not linked to diseases. Thus the method is capable of predicting novel genes even for well-characterized diseases. We also validate the novelty of predictions by evaluating the method on recently reported OMIM associations and on associations recently reported in the literature.
Source code and datasets can be downloaded from http://bigdata.ices.utexas.edu/project/gene-disease. © The Author 2014. Published by Oxford University Press.
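The bilinear model at the heart of inductive matrix completion, M_ij ≈ x_iᵀ Z y_j with gene features x_i and disease features y_j, can be sketched as follows. For simplicity this toy version leaves Z unconstrained and solves a linear least-squares problem on the observed cells; the paper's method additionally factors Z = WHᵀ with low rank and uses a different solver.

```python
import numpy as np

def imc_fit(M, mask, X, Y):
    """Inductive matrix completion sketch: model M_ij ~ x_i^T Z y_j and
    recover Z from the observed cells (mask == True) by least squares.
    Prediction uses features only, so unseen rows/columns are handled."""
    rows, cols = np.nonzero(mask)
    A = np.stack([np.kron(Y[j], X[i]) for i, j in zip(rows, cols)])
    z, *_ = np.linalg.lstsq(A, M[rows, cols], rcond=None)
    Z = z.reshape(X.shape[1], Y.shape[1], order="F")
    return X @ Z @ Y.T          # predictions for every (gene, disease) cell

rng = np.random.default_rng(5)
X = rng.standard_normal((30, 6))        # gene features (made up)
Y = rng.standard_normal((20, 5))        # disease features (made up)
Z_true = rng.standard_normal((6, 5))
M = X @ Z_true @ Y.T                    # noiseless associations
mask = rng.uniform(size=M.shape) < 0.5  # observe half of the cells
P = imc_fit(M, mask, X, Y)
print("held-out max error:", np.abs(P - M)[~mask].max())
```

Because the model is expressed through the features, a disease with no observed associations still receives predictions via its feature vector, which is the "inductive" property emphasized in the abstract.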
NASA Astrophysics Data System (ADS)
Yang, Pengliang; Brossier, Romain; Métivier, Ludovic; Virieux, Jean
2016-10-01
In this paper, we study 3-D multiparameter full waveform inversion (FWI) in viscoelastic media based on the generalized Maxwell/Zener body, including an arbitrary number of attenuation mechanisms. We present a frequency-domain energy analysis to establish the stability condition of a full anisotropic viscoelastic system, under zero-valued boundary conditions and the elastic-viscoelastic correspondence principle: the real-valued stiffness matrix becomes complex-valued in the Fourier domain when seismic attenuation is taken into account. We develop a least-squares optimization approach to linearly relate the quality factor to the anelastic coefficients by estimating a set of constants that are independent of the spatial coordinates, which allows an explicit incorporation of the parameter Q in the general viscoelastic wave equation. By introducing Lagrangian multipliers into the matrix expression of the wave equation with implicit time integration, we build a systematic formulation of multiparameter FWI for the full anisotropic viscoelastic wave equation, while the equivalent form of the state and adjoint equations with explicit time integration can be resolved efficiently. In particular, this formulation lays the foundation for the inversion of the parameter Q in the time domain with full anisotropic viscoelastic properties. In the 3-D isotropic viscoelastic setting, the anelastic coefficients and the quality factors using bulk and shear moduli parametrization can be related to their counterparts using P and S velocities. Gradients with respect to any other parameter of interest can be found by the chain rule. Numerical validations as well as real-data applications of this generic framework will be carried out to explore the potential of viscoelastic FWI when adequate high-performance computing resources and field data are available.
ASCS online fault detection and isolation based on an improved MPCA
NASA Astrophysics Data System (ADS)
Peng, Jianxin; Liu, Haiou; Hu, Yuhui; Xi, Junqiang; Chen, Huiyan
2014-09-01
Multi-way principal component analysis (MPCA) has received considerable attention and been widely used in process monitoring. A traditional MPCA algorithm unfolds multiple batches of historical data into a two-dimensional matrix and cuts the matrix along the time axis to form subspaces. However, low subspace efficiency and difficult fault isolation are common disadvantages of the principal component model. This paper presents a new subspace construction method based on a kernel density estimation function that can effectively reduce the storage required for the subspace information. The MPCA model and the knowledge base are built on the new subspace. Then, fault detection and isolation with the squared prediction error (SPE) statistic and Hotelling's T2 statistic are realized in process monitoring. When a fault occurs, fault isolation based on the SPE statistic is achieved by residual contribution analysis of the different variables. For fault isolation of a subspace based on the T2 statistic, the relationship between the statistic indicator and the state variables is constructed, and constraint conditions are presented to check the validity of the fault isolation. Then, to improve the robustness of fault isolation to unexpected disturbances, a statistical method is adopted to relate single and multiple subspaces and increase the rate of correct fault isolation. Finally, fault detection and isolation based on the improved MPCA are used to monitor the automatic shift control system (ASCS) to prove the correctness and effectiveness of the algorithm. The research proposes a new subspace construction method to reduce the required storage capacity and improve the robustness of the principal component model, and establishes the relationship between the state variables and fault detection indicators for fault isolation.
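The two monitoring statistics can be sketched for an ordinary (non-multi-way) PCA model; the same SPE and T² definitions apply per subspace in the MPCA setting. The process data below are synthetic and the component count and fault magnitude are made-up values.

```python
import numpy as np

# PCA-based monitoring sketch: build the model on normal operating data,
# then score a new sample with Hotelling's T^2 and the SPE (Q) statistic.
rng = np.random.default_rng(6)
latent = rng.standard_normal((200, 2))             # 2 hidden process drivers
X = latent @ rng.standard_normal((2, 6)) + 0.05 * rng.standard_normal((200, 6))
mu, sd = X.mean(0), X.std(0)
Xs = (X - mu) / sd

U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 2                                              # retained components
P, lam = Vt[:k].T, (S[:k] ** 2) / (len(X) - 1)     # loadings, PC variances

def scores(x):
    xs = (x - mu) / sd
    t = xs @ P                           # projection onto the PC subspace
    T2 = np.sum(t ** 2 / lam)            # Hotelling's T^2
    SPE = np.sum((xs - t @ P.T) ** 2)    # squared prediction error (Q)
    return T2, SPE

T2_ok, SPE_ok = scores(X[0])                                    # normal sample
T2_f, SPE_f = scores(X[0] + np.array([0, 0, 4 * sd[2], 0, 0, 0]))  # sensor fault
print(f"normal: T2={T2_ok:.2f} SPE={SPE_ok:.4f}")
print(f"faulty: T2={T2_f:.2f} SPE={SPE_f:.4f}")
```

A fault on a single sensor breaks the correlation structure learned by PCA, so the SPE jumps; per-variable residual contributions (the terms inside the SPE sum) then point at the faulty variable, which is the isolation logic the abstract refers to.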
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroeninger, Kevin Alexander (Bonn U.)
2004-04-01
Using data sets of 158 and 169 pb⁻¹ of D0 Run-II data in the electron and muon plus jets channels, respectively, the top quark mass has been measured using the Matrix Element Method. The method and its implementation are described. Its performance is studied in Monte Carlo using ensemble tests, and the method is applied to the Moriond 2004 data set.
Nonlinear Adjustment with or without Constraints, Applicable to Geodetic Models
1989-03-01
corrections are neglected, resulting in the familiar (linearized) observation equations. In matrix notation, the latter are expressed by V = AX + L, where A is the design matrix, X = Xa − X0 is the column-vector of parametric corrections, V = La − Lb is the column-vector of residuals, and L = L0 − Lb is the... X0 corresponds to the set ua of model-surface coordinates describing the initial point P0. The final set of parametric corrections, X, then
Aguado, Brian A; Caffe, Jordan R; Nanavati, Dhaval; Rao, Shreyas S; Bushnell, Grace G; Azarin, Samira M; Shea, Lonnie D
2016-03-01
Metastatic tumor cells colonize the pre-metastatic niche, which is a complex microenvironment consisting partially of extracellular matrix (ECM) proteins. We sought to identify and validate novel contributors to tumor cell colonization using ECM-coated poly(ε-caprolactone) (PCL) scaffolds as mimics of the pre-metastatic niche. Utilizing orthotopic breast cancer mouse models, fibronectin and collagen IV-coated scaffolds implanted in the subcutaneous space captured colonizing tumor cells, showing a greater than 2-fold increase in tumor cell accumulation at the implant site compared to uncoated scaffolds. As a strategy to identify additional ECM colonization contributors, decellularized matrix (DCM) from lungs and livers containing metastatic tumors was characterized. In vitro, metastatic cell adhesion was increased on DCM coatings from diseased organs relative to healthy DCM. Furthermore, in vivo implantations of diseased DCM-coated scaffolds had increased tumor cell colonization relative to healthy DCM coatings. Mass-spectrometry proteomics was performed on healthy and diseased DCM to identify candidates associated with colonization. Myeloperoxidase was identified as abundantly present in diseased organs and validated as a contributor to colonization using myeloperoxidase-coated scaffold implants. This work identified novel ECM proteins associated with colonization using decellularization and proteomics techniques and validated candidates using a scaffold to mimic the pre-metastatic niche. The pre-metastatic niche consists partially of ECM proteins that promote metastatic cell colonization to a target organ. We present a biomaterials-based approach to mimic this niche and identify ECM mediators of colonization. Using murine breast cancer models, we implanted microporous PCL scaffolds to recruit colonizing tumor cells in vivo.
As a strategy to modulate colonization, we coated scaffolds with various ECM proteins, including decellularized lung and liver matrix from tumor-bearing mice. After characterizing the organ matrices using proteomics, myeloperoxidase was identified as an ECM protein contributing to colonization and validated using our scaffold. Our scaffold provides a platform to identify novel contributors to colonization and allows for the capture of colonizing tumor cells for a variety of downstream clinical applications. Copyright © 2016 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Innovative anisotropic phantoms for calibration of diffusion tensor imaging sequences.
Kłodowski, Krzysztof; Krzyżak, Artur Tadeusz
2016-05-01
The paper describes a novel type of anisotropic phantom designed for b-matrix spatial distribution diffusion tensor imaging (BSD-DTI). A cubic plate anisotropic phantom, a cylinder capillary phantom and a water reference phantom are described as a complete set necessary for calibration, validation and normalization of BSD-DTI. An innovative phantom design, based on enclosing the anisotropic cores in liquid-filled glass balls, made BSD calibration possible for the first time with an echo planar imaging (EPI) sequence. Susceptibility artifacts prone to occur in EPI sequences were visibly reduced in the central region of the phantoms. The phantoms were designed for use in a clinical scanner's head coil, but can be scaled for other coils or scanner types. The phantoms can also be used for pre-calibration when imaging other types of phantoms with more specific applications. Copyright © 2015 Elsevier Inc. All rights reserved.
2D data-space cross-gradient joint inversion of MT, gravity and magnetic data
NASA Astrophysics Data System (ADS)
Pak, Yong-Chol; Li, Tonglin; Kim, Gang-Sop
2017-08-01
We have developed a data-space multiple cross-gradient joint inversion algorithm, validated it through synthetic tests, and applied it to magnetotelluric (MT), gravity and magnetic datasets acquired along a 95 km profile in the Benxi-Ji'an area of northeastern China. To begin, we discuss a generalized cross-gradient joint inversion for multiple datasets and model parameter sets, and formulate it in data space. The Lagrange multiplier required for the structural coupling in the data-space method is determined using an iterative solver to avoid calculation of the inverse matrix when solving the large system of equations. Next, using the model-space and data-space methods, we inverted the synthetic data and field data. Based on our results, the data-space joint inversion not only delineates geological bodies more clearly than the separate inversions, but also yields results nearly equal to those of the model-space method while consuming much less memory.
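The structural coupling used in this family of joint inversions penalizes the cross product of the two model gradients, which vanishes wherever the models are structurally aligned. Below is a minimal NumPy sketch of that measure on a 2-D grid; the grid sizes, spacings and toy models are illustrative assumptions, not the paper's discretization.

```python
import numpy as np

def cross_gradient(m1, m2, dx=1.0, dz=1.0):
    """Out-of-plane component of t = grad(m1) x grad(m2) on a 2-D grid.
    t == 0 wherever the two models are structurally aligned (gradients
    parallel, or one of them zero)."""
    dm1_dz, dm1_dx = np.gradient(m1, dz, dx)  # axis 0 = z, axis 1 = x
    dm2_dz, dm2_dx = np.gradient(m2, dz, dx)
    return dm1_dx * dm2_dz - dm1_dz * dm2_dx

# Structurally identical models (one a scaled, shifted copy of the other)
# give a (numerically) zero cross-gradient everywhere.
z, x = np.mgrid[0:20, 0:30]
m1 = np.exp(-((x - 15) ** 2 + (z - 10) ** 2) / 20.0)
m2 = 3.0 * m1 + 1.0
print(np.abs(cross_gradient(m1, m2)).max())  # ~0 (parallel gradients)
```

A nonzero value flags locations where the two inverted models disagree structurally, which is exactly what the coupling term drives toward zero during the joint inversion.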
Sirichai, Somsak; Khanatharana, Proespichaya
2008-09-15
Capillary electrophoresis (CE) with UV detection for the simultaneous and rapid analysis of clenbuterol, salbutamol, procaterol, and fenoterol is described and validated. Optimized conditions were found to be a 10 mmol l(-1) borate buffer (pH 10.0), a separation voltage of 19 kV, and a separation temperature of 32 degrees C. Detection was set at 205 nm. Under the optimized conditions, analyses of the four analytes in pharmaceutical and human urine samples were carried out in approximately 1 min. No interference from the sample matrix was observed. The LOD (limit of detection), defined at an S/N of 3:1, was found to be between 0.5 and 2.0 mg l(-1) for the analytes. The linearity of the detector response was within the range of 2.0 to 30 mg l(-1), with correlation coefficients >0.996.
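Figures of merit like these come from a least-squares calibration line. A short sketch, assuming the common 3*sigma/slope definition for the S/N = 3 LOD; the calibration data and noise level are made up for illustration, not taken from the study.

```python
import numpy as np

def calibration(conc, response, noise_sd):
    """Fit a least-squares calibration line and report the correlation
    coefficient and the LOD taken as 3*sigma/slope (S/N = 3 criterion)."""
    slope, intercept = np.polyfit(conc, response, 1)
    r = np.corrcoef(conc, response)[0, 1]
    lod = 3.0 * noise_sd / slope
    return slope, intercept, r, lod

conc = np.array([2.0, 5.0, 10.0, 20.0, 30.0])   # mg/L (illustrative)
resp = np.array([4.1, 10.2, 19.8, 40.5, 59.9])  # peak area (illustrative)
slope, intercept, r, lod = calibration(conc, resp, noise_sd=0.7)
print(r > 0.996, round(lod, 2))
```

With a nearly linear response like this toy set, r exceeds the 0.996 acceptance threshold quoted in the abstract and the LOD lands in the low mg/L range.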
Improving aircraft composite inspections using optimized reference standards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roach, D.; Dorrell, L.; Kollgaard, J.
1998-10-01
The rapidly increasing use of composites on commercial airplanes, coupled with the potential for economic savings associated with their use in aircraft structures, means that the demand for composite materials technology will continue to increase. Inspecting these composite structures is a critical element in assuring their continued airworthiness. The FAA's Airworthiness Assurance NDI Validation Center, in conjunction with the Commercial Aircraft Composite Repair committee, is developing a set of composite reference standards to be used in NDT equipment calibration for accomplishment of damage assessment and post-repair inspection of all commercial aircraft composites. In this program, a series of NDI tests on a matrix of composite aircraft structures and prototype reference standards were completed in order to minimize the number of standards needed to carry out composite inspections on aircraft. Two tasks, related to composite laminates and non-metallic composite honeycomb configurations, were addressed.
Accurate polarimeter with multicapture fitting for plastic lens evaluation
NASA Astrophysics Data System (ADS)
Domínguez, Noemí; Mayershofer, Daniel; Garcia, Cristina; Arasa, Josep
2016-02-01
Due to their manufacturing process, plastic injection molded lenses do not achieve a constant density throughout their volume. This change of density introduces tensions in the material, inducing local birefringence, which in turn is translated into a variation of the ordinary and extraordinary refractive indices that can be expressed as a retardation phase plane using the Jones matrix notation. The detection and measurement of the value of the retardation of the phase plane are therefore very useful ways to evaluate the quality of plastic lenses. We introduce a polariscopic device to obtain two-dimensional maps of the tension distribution in the bulk of a lens, based on detection of the local birefringence. In addition to a description of the device and the mathematical approach used, a set of initial measurements is presented that confirms the validity of the developed system for the testing of the uniformity of plastic lenses.
Passivity analysis of memristor-based impulsive inertial neural networks with time-varying delays.
Wan, Peng; Jian, Jigui
2018-03-01
This paper focuses on delay-dependent passivity analysis for a class of memristive impulsive inertial neural networks with time-varying delays. By choosing a proper variable transformation, the memristive inertial neural networks can be rewritten as first-order differential equations. The memristive model presented here is regarded as a switching system rather than treated via the theory of differential inclusions and set-valued maps. Based on matrix inequalities and the Lyapunov-Krasovskii functional method, several delay-dependent passivity conditions are obtained to ascertain the passivity of the addressed networks. In addition, the results obtained here contain those on the passivity of the addressed networks without impulse effects as special cases, and can also be generalized to other neural networks with more complex pulse interference. Finally, one numerical example is presented to show the validity of the obtained results. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Robust Stabilization of T-S Fuzzy Stochastic Descriptor Systems via Integral Sliding Modes.
Li, Jinghao; Zhang, Qingling; Yan, Xing-Gang; Spurgeon, Sarah K
2017-09-19
This paper addresses the robust stabilization problem for T-S fuzzy stochastic descriptor systems using an integral sliding mode control paradigm. A classical integral sliding mode control scheme and a nonparallel distributed compensation (Non-PDC) integral sliding mode control scheme are presented. It is shown that two restrictive assumptions previously adopted when developing sliding mode controllers for Takagi-Sugeno (T-S) fuzzy stochastic systems are not required with the proposed framework. A unified framework for sliding mode control of T-S fuzzy systems is formulated. The proposed Non-PDC integral sliding mode control scheme encompasses existing schemes when the previously imposed assumptions hold. Stability of the sliding motion is analyzed, and the sliding mode controller is parameterized in terms of the solutions of a set of linear matrix inequalities, which facilitates design. The methodology is applied to an inverted pendulum model to validate the effectiveness of the results presented.
Shi, Wuxi; Luo, Rui; Li, Baoquan
2017-01-01
In this study, an adaptive fuzzy prescribed performance control approach is developed for a class of uncertain multi-input and multi-output (MIMO) nonlinear systems with unknown control direction and unknown dead-zone inputs. Properties of symmetric matrices are exploited to design the adaptive fuzzy prescribed performance controller, and a Nussbaum-type function is incorporated in the controller to estimate the unknown control direction. This method has two prominent advantages: it does not require a priori knowledge of the control direction, and only three parameters need to be updated on-line for this MIMO system. It is proved that all the signals in the resulting closed-loop system are bounded and that the tracking errors converge to a small residual set within the prescribed performance bounds. The effectiveness of the proposed approach is validated by simulation results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Singh, Gurpreet; Ravi, Koustuban; Wang, Qian; Ho, Seng-Tiong
2012-06-15
A complex-envelope (CE) alternating-direction-implicit (ADI) finite-difference time-domain (FDTD) approach to treat light-matter interaction self-consistently with electromagnetic field evolution for efficient simulations of active photonic devices is presented for the first time (to the best of our knowledge). The active medium (AM) is modeled using an efficient multilevel system of carrier rate equations to yield the correct carrier distributions, suitable for modeling semiconductor/solid-state media accurately. To include the AM in the CE-ADI-FDTD method, a first-order differential system involving CE fields in the AM is first set up. The system matrix that includes the AM parameters is then split into two time-dependent submatrices, which are used in an efficient ADI splitting formula. The proposed CE-ADI-FDTD approach with AM requires 22% of the computation time of the corresponding explicit FDTD approach, as validated by semiconductor microdisk laser simulations.
The paradox of managing a project-oriented matrix: establishing coherence within chaos.
Greiner, L E; Schein, V E
1981-01-01
Projects that require the flexible coordination of multidisciplinary teams have tended to adopt a matrix structure to accomplish complex tasks. Yet these project-oriented matrix structures themselves require careful coordination if they are to realize the objectives set for them. The authors identify the basic organizational questions that project-oriented matrix organizations must face. They examine the relationship between responsibility and authority; the tradeoffs between economic efficiency and the technical quality of the work produced; and the sensitive issues of managing individualistic, highly trained professionals while also maintaining group cohesiveness.
Park, Douglas L; Coates, Scott; Brewer, Vickery A; Garber, Eric A E; Abouzied, Mohamed; Johnson, Kurt; Ritter, Bruce; McKenzie, Deborah
2005-01-01
Performance Tested Method multiple laboratory validations for the detection of peanut protein in 4 different food matrixes were conducted under the auspices of the AOAC Research Institute. In this blind study, 3 commercially available ELISA test kits were validated: Neogen Veratox for Peanut, R-Biopharm RIDASCREEN FAST Peanut, and Tepnel BioKits for Peanut Assay. The food matrixes used were breakfast cereal, cookies, ice cream, and milk chocolate spiked at 0 and 5 ppm peanut. Analyses of the samples were conducted by laboratories representing industry and international and U.S. governmental agencies. All 3 commercial test kits successfully identified spiked and peanut-free samples. The validation study required 60 analyses on test samples at the target level of 5 microg peanut/g food and 60 analyses at a peanut-free level, which was designed to ensure that the lower 95% confidence limit for the sensitivity and specificity would not be <90%. The probability that a test sample contains an allergen given a prevalence rate of 5% and a positive test result using a single test kit analysis with 95% sensitivity and 95% specificity, which was demonstrated for these test kits, would be 50%. When 2 test kits are run simultaneously on all samples, the probability becomes 95%. It is therefore recommended that all field samples be analyzed with at least 2 of the validated kits.
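The 50% and 95% figures follow directly from Bayes' theorem, under the assumption (implicit in the abstract) that the two kits give independent results. A short sketch reproducing both numbers:

```python
def positive_predictive_value(prevalence, sensitivity, specificity, n_kits=1):
    """Probability that a sample truly contains the allergen given that
    n_kits independent tests all came back positive (Bayes' theorem;
    independence of kit results is assumed)."""
    p_pos_given_true = sensitivity ** n_kits          # all kits hit
    p_pos_given_false = (1.0 - specificity) ** n_kits  # all kits false-alarm
    num = p_pos_given_true * prevalence
    den = num + p_pos_given_false * (1.0 - prevalence)
    return num / den

print(round(positive_predictive_value(0.05, 0.95, 0.95), 2))     # 0.5
print(round(positive_predictive_value(0.05, 0.95, 0.95, 2), 2))  # 0.95
```

With 5% prevalence, a single positive result is only a coin flip, while two concordant positives raise the posterior probability to 95%, which is the basis for the two-kit recommendation.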
Dissociative Electron Attachment to Rovibrationally Excited Molecules
1987-08-31
obtained in some recent papers. In Sec. IV of the present paper we will obtain some general recursion relations among these matrix elements. From the generating function of Hermite polynomials, a general five-term recursion relation (32) is obtained which is valid for the matrix elements and can be used for the generation of the functions for increasing l. One convenient way to evaluate a Q function is to write it in terms of Gaussian hypergeometric functions.
Prod'hom, Guy; Bizzini, Alain; Durussel, Christian; Bille, Jacques; Greub, Gilbert
2010-04-01
An ammonium chloride erythrocyte-lysing procedure was used to prepare a bacterial pellet from positive blood cultures for direct matrix-assisted laser desorption-ionization time of flight (MALDI-TOF) mass spectrometry analysis. Identification was obtained for 78.7% of the pellets tested. Moreover, 99% of the MALDI-TOF identifications were congruent at the species level when considering valid scores. This fast and accurate method is promising.
CMC Research at NASA Glenn in 2015: Recent Progress and Plans
NASA Technical Reports Server (NTRS)
Grady, Joseph E.
2015-01-01
As part of NASA's Aeronautical Sciences project, Glenn Research Center has developed advanced fiber and matrix constituents for a 2700°F CMC for turbine engine applications. Fiber and matrix development and characterization will be reviewed. Resulting improvements in CMC mechanical properties and durability will be summarized. Plans for 2015 will be described, including development and validation of models predicting effects of the engine environment on the durability of SiC/SiC composites with Environmental Barrier Coatings.
[Design of a risk matrix to assess sterile formulations at health care facilities].
Martín de Rosales Cabrera, A M; López Cabezas, C; García Salom, P
2014-05-01
To design a matrix for classifying sterile formulations prepared at the hospital into different risk levels. i) Literature search and critical appraisal of the model proposed by the European Resolution CM/Res Ap(2011)1; ii) identification of the risks associated with the preparation process by means of the AMFE (Failure Mode and Effects Analysis) methodology; iii) estimation of the severity associated with the risks detected. After initially trying a numeric scoring model, the classification matrix was changed to an alphabetical classification, grading each criterion from A to D. Each preparation assessed is given a 6-letter combination with three possible risk levels: low, intermediate, and high. This model was easier for risk assignment, and more reproducible. The final model analyzes 6 criteria: formulation process, administration route, the drug's safety profile, amount prepared, distribution, and susceptibility to microbiological contamination. The risk level obtained conditions the requirements of the formulation area, the validity time, and the storage conditions. The proposed matrix model may help health care institutions better assess the risk of the sterile formulations they prepare, and provides information about the acceptable validity time according to the storage conditions and the preparation area. Its use will increase the safety level of this procedure and help in resource planning and distribution. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
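A 6-letter combination like the one described maps naturally to a small classification function. The sketch below is hypothetical: the abstract does not give the actual mapping rules, so the thresholds (any D forces high risk, etc.) are illustrative assumptions only.

```python
def classify_preparation(grades):
    """Map a 6-letter A-D grade combination to low/intermediate/high risk.
    Criteria order (per the abstract): formulation process, administration
    route, drug safety profile, amount prepared, distribution, and
    susceptibility to microbiological contamination.
    The aggregation thresholds below are hypothetical."""
    if len(grades) != 6 or any(g not in "ABCD" for g in grades):
        raise ValueError("expected a 6-letter combination of A-D grades")
    if "D" in grades or grades.count("C") >= 2:
        return "high"
    if "C" in grades or grades.count("B") >= 3:
        return "intermediate"
    return "low"

print(classify_preparation("AABABA"))  # low
print(classify_preparation("ABCABA"))  # intermediate
print(classify_preparation("ABDABA"))  # high
```

The returned level would then drive validity time and storage requirements, as the abstract describes.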
NASA Astrophysics Data System (ADS)
Fernández, Ariel
2013-08-01
A significant episteric ("around a solid") distortion of the hydrogen-bond structure of water is promoted by solutes with nanoscale surface detail and physico-chemical complexity, such as soluble natural proteins. These structural distortions defy analysis because the discrete nature of the solvent at the interface is not upheld by the continuous laws of electrostatics. This work derives and validates an electrostatic equation that governs the episteric distortions of the hydrogen-bond matrix. The equation correlates distortions from bulk-like structural patterns with anomalous polarization components that do not align with the electrostatic field of the solute. The result implies that the interfacial energy stored in the orthogonal polarization correlates with the distortion of the water hydrogen-bond network. The result is validated vis-à-vis experimental data on protein interfacial thermodynamics and is interpreted in terms of the interaction energy between the electrostatic field of the solute and the dipole moment induced by the anomalous polarization of interfacial water. Finally, we consider solutes capable of changing their interface through conformational transitions and introduce a principle of minimal episteric distortion (MED) of the water matrix. We assess the importance of the MED principle in the context of protein folding, concluding that the native fold may be identified topologically with the conformation that minimizes the interfacial tension or disruption of the water matrix.
Yun, Changhong; Dashwood, Wan-Mohaiza; Kwong, Lawrence N; Gao, Song; Yin, Taijun; Ling, Qinglan; Singh, Rashim; Dashwood, Roderick H; Hu, Ming
2018-01-30
An accurate and reliable UPLC-MS/MS method is reported for the quantification of endogenous prostaglandin E2 (PGE2) in rat colonic mucosa and polyps. This method adopted the "surrogate analyte plus authentic bio-matrix" approach, using two different stable isotope-labeled analogs: PGE2-d9 as the surrogate analyte and PGE2-d4 as the internal standard. A quantitative standard curve was constructed with the surrogate analyte in colonic mucosa homogenate, and the method was successfully validated with the authentic bio-matrix. Concentrations of endogenous PGE2 in both normal and inflammatory tissue homogenates were back-calculated based on the regression equation. Because there is no endogenous interference with the surrogate analyte determination, the specificity was particularly good. By using authentic bio-matrix for validation, the matrix effect and extraction recovery are identical for the quantitative standard curve and the actual samples, which notably increased the assay accuracy. The method is easy, fast, robust and reliable for colon PGE2 determination. This "surrogate analyte" approach was applied to measure mucosa and polyp PGE2, one of the strong biomarkers of colorectal cancer, in the Pirc rat (an Apc-mutant rat kindred that models human FAP). A similar concept could be applied to endogenous biomarkers in other tissues. Copyright © 2017 Elsevier B.V. All rights reserved.
Simulating Matrix Crack and Delamination Interaction in a Clamped Tapered Beam
NASA Technical Reports Server (NTRS)
De Carvalho, N. V.; Seshadri, B. R.; Ratcliffe, J. G.; Mabson, G. E.; Deobald, L. R.
2017-01-01
Blind predictions were conducted to validate a discrete crack methodology based on the Floating Node Method to simulate matrix-crack/delamination interaction. The main novel aspects of the approach are: (1) the implementation of the floating node method via an 'extended interface element' to represent delaminations, matrix-cracks and their interaction, (2) application of directional cohesive elements to infer overall delamination direction, and (3) use of delamination direction and stress state at the delamination front to determine migration onset. Overall, good agreement was obtained between simulations and experiments. However, the validation exercise revealed the strong dependence of the simulation of matrix-crack/delamination interaction on the strength data (in this case transverse interlaminar strength, YT) used within the cohesive zone approach applied in this work. This strength value, YT, is itself dependent on the test geometry from which the strength measurement is taken. Thus, choosing an appropriate strength value becomes an ad-hoc step. As a consequence, further work is needed to adequately characterize and assess the accuracy and adequacy of cohesive zone approaches to model small crack growth and crack onset. Additionally, often when simulating damage progression with cohesive zone elements, the strength is lowered while keeping the fracture toughness constant to enable the use of coarser meshes. Results from the present study suggest that this approach is not recommended for any problem involving crack initiation, small crack growth or multiple crack interaction.
Frequency domain system identification methods - Matrix fraction description approach
NASA Technical Reports Server (NTRS)
Horta, Luca G.; Juang, Jer-Nan
1993-01-01
This paper presents the use of matrix fraction descriptions for least-squares curve fitting of frequency spectra to compute two matrix polynomials. The matrix polynomials are an intermediate step toward obtaining a linearized representation of the experimental transfer function. Two approaches are presented: first, the matrix polynomials are identified using an estimated transfer function; second, the matrix polynomials are identified directly from the cross/auto spectra of the input and output signals. A set of Markov parameters is computed from the polynomials, and realization theory is subsequently used to recover a minimum-order state-space model. Unevenly spaced frequency response functions may be used. Results from a simple numerical example and an experiment are discussed to highlight some of the important aspects of the algorithm.
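The final step (Markov parameters to a minimum-order state-space model) can be illustrated with the Eigensystem Realization Algorithm, a standard realization-theory procedure. Below is a NumPy sketch for the noise-free SISO case; the test system, Hankel block sizes and model order are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def era(markov, order, p=5, q=5):
    """ERA sketch: markov[0] = D and markov[k] = C A^(k-1) B for k >= 1.
    Builds a p x q Hankel matrix and its shifted version, truncates the
    SVD to `order`, and returns a balanced realization (A, B, C, D)."""
    m = np.asarray(markov, dtype=float)
    H0 = np.array([[m[i + j + 1] for j in range(q)] for i in range(p)])
    H1 = np.array([[m[i + j + 2] for j in range(q)] for i in range(p)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order, :]
    S_sqrt, S_inv_sqrt = np.diag(np.sqrt(s)), np.diag(1.0 / np.sqrt(s))
    A = S_inv_sqrt @ U.T @ H1 @ Vt.T @ S_inv_sqrt
    B = (S_sqrt @ Vt)[:, :1]
    C = (U @ S_sqrt)[:1, :]
    return A, B, C, m[0]

# Illustrative 2nd-order system: generate exact Markov parameters, realize,
# and check that the realization reproduces them.
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[1.0], [1.0]])
C_true = np.array([[1.0, 0.0]])
h = [0.0] + [(C_true @ np.linalg.matrix_power(A_true, k) @ B_true).item()
             for k in range(11)]
A, B, C, D = era(h, order=2)
h3 = (C @ np.linalg.matrix_power(A, 2) @ B).item()
print(round(h3, 4))  # 0.98, matching h[3]
```

In the noise-free case the Hankel matrix has exact rank equal to the system order, so the realization reproduces the Markov parameters to machine precision; with measured spectra, the singular value spectrum is also what guides the model-order choice.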
National programmes for validating physician competence and fitness for practice: a scoping review.
Horsley, Tanya; Lockyer, Jocelyn; Cogo, Elise; Zeiter, Jeanie; Bursey, Ford; Campbell, Craig
2016-04-15
To explore and categorise the state of existing literature for national programmes designed to affirm or establish the continuing competence of physicians. Scoping review. MEDLINE, ERIC, Sociological Abstracts, web/grey literature (2000-2014). Included when a record described a (1) national-level physician validation system, (2) recognised as a system for affirming competence and (3) reported relevant data. Using bibliographic software, title and abstracts were reviewed using an assessment matrix to ensure duplicate, paired screening. Dyads included both a methodologist and content expert on each assessment, reflective of evidence-informed best practices to decrease errors. 45 reports were included. Publication dates ranged from 2002 to 2014 with the majority of publications occurring in the previous six years (n=35). Country of origin--defined as that of the primary author--included the USA (N=32), the UK (N=8), Canada (N=3), Kuwait (N=1) and Australia (N=1). Three broad themes emerged from this heterogeneous data set: contemporary national programmes, contextual factors and terminological consistency. Four national physician validation systems emerged from the data: the American Board of Medical Specialties Maintenance of Certification Program, the Federation of State Medical Boards Maintenance of Licensure Program, the Canadian Revalidation Program and the UK Revalidation Program. Three contextual factors emerged as stimuli for the implementation of national validation systems: medical regulation, quality of care and professional competence. Finally, great variation among the definitions of key terms was identified. There is an emerging literature focusing on national physician validation systems. Four major systems have been implemented in recent years and it is anticipated that more will follow. Much of this work is descriptive, and gaps exist for the extent to which systems build on current evidence or theory. 
Terminology is highly variable across programmes for validating physician competence and fitness for practice. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
van der Ploeg, Tjeerd; Nieboer, Daan; Steyerberg, Ewout W
2016-10-01
Prediction of medical outcomes may potentially benefit from using modern statistical modeling techniques. We aimed to externally validate modeling strategies for prediction of 6-month mortality of patients suffering from traumatic brain injury (TBI) with predictor sets of increasing complexity. We analyzed individual patient data from 15 different studies including 11,026 TBI patients. We consecutively considered a core set of predictors (age, motor score, and pupillary reactivity), an extended set with computed tomography scan characteristics, and a further extension with two laboratory measurements (glucose and hemoglobin). With each of these sets, we predicted 6-month mortality using default settings with five statistical modeling techniques: logistic regression (LR), classification and regression trees, random forests (RFs), support vector machines (SVMs), and neural nets. For external validation, a model developed on one of the 15 data sets was applied to each of the 14 remaining sets. This process was repeated 15 times for a total of 630 validations. The area under the receiver operating characteristic curve (AUC) was used to assess the discriminative ability of the models. For the most complex predictor set, the LR models performed best (median validated AUC, 0.757), followed by the RF and SVM models (median validated AUC, 0.735 and 0.732, respectively). With each predictor set, the classification and regression trees models showed poor performance (median validated AUC <0.7). The variability in performance across the studies was smallest for the RF- and LR-based models (interquartile range for validated AUC values from 0.07 to 0.10). In the area of predicting mortality from TBI, nonlinear and nonadditive effects are not pronounced enough to make modern prediction methods beneficial. Copyright © 2016 Elsevier Inc. All rights reserved.
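The validated AUC values reported here can be computed directly from ranks via the Mann-Whitney U statistic, without any ML library. A dependency-free sketch with midrank handling of tied scores; the toy labels and scores are illustrative only.

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U)
    statistic; tied scores are handled with midranks.
    labels: 0/1 outcomes, scores: predicted risks."""
    pairs = sorted(zip(scores, labels))
    n = len(pairs)
    rank_sum_pos = 0.0
    i = 0
    while i < n:
        j = i
        while j < n and pairs[j][0] == pairs[i][0]:
            j += 1                   # [i, j) is one tie group
        midrank = (i + 1 + j) / 2.0  # average of ranks i+1 .. j
        rank_sum_pos += midrank * sum(1 for k in range(i, j) if pairs[k][1] == 1)
        i = j
    n_pos = sum(labels)
    n_neg = n - n_pos
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

An AUC of 0.757 (the best LR result above) means a randomly chosen non-survivor receives a higher predicted risk than a randomly chosen survivor about 76% of the time.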
NASA Astrophysics Data System (ADS)
Oberhofer, Harald; Blumberger, Jochen
2010-12-01
We present a plane wave basis set implementation for the calculation of electronic coupling matrix elements of electron transfer reactions within the framework of constrained density functional theory (CDFT). Following the work of Wu and Van Voorhis [J. Chem. Phys. 125, 164105 (2006)], the diabatic wavefunctions are approximated by the Kohn-Sham determinants obtained from CDFT calculations, and the coupling matrix element is calculated by an efficient integration scheme. Our results for intermolecular electron transfer in small systems agree very well with high-level ab initio calculations based on generalized Mulliken-Hush theory, and with previous local basis set CDFT calculations. The effect of thermal fluctuations on the coupling matrix element is demonstrated for intramolecular electron transfer in the tetrathiafulvalene-diquinone (Q-TTF-Q-) anion. Sampling the electronic coupling along density functional based molecular dynamics trajectories, we find that thermal fluctuations, in particular the slow bending motion of the molecule, can lead to changes in the instantaneous electron transfer rate by more than an order of magnitude. The thermal average, <|H_ab|^2>^(1/2) = 6.7 mH, is significantly higher than the value obtained for the minimum energy structure, |H_ab| = 3.8 mH. While CDFT in combination with generalized gradient approximation (GGA) functionals describes the intermolecular electron transfer in the studied systems well, exact exchange is required for Q-TTF-Q- in order to obtain coupling matrix elements in agreement with experiment (3.9 mH). The implementation presented opens up the possibility to compute electronic coupling matrix elements for extended systems where donor, acceptor, and the environment are treated at the quantum mechanical (QM) level.
Determination of the optimal number of components in independent components analysis.
Kassouf, Amine; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N
2018-03-01
Independent components analysis (ICA) may be considered as one of the most established blind source separation techniques for the treatment of complex data sets in analytical chemistry. Like other similar methods, the determination of the optimal number of latent variables, in this case, independent components (ICs), is a crucial step before any modeling. Therefore, validation methods are required in order to decide about the optimal number of ICs to be used in the computation of the final model. In this paper, three new validation methods are formally presented. The first one, called Random_ICA, is a generalization of the ICA_by_blocks method. Its specificity resides in the random way of splitting the initial data matrix into two blocks, and then repeating this procedure several times, giving a broader perspective for the selection of the optimal number of ICs. The second method, called KMO_ICA_Residuals is based on the computation of the Kaiser-Meyer-Olkin (KMO) index of the transposed residual matrices obtained after progressive extraction of ICs. The third method, called ICA_corr_y, helps to select the optimal number of ICs by computing the correlations between calculated proportions and known physico-chemical information about samples, generally concentrations, or between a source signal known to be present in the mixture and the signals extracted by ICA. These three methods were tested using varied simulated and experimental data sets and compared, when necessary, to ICA_by_blocks. Results were relevant and in line with expected ones, proving the reliability of the three proposed methods. Copyright © 2017 Elsevier B.V. All rights reserved.
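The Random_ICA idea (randomly split the samples into two blocks, decompose each, and check whether the extracted signals match across blocks) can be sketched generically. In the sketch below an SVD-based extraction stands in for the ICA step purely to keep the example dependency-light; `svd_signals`, the toy two-source mixture, and the stability threshold are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def split_match_score(X, n_comp, decompose, rng):
    """Random_ICA-style stability check (sketch): randomly split the rows
    (samples) of X into two blocks, extract n_comp signals from each, and
    score how well the signals pair up across blocks by absolute
    correlation (greedy best match, averaged; value in [0, 1])."""
    idx = rng.permutation(X.shape[0])
    half = X.shape[0] // 2
    S1 = decompose(X[idx[:half]], n_comp)  # (n_comp, n_variables)
    S2 = decompose(X[idx[half:]], n_comp)
    C = np.abs(np.corrcoef(np.vstack([S1, S2]))[:n_comp, n_comp:])
    return C.max(axis=1).mean()

def svd_signals(X, n):
    """Stand-in for an ICA extraction step: top right singular vectors."""
    return np.linalg.svd(X - X.mean(0), full_matrices=False)[2][:n]

# Toy mixture: 40 samples, each a random combination of two source spectra.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
pure = np.vstack([np.sin(9 * t), t ** 2])
X = rng.random((40, 2)) @ pure + 0.01 * rng.standard_normal((40, 200))
score = split_match_score(X, 2, svd_signals, rng)
print(round(score, 2))
```

In a Random_ICA-like workflow this score would be computed over repeated random splits and increasing numbers of components; the score stays high while genuine components are being extracted and drops once the extra components only fit noise.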
Matrix Dominated Failure of Fiber-Reinforced Composite Laminates Under Static and Dynamic Loading
NASA Astrophysics Data System (ADS)
Schaefer, Joseph Daniel
Hierarchical material systems provide the unique opportunity to connect material knowledge to solving specific design challenges. Representing the fastest-growing class of hierarchical materials in use, fiber-reinforced polymer composites (FRPCs) offer superior strength- and stiffness-to-weight ratios, damage tolerance, and decreasing production costs compared to metals and alloys. However, the implementation of FRPCs has historically been hampered by inadequate knowledge of material failure behavior, due to incomplete verification of recent computational constitutive models and improper (or non-existent) experimental validation, which has severely slowed development. As noted by the recent Materials Genome Initiative and the Worldwide Failure Exercise, current state-of-the-art qualification programs endure a 20-year gap between material conceptualization and implementation due to the lack of effective partnership between computational modeling (simulation) and experimental characterization. Qualification processes are primarily experiment driven; the anisotropic nature of composites predisposes matrix-dominant properties to be sensitive to strain rate, which necessitates extensive testing. To decrease the qualification time, a framework that practically combines theoretical prediction of material failure with limited experimental validation is required. In this work, the Northwestern Failure Theory (NU Theory) for composite lamina is presented as the theoretical basis from which the failure of unidirectional and multidirectional composite laminates is investigated. From an initial experimental characterization of basic lamina properties, the NU Theory is employed to predict the matrix-dependent failure of composites under any state of biaxial stress from quasi-static to 1000 s⁻¹ strain rates.
It was found that the number of experiments required to characterize the strain-rate-dependent failure of a new composite material was reduced by an order of magnitude, and the resulting strain-rate dependence was applicable to a large class of materials. The presented framework provides engineers with the capability to quickly identify fiber and matrix combinations for a given application and determine the failure behavior over the range of practical loading cases. The failure-mode-based NU Theory may be especially useful when partnered with computational approaches (which often employ micromechanics to determine constituent and constitutive response) to provide accurate validation of the matrix-dominated failure modes experienced by laminates during progressive failure.
Situating Standard Setting within Argument-Based Validity
ERIC Educational Resources Information Center
Papageorgiou, Spiros; Tannenbaum, Richard J.
2016-01-01
Although there has been substantial work on argument-based approaches to validation as well as standard-setting methodologies, it might not always be clear how standard setting fits into argument-based validity. The purpose of this article is to address this lack in the literature, with a specific focus on topics related to argument-based…
NASA Astrophysics Data System (ADS)
Trinchero, P.; Löfgren, M.; Bosbach, D.; Deissmann, G.; Ebrahimi, H.; Gylling, B.; Molinero, J.; Puigdomenech, I.; Selroos, J. O.; Sidborn, M.; Svensson, U.
2017-12-01
The matrix of crystalline rocks typically consists of mineral grains with characteristic sizes that vary from the mm scale (or less) up to the cm scale. These mineral grains are separated and intersected by micro-fractures, which form the so-called inter-granular space. Here, we present a generic model of the crystalline rock matrix built upon a micro-Discrete Fracture Network (micro-DFN). To mimic the multiscale nature of grains and inter-granular space, different sets of micro-fractures are employed, each having a different length interval and intensity. The occurrence of these fracture sets is described by Poisson distributions, while the fracture aperture in these sets defines the porosity of the rock matrix. The proposed micro-DFN model is tested and calibrated against experimental observations from Forsmark (Sweden), and the resulting system is used to carry out numerical experiments aimed at assessing the redox buffering capacity of the heterogeneous crystalline rock matrix against the infiltration of glacial oxygenated melt-water. The chemically reactive mineral considered in this study is biotite, whose distribution is simulated with a single stochastic realization that honors the average abundance and grain size observed in mineralogical studies of Forsmark. The exposed surface area of biotite grains, which provides a source of ferrous ions that are in turn oxidized by the dissolved oxygen, is related to the underlying micro-DFN. The results of the mechanistic reactive transport simulations are compared to an existing analytical solution based on the assumption of homogeneity. This evaluation shows that the matrix indeed behaves as a composite system: most of the oxygen is consumed in "highly reactive pathways", while a non-negligible part diffuses deeper into the matrix.
Sensitivity analyses to diffusivity show that this effect is more pronounced at high Damköhler numbers (diffusion limited regime) while at lower Damköhler numbers the solution approaches that predicted by the homogeneous model.
Whitby Mudstone, flow from matrix to fractures
NASA Astrophysics Data System (ADS)
Houben, Maartje; Hardebol, Nico; Barnhoorn, Auke; Boersma, Quinten; Peach, Colin; Bertotti, Giovanni; Drury, Martyn
2016-04-01
Fluid flow from matrix to well in shales would be faster if we account for the duality of the permeable medium, considering a highly permeable fracture network together with a tight matrix. To determine how long and how far a gas molecule would have to travel through the matrix until it reaches an open connected fracture, we investigated the permeability of the Whitby Mudstone (UK) matrix in combination with mapping the fracture network present in the current outcrops of the Whitby Mudstone at the Yorkshire coast. Matrix permeability was measured perpendicular to the bedding using a pressure step decay method on core samples, and permeability values are in the microdarcy range. The natural fracture network present in the pavement shows a connected network with dominant NS and EW strikes, where the NS fractures are the main fracture set with an orthogonal fracture set EW. Fracture spacing relations in the pavements show that the average distance to the nearest fracture varies between 7 cm (EW) and 14 cm (NS), and 90% of the matrix is within 30 cm of the nearest fracture. Making some assumptions (the fracture network at depth is similar to what is exposed in the current pavements and open to flow; the fracture network is at hydrostatic pressure at 3 km depth; the overpressure between matrix and fractures is 10%; and the matrix permeability perpendicular to the bedding is 0.1 microdarcy), we have calculated the time it takes for a gas molecule to travel to the nearest fracture. These input values give travel times up to 8 days for a distance of 14 cm. If the permeability is changed to 1 nanodarcy or 10 microdarcy, travel times change to 2.2 years or 2 hours, respectively.
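The quoted travel time can be reproduced with a simple Darcy estimate. A sketch under stated assumptions (the gas viscosity of ~1.1e-5 Pa·s and the fresh-water hydrostatic column are my additions, not values given in the abstract):

```python
# Darcy travel-time estimate for gas moving through tight matrix to a fracture.
DARCY = 9.869e-13                  # m^2 per darcy
k = 0.1e-6 * DARCY                 # 0.1 microdarcy matrix permeability (m^2)
L = 0.14                           # m, distance to the nearest fracture
p_hydro = 1000 * 9.81 * 3000       # Pa, hydrostatic pressure at 3 km (assumed water column)
dp = 0.10 * p_hydro                # 10% overpressure between matrix and fracture
mu = 1.1e-5                        # Pa*s, assumed methane viscosity at depth

v = k * dp / (mu * L)              # Darcy velocity over the path (m/s)
t_days = (L / v) / 86400
print(f"travel time = {t_days:.1f} days")   # on the order of the 8 days quoted
```

Since the travel time scales as 1/k, rerunning with k = 1 nanodarcy or 10 microdarcy reproduces the years-to-hours spread reported above.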
Bai, Xiaoming; Bessa, Miguel A.; Melro, Antonio R.; ...
2016-10-01
The authors would like to note that one of the modifications proposed in the article “High-fidelity micro-scale modeling of the thermo-visco-plastic behavior of carbon fiber polymer matrix composites” [1] was found to be unnecessary: the paraboloid yield criterion is sufficient to describe the shear behavior of the epoxy matrix considered (Epoxy 3501-6). The authors recently noted that the experimental work [2] used to validate the pure matrix response considered engineering shear strain instead of its tensorial counterpart, which caused the apparent inconsistency with the paraboloid yield criterion. A recently proposed temperature dependency law for glassy polymers is evaluated herein, and thus better agreement with the experimental results for this epoxy is observed.
Exact solution for four-order acousto-optic Bragg diffraction with arbitrary initial conditions.
Pieper, Ron; Koslover, Deborah; Poon, Ting-Chung
2009-03-01
An exact solution to the four-order acousto-optic (AO) Bragg diffraction problem with arbitrary initial conditions compatible with exact Bragg angle incident light is developed. The solution, obtained by solving a 4th-order differential equation, is formalized into a transition matrix operator predicting diffracted light orders at the exit of the AO cell in terms of the same diffracted light orders at the entrance. It is shown that the transition matrix is unitary and that this unitary matrix condition is sufficient to guarantee energy conservation. A comparison of analytical solutions with numerical predictions validates the formalism. Although not directly related to the approach used to obtain the solution, it was discovered that all four generated eigenvalues from the four-order AO differential matrix operator are expressed simply in terms of Euclid's Divine Proportion.
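The unitarity-implies-energy-conservation argument is easy to check numerically: for any Hermitian coupling matrix M, the transition operator exp(iMz) is unitary, so the total diffracted intensity at the cell exit equals that at the entrance. A sketch with a hypothetical 4x4 tridiagonal coupling (illustrative values, not the paper's actual AO matrix):

```python
import numpy as np

def transition_matrix(M, z):
    """exp(i*M*z) for Hermitian M via eigendecomposition (unitary by construction)."""
    w, V = np.linalg.eigh(M)                  # real eigenvalues for Hermitian M
    return (V * np.exp(1j * w * z)) @ V.conj().T

# Hypothetical nearest-order coupling between four diffracted orders.
alpha = 0.7
M = alpha * (np.eye(4, k=1) + np.eye(4, k=-1))

T = transition_matrix(M, z=1.0)
a_in = np.array([1.0, 0.0, 0.0, 0.0], dtype=complex)  # all light in one order at entrance
a_out = T @ a_in
# Energy conservation: sum(|a_out|^2) equals sum(|a_in|^2) because T is unitary.
```

Any non-Hermitian perturbation of M (e.g. loss) breaks the unitarity check, which is the numerical counterpart of the sufficiency statement in the abstract.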
Desbiens, Raphaël; Tremblay, Pierre; Genest, Jérôme; Bouchard, Jean-Pierre
2006-01-20
The instrument line shape (ILS) of a Fourier-transform spectrometer is expressed in matrix form. For all line shape effects that scale with wavenumber, the ILS matrix is shown to be transposed in the spectral and interferogram domains. The novel representation of the ILS matrix in the interferogram domain yields an insightful physical interpretation of the underlying process producing self-apodization. Working in the interferogram domain circumvents the problem of taking into account the effects of finite optical path difference and permits a proper discretization of the equations. A fast algorithm in O(N log₂ N), based on the fractional Fourier transform, is introduced that permits the application of a constant resolving power line shape to theoretical spectra or forward models. The ILS integration formalism is validated with experimental data.
NASA Astrophysics Data System (ADS)
Kasiviswanathan, Shiva Prasad; Pan, Feng
In the matrix interdiction problem, a real-valued matrix and an integer k are given. The objective is to remove a set of k matrix columns that minimizes, in the residual matrix, the sum of the row values, where the value of a row is defined to be the largest entry in that row. This combinatorial problem is closely related to the bipartite network interdiction problem, which can be applied to minimize the probability that an adversary can successfully smuggle weapons. After introducing the matrix interdiction problem, we study its computational complexity. We show that the matrix interdiction problem is NP-hard and that there exists a constant γ such that it is even NP-hard to approximate this problem within an n^γ additive factor. We also present an algorithm for this problem that achieves an (n - k) multiplicative approximation ratio.
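For intuition, the objective is easy to state in code. An exhaustive search over column subsets (exponential in k, consistent with the NP-hardness result, so usable only for tiny instances):

```python
from itertools import combinations

def interdict(A, k):
    """Brute-force matrix interdiction: choose k columns to remove so that the
    sum of row maxima in the residual matrix is minimized.
    A: list of rows (each a list of numbers); returns (best value, removed columns)."""
    n_cols = len(A[0])
    best_val, best_cols = None, None
    for removed in combinations(range(n_cols), k):
        keep = [j for j in range(n_cols) if j not in removed]
        val = sum(max(row[j] for j in keep) for row in A)
        if best_val is None or val < best_val:
            best_val, best_cols = val, removed
    return best_val, best_cols
```

For example, on `[[9, 1, 1], [1, 9, 1]]` with k=1, removing either of the first two columns leaves a row-maxima sum of 10, the optimum.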
NASA Astrophysics Data System (ADS)
Justino, Júlia
2017-06-01
Matrices with coefficients having uncertainties of type o(.) or O(.), called flexible matrices, are studied from the point of view of nonstandard analysis. The uncertainties of the aforementioned kind will be given in the form of so-called neutrices, for instance the set of all infinitesimals. Since flexible matrices have uncertainties in their coefficients, it is not possible to define the identity matrix in a unique way, and so the notion of spectral identity matrix arises. Not all nonsingular flexible matrices can be turned into a spectral identity matrix using the Gauss-Jordan elimination method, implying that not all nonsingular flexible matrices have an inverse matrix. Under certain conditions upon the size of the uncertainties appearing in a nonsingular flexible matrix, a general theorem concerning the boundaries of its minors is presented, which guarantees the existence of the inverse matrix of a nonsingular flexible matrix.
Wieghaus, Kristen A.; Gianchandani, Erwin P.; Neal, Rebekah A.; Paige, Mikell A.; Brown, Milton L.; Papin, Jason A.; Botchwey, Edward A.
2009-01-01
We are creating synthetic pharmaceuticals with angiogenic activity and potential to promote vascular invasion. We previously demonstrated that one of these molecules, phthalimide neovascular factor 1 (PNF1), significantly expands microvascular networks in vivo following sustained release from poly(lactic-co-glycolic acid) (PLAGA) films. In addition, to probe PNF1 mode-of-action, we recently applied a novel pathway-based compendium analysis to a multi-timepoint, controlled microarray dataset of PNF1-treated (versus control) human microvascular endothelial cells (HMVECs), and we identified induction of tumor necrosis factor-alpha (TNF-α) and, subsequently, transforming growth factor-beta (TGF-β) signaling networks by PNF1. Here we validate this microarray dataset with quantitative real-time polymerase chain reaction (RT-PCR) analysis. Subsequently, we probe this dataset and identify three specific TGF-β-induced genes with regulation by PNF1 conserved over multiple timepoints—amyloid beta (A4) precursor protein (APP), early growth response 1 (EGR-1), and matrix metalloproteinase 14 (MMP14 or MT1-MMP)—that are also implicated in angiogenesis. We further focus on MMP14 given its unique role in angiogenesis, and we validate MT1-MMP modulation by PNF1 with an in vitro fluorescence assay that demonstrates the direct effects that PNF1 exerts on functional metalloproteinase activity. We also utilize endothelial cord formation in collagen gels to show that PNF1-induced stimulation of endothelial cord network formation in vitro is in some way MT1-MMP-dependent. Ultimately, this new network analysis of our transcriptional footprint characterizing PNF1 activity 1–48 h post-supplementation in HMVECs coupled with corresponding validating experiments suggests a key set of a few specific targets that are involved in PNF1 mode-of-action and important for successful promotion of the neovascularization that we have observed by the drug in vivo. PMID:19326468
NASA Astrophysics Data System (ADS)
Stotsky, Jay A.; Hammond, Jason F.; Pavlovsky, Leonid; Stewart, Elizabeth J.; Younger, John G.; Solomon, Michael J.; Bortz, David M.
2016-07-01
The goal of this work is to develop a numerical simulation that accurately captures the biomechanical response of bacterial biofilms and their associated extracellular matrix (ECM). In this, the second of a two-part effort, the primary focus is on formally presenting the heterogeneous rheology Immersed Boundary Method (hrIBM) and validating our model by comparison to experimental results. With this extension of the Immersed Boundary Method (IBM), we use the techniques originally developed in Part I ([19]) to treat biofilms as viscoelastic fluids possessing variable rheological properties anchored to a set of moving locations (i.e., the bacteria locations). In particular, we incorporate spatially continuous variable viscosity and density fields into our model. Although in [14,15], variable viscosity is used in an IBM context to model discrete viscosity changes across interfaces, to our knowledge this work and Part I are the first to apply the IBM to model a continuously variable viscosity field. We validate our modeling approach from Part I by comparing dynamic moduli and compliance moduli computed from our model to data from mechanical characterization experiments on Staphylococcus epidermidis biofilms. The experimental setup is described in [26] in which biofilms are grown and tested in a parallel plate rheometer. In order to initialize the positions of bacteria in the biofilm, experimentally obtained three dimensional coordinate data was used. One of the major conclusions of this effort is that treating the spring-like connections between bacteria as Maxwell or Zener elements provides good agreement with the mechanical characterization data. We also found that initializing the simulations with different coordinate data sets only led to small changes in the mechanical characterization results. Matlab code used to produce results in this paper will be available at https://github.com/MathBioCU/BiofilmSim.
A rough set approach for determining weights of decision makers in group decision making
Yang, Qiang; Du, Ping-an; Wang, Yong; Liang, Bin
2017-01-01
This study aims to present a novel approach for determining the weights of decision makers (DMs) based on rough group decision in multiple attribute group decision-making (MAGDM) problems. First, we construct a rough group decision matrix from all DMs' decision matrixes on the basis of rough set theory. After that, we derive a positive ideal solution (PIS) founded on the average matrix of the rough group decision, and negative ideal solutions (NISs) founded on the lower and upper limit matrixes of the rough group decision. Then, we obtain the weight of each group member and the priority order of alternatives by using the relative closeness method, which depends on the distances from each individual group member's decision to the PIS and NISs. Through comparisons with existing methods and an online business manager selection example, the proposed method shows that it can provide more insights into the subjectivity and vagueness of DMs' evaluations and selections. PMID:28234974
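The relative-closeness idea can be sketched compactly. This is an illustrative simplification, not the paper's rough-set construction: the group average stands in for the PIS, the element-wise lower/upper limit matrices for the NISs, and Frobenius norms for the distances.

```python
import numpy as np

def dm_weights(decision_matrices):
    """Weight DMs by relative closeness: DMs whose matrices sit close to the
    group average (PIS proxy) and far from the extreme lower/upper matrices
    (NIS proxies) receive larger weights. Input: list of (n_alt, n_attr) arrays."""
    D = np.stack(decision_matrices)
    pis = D.mean(axis=0)
    nis_lo, nis_hi = D.min(axis=0), D.max(axis=0)
    w = []
    for Dk in D:
        d_pos = np.linalg.norm(Dk - pis)
        d_neg = np.linalg.norm(Dk - nis_lo) + np.linalg.norm(Dk - nis_hi)
        w.append(d_neg / (d_pos + d_neg))
    w = np.array(w)
    return w / w.sum()
```

An outlying DM ends up close to one of the extreme matrices and far from the average, so its normalized weight drops relative to the consensus members.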
Kernel K-Means Sampling for Nyström Approximation.
He, Li; Zhang, Hong
2018-05-01
A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest to use kernel k-means sampling, which is shown in this work to minimize the upper bound of the matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both Gaussian and polynomial kernels, on real-world data sets and image segmentation tasks show the superiority of the proposed method over the state-of-the-art methods.
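The Nyström construction itself is compact: pick landmark points, then approximate K ≈ C W⁺ Cᵀ. A numpy-only sketch with a small Lloyd's k-means in input space to pick the landmarks (a simplified stand-in for the paper's kernel k-means):

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kmeans_centers(X, k, iters=25, seed=0):
    """Plain Lloyd's k-means; returns k cluster centers used as landmarks."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return centers

def nystrom(X, landmarks, gamma=0.5):
    """K ~= C W^+ C^T, with C the kernel between data and landmarks."""
    C = gaussian_kernel(X, landmarks, gamma)
    W = gaussian_kernel(landmarks, landmarks, gamma)
    return C @ np.linalg.pinv(W) @ C.T
```

The paper's claim is about which landmarks minimize the error bound; swapping the sampler while keeping `nystrom` fixed is exactly the experiment its results section runs.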
Goal setting as an outcome measure: A systematic review.
Hurn, Jane; Kneebone, Ian; Cropley, Mark
2006-09-01
Goal achievement has been considered to be an important measure of outcome by clinicians working with patients in physical and neurological rehabilitation settings. This systematic review was undertaken to examine the reliability, validity and sensitivity of goal setting and goal attainment scaling approaches when employed as outcome measures with working-age and older people in physical and neurological rehabilitation, by examining the research literature covering the 36 years since goal-setting theory was proposed. Data sources included a computer-aided literature search of published studies examining the reliability, validity and sensitivity of goal setting/goal attainment scaling, with further references sourced from articles obtained through this process. There is strong evidence for the reliability, validity and sensitivity of goal attainment scaling. Empirical support was found for the validity of goal setting, but research demonstrating its reliability and sensitivity is limited. Goal attainment scaling appears to be a sound measure for use in physical rehabilitation settings with working-age and older people. Further work needs to be carried out with goal setting to establish its reliability and sensitivity as a measurement tool.
NASA Astrophysics Data System (ADS)
Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar
2017-09-01
The present work compares dissimilarity-based and covariance-based unsupervised chemometric classification approaches using total synchronous fluorescence spectroscopy data sets acquired for cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups, and hence leads to poor class separation. The present work shows that classification of such samples can be improved by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
Predicting drug-target interactions by dual-network integrated logistic matrix factorization
NASA Astrophysics Data System (ADS)
Hao, Ming; Bryant, Stephen H.; Wang, Yanli
2017-01-01
In this work, we propose a dual-network integrated logistic matrix factorization (DNILMF) algorithm to predict potential drug-target interactions (DTIs). The prediction procedure consists of four steps: (1) inferring new drug/target profiles and constructing the profile kernel matrix; (2) diffusing the drug profile kernel matrix with the drug structure kernel matrix; (3) diffusing the target profile kernel matrix with the target sequence kernel matrix; and (4) building the DNILMF model and smoothing new drug/target predictions based on their neighbors. We compare our algorithm with the state-of-the-art method based on the benchmark dataset. Results indicate that the DNILMF algorithm outperforms the previously reported approaches in terms of AUPR (area under the precision-recall curve) and AUC (area under the receiver operating characteristic curve), based on 5 trials of 10-fold cross-validation. We conclude that the performance improvement depends not only on the proposed objective function, but also on the nonlinear diffusion technique used, which is important but understudied in the DTI prediction field. In addition, we also compile a new DTI dataset to increase the diversity of currently available benchmark datasets. The top prediction results for the new dataset are confirmed by experimental studies or supported by other computational research.
Soares, Cristina M Dias; Alves, Rita C; Casal, Susana; Oliveira, M Beatriz P P; Fernandes, José Oliveira
2010-04-01
The present study describes the development and validation of a new method based on a matrix solid-phase dispersion (MSPD) sample preparation procedure followed by GC-MS for determination of acrylamide levels in coffee (ground coffee and brewed coffee) and coffee substitute samples. Samples were dispersed in C18 sorbent and the mixture was further packed into a preconditioned custom-made ISOLUTE bilayered SPE column (C18/Multimode; 1 g + 1 g). Acrylamide was subsequently eluted with water, then derivatized with bromine and quantified by GC-MS in SIM mode. The MSPD/GC-MS method presented a LOD of 5 microg/kg and a LOQ of 10 microg/kg. Intra- and interday precision ranged from 2% to 4% and 4% to 10%, respectively. To evaluate the performance of the method, 11 samples of ground and brewed coffee and coffee substitutes were simultaneously analyzed by the developed method and by a previously validated method based on a liquid-extraction (LE) procedure; the results showed a high correlation between the two methods.
External Standards or Standard Addition? Selecting and Validating a Method of Standardization
NASA Astrophysics Data System (ADS)
Harvey, David T.
2002-05-01
A common feature of many problem-based laboratories in analytical chemistry is a lengthy independent project involving the analysis of "real-world" samples. Students research the literature, adapting and developing a method suitable for their analyte, sample matrix, and problem scenario. Because these projects encompass the complete analytical process, students must consider issues such as obtaining a representative sample, selecting a method of analysis, developing a suitable standardization, validating results, and implementing appropriate quality assessment/quality control practices. Most textbooks and monographs suitable for an undergraduate course in analytical chemistry, however, provide only limited coverage of these important topics. The need for short laboratory experiments emphasizing important facets of method development, such as selecting a method of standardization, is evident. The experiment reported here, which is suitable for an introductory course in analytical chemistry, illustrates the importance of matrix effects when selecting a method of standardization. Students also learn how a spike recovery is used to validate an analytical method, and obtain a practical experience in the difference between performing an external standardization and a standard addition.
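A short worked example of the standard-addition calculation the experiment teaches: fit signal against added standard and read the analyte concentration from the x-intercept (the numbers below are hypothetical):

```python
import numpy as np

# Hypothetical standard-addition data: instrument response after spiking
# equal-volume aliquots with increasing amounts of standard (in ppm).
added = np.array([0.0, 1.0, 2.0, 3.0])
signal = np.array([0.20, 0.32, 0.44, 0.56])

m, b = np.polyfit(added, signal, 1)   # linear fit: signal = m*added + b
c_analyte = b / m                     # |x-intercept| = original concentration
print(f"analyte concentration = {c_analyte:.2f} ppm")
```

If matrix effects change the calibration slope, an external-standard curve (prepared in clean solvent) gives a biased result, while standard addition, calibrated in the sample's own matrix, does not; that contrast is the point of the spike-recovery validation described above.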
Loeschner, Katrin; Navratilova, Jana; Grombe, Ringo; Linsinger, Thomas P J; Købler, Carsten; Mølhave, Kristian; Larsen, Erik H
2015-08-15
Nanomaterials are increasingly used in food production and packaging, and validated methods for detection of nanoparticles (NPs) in foodstuffs need to be developed both for regulatory purposes and product development. Asymmetric flow field-flow fractionation with inductively coupled plasma mass spectrometric detection (AF4-ICP-MS) was applied for quantitative analysis of silver nanoparticles (AgNPs) in a chicken meat matrix following enzymatic sample preparation. For the first time, an analytical validation of nanoparticle detection in a food matrix by AF4-ICP-MS has been carried out, and the results showed repeatable and intermediately reproducible determination of AgNP mass fraction and size. The findings demonstrate the potential of AF4-ICP-MS for quantitative analysis of NPs in complex food matrices for use in food monitoring and control. The accurate determination of AgNP size distribution remained challenging due to the lack of certified size standards.
Direct S-matrix calculation for diffractive structures and metasurfaces
NASA Astrophysics Data System (ADS)
Shcherbakov, Alexey A.; Stebunov, Yury V.; Baidin, Denis F.; Kämpfe, Thomas; Jourlin, Yves
2018-06-01
The paper presents a derivation of analytical components of S-matrices for arbitrary planar diffractive structures and metasurfaces in the Fourier domain. The general formulas obtained for S-matrix components can be applied within formulations in both the Cartesian and curvilinear metrics. A numerical method based on these results can benefit from all previous improvements of the Fourier domain methods. In addition, we provide expressions for S-matrix calculation in the case of periodically corrugated layers of two-dimensional materials, which are valid for arbitrary corrugation depth-to-period ratios. As an example, the derived equations are used to simulate resonant grating excitation of graphene plasmons and the impact of a silica interlayer on the corresponding reflection curves.
Iris recognition based on robust principal component analysis
NASA Astrophysics Data System (ADS)
Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong
2014-11-01
Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.
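The low-rank plus sparse decomposition the method relies on can be sketched with a basic principal-component-pursuit iteration (augmented Lagrangian with singular-value thresholding; the parameter choices below follow common defaults and are not necessarily those of the paper):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca(D, n_iter=1000, tol=1e-7):
    """Decompose D into low-rank L plus sparse S (principal component pursuit)."""
    n1, n2 = D.shape
    lam = 1.0 / np.sqrt(max(n1, n2))
    mu = n1 * n2 / (4.0 * np.abs(D).sum())
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)        # dual variable
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)
        R = D - L + Y / mu      # soft-threshold entrywise for the sparse part
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y = Y + mu * (D - L - S)
        if np.linalg.norm(D - L - S) <= tol * np.linalg.norm(D):
            break
    return L, S
```

On a training matrix whose columns are vectorized iris images, `L` plays the role of the clean feature subspace and `S` absorbs eyelash occlusions and specular spikes.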
Short-distance matrix elements for D-meson mixing for N_f = 2+1 lattice QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Chia Cheng
2015-01-01
We study the short-distance hadronic matrix elements for D-meson mixing with partially quenched N_f = 2+1 lattice QCD. We use a large set of the MIMD Lattice Computation (MILC) Collaboration's gauge configurations with a² tadpole-improved staggered sea quarks and tadpole-improved Lüscher-Weisz gluons. We use the a² tadpole-improved action for valence light quarks and the Sheikholeslami-Wohlert action with the Fermilab interpretation for the valence charm quark. Our calculation covers the complete set of five operators needed to constrain new physics models for D-meson mixing. We match our matrix elements to the MS-NDR scheme evaluated at 3 GeV. We report values for the Beneke-Buchalla-Greub-Lenz-Nierste choice of evanescent operators.
Zhang, Jinshui; Yuan, Zhoumiqi; Shuai, Guanyuan; Pan, Yaozhong; Zhu, Xiufang
2017-04-26
This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, where the validation set included target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint imposed by the outlier pixels located neighboring the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covered only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window size scales. Moreover, WVS-SVDD showed much less sensitivity to untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits using the optimal parameters, tradeoff coefficient (C) and kernel width (s), in mapping homogeneous specific land cover.
Video based object representation and classification using multiple covariance matrices.
Zhang, Yurong; Liu, Quan
2017-01-01
Video-based object recognition and classification has been widely studied in the computer vision and image processing areas. One main issue of this task is to develop an effective representation for video. This problem can generally be formulated as image set representation. In this paper, we present a new method called Multiple Covariance Discriminative Learning (MCDL) for the image set representation and classification problem. The core idea of MCDL is to represent an image set using multiple covariance matrices, with each covariance matrix representing one cluster of images. First, we use the Nonnegative Matrix Factorization (NMF) method to cluster images within each image set, and then adopt Covariance Discriminative Learning on each cluster (subset) of images. Finally, we adopt KLDA and a nearest-neighbor classification method for image set classification. Promising experimental results on several datasets show the effectiveness of our MCDL method.
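The core representation is easy to sketch: one covariance matrix per cluster of frames, compared with an SPD-friendly distance. The log-Euclidean metric below is an illustrative choice, and the NMF clustering step is assumed already done; neither detail is taken from the paper:

```python
import numpy as np

def cluster_covariances(features, labels):
    """Represent an image set as one covariance matrix per cluster.
    features: (n_frames, d) feature array; labels: cluster id per frame."""
    return {j: np.cov(features[labels == j], rowvar=False)
            for j in np.unique(labels)}

def log_euclidean_dist(C1, C2, eps=1e-8):
    """Distance between SPD matrices via matrix logarithms."""
    def logm(C):
        w, V = np.linalg.eigh(C + eps * np.eye(len(C)))
        return (V * np.log(w)) @ V.T
    return float(np.linalg.norm(logm(C1) - logm(C2)))
```

Classification then reduces to nearest-neighbor matching between the cluster covariances of a query set and those of each gallery set, aggregated over clusters.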
Fast and anisotropic flexibility-rigidity index for protein flexibility and fluctuation analysis
NASA Astrophysics Data System (ADS)
Opron, Kristopher; Xia, Kelin; Wei, Guo-Wei
2014-06-01
Protein structural fluctuation, typically measured by Debye-Waller factors, or B-factors, is a manifestation of protein flexibility, which strongly correlates to protein function. The flexibility-rigidity index (FRI) is a newly proposed method for the construction of atomic rigidity functions required in the theory of continuum elasticity with atomic rigidity, which is a new multiscale formalism for describing excessively large biomolecular systems. The FRI method analyzes protein rigidity and flexibility and is capable of predicting protein B-factors without resorting to matrix diagonalization. A fundamental assumption used in the FRI is that protein structures are uniquely determined by various internal and external interactions, while the protein functions, such as stability and flexibility, are solely determined by the structure. As such, one can predict protein flexibility without resorting to the protein interaction Hamiltonian. Consequently, bypassing the matrix diagonalization, the original FRI has a computational complexity of O(N^2). This work introduces a fast FRI (fFRI) algorithm for the flexibility analysis of large macromolecules. The proposed fFRI further reduces the computational complexity to O(N). Additionally, we propose anisotropic FRI (aFRI) algorithms for the analysis of protein collective dynamics. The aFRI algorithms permit adaptive Hessian matrices, from a completely global 3N × 3N matrix to completely local 3 × 3 matrices. These 3 × 3 matrices, despite being calculated locally, also contain non-local correlation information. Eigenvectors obtained from the proposed aFRI algorithms are able to demonstrate collective motions. Moreover, we investigate the performance of FRI by employing four families of radial basis correlation functions. Both parameter optimized and parameter-free FRI methods are explored. 
Furthermore, we compare the accuracy and efficiency of FRI with some established approaches to flexibility analysis, namely, normal mode analysis and Gaussian network model (GNM). The accuracy of the FRI method is tested using four sets of proteins, three sets of relatively small-, medium-, and large-sized structures and an extended set of 365 proteins. A fifth set of proteins is used to compare the efficiency of the FRI, fFRI, aFRI, and GNM methods. Intensive validation and comparison indicate that the FRI, particularly the fFRI, is orders of magnitude more efficient and about 10% more accurate overall than some of the most popular methods in the field. The proposed fFRI is able to predict B-factors for α-carbons of the HIV virus capsid (313 236 residues) in less than 30 seconds on a single processor using only one core. Finally, we demonstrate the application of FRI and aFRI to protein domain analysis.
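A rough illustration of the FRI idea described above (not the authors' implementation; the Gaussian kernel choice and the parameter value eta are assumptions): each atom's rigidity is a sum of radial-basis correlations to all other atoms, its flexibility is the reciprocal, and B-factors would then be fitted linearly against the flexibility index.

```python
import math

def rigidity(coords, eta=3.0):
    """FRI-style rigidity index: for each atom, sum a Gaussian correlation
    kernel over all other atoms (O(N^2), no matrix diagonalization)."""
    mu = []
    for i, p in enumerate(coords):
        mu.append(sum(math.exp(-(math.dist(p, q) / eta) ** 2)
                      for j, q in enumerate(coords) if j != i))
    return mu

def flexibility(coords, eta=3.0):
    """Flexibility index as reciprocal rigidity; predicted B-factors would
    come from a linear least-squares fit B ~ a + b * flexibility."""
    return [1.0 / m for m in rigidity(coords, eta)]

# four tightly packed atoms and one distant outlier
coords = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (10, 10, 10)]
flex = flexibility(coords)
print(flex.index(max(flex)))  # the isolated atom is the most flexible
```

The fast fFRI variant would replace the all-pairs sum with a cell-list scheme so each atom only visits nearby neighbors, giving the O(N) cost cited in the abstract.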
Fast and anisotropic flexibility-rigidity index for protein flexibility and fluctuation analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Opron, Kristopher; Xia, Kelin; Wei, Guo-Wei, E-mail: wei@math.msu.edu
Development of a hybrid wave based-transfer matrix model for sound transmission analysis.
Dijckmans, A; Vermeir, G
2013-04-01
In this paper, a hybrid wave based-transfer matrix model is presented that allows for the investigation of the sound transmission through finite multilayered structures placed between two reverberant rooms. The multilayered structure may consist of an arbitrary configuration of fluid, elastic, or poro-elastic layers. The field variables (structural displacements and sound pressures) are expanded in terms of structural and acoustic wave functions. The boundary and continuity conditions in the rooms determine the participation factors in the pressure expansions. The displacement of the multilayered structure is determined by the mechanical impedance matrix, which gives a relation between the pressures and transverse displacements at both sides of the structure. The elements of this matrix are calculated with the transfer matrix method. First, the hybrid model is numerically validated. Next a comparison is made with sound transmission loss measurements of a hollow brick wall and a sandwich panel. Finally, numerical simulations show the influence of structural damping, room dimensions and plate dimensions on the sound transmission loss of multilayered structures.
Lopez-Moreno, Cristina; Perez, Isabel Viera; Urbano, Ana M
2016-03-01
The purpose of this study was to develop and validate a method for the analysis of certain preservatives in meat and to obtain a suitable Certified Reference Material (CRM) for this task. The preservatives studied were NO3(-), NO2(-) and Cl(-), as they serve as important antimicrobial agents in meat, inhibiting the growth of spoilage bacteria. The meat samples were prepared using a treatment that allowed the production of a CRM of known concentration that is highly homogeneous and stable in time. Matrix effects were also studied to evaluate their influence on the analytical signal for the ions of interest, showing that the matrix does not affect the final result. An assessment of the signal variation in time was carried out for the ions. Although the chloride and nitrate signals remained stable for the duration of the study, the nitrite signal decreased appreciably with time. A mathematical treatment of the data gave a stable nitrite signal, yielding a method suitable for the determination of these anions in meat. A statistical study was carried out for the validation of the method, in which precision, accuracy, uncertainty and other parameters were evaluated, with satisfactory results. Copyright © 2015 Elsevier Ltd. All rights reserved.
PCAN: Probabilistic Correlation Analysis of Two Non-normal Data Sets
Zoh, Roger S.; Mallick, Bani; Ivanov, Ivan; Baladandayuthapani, Veera; Manyam, Ganiraju; Chapkin, Robert S.; Lampe, Johanna W.; Carroll, Raymond J.
2016-01-01
Most cancer research now involves one or more assays profiling various biological molecules, e.g., messenger RNA and micro RNA, in samples collected on the same individuals. The main interest with these genomic data sets lies in the identification of a subset of features that are active in explaining the dependence between platforms. To quantify the strength of the dependency between two variables, correlation is often preferred. However, expression data obtained from next-generation sequencing platforms are integer with very low counts for some important features. In this case, the sample Pearson correlation is not a valid estimate of the true correlation matrix, because the sample correlation estimate between two features/variables with low counts will often be close to zero, even when the natural parameters of the Poisson distribution are, in actuality, highly correlated. We propose a model-based approach to correlation estimation between two non-normal data sets, via a method we call Probabilistic Correlations ANalysis, or PCAN. PCAN takes into consideration the distributional assumption about both data sets and suggests that correlations estimated at the model natural parameter level are more appropriate than correlations estimated directly on the observed data. We demonstrate through a simulation study that PCAN outperforms other standard approaches in estimating the true correlation between the natural parameters. We then apply PCAN to the joint analysis of a microRNA (miRNA) and a messenger RNA (mRNA) expression data set from a squamous cell lung cancer study, finding a large number of negative correlation pairs when compared to the standard approaches. PMID:27037601
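The attenuation the abstract describes is easy to reproduce: when two Poisson variables share highly correlated rate (natural) parameters but are observed at very low counts, the sample Pearson correlation of the counts falls far below the rate-level correlation. A minimal simulation (illustrative only; not the PCAN model itself — the rate range and sample size are arbitrary):

```python
import math
import random

def poisson(lam, rng):
    """Knuth's multiplicative algorithm for sampling a Poisson variate."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def pearson(xs, ys):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

rng = random.Random(0)
# the two "platforms" share identical low-count rates (rate correlation = 1)
rates = [0.05 + 0.45 * rng.random() for _ in range(2000)]
a = [poisson(l, rng) for l in rates]
b = [poisson(l, rng) for l in rates]
r_counts = pearson(a, b)
# r_counts lands far below the rate-level correlation of 1.0
```

PCAN's remedy, per the abstract, is to model the counts and estimate correlation at the natural-parameter level instead of on the raw observations.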
PCAN: Probabilistic correlation analysis of two non-normal data sets.
Zoh, Roger S; Mallick, Bani; Ivanov, Ivan; Baladandayuthapani, Veera; Manyam, Ganiraju; Chapkin, Robert S; Lampe, Johanna W; Carroll, Raymond J
2016-12-01
Correcting for diffusion in carbon-14 dating of ground water
Sanford, W.E.
1997-01-01
It has generally been recognized that molecular diffusion can be a significant process affecting the transport of carbon-14 in the subsurface when occurring either from a permeable aquifer into a confining layer or from a fracture into a rock matrix. An analytical solution that is valid for steady-state radionuclide transport through fractured rock is shown to be applicable to many multilayered aquifer systems. By plotting the ratio of the rate of diffusion to the rate of decay of carbon-14 over the length scales representative of several common hydrogeologic settings, it is demonstrated that diffusion of carbon-14 should often be not only a significant process, but a dominant one relative to decay. An age-correction formula is developed and applied to the Bangkok Basin of Thailand, where a mean carbon-14-based age of 21,000 years was adjusted to 11,000 years to account for diffusion. This formula and its graphical representation should prove useful for many studies, for they can be used first to estimate the potential role of diffusion and then to make a simple first-order age correction if necessary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vanroose, W.; Broeckhove, J.; Arickx, F.
The paper proposes a hybrid method for calculating scattering processes. It combines the J-matrix method with exterior complex scaling and an absorbing boundary condition. The wave function is represented as a finite sum of oscillator eigenstates in the inner region, and it is discretized on a grid in the outer region. The method is validated for a one- and a two-dimensional model with partial wave equations and a calculation of p-shell nuclear scattering with semirealistic interactions.
2007-01-01
and a phenolic-resin-based polymeric matrix. Such armor panels offer superior protection against fragmented ballistic threats when compared to... database does not contain a material model for the HJ1 composite but provides a model for a Kevlar Fiber Reinforced Polymer (KFRP) containing 53 vol... phenolic resin and epoxy yield stresses and then with a ratio of the S-2 glass and aramid fibers volume fractions. To test the validity of the
Mazilu, I; Mazilu, D A; Melkerson, R E; Hall-Mejia, E; Beck, G J; Nshimyumukiza, S; da Fonseca, Carlos M
2016-03-01
We present exact and approximate results for a class of cooperative sequential adsorption models using matrix theory, mean-field theory, and computer simulations. We validate our models with two customized experiments using ionically self-assembled nanoparticles on glass slides. We also address the limitations of our models and their range of applicability. The exact results obtained using matrix theory can be applied to a variety of two-state systems with cooperative effects.
Density matrix Monte Carlo modeling of quantum cascade lasers
NASA Astrophysics Data System (ADS)
Jirauschek, Christian
2017-10-01
By including elements of the density matrix formalism, the semiclassical ensemble Monte Carlo method for carrier transport is extended to incorporate incoherent tunneling, known to play an important role in quantum cascade lasers (QCLs). In particular, this effect dominates electron transport across thick injection barriers, which are frequently used in terahertz QCL designs. A self-consistent model for quantum mechanical dephasing is implemented, eliminating the need for empirical simulation parameters. Our modeling approach is validated against available experimental data for different types of terahertz QCL designs.
Quantitative nondestructive evaluation of ceramic matrix composite by the resonance method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watanabe, T.; Aizawa, T.; Kihara, J.
The resonance method was developed to provide quantitative nondestructive evaluation of mechanical properties without any elaborate specimen preparation. Since the present method is insensitive to specimen geometry, both monolithic and ceramic matrix composite materials can be evaluated nondestructively during processing. Al₂O₃, Si₃N₄, SiC/Si₃N₄, and various C/C composite materials are employed to demonstrate the validity and effectiveness of the present method.
Online Feature Transformation Learning for Cross-Domain Object Category Recognition.
Zhang, Xuesong; Zhuang, Yan; Wang, Wei; Pedrycz, Witold
2017-06-09
In this paper, we introduce a new research problem termed online feature transformation learning in the context of multiclass object category recognition. The learning of a feature transformation is viewed as learning a global similarity metric function in an online manner. We first consider the problem of online learning of a feature transformation matrix expressed in the original feature space and propose an online passive-aggressive feature transformation algorithm. These original features are then mapped to a kernel space, and an online single kernel feature transformation (OSKFT) algorithm is developed to learn a nonlinear feature transformation. Based on the OSKFT and the existing Hedge algorithm, a novel online multiple kernel feature transformation algorithm is also proposed, which can further improve the performance of online feature transformation learning in large-scale applications. The classifier is trained with the k-nearest-neighbor algorithm together with the learned similarity metric function. Finally, we experimentally examine the effect of different parameter settings in the proposed algorithms and evaluate model performance on several multiclass object recognition data sets. The experimental results demonstrate the validity and good performance of our methods in cross-domain and multiclass object recognition applications.
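A minimal sketch of the passive-aggressive style of update the abstract alludes to, applied to a bilinear similarity s(x, y) = xᵀMy (an OASIS-style simplification, not the paper's exact algorithm; the margin of 1 and all names are assumptions):

```python
def outer(x, y):
    return [[xi * yj for yj in y] for xi in x]

def bilinear(M, x, y):
    """Similarity s(x, y) = x^T M y."""
    return sum(M[i][j] * x[i] * y[j]
               for i in range(len(x)) for j in range(len(y)))

def pa_update(M, x, y, label):
    """Passive-aggressive step: if the hinge loss max(0, 1 - label*s) is
    positive, move M just enough (step size tau) to satisfy the margin."""
    loss = max(0.0, 1.0 - label * bilinear(M, x, y))
    if loss == 0.0:
        return M  # passive: margin already satisfied
    norm_sq = sum(xi * xi for xi in x) * sum(yj * yj for yj in y)  # ||x y^T||_F^2
    tau = loss / norm_sq
    G = outer(x, y)
    return [[M[i][j] + tau * label * G[i][j] for j in range(len(y))]
            for i in range(len(x))]

# start from the identity and feed one positive and one negative pair
M = [[1.0, 0.0], [0.0, 1.0]]
pos = ([1.0, 0.0], [1.0, 0.2])   # should end up similar
neg = ([1.0, 0.0], [0.0, 1.0])   # should end up dissimilar
for _ in range(5):
    M = pa_update(M, pos[0], pos[1], +1)
    M = pa_update(M, neg[0], neg[1], -1)
```

After a few rounds the learned M separates the two pairs, which is the behavior a k-nearest-neighbor classifier would then exploit through the learned similarity.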
Limayem, Alya; Donofrio, Robert Scott; Zhang, Chao; Haller, Edward; Johnson, Michael G
2015-01-01
Multidrug-resistant Enterococcus faecium (MEF) strains originating from farm animals are proliferating at a substantial pace, affecting downstream food chains, and could reach hospitals. This study was conducted to elucidate the drug susceptibility profile of MEF strains collected from poultry products in the Ann Arbor, MI area and from clinical settings at the Michigan State Lab and Moffitt Cancer Center (MCC) in Florida. Presumptive positive Enterococcus isolates were identified at the species level by Matrix-Assisted Laser Desorption/Ionization Time-of-Flight (MALDI-TOF) analysis. The antibiotic susceptibility profiles of both poultry and clinical strains were determined with the Thermo Scientific Sensititre system, conforming to National Committee for Clinical Laboratory Standards (NCCLS) guidelines, and validated via quantitative real-time PCR (qPCR). Out of 50 poultry samples (turkey: n = 30; chicken: n = 20), 36 were positive for Enterococcus species, of which 20.83% were identified as E. faecium. All the E. faecium isolates were multidrug resistant and displayed resistance to quinupristin/dalfopristin (QD), the last-alternative drug used to treat vancomycin-resistant E. faecium (VRE) in hospitals. The results indicate the presence of MEF strains in food animals and clinical settings that are also resistant to QD.
Valous, Nektarios A; Mendoza, Fernando; Sun, Da-Wen; Allen, Paul
2010-03-01
The quaternionic singular value decomposition is a technique to decompose a quaternion matrix (the representation of a colour image) into quaternion singular vector and singular value component matrices exposing useful properties. The objective of this study was to use a small portion of uncorrelated singular values as robust features for the classification of sliced pork ham images, using a supervised artificial neural network classifier. Images were acquired from four qualities of sliced cooked pork ham typically consumed in Ireland (90 slices per quality), having similar appearances. Mahalanobis distances and Pearson product moment correlations were used for feature selection. Six highly discriminating features were used as input to train the neural network. An adaptive feedforward multilayer perceptron classifier was employed to obtain a suitable mapping from the input dataset. The overall correct classification performances for the training, validation and test sets were 90.3%, 94.4%, and 86.1%, respectively. The results confirm that the classification performance was satisfactory. Extracting the most informative features led to the recognition of a set of different but visually quite similar textural patterns based on quaternionic singular values. Copyright 2009 Elsevier Ltd. All rights reserved.
Rathi, Monika; Ahrenkiel, S P; Carapella, J J; Wanlass, M W
2013-02-01
Given an unknown multicomponent alloy, and a set of standard compounds or alloys of known composition, can one improve upon popular standards-based methods for energy dispersive X-ray (EDX) spectrometry to quantify the elemental composition of the unknown specimen? A method is presented here for determining elemental composition of alloys using transmission electron microscopy-based EDX with appropriate standards. The method begins with a discrete set of related reference standards of known composition, applies multivariate statistical analysis to those spectra, and evaluates the compositions with a linear matrix algebra method to relate the spectra to elemental composition. By using associated standards, only limited assumptions about the physical origins of the EDX spectra are needed. Spectral absorption corrections can be performed by providing an estimate of the foil thickness of one or more reference standards. The technique was applied to III-V multicomponent alloy thin films: composition and foil thickness were determined for various III-V alloys. The results were then validated by comparing with X-ray diffraction and photoluminescence analysis, demonstrating accuracy of approximately 1% in atomic fraction.
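The linear-algebra step described above can be illustrated with a toy version (purely schematic; real EDX quantification also needs the absorption and foil-thickness corrections the abstract mentions, and the spectra below are invented): treat the unknown spectrum as a linear combination of reference-standard spectra and map the fitted weights to composition.

```python
# Toy standards-based unmixing: unknown spectrum ≈ w1*S1 + w2*S2, with
# S1, S2 reference spectra from standards of known composition.
def solve_2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

def unmix(unknown, s1, s2):
    """Least-squares weights via the 2x2 normal equations, normalized
    so the recovered fractions sum to one."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    w1, w2 = solve_2x2(dot(s1, s1), dot(s1, s2),
                       dot(s1, s2), dot(s2, s2),
                       dot(s1, unknown), dot(s2, unknown))
    total = w1 + w2
    return w1 / total, w2 / total

s1 = [10.0, 2.0, 1.0]   # hypothetical standard A spectrum (3 channels)
s2 = [1.0, 8.0, 4.0]    # hypothetical standard B spectrum
mix = [0.3 * a + 0.7 * b for a, b in zip(s1, s2)]
f1, f2 = unmix(mix, s1, s2)
print(round(f1, 2), round(f2, 2))  # recovers 0.3 0.7
```

With more standards and channels, the same normal-equations fit generalizes to a rectangular least-squares problem, which is where the multivariate statistical analysis in the paper comes in.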
Matrix Transformations between Certain Sequence Spaces over the Non-Newtonian Complex Field
Efe, Hakan
2014-01-01
In some cases, the most general linear operator between two sequence spaces is given by an infinite matrix. So the theory of matrix transformations has always been of great interest in the study of sequence spaces. In the present paper, we introduce the matrix transformations in sequence spaces over the field ℂ* and characterize some classes of infinite matrices with respect to the non-Newtonian calculus. Also we give the necessary and sufficient conditions on an infinite matrix transforming one of the classical sets over ℂ* to another one. Furthermore, the concept for sequence-to-sequence and series-to-series methods of summability is given with some illustrated examples. PMID:25110740
Inverter Matrix for the Clementine Mission
NASA Technical Reports Server (NTRS)
Buehler, M. G.; Blaes, B. R.; Tardio, G.; Soli, G. A.
1994-01-01
An inverter matrix test circuit was designed for the Clementine space mission and is built into the RRELAX (Radiation and Reliability Assurance Experiment). The objective is to develop a circuit that will allow the evaluation of the CMOS FETs using a lean data set in the noisy spacecraft environment.
ERIC Educational Resources Information Center
Grunkemeyer, Florence B.
1992-01-01
Discusses the importance of effective listening and problems in the listening process. Presents a matrix evaluating 18 listening inventories on 8 criteria: cost effectiveness, educational use, business use, reliability, validity, adult audience, high school audience, and potential barriers. (JOW)
Ferrero, Alejandro; Campos, Joaquin; Pons, Alicia
2006-04-10
What we believe to be a novel procedure to correct the nonuniformity that is inherent in all matrix detectors has been developed and experimentally validated. This correction method, unlike other nonuniformity-correction algorithms, consists of two steps that separate two of the usual problems that affect characterization of matrix detectors, i.e., nonlinearity and the relative variation of the pixels' responsivity across the array. The correction of the nonlinear behavior remains valid for any illumination wavelength employed, as long as the nonlinearity is not due to power dependence of the internal quantum efficiency. This method of correction of nonuniformity permits the immediate calculation of the correction factor for any given power level and for any illuminant that has a known spectral content once the nonuniform behavior has been characterized for a sufficient number of wavelengths. This procedure has a significant advantage compared with other traditional calibration-based methods, which require that a full characterization be carried out for each spectral distribution pattern of the incident optical radiation. The experimental application of this novel method has achieved a 20-fold increase in the uniformity of a CCD array for response levels close to saturation.
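The two-step logic, linearize first and then flat-field, can be sketched as follows (a schematic model with an assumed saturating response curve and invented gains, not the authors' characterization procedure):

```python
# Schematic two-step nonuniformity correction for a matrix detector.
# Assumed pixel model: reading = f(gain * irradiance), with a common
# saturating nonlinearity f and an unknown per-pixel gain.
SAT = 10.0

def f(x):       # assumed nonlinear response curve
    return x / (1.0 + x / SAT)

def f_inv(y):   # step 1: invert the (wavelength-independent) nonlinearity
    return y / (1.0 - y / SAT)

gains = [[0.9, 1.1], [1.05, 0.95]]          # per-pixel responsivity (unknown)

def read(irradiance):
    return [[f(g * irradiance) for g in row] for row in gains]

# Calibration: a uniform flat field of known irradiance E0 yields the
# relative responsivity of every pixel (step 2).
E0 = 2.0
flat = read(E0)
est_gain = [[f_inv(y) / E0 for y in row] for row in flat]

# Correcting an arbitrary exposure: linearize, then divide by the gain.
raw = read(3.5)
corrected = [[f_inv(y) / g for y, g in zip(r, gr)]
             for r, gr in zip(raw, est_gain)]
print(corrected)  # every pixel recovers ≈ 3.5
```

Because the linearization is done before the flat-field division, the gain map estimated once at E0 remains valid at other power levels, which is the advantage the abstract claims over per-illuminant calibration.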
NASA Astrophysics Data System (ADS)
Solivio, Morwena J.; Less, Rebekah; Rynes, Mathew L.; Kramer, Marcus; Aksan, Alptekin
2016-04-01
Despite abundant research on cancer biomarker discovery and validation, fewer than two dozen biomarkers have to date been approved by the FDA for clinical use. One main reason is the inadvertent use of low-quality biospecimens in biomarker research. Most proteinaceous biomarkers are extremely susceptible to pre-analytical factors such as collection, processing, and storage. For example, cryogenic storage imposes very harsh chemical, physical, and mechanical stresses on biospecimens, significantly compromising sample quality. In this communication, we report the development of an electrospun lyoprotectant matrix and an isothermal vitrification methodology for non-cryogenic stabilization and storage of liquid biospecimens. The lyoprotectant matrix was composed mainly of trehalose and dextran (with various low-concentration excipients targeting different mechanisms of damage) and was engineered to minimize heterogeneity during vitrification. The technology was validated using five biomarkers: LDH, CRP, PSA, MMP-7, and C3a. Complete recovery of LDH, CRP, and PSA levels was achieved post-rehydration, while more than 90% recovery was accomplished for MMP-7 and C3a, showing promise for isothermal vitrification as a safe, efficient, and low-cost alternative to cryogenic storage.
Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised
NASA Technical Reports Server (NTRS)
Lim, K. B.; Giesy, D. P.
2000-01-01
Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.
Lindenbach, Jeannette M; Larocque, Sylvie; Lavoie, Anne-Marise; Garceau, Marie-Luce
2012-06-01
The hidden nature of older adult mistreatment renders its detection in the domestic setting particularly challenging. A validated screening instrument that can provide a systematic assessment of risk factors can facilitate this detection. One such instrument, the "expanded Indicators of Abuse" (e-IOA) tool, has been previously validated in the Hebrew language in a hospital setting. The present study has contributed to the validation of the e-IOA in an English-speaking community setting in Ontario, Canada. It consisted of two phases: (a) a content validity review and adaptation of the instrument by experts throughout Ontario, and (b) an inter-rater reliability assessment by home visiting nurses. The adaptation, the "Mistreatment of Older Adult Risk Factors" tool, offers a comprehensive tool for screening in the home setting. This instrument is significant to professional practice, as practitioners working with older adults will be better equipped to assess for risk of mistreatment.
Observability under recurrent loss of data
NASA Technical Reports Server (NTRS)
Luck, Rogelio; Ray, Asok; Halevi, Yoram
1992-01-01
An account is given of the concept of extended observability in finite-dimensional linear time-invariant systems under recurrent loss of data, where the state vector has to be reconstructed from an ensemble of sensor data at nonconsecutive samples. A necessary and sufficient condition for extended observability, expressible via a recursive relation, is presented, together with related conditions in terms of the characteristic polynomial of the state transition matrix in a discrete-time setting, or of the system matrix in a continuous-time setting.
Exponentially convergent state estimation for delayed switched recurrent neural networks.
Ahn, Choon Ki
2011-11-01
This paper deals with the delay-dependent exponentially convergent state estimation problem for delayed switched neural networks. A set of delay-dependent criteria is derived under which the resulting estimation error system is exponentially stable. It is shown that the gain matrix of the proposed state estimator is characterised in terms of the solution to a set of linear matrix inequalities (LMIs), which can be checked readily by using some standard numerical packages. An illustrative example is given to demonstrate the effectiveness of the proposed state estimator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugama, Toshifumi
The data set shows the performance of an economical calcium phosphate cement (Fondu) blended with class F fly ash (FAF) in carbon steel corrosion protection tests (corrosion rate, corrosion current and potential), bond and matrix strength, and matrix strength recovery after imposed damage at 300 °C. The corrosion protection and lap-shear bond strength data are given for different Fondu/FAF ratios; the matrix strength recoveries are reported for a 60/40 wt% Fondu/FAF ratio. The effect of sodium phosphate on bond strength, corrosion protection and self-healing is also demonstrated.
NASA Technical Reports Server (NTRS)
Buehler, Martin G. (Inventor)
1988-01-01
A set of addressable test structures, each of which uses addressing schemes to access individual elements of the structure in a matrix, is used to test the quality of a wafer before integrated circuits produced thereon are diced, packaged and subjected to final testing. The electrical characteristic of each element is checked and compared to the electrical characteristic of all other like elements in the matrix. The effectiveness of the addressable test matrix is in readily analyzing the electrical characteristics of the test elements and in providing diagnostic information.
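The per-element comparison described above amounts to flagging elements whose measured characteristic deviates from the population of like elements in the matrix; a minimal sketch with an invented threshold and invented measurements:

```python
import numpy as np

def flag_outliers(measurements, n_sigma=3.0):
    """Flag matrix elements whose electrical characteristic deviates
    from the mean of all like elements by more than n_sigma sigmas."""
    m = np.asarray(measurements, dtype=float)
    return np.abs(m - m.mean()) > n_sigma * m.std()

# Hypothetical sheet resistances: 20 nominal elements and one defect.
flags = flag_outliers([100.0] * 20 + [200.0])
```

In practice the wafer-level diagnostic would be richer (spatial maps, per-die statistics), but the element-vs-population comparison is the core of it.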
Assessing the validity of commercial and municipal food environment data sets in Vancouver, Canada.
Daepp, Madeleine Ig; Black, Jennifer
2017-10-01
The present study assessed systematic bias and the effects of data set error on the validity of food environment measures in two municipal and two commercial secondary data sets. Sensitivity, positive predictive value (PPV) and concordance were calculated by comparing two municipal and two commercial secondary data sets with ground-truthed data collected within 800 m buffers surrounding twenty-six schools. Logistic regression examined associations of sensitivity and PPV with commercial density and neighbourhood socio-economic deprivation. Kendall's τ estimated correlations between density and proximity of food outlets near schools constructed with secondary data sets v. ground-truthed data. The setting was Vancouver, Canada; the sample comprised food retailers located within 800 m of twenty-six schools. All data sets scored relatively poorly across validity measures, although, overall, municipal data sets had higher levels of validity than did commercial data sets. Food outlets were more likely to be missing from municipal health inspections lists and commercial data sets in neighbourhoods with higher commercial density. Still, both proximity and density measures constructed from all secondary data sets were highly correlated (Kendall's τ>0·70) with measures constructed from ground-truthed data. Despite relatively low levels of validity in all secondary data sets examined, food environment measures constructed from secondary data sets remained highly correlated with ground-truthed data. Findings suggest that secondary data sets can be used to measure the food environment, although estimates should be treated with caution in areas with high commercial density.
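The sensitivity and PPV comparisons reduce to set arithmetic over matched outlets; a minimal sketch with hypothetical outlet IDs, not the study's data:

```python
def validity_measures(ground_truth, secondary):
    """Sensitivity: share of ground-truthed outlets the secondary list
    captures. PPV: share of listed outlets that actually exist."""
    gt, sec = set(ground_truth), set(secondary)
    hits = len(gt & sec)
    return hits / len(gt), hits / len(sec)

# Hypothetical outlets near one school: the secondary list misses
# D and E and contains one phantom entry X.
sens, ppv = validity_measures({"A", "B", "C", "D", "E"},
                              {"A", "B", "C", "X"})
```

Here sensitivity is 3/5 = 0.6 and PPV is 3/4 = 0.75; the study computed these per data set across all school buffers.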
Covariance expressions for eigenvalue and eigenvector problems
NASA Astrophysics Data System (ADS)
Liounis, Andrew J.
There are a number of important scientific and engineering problems whose solutions take the form of an eigenvalue-eigenvector problem. Some notable examples include solutions to linear systems of ordinary differential equations, controllability of linear systems, finite element analysis, chemical kinetics, fitting ellipses to noisy data, and optimal estimation of attitude from unit vectors. In many of these problems, having knowledge of the eigenvalue and eigenvector Jacobians is either necessary or is nearly as important as having the solution itself. For instance, Jacobians are necessary to find the uncertainty in a computed eigenvalue or eigenvector estimate. This uncertainty, which is usually represented as a covariance matrix, has been well studied for problems similar to the eigenvalue and eigenvector problem, such as singular value decomposition. There has been substantially less research on the covariance of an optimal estimate originating from an eigenvalue-eigenvector problem. In this thesis we develop two general expressions for the Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The expressions developed make use of only the parent matrix and the eigenvalue and eigenvector pair under consideration. In addition, they are applicable to any general matrix (including complex valued matrices, eigenvalues, and eigenvectors) as long as the eigenvalues are simple. Alongside this, we develop expressions that determine the uncertainty in a vector estimate obtained from an eigenvalue-eigenvector problem given the uncertainty of the terms of the matrix. The Jacobian expressions developed are numerically validated with forward finite differencing, and the covariance expressions are validated using Monte Carlo analysis. Finally, the results from this work are used to determine covariance expressions for a variety of estimation problem examples and are also applied to the design of a dynamical system.
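For a simple eigenvalue λ with right eigenvector u and left eigenvector w, the classical first-order result ∂λ/∂A_ij = w_i u_j / (wᵀu) can be checked against forward finite differences in just the way the thesis describes; a sketch using an arbitrary test matrix (not the thesis's derivation, which also covers eigenvectors):

```python
import numpy as np

def eigenvalue_jacobian(A, k):
    """Jacobian of the k-th (simple) eigenvalue w.r.t. the entries of A:
    J[i, j] = w_i * u_j / (w @ u), with u right and w left eigenvectors."""
    lam, U = np.linalg.eig(A)
    lamT, W = np.linalg.eig(A.T)
    m = np.argmin(np.abs(lamT - lam[k]))   # pair up the left eigenvector
    u, w = U[:, k], W[:, m]
    return np.outer(w, u) / (w @ u)

A = np.array([[2.0, 1.0], [0.5, 3.0]])    # simple, real eigenvalues
k, h = 0, 1e-6
J = eigenvalue_jacobian(A, k)

# Forward finite difference in entry (0, 1), matching the nearest eigenvalue.
lam0 = np.linalg.eig(A)[0][k]
Ah = A.copy(); Ah[0, 1] += h
lams = np.linalg.eig(Ah)[0]
fd = (lams[np.argmin(np.abs(lams - lam0))] - lam0) / h
```

The analytic entry J[0, 1] and the finite-difference estimate agree to roughly the truncation error O(h), which is the validation strategy the abstract mentions.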
Uclés, A; Ulaszewska, M M; Hernando, M D; Ramos, M J; Herrera, S; García, E; Fernández-Alba, A R
2013-07-01
This work introduces a liquid chromatography-electrospray ionization-hybrid quadrupole/time-of-flight mass spectrometry (LC-ESI-QTOF-MS)-based method for qualitative and quantitative analysis of poly(amidoamine) (PAMAM) dendrimers of generations 0 to 3 in an aqueous matrix. The multiple charging of PAMAM dendrimers generated by means of ESI has provided key advantages in dendrimer identification by assignation of charge state through high resolution of isotopic clusters. Isotopic distribution as a function of the abundance of the isotopes (12)C and (13)C yielded valuable and complementary data for confident characterization. A mass accuracy below 3.8 ppm for the most abundant isotopes (diagnostic ions) provided unambiguous identification of PAMAM dendrimers. Validation of the LC-ESI-QTOF-MS method and matrix effect evaluation enabled reliable and reproducible quantification. The validation parameters, limits of quantification in the range of 0.012 to 1.73 μM, depending on the generation, good linear range (R > 0.996), repeatability (RSD < 13.4%), and reproducibility (RSD < 10.9%) demonstrated the suitability of the method for the quantification of dendrimers in aqueous matrices (water and wastewater). The added selectivity, achieved by multicharge phenomena, represents a clear advantage in screening aqueous mixtures, because the matrix had no significant effect on ionization, as evidenced by an absence of sensitivity loss in most generations of PAMAM dendrimers.
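The charge-state assignment from resolved isotopic clusters relies on the standard ESI-MS relation that adjacent isotopologue peaks are spaced ≈ 1.00336/z apart in m/z; a sketch with a hypothetical cluster spacing (values not taken from the paper):

```python
C13_C12_SHIFT = 1.00336  # approx. mass difference 13C - 12C, in Da

def charge_from_isotope_spacing(mz_spacing):
    """Estimate charge state z from the m/z spacing of adjacent
    isotope peaks in a resolved cluster: spacing ~ 1.00336 / z."""
    return round(C13_C12_SHIFT / mz_spacing)

# Hypothetical PAMAM cluster with peaks 0.2507 m/z apart -> z = 4.
z = charge_from_isotope_spacing(0.2507)
```

Resolving the isotope spacing is exactly what requires the high resolution of the QTOF: at charge 4 the peaks sit only a quarter of a Dalton apart in m/z.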
Multiprocessor sparse L/U decomposition with controlled fill-in
NASA Technical Reports Server (NTRS)
Alaghband, G.; Jordan, H. F.
1985-01-01
Generation of the maximal compatibles of pivot elements for a class of small sparse matrices is studied. The algorithm involves a binary tree search and has a complexity exponential in the order of the matrix. Different strategies for selection of a set of compatible pivots based on the Markowitz criterion are investigated. The competing issues of parallelism and fill-in generation are studied and results are provided. A technique for obtaining an ordered compatible set directly from the ordered incompatible table is given. This technique generates a set of compatible pivots with the property of generating few fills. A new heuristic algorithm is then proposed that combines the idea of an ordered compatible set with a limited binary tree search to generate several sets of compatible pivots in linear time. Finally, an elimination set to reduce the matrix is selected. Parameters are suggested to obtain a balance between parallelism and fill-ins. Results of applying the proposed algorithms on several large application matrices are presented and analyzed.
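The Markowitz criterion mentioned above scores a candidate pivot (i, j) by (rᵢ − 1)(cⱼ − 1), an upper bound on the fill-in that eliminating on it can create; a dense-array sketch of single-pivot selection (not the paper's parallel compatible-set algorithm):

```python
import numpy as np

def markowitz_pivot(A, tol=1e-12):
    """Return the nonzero entry (i, j) minimizing the Markowitz cost
    (r_i - 1) * (c_j - 1), where r_i and c_j count nonzeros in row i
    and column j -- an upper bound on fill-in from that pivot."""
    nz = np.abs(A) > tol
    r, c = nz.sum(axis=1), nz.sum(axis=0)
    best, best_cost = None, None
    for i, j in zip(*np.nonzero(nz)):
        cost = (r[i] - 1) * (c[j] - 1)
        if best_cost is None or cost < best_cost:
            best, best_cost = (int(i), int(j)), int(cost)
    return best, best_cost

A = np.array([[4.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 2.0]])
pivot, cost = markowitz_pivot(A)   # the isolated (1, 1) entry costs 0
```

A production code would also guard numerical stability (threshold pivoting) and, as in the paper, select whole sets of mutually compatible pivots for parallel elimination rather than one pivot at a time.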
The Extracellular Matrix of Fungal Biofilms.
Mitchell, Kaitlin F; Zarnowski, Robert; Andes, David R
A key feature of biofilms is their production of an extracellular matrix. This material covers the biofilm cells, providing a protective barrier to the surrounding environment. During an infection setting, this can include such offenses as host cells and products of the immune system as well as drugs used for treatment. Studies over the past two decades have revealed the matrix from different biofilm species to be as diverse as the microbes themselves. This chapter will review the composition and roles of matrix from fungal biofilms, with primary focus on Candida species, Saccharomyces cerevisiae, Aspergillus fumigatus, and Cryptococcus neoformans. Additional coverage will be provided on the antifungal resistance proffered by the Candida albicans matrix, which has been studied in the most depth. A brief section on the matrix produced by bacterial biofilms will be provided for comparison. Current tools for studying the matrix will also be discussed, as well as suggestions for areas of future study in this field.
Ahmadi, Ali; Thorn, Stephanie L; Alarcon, Emilio I; Kordos, Myra; Padavan, Donna T; Hadizad, Tayebeh; Cron, Greg O; Beanlands, Rob S; DaSilva, Jean N; Ruel, Marc; deKemp, Robert A; Suuronen, Erik J
2015-05-01
Injectable biomaterials have shown promise for cardiac regeneration therapy. However, little is known regarding their retention and distribution upon application in vivo. Matrix imaging would be useful for evaluating these important properties. Herein, hexadecyl-4-[(18)F]fluorobenzoate ((18)F-HFB) and Qdot labeling was used to evaluate collagen matrix delivery in a mouse model of myocardial infarction (MI). At 1 wk post-MI, mice received myocardial injections of (18)F-HFB- or Qdot-labeled matrix to assess its early retention and distribution (at 10 min and 2 h) by positron emission tomography (PET), or fluorescence imaging, respectively. PET imaging showed that the bolus of matrix at 10 min redistributed evenly within the ischemic territory by 2 h. Ex vivo biodistribution revealed myocardial matrix retention of ∼ 65%, which correlated with PET results, but may be an underestimate since (18)F-HFB matrix labeling efficiency was ∼ 82%. For covalently linked Qdots, labeling efficiency was ∼ 96%. Ex vivo Qdot quantification showed that ∼ 84% of the injected matrix was retained in the myocardium. Serial non-invasive PET imaging and validation by fluorescence imaging confirmed the effectiveness of the collagen matrix to be retained and redistributed within the infarcted myocardium. This study identifies matrix-targeted imaging as a promising modality for assessing the biodistribution of injectable biomaterials for application in the heart. Copyright © 2015 Elsevier Ltd. All rights reserved.
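The underestimate the authors note can be sized with back-of-envelope arithmetic: if only ~82% of the injected matrix carried the ¹⁸F label and unlabeled matrix behaves the same, the ~65% apparent retention rescales accordingly (illustrative only, not a reported result):

```python
apparent_retention = 0.65      # fraction of 18F signal retained ex vivo
labeling_efficiency = 0.82     # fraction of matrix actually 18F-labeled

# Assuming labeled and unlabeled matrix are retained alike, the true
# retention is roughly apparent / efficiency, i.e. about 79%.
corrected_retention = apparent_retention / labeling_efficiency
```

This crude correction brings the PET estimate close to the ~84% retention measured with the more efficiently (~96%) labeled Qdots.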
Gray, Dean; LeVanseler, Kerri; Pan, Meide
2008-01-01
A single laboratory validation (SLV) was completed for a method to determine the flavonol aglycones quercetin, kaempferol, and isorhamnetin in Ginkgo biloba products. The method calculates total glycosides based on these aglycones formed following acid hydrolysis. Nine matrixes were chosen for the study, including crude leaf material, standardized dry powder extract, single and multiple entity finished products, and ethanol and glycerol tinctures. For the 9 matrixes evaluated as part of this SLV, the method appeared to be selective and specific, with no observed interferences. The simplified 60 min oven heating hydrolysis procedure was effective for each of the matrixes studied, with no apparent or consistent differences between 60, 75, and 90 min at 90°C. A Youden ruggedness trial testing 7 factors with the potential to affect quantitative results showed that 2 factors (volume hydrolyzed and test sample extraction/hydrolysis weight) were the most important parameters for control during sample preparation. The method performed well in terms of precision, with 4 matrixes tested in triplicate over a 3-day period showing an overall repeatability (relative standard deviation, RSD) of 2.3%. Analysis of variance testing at α = 0.05 showed no significant differences among the within- or between-group sources of variation, although comparisons of within-day (Sw), between-day (Sb), and total (St) precision showed that a majority of the standard deviation came from within-day determinations for all matrixes. Accuracy testing at 2 levels (approximately 30 and 90% of the determined concentrations in standardized dry powder extract) from 2 complex negative control matrixes showed an overall 96% recovery and RSD of 1.0% for the high spike, and 94% recovery and RSD of 2.5% for the low spike. HorRat scores were within the limits for performance acceptability, ranging from 0.4 to 1.3. 
Based on the performance results presented herein, it is recommended that this method progress to the collaborative laboratory trial. PMID:16001841
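The HorRat scores cited above compare an observed RSD with the Horwitz-predicted reproducibility RSD, PRSD(%) = 2^(1 − 0.5·log₁₀C) at analyte mass fraction C; a sketch at a hypothetical analyte level, since the study's concentrations are not restated here:

```python
import math

def horwitz_prsd(mass_fraction):
    """Horwitz predicted reproducibility RSD (%) at mass fraction C."""
    return 2 ** (1 - 0.5 * math.log10(mass_fraction))

def horrat(observed_rsd_percent, mass_fraction):
    """HorRat = observed RSD / Horwitz-predicted RSD."""
    return observed_rsd_percent / horwitz_prsd(mass_fraction)

# Hypothetical: a 2.3 % overall RSD at a 1 % analyte level (C = 0.01)
# gives PRSD = 4 % and a HorRat of about 0.58.
h = horrat(2.3, 0.01)
```

Values in roughly the 0.5 to 2 range are conventionally taken as acceptable method performance, consistent with the 0.4 to 1.3 range reported.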