Sample records for linear combination analysis

  1. Combined linear theory/impact theory method for analysis and design of high speed configurations

    NASA Technical Reports Server (NTRS)

    Brooke, D.; Vondrasek, D. V.

    1980-01-01

    Pressure distributions on a wing body at Mach 4.63 are calculated. The combined theory is shown to give improved predictions over either linear theory or impact theory alone. The combined theory is also applied in the inverse design mode to calculate optimum camber slopes at Mach 4.63. Comparisons with optimum camber slopes obtained from unmodified linear theory show large differences. Analysis of the results indicates that the combined theory correctly predicts the effect of thickness on the loading distributions at high Mach numbers, and that finite thickness wings optimized at high Mach numbers using unmodified linear theory will not achieve the minimum drag characteristics for which they are designed.

  2. Buckling analysis for anisotropic laminated plates under combined inplane loads

    NASA Technical Reports Server (NTRS)

    Viswanathan, A. V.; Tamekuni, M.; Baker, L. L.

    1974-01-01

    The buckling analysis presented considers rectangular flat or curved general laminates subjected to combined inplane normal and shear loads. Linear theory is used in the analysis. All prebuckling deformations and any initial imperfections are ignored. The analysis method can be readily extended to longitudinally stiffened structures subjected to combined inplane normal and shear loads.

  3. Linear combination reading program for capture gamma rays

    USGS Publications Warehouse

    Tanner, Allan B.

    1971-01-01

    This program computes a weighting function, Qj, which gives a scalar output value of unity when applied to the spectrum of a desired element and a minimum value (considering statistics) when applied to spectra of materials not containing the desired element. Intermediate values are obtained for materials containing the desired element, in proportion to the amount of the element they contain. The program is written in the BASIC language in a format specific to the Hewlett-Packard 2000A Time-Sharing System, and is an adaptation of an earlier program for linear combination reading for X-ray fluorescence analysis (Tanner and Brinkerhoff, 1971). Following the program is a sample run from a study of the application of the linear combination technique to capture-gamma-ray analysis for calcium (report in preparation).
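
The weighting-function idea can be sketched as a constrained least-squares problem: minimize the response to element-free spectra subject to unit response on the desired element's spectrum. Below is a minimal numpy sketch with made-up 16-channel spectra, not the BASIC program described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 16-channel spectra: one "desired element" spectrum and
# several background spectra of materials not containing the element.
desired = np.exp(-0.5 * ((np.arange(16) - 8.0) / 1.5) ** 2)   # peak at channel 8
background = rng.random((5, 16))                               # element-free materials

# Minimize ||background @ Q||^2 subject to desired @ Q == 1:
# Q = C^{-1} d / (d^T C^{-1} d), with a small ridge term for stability.
C = background.T @ background + 1e-6 * np.eye(16)
q = np.linalg.solve(C, desired)
Q = q / (desired @ q)

print(desired @ Q)            # unity for the pure desired element
print((2.0 * desired) @ Q)    # scales with the element content
```

The closed form follows from a Lagrange multiplier on the unit-response constraint; mixtures containing the element yield intermediate values in proportion to content, as the abstract describes.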

  4. [Variable selection methods combined with local linear embedding theory used for optimization of near infrared spectral quantitative models].

    PubMed

    Hao, Yong; Sun, Xu-Dong; Yang, Qiang

    2012-12-01

    A variable selection strategy combined with local linear embedding (LLE) was introduced for the analysis of complex samples by near infrared spectroscopy (NIRS). Three methods, Monte Carlo uninformative variable elimination (MCUVE), the successive projections algorithm (SPA), and MCUVE combined with SPA, were used to eliminate redundant spectral variables. Partial least squares regression (PLSR) and LLE-PLSR were used for modeling complex samples. The results show that MCUVE can both extract effective informative variables and improve the precision of models. Compared with PLSR models, LLE-PLSR models achieve more accurate analysis results. MCUVE combined with LLE-PLSR is an effective modeling method for NIRS quantitative analysis.

  5. Feasibility of combining linear theory and impact theory methods for the analysis and design of high speed configurations

    NASA Technical Reports Server (NTRS)

    Brooke, D.; Vondrasek, D. V.

    1978-01-01

    The aerodynamic influence coefficients calculated using an existing linear theory program were used to modify the pressures calculated using impact theory. Application of the combined approach to several wing-alone configurations shows that the combined approach gives improved predictions of the local pressure and loadings over either linear theory alone or impact theory alone. The approach not only removes most of the shortcomings of the individual methods, as applied in the Mach 4 to 8 range, but also provides the basis for an inverse design procedure applicable to high speed configurations.

  6. A Novel Blast-mitigation Concept for Light Tactical Vehicles

    DTIC Science & Technology

    2013-01-01

    analysis which utilizes the mass and energy (but not linear momentum) conservation equations is provided. It should be noted that the identical final ... results could be obtained using an analogous analysis which combines the mass and the linear momentum conservation equations. For a calorically ... governing mass, linear momentum and energy conservation and heat conduction equations are solved within ABAQUS/Explicit with a second-order accurate

  7. A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield

    NASA Astrophysics Data System (ADS)

    Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan

    2018-04-01

    In this paper, we propose a hybrid model that combines a multiple linear regression model with the fuzzy c-means method. This research involves the relationship between 20 topsoil variates, analyzed prior to planting, and paddy yields at standard fertilizer rates. The data were from the multi-location rice trials carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model and a combination of multiple linear regression with the fuzzy c-means method. Analysis of normality and multicollinearity indicates that the data are normally distributed without multicollinearity among the independent variables. Fuzzy c-means analysis clusters the paddy yield into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of multiple linear regression and fuzzy c-means outperforms the plain multiple linear regression model, with a lower mean square error.
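
The cluster-then-regress hybrid can be illustrated with a crisp 2-means step standing in for fuzzy c-means. All data and coefficients below are made up, not the MARDI trial data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic soil "regimes", each with its own linear yield response.
nA = nB = 100
xA = rng.uniform(0.0, 0.4, (nA, 2))
xB = rng.uniform(0.6, 1.0, (nB, 2))
x = np.vstack([xA, xB])
coefs = np.array([[3.0, -1.0], [-2.0, 4.0]])
regime = np.repeat([0, 1], nA)
y = np.einsum('ij,ij->i', x, coefs[regime]) + rng.normal(0, 0.05, nA + nB)

def fit_predict(X, t):
    A = np.column_stack([X, np.ones(len(X))])     # design with intercept
    beta, *_ = np.linalg.lstsq(A, t, rcond=None)
    return A @ beta

# Plain multiple linear regression over all samples.
mse_global = np.mean((y - fit_predict(x, y)) ** 2)

# Crisp 2-means clustering (a simple stand-in for fuzzy c-means),
# then one regression model per cluster.
centers = x[[0, nA]].copy()
for _ in range(20):
    lab = np.linalg.norm(x[:, None, :] - centers[None], axis=2).argmin(axis=1)
    centers = np.array([x[lab == k].mean(axis=0) for k in range(2)])

pred = np.empty(len(x))
for k in range(2):
    m = lab == k
    pred[m] = fit_predict(x[m], y[m])
mse_hybrid = np.mean((y - pred) ** 2)

print(mse_global, mse_hybrid)   # the hybrid MSE is far lower
```

Fitting one model per cluster can never increase the training error relative to a single global fit, and when the clusters genuinely follow different linear laws the improvement is large, mirroring the paper's finding.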

  8. Biomotor structures in elite female handball players.

    PubMed

    Katić, Ratko; Cavala, Marijana; Srhoj, Vatromir

    2007-09-01

    In order to identify biomotor structures in elite female handball players (N = 53), factor structures of their morphological characteristics and basic motor abilities were determined first, followed by determination of the relations between the obtained morphological-motor factors and a set of criterion variables evaluating situation-related motor abilities in handball. Factor analysis of 14 morphological measures produced three morphological factors: absolute voluminosity (mesoendomorphy), longitudinal skeleton dimensionality, and transverse hand dimensionality. Factor analysis of 15 motor variables yielded five basic motor dimensions: agility, jumping explosive strength, throwing explosive strength, movement frequency rate, and running explosive strength (sprint). Four significant canonical correlations, i.e. linear combinations, explained the correlation between the set of eight latent variables of the morphological and basic motor space and the five variables of situation-related motoricity. The first canonical linear combination is based on the positive effect of the agility/coordination factors on the ability of fast movement without the ball. The second is based on the effect of jumping explosive strength and transverse hand dimensionality on ball manipulation, throwing precision, and speed of movement with the ball. The third is based on the determination of running explosive strength by the speed of movement with the ball, whereas the fourth is determined by throwing and jumping explosive strength, and agility on ball passing. The results were consistent with the proposed model of selection in female handball (Srhoj et al., 2006), showing the speed of movement without the ball and the ability of ball manipulation to be the predominant specific abilities, as indicated by the first and second linear combinations.

  9. A linear circuit analysis program with stiff systems capability

    NASA Technical Reports Server (NTRS)

    Cook, C. H.; Bavuso, S. J.

    1973-01-01

    Several existing network analysis programs have been modified and combined to employ a variable topological approach to circuit translation. Efficient numerical integration techniques are used for transient analysis.
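
The value of a stiff-systems capability can be seen on a toy linear "circuit" with widely separated time constants. A minimal numpy sketch (illustrative values, not the NASA program itself): explicit Euler diverges on the fast mode at a practical step size, while backward Euler, a standard stiff-capable scheme, stays stable.

```python
import numpy as np

# A stiff two-mode linear system: slow mode (tau = 1 s) and
# fast mode (tau = 1 microsecond).
A = np.diag([-1.0, -1.0e6])
y0 = np.array([1.0, 1.0])
dt = 1.0e-3                     # step far larger than the fast time constant

# Explicit Euler blows up on the fast mode at this step size...
y = y0.copy()
for _ in range(20):
    y = y + dt * (A @ y)
explicit_fast = abs(y[1])

# ...while backward (implicit) Euler stays stable:
# solve (I - dt*A) y_new = y_old at each step.
y = y0.copy()
I = np.eye(2)
for _ in range(1000):
    y = np.linalg.solve(I - dt * A, y)

print(explicit_fast)            # astronomically large: explicit diverged
print(y)                        # implicit: smooth decay of both modes
```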

  10. Combined statistical analyses for long-term stability data with multiple storage conditions: a simulation study.

    PubMed

    Almalik, Osama; Nijhuis, Michiel B; van den Heuvel, Edwin R

    2014-01-01

    Shelf-life estimation usually requires that at least three registration batches are tested for stability at multiple storage conditions. The shelf-life estimates are often obtained by linear regression analysis per storage condition, an approach implicitly suggested by ICH guideline Q1E. A linear regression analysis combining all data from multiple storage conditions was recently proposed in the literature when variances are homogeneous across storage conditions. The combined analysis is expected to perform better than the separate analysis per storage condition, since pooling data would lead to an improved estimate of the variation and higher numbers of degrees of freedom, but this is not evident for shelf-life estimation. Indeed, the two approaches treat the observed initial batch results, the intercepts in the model, and poolability of batches differently, which may eliminate or reduce the expected advantage of the combined approach with respect to the separate approach. Therefore, a simulation study was performed to compare the distribution of simulated shelf-life estimates on several characteristics between the two approaches and to quantify the difference in shelf-life estimates. In general, the combined statistical analysis estimates the true shelf life more consistently and precisely than the analysis per storage condition, but it does not outperform the separate analysis in all circumstances.
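
The separate-versus-combined distinction can be made concrete with a small numpy sketch on synthetic stability data (illustrative values, not the paper's simulation protocol): with a block design, the combined fit reproduces the per-condition slopes exactly, but pools the residual variance over twice the degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical assay (%) vs time (months) at two storage conditions,
# linear degradation with condition-specific slopes.
t = np.array([0., 3., 6., 9., 12., 18.])
y25 = 100.0 - 0.20 * t + rng.normal(0, 0.3, t.size)   # 25 degC
y30 = 100.0 - 0.45 * t + rng.normal(0, 0.3, t.size)   # 30 degC

def fit(ts, ys):
    A = np.column_stack([ts, np.ones_like(ts)])
    beta, res, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return beta, res[0]

# Separate analysis per storage condition: n - 2 dof each.
(b25, r25), (b30, r30) = fit(t, y25), fit(t, y30)
s2_sep = (r25 / (t.size - 2), r30 / (t.size - 2))

# Combined analysis: one model with condition-specific slopes and
# intercepts but a single pooled error variance (2n - 4 dof).
A = np.zeros((2 * t.size, 4))
A[:t.size, 0], A[:t.size, 1] = t, 1.0
A[t.size:, 2], A[t.size:, 3] = t, 1.0
yy = np.concatenate([y25, y30])
beta, res, *_ = np.linalg.lstsq(A, yy, rcond=None)
s2_pool = res[0] / (2 * t.size - 4)

print(b25[0], beta[0])   # identical slope estimates
print(s2_sep, s2_pool)   # pooled variance uses twice the dof
```

The point estimates coincide because the combined design matrix is block-diagonal; the gain from pooling shows up only in the variance estimate and its degrees of freedom, which is why the advantage for shelf-life estimation is not automatic.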

  11. Questionable Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.

    ERIC Educational Resources Information Center

    Gleason, John M.

    1993-01-01

    This response to an earlier article on a combined log-linear/MDS model for mapping journals by citation analysis discusses the underlying assumptions of the Poisson model with respect to characteristics of the citation process. The importance of empirical data analysis is also addressed. (nine references) (LRW)

  12. Combined slope ratio analysis and linear-subtraction: An extension of the Pearce ratio method

    NASA Astrophysics Data System (ADS)

    De Waal, Sybrand A.

    1996-07-01

    A new technique, called combined slope ratio analysis, has been developed by extending the Pearce element ratio or conserved-denominator method (Pearce, 1968) to its logical conclusions. If two stoichiometric substances are mixed and certain chemical components are uniquely contained in either one of the two mixing substances, then by treating these unique components as conserved, the composition of the substance not containing the relevant component can be accurately calculated within the limits allowed by analytical and geological error. The calculated composition can then be subjected to rigorous statistical testing using the linear-subtraction method recently advanced by Woronow (1994). Application of combined slope ratio analysis to the rocks of the Uwekahuna Laccolith, Hawaii, USA, and the lavas of the 1959 summit eruption of Kilauea Volcano, Hawaii, USA, yields results that are consistent with field observations.

  13. Linear and Order Statistics Combiners for Pattern Classification

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep; Lau, Sonie (Technical Monitor)

    2001-01-01

    Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that to a first order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the 'added' error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum and in general the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
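
The 1/N variance reduction for averaged unbiased classifiers is easy to check numerically. A toy Monte Carlo, with each classifier's decision boundary reduced to a scalar offset from the Bayes optimum:

```python
import numpy as np

rng = np.random.default_rng(3)

# Each "classifier" places its boundary at the Bayes optimum (0.0) plus
# independent zero-mean noise; averaging N of them shrinks the boundary
# variance, and hence the added error, by a factor of about N.
N, trials = 10, 20000
single = rng.normal(0.0, 1.0, size=(trials, N))
averaged = single.mean(axis=1)

var_single = single[:, 0].var()
var_avg = averaged.var()
print(var_single / var_avg)   # close to N = 10
```

With correlated or biased classifiers the factor degrades, which is exactly the regime the chapter's further expressions quantify.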

  14. A computational study on convolutional feature combination strategies for grade classification in colon cancer using fluorescence microscopy data

    NASA Astrophysics Data System (ADS)

    Chowdhury, Aritra; Sevinsky, Christopher J.; Santamaria-Pang, Alberto; Yener, Bülent

    2017-03-01

    The cancer diagnostic workflow is typically performed by highly specialized and trained pathologists, whose analysis is expensive in terms of both time and money. This work focuses on grade classification in colon cancer. The analysis is performed over 3 protein markers, namely E-cadherin, beta-actin and collagen IV; in addition, we use a virtual Hematoxylin and Eosin (HE) stain. This study compares various ways of combining the information from the 4 different images of each tissue sample into a coherent, unified response. Pre-trained convolutional neural networks (CNNs) are the method of choice for feature extraction: the AlexNet architecture trained on the ImageNet database is used, and we extract the 4096-dimensional feature vector corresponding to its 6th layer. A linear SVM is used to classify the data. The information from the 4 images pertaining to a particular tissue sample is combined using the following techniques: soft voting, hard voting, multiplication, addition, linear combination, concatenation and multi-channel feature extraction. We observe that combining the feature representations generally yields better results than using any single one. We use 5-fold cross-validation to perform the experiments. The best results are obtained when the various features are linearly combined, resulting in a mean accuracy of 91.27%.

  15. Comparative study on fast classification of brick samples by combination of principal component analysis and linear discriminant analysis using stand-off and table-top laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Vítková, Gabriela; Prokeš, Lubomír; Novotný, Karel; Pořízka, Pavel; Novotný, Jan; Všianský, Dalibor; Čelko, Ladislav; Kaiser, Jozef

    2014-11-01

    From a historical perspective, during archeological excavations or restoration work on buildings or other structures built from bricks, it is important to determine the locality of the bricks' origin, preferably in situ and in real time. Fast classification of bricks on the basis of laser-induced breakdown spectroscopy (LIBS) spectra is possible using multivariate statistical methods; a combination of principal component analysis (PCA) and linear discriminant analysis (LDA) was applied in this case. LIBS was used to classify 29 brick samples from 7 different localities. A comparative study using two different LIBS setups, stand-off and table-top, shows that stand-off LIBS has great potential for in-field archeological measurements.
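
A PCA-then-LDA classifier of the kind described can be sketched in a few lines of numpy. The "spectra" below are toy Gaussian data with class-dependent means, not LIBS measurements:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for LIBS spectra: 3 "localities", 30 samples each,
# 50 spectral channels with class-dependent mean intensities.
n_cls, per, dim = 3, 30, 50
class_means = rng.normal(0.0, 3.0, (n_cls, dim))
X = np.vstack([rng.normal(class_means[k], 1.0, (per, dim)) for k in range(n_cls)])
y = np.repeat(np.arange(n_cls), per)

# PCA: project the centered spectra onto the top 10 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:10].T

# LDA in PCA space: maximize between-class over within-class scatter.
mu = Z.mean(axis=0)
Sw = sum((Z[y == k] - Z[y == k].mean(0)).T @ (Z[y == k] - Z[y == k].mean(0))
         for k in range(n_cls))
Sb = sum(per * np.outer(Z[y == k].mean(0) - mu, Z[y == k].mean(0) - mu)
         for k in range(n_cls))
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
order = np.argsort(np.real(evals))[::-1][:n_cls - 1]
W = np.real(evecs[:, order])

# Classify by the nearest class centroid in the 2-D discriminant space.
L = Z @ W
cents = np.array([L[y == k].mean(axis=0) for k in range(n_cls)])
pred = np.argmin(np.linalg.norm(L[:, None] - cents[None], axis=2), axis=1)
accuracy = float((pred == y).mean())
print(accuracy)
```

The PCA step is what makes LDA workable here: it removes directions with negligible variance so that the within-class scatter matrix is well conditioned before inversion.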

  16. Proper orthogonal decomposition-based spectral higher-order stochastic estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baars, Woutijn J., E-mail: wbaars@unimelb.edu.au; Tinney, Charles E.

    A unique routine, capable of identifying both linear and higher-order coherence in multiple-input/output systems, is presented. The technique combines two well-established methods: Proper Orthogonal Decomposition (POD) and Higher-Order Spectra Analysis. The latter of these is based on known methods for characterizing nonlinear systems by way of Volterra series. In that, both linear and higher-order kernels are formed to quantify the spectral (nonlinear) transfer of energy between the system's input and output. This reduces essentially to spectral Linear Stochastic Estimation when only first-order terms are considered, and is therefore presented in the context of stochastic estimation as spectral Higher-Order Stochastic Estimation (HOSE). The trade-off to seeking higher-order transfer kernels is that the increased complexity restricts the analysis to single-input/output systems. Low-dimensional (POD-based) analysis techniques are inserted to alleviate this void as POD coefficients represent the dynamics of the spatial structures (modes) of a multi-degree-of-freedom system. The mathematical framework behind this POD-based HOSE method is first described. The method is then tested in the context of jet aeroacoustics by modeling acoustically efficient large-scale instabilities as combinations of wave packets. The growth, saturation, and decay of these spatially convecting wave packets are shown to couple both linearly and nonlinearly in the near-field to produce waveforms that propagate acoustically to the far-field for different frequency combinations.

  17. A system for aerodynamic design and analysis of supersonic aircraft. Part 4: Test cases

    NASA Technical Reports Server (NTRS)

    Middleton, W. D.; Lundry, J. L.

    1980-01-01

    An integrated system of computer programs was developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. Representative test cases and associated program output are presented.

  18. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface.

    PubMed

    Fisz, Jacek J

    2006-12-07

    The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and accelerates the optimization process considerably because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of the kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to chi(2), obtained from the Taylor series expansion of chi(2), is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones, and it does not apply to model functions which are multi-linear combinations of nonlinear functions.
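
The separation of linear and nonlinear parameters that GA-MLR exploits can be illustrated on a biexponential decay. Here a crude random search stands in for the genetic algorithm; for each trial pair of lifetimes, the amplitudes follow from linear least squares, so only the lifetimes need a global search:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic biexponential decay: amplitudes are the linear parameters,
# lifetimes the nonlinear ones (all values are made up).
t = np.linspace(0, 10, 200)
y = 2.0 * np.exp(-t / 0.8) + 0.5 * np.exp(-t / 4.0) + rng.normal(0, 0.005, t.size)

def chi2(taus):
    # "MLR" step: for fixed lifetimes, the amplitudes are obtained by
    # linear least squares rather than being fitted parameters.
    B = np.exp(-t[:, None] / np.asarray(taus)[None, :])
    amps, *_ = np.linalg.lstsq(B, y, rcond=None)
    return np.sum((y - B @ amps) ** 2), amps

# Crude random search over the lifetimes (GA stand-in).
best = (np.inf, None, None)
for _ in range(3000):
    taus = np.sort(rng.uniform(0.1, 8.0, 2))
    c2, amps = chi2(taus)
    if c2 < best[0]:
        best = (c2, taus, amps)

c2, taus, amps = best
print(taus, amps)   # roughly (0.8, 4.0) and (2.0, 0.5)
```

Because the amplitudes are eliminated analytically at each step, the search space is halved, which is the essential advantage the abstract attributes to GA-MLR.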

  19. User's manual for interfacing a leading edge, vortex rollup program with two linear panel methods

    NASA Technical Reports Server (NTRS)

    Desilva, B. M. E.; Medan, R. T.

    1979-01-01

    Sufficient instructions are provided for interfacing the Mangler-Smith leading-edge vortex rollup program with a vortex lattice (POTFAN) method and an advanced higher-order singularity linear analysis for computing the vortex effects for simple canard-wing combinations.

  20. Numerical analysis method for linear induction machines.

    NASA Technical Reports Server (NTRS)

    Elliott, D. G.

    1972-01-01

    A numerical analysis method has been developed for linear induction machines such as liquid metal MHD pumps and generators and linear motors. Arbitrary phase currents or voltages can be specified and the moving conductor can have arbitrary velocity and conductivity variations from point to point. The moving conductor is divided into a mesh and coefficients are calculated for the voltage induced at each mesh point by unit current at every other mesh point. Combining the coefficients with the mesh resistances yields a set of simultaneous equations which are solved for the unknown currents.
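
The final step described, combining the induced-voltage coefficients with the mesh resistances into simultaneous equations for the unknown currents, is ordinary linear algebra. A minimal numpy sketch with made-up coefficients:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical mesh formulation: M[i, j] is the voltage induced at mesh
# point i by unit current at mesh point j (values are illustrative).
n = 8
M = rng.normal(0, 0.05, (n, n))           # induced-voltage coefficients
R = np.diag(rng.uniform(1.0, 2.0, n))     # mesh resistances
v_applied = np.ones(n)                    # driving voltage at each mesh point

# Combining the coefficients with the resistances yields the simultaneous
# equations (R + M) i = v, solved for the unknown mesh currents.
currents = np.linalg.solve(R + M, v_applied)
residual = np.max(np.abs((R + M) @ currents - v_applied))
print(currents[:3], residual)
```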

  1. Carbide-derived carbon (CDC) linear actuator properties in combination with conducting polymers

    NASA Astrophysics Data System (ADS)

    Kiefer, Rudolf; Aydemir, Nihan; Torop, Janno; Kilmartin, Paul A.; Tamm, Tarmo; Kaasik, Friedrich; Kesküla, Arko; Travas-Sejdic, Jadranka; Aabloo, Alvo

    2014-03-01

    Carbide-derived carbon (CDC) materials are used in supercapacitors due to their nanoporous structure and high charging/discharging capability. In this work we report, for the first time, CDC linear actuators and CDC combined with polypyrrole (CDC-PPy) actuators, characterized by electrochemomechanical deformation (ECMD) under isotonic (constant force) and isometric (constant length) measurements in aqueous electrolyte. CDC-PPy actuators show nearly double the strain of CDC linear actuators under cyclic voltammetric and square-wave potential measurements. The new material is investigated by scanning electron microscopy (SEM) and energy-dispersive X-ray analysis (EDX) to reveal how the conducting polymer layer and the CDC layer interact.

  2. Design, Optimization and Evaluation of Integrally Stiffened Al 7050 Panel with Curved Stiffeners

    NASA Technical Reports Server (NTRS)

    Slemp, Wesley C. H.; Bird, R. Keith; Kapania, Rakesh K.; Havens, David; Norris, Ashley; Olliffe, Robert

    2011-01-01

    A curvilinear stiffened panel was designed, manufactured, and tested in the Combined Load Test Fixture at NASA Langley Research Center. The panel was optimized for minimum mass subjected to constraints on buckling load, yielding, and crippling or local stiffener failure using a new analysis tool named EBF3PanelOpt. The panel was designed for a combined compression-shear loading configuration that is a realistic load case for a typical aircraft wing panel. The panel was loaded beyond buckling and strains and out-of-plane displacements were measured. The experimental data were compared with the strains and out-of-plane deflections from a high fidelity nonlinear finite element analysis and linear elastic finite element analysis of the panel/test-fixture assembly. The numerical results indicated that the panel buckled at the linearly elastic buckling eigenvalue predicted for the panel/test-fixture assembly. The experimental strains prior to buckling compared well with both the linear and nonlinear finite element model.

  3. Combined analysis of magnetic and gravity anomalies using normalized source strength (NSS)

    NASA Astrophysics Data System (ADS)

    Li, L.; Wu, Y.

    2017-12-01

    Gravity and magnetic fields are potential fields, which leads to inherent non-uniqueness. Combined analysis of magnetic and gravity anomalies based on Poisson's relation is used to identify homologous gravity and magnetic anomalies and reduce this ambiguity. The traditional combined analysis uses linear regression of the reduction-to-pole (RTP) magnetic anomaly against the first-order vertical derivative of the gravity anomaly, and provides a quantitative or semi-quantitative interpretation by calculating the correlation coefficient, slope and intercept. In this calculation, due to the effect of remanent magnetization, the RTP anomaly still contains the effect of oblique magnetization; in that case homologous gravity and magnetic anomalies appear uncorrelated in the linear regression. The normalized source strength (NSS), which can be computed from the magnetic tensor matrix, is insensitive to remanence. Here we present a new combined analysis using the NSS. Based on Poisson's relation, the gravity tensor matrix can be transformed into the pseudomagnetic tensor matrix for magnetization along the geomagnetic field direction under the homologous condition. The NSS of the pseudomagnetic tensor matrix and of the original magnetic tensor matrix are calculated and linear regression analysis is carried out. The calculated correlation coefficient, slope and intercept indicate the homology level, the Poisson's ratio and the distribution of remanence, respectively. We test the approach using a synthetic model under complex magnetization; the results show that it can still distinguish a common source under strong remanence and establish the Poisson's ratio. Finally, the approach is applied to data from China, and the results demonstrate that it is feasible.

  4. Multi-objective experimental design for (13)C-based metabolic flux analysis.

    PubMed

    Bouvin, Jeroen; Cajot, Simon; D'Huys, Pieter-Jan; Ampofo-Asiama, Jerry; Anné, Jozef; Van Impe, Jan; Geeraerd, Annemie; Bernaerts, Kristel

    2015-10-01

    (13)C-based metabolic flux analysis is an excellent technique to resolve fluxes in the central carbon metabolism, but costs can be significant when using specialized tracers. This work presents a framework for cost-effective design of (13)C-tracer experiments, illustrated on two different networks. Linear and non-linear optimal input mixtures are computed for networks of Streptomyces lividans and a carcinoma cell line. If only glucose tracers are considered as labeled substrate for a carcinoma cell line or S. lividans, the best parameter estimation accuracy is obtained by mixtures containing high amounts of 1,2-(13)C2 glucose combined with uniformly labeled glucose. Experimental designs are evaluated based on a linear (D-criterion) and a non-linear approach (S-criterion). Both approaches generate almost the same input mixture; however, the linear approach is favored due to its low computational effort. The high amount of 1,2-(13)C2 glucose in the optimal designs coincides with a high experimental cost, which is further increased when labeling is introduced in glutamine and aspartate tracers. Multi-objective optimization makes it possible to assess experimental quality and cost at the same time and can reveal excellent compromise experiments. For example, the combination of 100% 1,2-(13)C2 glucose with 100% position-one-labeled glutamine and the combination of 100% 1,2-(13)C2 glucose with 100% uniformly labeled glutamine perform equally well for the carcinoma cell line, but the first mixture offers a decrease in cost of $120 per ml-scale cell culture experiment. We demonstrated the validity of a multi-objective linear approach to perform optimal experimental designs for the non-linear problem of (13)C-metabolic flux analysis. Tools and a workflow are provided to perform multi-objective design. The effortless calculation of the D-criterion can be exploited to perform high-throughput screening of possible (13)C-tracers, while the illustrated benefit of multi-objective design should stimulate its application within the field of (13)C-based metabolic flux analysis.
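
D-criterion screening of candidate tracer mixtures can be sketched as maximizing the log-determinant of the Fisher information over a one-parameter family of designs. The sensitivity matrices below are random placeholders, not a real (13)C labeling model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical sensitivities: rows are measured labeling states, columns
# are the fluxes to estimate; a candidate mixture fraction m blends two
# pure-tracer sensitivity matrices linearly (an assumed toy model).
S1 = rng.normal(0, 1, (12, 4))
S2 = rng.normal(0, 1, (12, 4))

def d_criterion(m):
    S = m * S1 + (1 - m) * S2
    sign, logdet = np.linalg.slogdet(S.T @ S)
    return logdet   # larger means better-determined flux estimates

# Cheap high-throughput screening over candidate mixtures.
mixtures = np.linspace(0, 1, 101)
scores = np.array([d_criterion(m) for m in mixtures])
best_m = mixtures[scores.argmax()]
print(best_m, scores.max())
```

Because each evaluation is just a small determinant, thousands of candidate tracers can be screened this way before any non-linear (S-criterion) refinement or cost trade-off is considered.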

  5. Analysis of a Linear System for Variable-Thrust Control in the Terminal Phase of Rendezvous

    NASA Technical Reports Server (NTRS)

    Hord, Richard A.; Durling, Barbara J.

    1961-01-01

    A linear system for applying thrust to a ferry vehicle in the terminal phase of rendezvous with a satellite is analyzed. This system requires that the ferry thrust vector per unit mass be variable and equal to a suitable linear combination of the measured position and velocity vectors of the ferry relative to the satellite. The variations of the ferry position, speed, acceleration, and mass ratio are examined for several combinations of the initial conditions and two basic control parameters analogous to the undamped natural frequency and the fraction of critical damping. Upon making a desirable selection of one control parameter and requiring minimum fuel expenditure for given terminal-phase initial conditions, a simplified analysis in one dimension practically fixes the choice of the remaining control parameter. The system can be implemented by an automatic controller or by a pilot.
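
The control law, thrust per unit mass as a linear combination of relative position and velocity, yields a damped second-order system. A one-dimensional simulation with illustrative parameter values (not the paper's):

```python
# Commanded thrust per unit mass: a = -(w**2) * x - 2 * zeta * w * v,
# the analogs of undamped natural frequency (w) and fraction of
# critical damping (zeta). All numbers here are illustrative.
w, zeta = 0.05, 0.7
dt, x, v = 0.5, 1000.0, -5.0   # initial relative position (m), speed (m/s)

for _ in range(4000):          # 2000 s of simulated terminal phase
    a = -(w ** 2) * x - 2.0 * zeta * w * v
    v += a * dt
    x += v * dt

print(abs(x), abs(v))          # both driven toward zero
```

Varying w and zeta trades fuel expenditure against closure time, which is the selection problem the one-dimensional analysis in the paper addresses.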

  6. Employment of CB models for non-linear dynamic analysis

    NASA Technical Reports Server (NTRS)

    Klein, M. R. M.; Deloo, P.; Fournier-Sicre, A.

    1990-01-01

    The non-linear dynamic analysis of large structures is always very time-, effort- and CPU-consuming. Whenever possible, reducing the size of the mathematical model involved is of prime importance to speed up the computational procedures. Such a reduction can be performed for the parts of the structure that behave linearly; most of the time, the classical Guyan reduction process is used. For non-linear dynamic problems where the non-linearity is present at interfaces between different structures, Craig-Bampton models can provide very rich information and allow easy selection of the relevant modes with respect to the phenomenon driving the non-linearity. The paper presents the employment of Craig-Bampton models combined with Newmark direct integration for solving non-linear friction problems appearing at the interface between the Hubble Space Telescope and its solar arrays during in-orbit maneuvers. Theory, implementation in the FEM code ASKA, and practical results are shown.

  7. Displacement analysis of diagnostic ultrasound backscatter: A methodology for characterizing, modeling, and monitoring high intensity focused ultrasound therapy

    PubMed Central

    Speyer, Gavriel; Kaczkowski, Peter J.; Brayman, Andrew A.; Crum, Lawrence A.

    2010-01-01

    Accurate monitoring of high intensity focused ultrasound (HIFU) therapy is critical for widespread clinical use. Pulse-echo diagnostic ultrasound (DU) is known to exhibit temperature sensitivity through relative changes in time-of-flight between two sets of radio frequency (RF) backscatter measurements, one acquired before and one after therapy. These relative displacements, combined with knowledge of the exposure protocol, material properties, heat transfer, and measurement noise statistics, provide a natural framework for estimating the administered heating, and thereby therapy. The proposed method, termed displacement analysis, identifies the relative displacements using linearly independent displacement patterns, or modes, each induced by a particular time-varying heating applied during the exposure interval. These heating modes are themselves linearly independent. This relationship implies that a linear combination of displacement modes aligning the DU measurements is the response to an identical linear combination of heating modes, providing the heating estimate. Furthermore, the accuracy of coefficient estimates in this approximation is determined a priori, characterizing heating, thermal dose, and temperature estimates for any given protocol. Predicted performance is validated using simulations and experiments in alginate gel phantoms. Evidence for a spatially distributed interaction between temperature and time-of-flight changes is presented. PMID:20649206
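
    The mode-fitting step described above amounts to ordinary least squares: find the linear combination of displacement modes that best aligns with the measured displacements, then apply the same coefficients to the corresponding heating modes. The matrices below are synthetic stand-ins, not real RF backscatter data.

```python
import numpy as np

# Sketch of the "displacement analysis" idea: if the columns of D are
# displacement modes, each the response to a matching heating mode (column
# of H), then the least-squares coefficients fitting a measured
# displacement also combine the heating modes into a heating estimate.

rng = np.random.default_rng(0)
n_samples, n_modes = 200, 3
D = rng.standard_normal((n_samples, n_modes))   # displacement modes
H = rng.standard_normal((50, n_modes))          # matching heating modes

c_true = np.array([1.5, -0.5, 2.0])
measured = D @ c_true + 0.01 * rng.standard_normal(n_samples)  # noisy data

c_hat, *_ = np.linalg.lstsq(D, measured, rcond=None)  # fit coefficients
heating_estimate = H @ c_hat                          # same combination
print(np.round(c_hat, 2))   # close to [1.5, -0.5, 2.0]
```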

  8. Expanding the occupational health methodology: A concatenated artificial neural network approach to model the burnout process in Chinese nurses.

    PubMed

    Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming

    2016-01-01

    Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible to model the burnout process, (2) sensitivity analysis is a prolific method to study the relative importance of predictor variables and (3) the relationships among variables involved in the development of burnout and its consequences are to different degrees non-linear. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method to analyse non-linear relationships and in combination with sensitivity analysis superior to linear methods.

  9. A computational system for aerodynamic design and analysis of supersonic aircraft. Part 1: General description and theoretical development

    NASA Technical Reports Server (NTRS)

    Middleton, W. D.; Lundry, J. L.

    1976-01-01

    An integrated system of computer programs was developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. Schematics of the program structure and the individual overlays and subroutines are described.

  10. Aerodynamic design and analysis system for supersonic aircraft. Part 1: General description and theoretical development

    NASA Technical Reports Server (NTRS)

    Middleton, W. D.; Lundry, J. L.

    1975-01-01

    An integrated system of computer programs has been developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. This part presents a general description of the system and describes the theoretical methods used.

  11. Factor Analysis via Components Analysis

    ERIC Educational Resources Information Center

    Bentler, Peter M.; de Leeuw, Jan

    2011-01-01

    When the factor analysis model holds, component loadings are linear combinations of factor loadings, and vice versa. This interrelation permits us to define new optimization criteria and estimation methods for exploratory factor analysis. Although this article is primarily conceptual in nature, an illustrative example and a small simulation show…

  12. Focal activation of primary visual cortex following supra-choroidal electrical stimulation of the retina: Intrinsic signal imaging and linear model analysis.

    PubMed

    Cloherty, Shaun L; Hietanen, Markus A; Suaning, Gregg J; Ibbotson, Michael R

    2010-01-01

    We performed optical intrinsic signal imaging of cat primary visual cortex (Area 17 and 18) while delivering bipolar electrical stimulation to the retina by way of a supra-choroidal electrode array. Using a general linear model (GLM) analysis we identified statistically significant (p < 0.01) activation in a localized region of cortex following supra-threshold electrical stimulation at a single retinal locus. These findings (1) demonstrate that intrinsic signal imaging combined with linear model analysis provides a powerful tool for assessing cortical responses to prosthetic stimulation, and (2) confirm that supra-choroidal electrical stimulation can achieve localized activation of the cortex consistent with focal activation of the retina.

  13. Performance of an Axisymmetric Rocket Based Combined Cycle Engine During Rocket Only Operation Using Linear Regression Analysis

    NASA Technical Reports Server (NTRS)

    Smith, Timothy D.; Steffen, Christopher J., Jr.; Yungster, Shaye; Keller, Dennis J.

    1998-01-01

    The all rocket mode of operation is shown to be a critical factor in the overall performance of a rocket based combined cycle (RBCC) vehicle. An axisymmetric RBCC engine was used to determine specific impulse efficiency values based upon both full flow and gas generator configurations. Design of experiments methodology was used to construct a test matrix and multiple linear regression analysis was used to build parametric models. The main parameters investigated in this study were: rocket chamber pressure, rocket exit area ratio, injected secondary flow, mixer-ejector inlet area, mixer-ejector area ratio, and mixer-ejector length-to-inlet diameter ratio. A perfect gas computational fluid dynamics analysis, using both the Spalart-Allmaras and k-omega turbulence models, was performed with the NPARC code to obtain values of vacuum specific impulse. Results from the multiple linear regression analysis showed that, for both the full flow and gas generator configurations, increasing mixer-ejector area ratio and rocket area ratio increases performance, while increasing mixer-ejector inlet area ratio and mixer-ejector length-to-diameter ratio decreases performance. Increasing injected secondary flow increased performance for the gas generator analysis, but was not statistically significant for the full flow analysis. Chamber pressure was not found to be statistically significant.
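
    The regression step can be sketched as ordinary least squares with t-statistics used to judge which design parameters are statistically significant, mirroring the finding that some terms (such as chamber pressure) drop out. The data and coefficients below are synthetic illustrations, not the study's test matrix.

```python
import numpy as np

# Hedged sketch of multiple linear regression with significance screening:
# fit a response (think: vacuum specific impulse efficiency) against
# several design parameters and compute a t-statistic for each term.

rng = np.random.default_rng(1)
n = 40
X = rng.standard_normal((n, 3))    # three synthetic design parameters
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.standard_normal(n)

A = np.column_stack([np.ones(n), X])            # add intercept column
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
dof = n - A.shape[1]
sigma2 = np.sum((y - A @ beta) ** 2) / dof      # residual variance
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(A.T @ A)))
t_stats = beta / se
print(np.round(t_stats, 1))   # large |t| => statistically significant term
```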

  14. Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Takane, Yoshio

    2004-01-01

    We propose an alternative method to partial least squares for path analysis with components, called generalized structured component analysis. The proposed method replaces factors by exact linear combinations of observed variables. It employs a well-defined least squares criterion to estimate model parameters. As a result, the proposed method…

  15. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinlvas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.

  16. Credibility analysis of risk classes by generalized linear model

    NASA Astrophysics Data System (ADS)

    Erdemir, Ovgucan Karadag; Sucu, Meral

    2016-06-01

    In this paper, the generalized linear model (GLM) and credibility theory, which are frequently used in non-life insurance pricing, are combined for credibility analysis. Using the full credibility standard, the GLM is associated with the limited fluctuation credibility approach. Comparison criteria such as asymptotic variance and credibility probability are used to analyze the credibility of risk classes. An application is performed using one-year claim frequency data from a Turkish insurance company, and the results for credible risk classes are interpreted.
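
    The limited fluctuation (full credibility) standard mentioned above has a simple closed form for Poisson claim frequency: the expected claim count must reach (z/k)², where k is the tolerated relative error and z the normal quantile for the chosen probability. A hedged sketch with illustrative parameter values:

```python
import math

# Limited-fluctuation credibility for Poisson claim counts. k = 0.05 and
# z = 1.96 (95% probability) are common textbook choices, assumed here.

def full_credibility_standard(k=0.05, z=1.96):
    """Expected claim count needed for full credibility."""
    return (z / k) ** 2

def credibility_factor(n_claims, k=0.05, z=1.96):
    """Square-root (partial) credibility factor Z in [0, 1]."""
    return min(1.0, math.sqrt(n_claims / full_credibility_standard(k, z)))

def credibility_premium(observed, prior, n_claims):
    """Credibility-weighted blend of class experience and prior rate."""
    Z = credibility_factor(n_claims)
    return Z * observed + (1.0 - Z) * prior

print(round(full_credibility_standard(), 2))   # 1536.64
print(round(credibility_factor(400), 3))       # 0.51
```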

  17. A computational system for aerodynamic design and analysis of supersonic aircraft. Part 2: User's manual

    NASA Technical Reports Server (NTRS)

    Middleton, W. D.; Lundry, J. L.; Coleman, R. G.

    1976-01-01

    An integrated system of computer programs was developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. This user's manual contains a description of the system, an explanation of its usage, the input definition, and example output.

  18. Application of third molar development and eruption models in estimating dental age in Malay sub-adults.

    PubMed

    Mohd Yusof, Mohd Yusmiaidil Putera; Cauwels, Rita; Deschepper, Ellen; Martens, Luc

    2015-08-01

    The third molar development (TMD) has been widely utilized as one of the radiographic method for dental age estimation. By using the same radiograph of the same individual, third molar eruption (TME) information can be incorporated to the TMD regression model. This study aims to evaluate the performance of dental age estimation in individual method models and the combined model (TMD and TME) based on the classic regressions of multiple linear and principal component analysis. A sample of 705 digital panoramic radiographs of Malay sub-adults aged between 14.1 and 23.8 years was collected. The techniques described by Gleiser and Hunt (modified by Kohler) and Olze were employed to stage the TMD and TME, respectively. The data was divided to develop three respective models based on the two regressions of multiple linear and principal component analysis. The trained models were then validated on the test sample and the accuracy of age prediction was compared between each model. The coefficient of determination (R²) and root mean square error (RMSE) were calculated. In both genders, adjusted R² yielded an increment in the linear regressions of combined model as compared to the individual models. The overall decrease in RMSE was detected in combined model as compared to TMD (0.03-0.06) and TME (0.2-0.8). In principal component regression, low value of adjusted R(2) and high RMSE except in male were exhibited in combined model. Dental age estimation is better predicted using combined model in multiple linear regression models. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  19. Linear Combination Fitting (LCF)-XANES analysis of As speciation in selected mine-impacted materials

    EPA Pesticide Factsheets

    This table provides sample identification labels and classification of sample type (tailings, calcinated, grey slime). For each sample, total arsenic and iron concentrations determined by acid digestion and ICP analysis are provided along with arsenic in-vitro bioaccessibility (As IVBA) values to estimate arsenic risk. Lastly, the table provides linear combination fitting results from synchrotron XANES analysis showing the distribution of arsenic speciation phases present in each sample along with fitting error (R-factor). This dataset is associated with the following publication: Ollson, C., E. Smith, K. Scheckel, A. Betts, and A. Juhasz. Assessment of arsenic speciation and bioaccessibility in mine-impacted materials. JOURNAL OF HAZARDOUS MATERIALS. Elsevier Science Ltd, New York, NY, USA, 313: 130-137, (2016).
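
    Linear combination fitting itself is a least-squares problem: express the sample spectrum as a weighted sum of reference spectra and report the misfit as an R-factor. The Gaussian "spectra" below are purely illustrative stand-ins, not real arsenic references.

```python
import numpy as np

# Toy sketch of linear combination fitting (LCF) for XANES: model an
# unknown spectrum as a weighted sum of reference spectra, then compute
# the R-factor, sum((obs - fit)^2) / sum(obs^2).

energy = np.linspace(0.0, 10.0, 400)
ref = np.stack([np.exp(-0.5 * ((energy - mu) / 0.8) ** 2)
                for mu in (3.0, 5.0, 7.0)], axis=1)   # reference spectra

true_frac = np.array([0.6, 0.3, 0.1])   # speciation fractions
sample = ref @ true_frac                 # noise-free "measured" spectrum

coef, *_ = np.linalg.lstsq(ref, sample, rcond=None)
fit = ref @ coef
r_factor = np.sum((sample - fit) ** 2) / np.sum(sample ** 2)
print(np.round(coef, 3), r_factor)   # fractions ~ [0.6, 0.3, 0.1], R ~ 0
```

A production LCF would also constrain the fractions to be non-negative and sum to one; this sketch shows only the unconstrained fitting step.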

  20. Combining markers with and without the limit of detection

    PubMed Central

    Dong, Ting; Liu, Catherine Chunling; Petricoin, Emanuel F.; Tang, Liansheng Larry

    2014-01-01

    In this paper, we consider the combination of markers with and without the limit of detection (LOD). LOD is often encountered when measuring proteomic markers. Because of the limited detecting ability of an equipment or instrument, it is difficult to measure markers at a relatively low level. Suppose that after some monotonic transformation, the marker values approximately follow multivariate normal distributions. We propose to estimate distribution parameters while taking the LOD into account, and then combine markers using the results from the linear discriminant analysis. Our simulation results show that the ROC curve parameter estimates generated from the proposed method are much closer to the truth than simply using the linear discriminant analysis to combine markers without considering the LOD. In addition, we propose a procedure to select and combine a subset of markers when many candidate markers are available. The procedure based on the correlation among markers is different from a common understanding that a subset of the most accurate markers should be selected for the combination. The simulation studies show that the accuracy of a combined marker can be largely impacted by the correlation of marker measurements. Our methods are applied to a protein pathway dataset to combine proteomic biomarkers to distinguish cancer patients from non-cancer patients. PMID:24132938
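
    For two multivariate normal classes with common covariance, the linear discriminant combination used above reduces to the direction w = Σ⁻¹(μ₁ − μ₀). The sketch below combines two synthetic markers and scores the result with an empirical AUC; it deliberately omits the paper's censored-likelihood handling of the limit of detection.

```python
import numpy as np

# Fisher's linear discriminant as a marker-combination rule, on synthetic
# data (no LOD censoring). The AUC of the combined score is estimated as
# the fraction of correctly ordered diseased/non-diseased pairs.

rng = np.random.default_rng(2)
mu0, mu1 = np.array([0.0, 0.0]), np.array([1.0, 1.5])
cov = np.array([[1.0, 0.3], [0.3, 1.0]])
x0 = rng.multivariate_normal(mu0, cov, 500)   # non-diseased subjects
x1 = rng.multivariate_normal(mu1, cov, 500)   # diseased subjects

# Pooled covariance and discriminant direction w = cov^{-1} (mu1 - mu0)
pooled = 0.5 * (np.cov(x0.T) + np.cov(x1.T))
w = np.linalg.solve(pooled, x1.mean(0) - x0.mean(0))

# Empirical AUC of the combined score
s0, s1 = x0 @ w, x1 @ w
auc = np.mean(s1[:, None] > s0[None, :])
print(round(float(auc), 3))   # well above 0.5 for separated classes
```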

  1. A Closer Look at Charter Schools Using Hierarchical Linear Modeling. NCES 2006-460

    ERIC Educational Resources Information Center

    Braun, Henry; Jenkins, Frank; Grigg, Wendy

    2006-01-01

    Charter schools are a relatively new, but fast-growing, phenomenon in American public education. As such, they merit the attention of all parties interested in the education of the nation's youth. The present report comprises two separate analyses. The first is a "combined analysis" in which hierarchical linear models (HLMs) were…

  2. Exploiting symmetries in the modeling and analysis of tires

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Andersen, Carl M.; Tanner, John A.

    1987-01-01

    A simple and efficient computational strategy for reducing both the size of a tire model and the cost of the analysis of tires in the presence of symmetry-breaking conditions (unsymmetry in the tire material, geometry, or loading) is presented. The strategy is based on approximating the unsymmetric response of the tire with a linear combination of symmetric and antisymmetric global approximation vectors (or modes). Details are presented for the three main elements of the computational strategy, which include: use of special three-field mixed finite-element models, use of operator splitting, and substantial reduction in the number of degrees of freedom. The proposed computational strategy is applied to three quasi-symmetric problems of tires: linear analysis of anisotropic tires through use of semianalytic finite elements; nonlinear analysis of anisotropic tires through use of two-dimensional shell finite elements; and nonlinear analysis of orthotropic tires subjected to unsymmetric loading. Three basic types of symmetry (and their combinations) exhibited by the tire response are identified.
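
    The symmetric/antisymmetric split underlying this strategy can be shown in a few lines: any response sampled on a mirrored grid decomposes exactly into an even part and an odd part. A minimal sketch with a synthetic one-dimensional "response" (the tire strategy, of course, works with global approximation vectors of a finite-element model):

```python
import numpy as np

# Exact split of an unsymmetric response f(x) into symmetric and
# antisymmetric parts: f = fs + fa with fs(-x) = fs(x), fa(-x) = -fa(x).

x = np.linspace(-1.0, 1.0, 201)
f = np.exp(x) + 0.5 * x**2     # an unsymmetric "response"

f_rev = f[::-1]                # samples of f(-x) on the mirrored grid
fs = 0.5 * (f + f_rev)         # symmetric (even) part
fa = 0.5 * (f - f_rev)         # antisymmetric (odd) part

assert np.allclose(fs + fa, f)
assert np.allclose(fs, fs[::-1])     # even symmetry
assert np.allclose(fa, -fa[::-1])    # odd symmetry
print("decomposition verified")
```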

  3. User's manual for GAMNAS: Geometric and Material Nonlinear Analysis of Structures

    NASA Technical Reports Server (NTRS)

    Whitcomb, J. D.; Dattaguru, B.

    1984-01-01

    GAMNAS (Geometric and Material Nonlinear Analysis of Structures) is a two dimensional finite-element stress analysis program. Options include linear, geometric nonlinear, material nonlinear, and combined geometric and material nonlinear analysis. The theory, organization, and use of GAMNAS are described. Required input data and results for several sample problems are included.

  4. Reconstruction of real-space linear matter power spectrum from multipoles of BOSS DR12 results

    NASA Astrophysics Data System (ADS)

    Lee, Seokcheon

    2018-02-01

    Recently, the power spectrum (PS) multipoles using the Baryon Oscillation Spectroscopic Survey (BOSS) Data Release 12 (DR12) sample were analyzed [1]. The model underlying the analysis is the so-called TNS quasi-linear model, and the analysis provides the multipoles up to the hexadecapole [2]. Thus, one might be able to recover the real-space linear matter PS by using combinations of multipoles to investigate the cosmology [3]. We provide the analytic form of the ratio of the quadrupole (hexadecapole) to the monopole moment of the quasi-linear PS, including the Fingers-of-God (FoG) effect, to recover the real-space PS in the linear regime. One expects observed values of the ratios of multipoles to be consistent with those of linear theory at large scales. Thus, we compare the ratios of multipoles of linear theory, including the FoG effect, with the measured values. From these, we recover the linear matter power spectra in real space. The recovered power spectra are consistent with the linear matter power spectra.

  5. A comparative analysis of alternative approaches for quantifying nonlinear dynamics in cardiovascular system.

    PubMed

    Chen, Yun; Yang, Hui

    2013-01-01

    Heart rate variability (HRV) analysis has emerged as an important research topic to evaluate autonomic cardiac function. However, traditional time- and frequency-domain analysis characterizes and quantifies only linear and stationary phenomena. In the present investigation, we made a comparative analysis of three alternative approaches (i.e., wavelet multifractal analysis, Lyapunov exponents and multiscale entropy analysis) for quantifying nonlinear dynamics in heart rate time series. Note that these extracted nonlinear features provide information about nonlinear scaling behaviors and the complexity of cardiac systems. To evaluate the performance, we used 24-hour HRV recordings from 54 healthy subjects and 29 heart failure patients, available in PhysioNet. Three nonlinear methods are evaluated not only individually but also in combination using three classification algorithms, i.e., linear discriminant analysis, quadratic discriminant analysis and k-nearest neighbors. Experimental results show that the three nonlinear methods capture nonlinear dynamics from different perspectives and that the combined feature set achieves the best performance, i.e., sensitivity 97.7% and specificity 91.5%. Collectively, nonlinear HRV features are shown to have the promise to identify the disorders in autonomic cardiovascular function.

  6. Analysis of the Effects of the Commander’s Battle Positioning on Unit Combat Performance

    DTIC Science & Technology

    1991-03-01

    [Garbled extraction; only fragments are recoverable.] From the table of contents: Logistic Regression Analysis; Canonical Correlation Analysis; Discriminant Analysis. From the text: discriminant analysis entails classifying objects into two or more distinct groups, or responses; Dillon defines discriminant analysis as "deriving linear combinations of the" predictor variables to classify an object given its predictor variables; a second objective is, through analysis of the parameters of the discriminant functions, to determine the influential predictors.

  7. Adaptive convex combination approach for the identification of improper quaternion processes.

    PubMed

    Ujang, Bukhari Che; Jahanchahi, Cyrus; Took, Clive Cheong; Mandic, Danilo P

    2014-01-01

    Data-adaptive optimal modeling and identification of real-world vector sensor data is provided by combining the fractional tap-length (FT) approach with model order selection in the quaternion domain. To account rigorously for the generality of such processes, both second-order circular (proper) and noncircular (improper), the proposed approach in this paper combines the FT length optimization with both the strictly linear quaternion least mean square (QLMS) and widely linear QLMS (WL-QLMS). A collaborative approach based on QLMS and WL-QLMS is shown to both identify the type of processes (proper or improper) and to track their optimal parameters in real time. Analysis shows that monitoring the evolution of the convex mixing parameter within the collaborative approach allows us to track the improperness in real time. Further insight into the properties of those algorithms is provided by establishing a relationship between the steady-state error and optimal model order. The approach is supported by simulations on model order selection and identification of both strictly linear and widely linear quaternion-valued systems, such as those routinely used in renewable energy (wind) and human-centered computing (biomechanics).

  8. Evaluation and statistical judgement of neural responses to sinusoidal stimulation in cases with superimposed drift and noise.

    PubMed

    Jastreboff, P W

    1979-06-01

    Time histograms of neural responses evoked by sinuosidal stimulation often contain a slow drifting and an irregular noise which disturb Fourier analysis of these responses. Section 2 of this paper evaluates the extent to which a linear drift influences the Fourier analysis, and develops a combined Fourier and linear regression analysis for detecting and correcting for such a linear drift. Usefulness of this correcting method is demonstrated for the time histograms of actual eye movements and Purkinje cell discharges evoked by sinusoidal rotation of rabbits in the horizontal plane. In Sect. 3, the analysis of variance is adopted for estimating the probability of the random occurrence of the response curve extracted by Fourier analysis from noise. This method proved to be useful for avoiding false judgements as to whether the response curve was meaningful, particularly when the response was small relative to the contaminating noise.
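
    The combined Fourier and linear regression analysis described above can be written as a single least-squares problem: the offset, linear drift, and the sine and cosine components at the known stimulation frequency are estimated jointly, so the drift no longer biases the Fourier terms. The signal below is synthetic, not an actual eye-movement or Purkinje-cell histogram.

```python
import numpy as np

# Joint fit of drift and Fourier components at a known stimulation
# frequency w: model = offset + drift*t + a*sin(w t) + b*cos(w t).

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 500)
w = 2 * np.pi * 0.5                               # stimulation frequency
signal = (2.0 + 0.3 * t                           # offset + linear drift
          + 1.2 * np.sin(w * t) + 0.4 * np.cos(w * t)
          + 0.05 * rng.standard_normal(t.size))   # additive noise

A = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
offset, drift, a_sin, a_cos = coef
print(np.round(coef, 2))   # close to [2.0, 0.3, 1.2, 0.4]
```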

  9. Investigation of Periodic Nuclear Decay Data with Spectral Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Javorsek, D.; Sturrock, P.; Buncher, J.; Fischbach, E.; Gruenwald, T.; Hoft, A.; Horan, T.; Jenkins, J.; Kerford, J.; Lee, R.; Mattes, J.; Morris, D.; Mudry, R.; Newport, J.; Petrelli, M.; Silver, M.; Stewart, C.; Terry, B.; Willenberg, H.

    2009-12-01

    We provide the results from a spectral analysis of nuclear decay experiments displaying unexplained periodic fluctuations. The analyzed data was from 56Mn decay reported by the Children's Nutrition Research Center in Houston, 32Si decay reported by an experiment performed at the Brookhaven National Laboratory, and 226Ra decay reported by an experiment performed at the Physikalisch-Technische-Bundesanstalt in Germany. All three data sets possess the same primary frequency mode consisting of an annual period. Additionally a spectral comparison of the local ambient temperature, atmospheric pressure, relative humidity, Earth-Sun distance, and the plasma speed and latitude of the heliospheric current sheet (HCS) was performed. Following analysis of these six possible causal factors, their reciprocals, and their linear combinations, a possible link between nuclear decay rate fluctuations and the linear combination of the HCS latitude and 1/R motivates searching for a possible mechanism with such properties.

  10. The linear combination of vectors implies the existence of the cross and dot products

    NASA Astrophysics Data System (ADS)

    Pujol, Jose

    2018-07-01

    Given two vectors u and v, their cross product u × v is a vector perpendicular to u and v. The motivation for this property, however, is never addressed. Here we show that the existence of the cross and dot products and the perpendicularity property follow from the concept of linear combination, which does not involve products of vectors. For our proof we consider the plane generated by a linear combination of u and v. When looking for the coefficients in the linear combination required to reach a desired point on the plane, the solution involves the existence of a normal vector n = u × v. Our results have a bearing on the history of vector analysis, as a product similar to the cross product but without the perpendicularity requirement existed at the same time. These competing products originate in the work of two major nineteenth-century mathematicians, W. Hamilton and H. Grassmann. These historical aspects are discussed in some detail here. We also address certain aspects of the teaching of u × v to undergraduate students, which is known to carry some difficulties. This includes the algebraic and geometric definitions of u × v, the rule for the direction of u × v, and the pseudovectorial nature of u × v.
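
    The paper's central observation can be checked numerically: requiring a vector orthogonal to both u and v (the normal of the plane of their linear combinations) forces it to be parallel to u × v. A sketch with arbitrary example vectors:

```python
import numpy as np

# The null space of the 2x3 system [u; v] n = 0 is one-dimensional and
# spanned by u x v; the last right-singular vector of the SVD gives it.

u = np.array([1.0, 2.0, 0.5])
v = np.array([-1.0, 0.5, 2.0])

_, _, vt = np.linalg.svd(np.stack([u, v]))
n = vt[-1]                      # unit vector with n.u = 0 and n.v = 0

cross = np.cross(u, v)
# n and u x v are parallel: the cosine of their angle is +/-1
cos = np.dot(n, cross) / (np.linalg.norm(n) * np.linalg.norm(cross))
print(round(abs(float(cos)), 6))   # 1.0
```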

  11. Arrhythmia recognition and classification using combined linear and nonlinear features of ECG signals.

    PubMed

    Elhaj, Fatin A; Salim, Naomie; Harris, Arief R; Swee, Tan Tian; Ahmed, Taqwa

    2016-04-01

    Arrhythmia is a cardiac condition caused by abnormal electrical activity of the heart, and an electrocardiogram (ECG) is the non-invasive method used to detect arrhythmias or heart abnormalities. Due to the presence of noise, the non-stationary nature of the ECG signal (i.e. the changing morphology of the ECG signal with respect to time) and the irregularity of the heartbeat, physicians face difficulties in the diagnosis of arrhythmias. The computer-aided analysis of ECG results assists physicians in detecting cardiovascular diseases. The development of many existing arrhythmia systems has depended on the findings from linear experiments on ECG data, which achieve high performance on noise-free data. However, nonlinear experiments characterize the ECG signal more effectively, sensing and extracting hidden information in the ECG signal and achieving good performance under noisy conditions. This paper investigates the representation ability of linear and nonlinear features and proposes a combination of such features in order to improve the classification of ECG data. In this study, five types of beat classes of arrhythmia as recommended by the Association for the Advancement of Medical Instrumentation are analyzed: non-ectopic beats (N), supra-ventricular ectopic beats (S), ventricular ectopic beats (V), fusion beats (F) and unclassifiable and paced beats (U). The characterization ability of nonlinear features such as high-order statistics and cumulants and nonlinear feature reduction methods such as independent component analysis are combined with linear features, namely, the principal component analysis of discrete wavelet transform coefficients. The features are tested for their ability to differentiate different classes of data using different classifiers, namely, the support vector machine and neural network methods, with tenfold cross-validation. Our proposed method is able to classify the N, S, V, F and U arrhythmia classes with high accuracy (98.91%) using a combined support vector machine and radial basis function method. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. Multivariate meta-analysis for non-linear and other multi-parameter associations

    PubMed Central

    Gasparrini, A; Armstrong, B; Kenward, M G

    2012-01-01

    In this paper, we formalize the application of multivariate meta-analysis and meta-regression to synthesize estimates of multi-parameter associations obtained from different studies. This modelling approach extends the standard two-stage analysis used to combine results across different sub-groups or populations. The most straightforward application is for the meta-analysis of non-linear relationships, described for example by regression coefficients of splines or other functions, but the methodology easily generalizes to any setting where complex associations are described by multiple correlated parameters. The modelling framework of multivariate meta-analysis is implemented in the package mvmeta within the statistical environment R. As an illustrative example, we propose a two-stage analysis for investigating the non-linear exposure–response relationship between temperature and non-accidental mortality using time-series data from multiple cities. Multivariate meta-analysis represents a useful analytical tool for studying complex associations through a two-stage procedure. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22807043

  13. Analysis of blood pressure signal in patients with different ventricular ejection fraction using linear and non-linear methods.

    PubMed

    Arcentales, Andres; Rivera, Patricio; Caminal, Pere; Voss, Andreas; Bayes-Genis, Antonio; Giraldo, Beatriz F

    2016-08-01

    Changes in left ventricular function produce alternans in the hemodynamic and electric behavior of the cardiovascular system. A total of 49 cardiomyopathy patients were studied based on the blood pressure (BP) signal, and were classified according to the left ventricular ejection fraction (LVEF) into low risk (LR: LVEF > 35%, 17 patients) and high risk (HR: LVEF ≤ 35%, 32 patients) groups. We propose to characterize these patients using linear and nonlinear methods, based on spectral estimation and the recurrence plot, respectively. From the BP signal, we extracted each systolic time interval (STI), upward systolic slope (BPsl), and the difference between systolic and diastolic BP, defined as pulse pressure (PP). Then, the best subset of parameters was obtained through the sequential feature selection (SFS) method. According to the results, the best classification was obtained using a combination of linear and nonlinear features from the STI and PP parameters. For STI, the best combination was obtained considering the frequency peak and the diagonal structures of the recurrence plot, with an area under the curve (AUC) of 79%. The same results were obtained when comparing PP values. Consequently, the use of combined linear and nonlinear parameters could improve the risk stratification of cardiomyopathy patients.
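
    The recurrence plot used for the nonlinear characterization marks pairs of times whose delay-embedded states lie within a threshold ε; diagonal structures in it are then quantified. A minimal sketch with an illustrative sine series and assumed embedding parameters (not the patients' BP data):

```python
import numpy as np

# Boolean recurrence matrix of a delay-embedded scalar series:
# R[i, j] = 1 when the embedded states at times i and j are closer than eps.

def recurrence_plot(x, dim=3, delay=2, eps=0.5):
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return dists < eps

t = np.linspace(0, 8 * np.pi, 400)
rp = recurrence_plot(np.sin(t))
print(rp.shape, rp.diagonal().all())   # (396, 396) True
```

Metrics such as the length distribution of diagonal lines (as used in the study) are then computed from `rp`.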

  14. Combined cumulative sum (CUSUM) and chronological environmental analysis as a tool to improve the learning environment for linear-probe endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) trainees: a pilot study.

    PubMed

    Norisue, Yasuhiro; Tokuda, Yasuharu; Juarez, Mayrol; Uchimido, Ryo; Fujitani, Shigeki; Stoeckel, David A

    2017-02-07

    Cumulative sum (CUSUM) analysis can be used to continuously monitor the performance of an individual or process and detect deviations from a preset or standard level of achievement. However, no previous study has evaluated the utility of CUSUM analysis in facilitating timely environmental assessment and interventions to improve performance of linear-probe endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA). The aim of this study was to evaluate the usefulness of combined CUSUM and chronological environmental analysis as a tool to improve the learning environment for EBUS-TBNA trainees. This study was an observational chart review. To determine if performance was acceptable, CUSUM analysis was used to track procedural outcomes of trainees in EBUS-TBNA. To investigate chronological changes in the learning environment, multivariate logistic regression analysis was used to compare several indices before and after time points when significant changes occurred in proficiency. Presence of an additional attending bronchoscopist was inversely associated with nonproficiency (odds ratio, 0.117; 95% confidence interval, 0-0.749; P = 0.019). Other factors, including presence of an on-site cytopathologist and dose of sedatives used, were not significantly associated with duration of nonproficiency. Combined CUSUM and chronological environmental analysis may be useful in hastening interventions that improve performance of EBUS-TBNA.
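A minimal one-sided CUSUM of the kind used to track trainee outcomes can be sketched as follows; the acceptable failure rate `p0` and decision limit `h` are hypothetical placeholders, not values from the study.

```python
def cusum_chart(failures, p0=0.2, h=2.0):
    """One-sided CUSUM for monitoring procedural failure rates.

    failures: sequence of 0/1 outcomes (1 = failed procedure).
    p0: acceptable failure rate; h: decision limit (both illustrative).
    Returns the CUSUM path and the indices where it signals that
    performance has deviated from the preset standard.
    """
    path, s, signals = [], 0.0, []
    for i, x in enumerate(failures):
        s = max(0.0, s + x - p0)  # accumulate excess failures, reset at zero
        path.append(s)
        if s > h:
            signals.append(i)
    return path, signals
```

Signal indices mark the time points at which an environmental assessment or intervention would be triggered.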

  15. Local Laplacian Coding From Theoretical Analysis of Local Coding Schemes for Locally Linear Classification.

    PubMed

    Pang, Junbiao; Qin, Lei; Zhang, Chunjie; Zhang, Weigang; Huang, Qingming; Yin, Baocai

    2015-12-01

Local coordinate coding (LCC) is a framework for approximating a Lipschitz-smooth function by combining linear functions into a nonlinear one. For locally linear classification, LCC requires a coding scheme that largely determines the nonlinear approximation ability, posing two main challenges: 1) locality, so that faraway anchors have smaller influence on the current datum, and 2) flexibility, balancing the reconstruction of the current datum against that locality. In this paper, we address the problem through a theoretical analysis of the simplest local coding schemes, i.e., local Gaussian coding and local Student coding, and propose local Laplacian coding (LPC) to achieve both locality and flexibility. We apply LPC to locally linear classifiers to solve diverse classification tasks. Performance comparable to or exceeding that of state-of-the-art methods demonstrates the effectiveness of the proposed method.
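The locality/flexibility trade-off between the coding schemes named above can be illustrated with a small sketch (this is a generic illustration of kernel-weighted local coding, not the paper's exact formulation; the bandwidth `sigma` is an assumption):

```python
import numpy as np

def local_coding_weights(x, anchors, sigma=1.0, kind="laplacian"):
    """Normalized coding weights of datum x w.r.t. a set of anchor points.

    Gaussian decay is sharp, so faraway anchors get essentially zero
    weight (strong locality); Laplacian decay is heavier-tailed,
    trading some locality for flexibility in reconstructing x.
    """
    d = np.linalg.norm(anchors - x, axis=1)
    if kind == "gaussian":
        w = np.exp(-(d ** 2) / (2 * sigma ** 2))
    else:  # laplacian
        w = np.exp(-d / sigma)
    return w / w.sum()  # normalize so the weights sum to one
```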

  16. A FORTRAN program for the analysis of linear continuous and sample-data systems

    NASA Technical Reports Server (NTRS)

    Edwards, J. W.

    1976-01-01

A FORTRAN digital computer program which performs the general analysis of linearized control systems is described. State-variable techniques are used to analyze continuous, discrete, and sampled-data systems. Analysis options include the calculation of system eigenvalues, transfer functions, root loci, root contours, frequency responses, power spectra, and transient responses for open- and closed-loop systems. A flexible data input format allows the user to define systems in a variety of representations. Data may be entered by inputting explicit data matrices or matrices constructed in user-written subroutines, by specifying transfer-function block diagrams, or by using a combination of these methods.
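The analyses this program performed (eigenvalues, transfer functions, transient and frequency responses from a state-variable description) map directly onto modern tooling; a minimal sketch with SciPy, using an illustrative second-order system rather than anything from the report:

```python
import numpy as np
from scipy import signal

# illustrative 2nd-order system in state-variable form: x' = Ax + Bu, y = Cx + Du
A = [[0.0, 1.0], [-2.0, -3.0]]
B = [[0.0], [1.0]]
C = [[1.0, 0.0]]
D = [[0.0]]
sys = signal.StateSpace(A, B, C, D)

eigenvalues = np.linalg.eigvals(A)   # system poles: -1 and -2
tf = sys.to_tf()                     # transfer function 1 / (s^2 + 3s + 2)
t, y = signal.step(sys)              # transient (step) response
w, mag, phase = signal.bode(sys)     # frequency response
```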

  17. A formulation of rotor-airframe coupling for design analysis of vibrations of helicopter airframes

    NASA Technical Reports Server (NTRS)

    Kvaternik, R. G.; Walton, W. C., Jr.

    1982-01-01

A linear formulation of rotor-airframe coupling intended for vibration analysis in airframe structural design is presented. The airframe is represented by a finite element analysis model; the rotor is represented by a general set of linear differential equations with periodic coefficients; and the connections between the rotor and airframe are specified through general linear equations of constraint. Coupling equations are applied to the rotor and airframe equations to produce one set of linear differential equations governing vibrations of the combined rotor-airframe system. These equations are solved by the harmonic balance method for the system steady-state vibrations. A feature of the solution process is the representation of the airframe in terms of forced responses calculated at the rotor harmonics of interest. A method based on matrix partitioning is worked out for quick recalculations of vibrations in design studies when only relatively few airframe members are varied. All relations are presented in forms suitable for direct computer implementation.

  18. Identification of Piecewise Linear Uniform Motion Blur

    NASA Astrophysics Data System (ADS)

    Patanukhom, Karn; Nishihara, Akinori

A motion blur identification scheme is proposed for nonlinear uniform motion blurs approximated by piecewise linear models which consist of more than one linear motion component. The proposed scheme includes three modules: a motion-direction estimator, a motion-length estimator and a motion-combination selector. To identify the motion directions, the proposed scheme relies on trial restorations using directional forward ramp motion blurs along different directions and an analysis of directional information in the frequency domain using a Radon transform. Autocorrelation functions of image derivatives along several directions are employed to estimate the motion lengths. A proper motion combination is identified by analyzing local autocorrelation functions of the non-flat components of the trial restoration results. Experimental examples with simulated and real-world blurred images are given to demonstrate the promising performance of the proposed scheme.

  19. Gene Level Meta-Analysis of Quantitative Traits by Functional Linear Models.

    PubMed

    Fan, Ruzong; Wang, Yifan; Boehnke, Michael; Chen, Wei; Li, Yun; Ren, Haobo; Lobach, Iryna; Xiong, Momiao

    2015-08-01

Meta-analysis of genetic data must account for differences among studies, including study designs, markers genotyped, and covariates. The effects of genetic variants may also differ from population to population, i.e., heterogeneity. Thus, meta-analysis combining data from multiple studies is difficult, and novel statistical methods for it are needed. In this article, functional linear models are developed for meta-analyses that connect genetic data to quantitative traits, adjusting for covariates. The models can be used to analyze rare variants, common variants, or a combination of the two. Both likelihood-ratio test (LRT) and F-distributed statistics are introduced to test association between quantitative traits and multiple variants in one genetic region. Extensive simulations are performed to evaluate empirical type I error rates and power performance of the proposed tests. The proposed LRT and F-distributed statistics control the type I error very well and have higher power than the existing methods of the meta-analysis sequence kernel association test (MetaSKAT). We analyze four blood lipid levels in data from a meta-analysis of eight European studies. The proposed methods detect more significant associations than MetaSKAT, and the P-values of the proposed LRT and F-distributed statistics are usually much smaller than those of MetaSKAT. The functional linear models and related test statistics can be useful in whole-genome and whole-exome association studies. Copyright © 2015 by the Genetics Society of America.

  20. Experimental design and statistical analysis for three-drug combination studies.

    PubMed

    Fang, Hong-Bin; Chen, Xuerong; Pei, Xin-Yan; Grant, Steven; Tan, Ming

    2017-06-01

Drug combination is a critically important therapeutic approach for complex diseases such as cancer and HIV, due to its potential for efficacy at lower, less toxic doses and the need to move new therapies rapidly into clinical trials. One of the key issues is to identify which combinations are additive, synergistic, or antagonistic. While the value of multidrug combinations has been well recognized in the cancer research community, to our best knowledge, all existing experimental studies rely on fixing the dose of one drug to reduce the dimensionality, e.g. looking at pairwise two-drug combinations, a suboptimal design. Hence, there is an urgent need to develop experimental design and analysis methods for studying multidrug combinations directly. Because the complexity of the problem increases exponentially with the number of constituent drugs, there has been little progress in the development of methods for the design and analysis of high-dimensional drug combinations. In fact, contrary to common mathematical reasoning, the case of three-drug combinations is fundamentally more difficult than that of two-drug combinations. Finding the doses of the combination, the number of combinations, and the replicates needed to detect departures from additivity depends on the dose-response shapes of the individual constituent drugs. Thus, different classes of drugs with different dose-response shapes need to be treated as separate cases. Our application and case studies develop dose-finding and sample-size methods for detecting departures from additivity with several common (linear and log-linear) classes of single dose-response curves. Furthermore, utilizing the geometric features of the interaction index, we propose a nonparametric model to estimate the interaction index surface by B-spline approximation and derive its asymptotic properties. Utilizing the method, we designed and analyzed a combination study of three anticancer drugs, PD184, HA14-1, and CEP3891, inhibiting the myeloma H929 cell line. To our best knowledge, this is the first-ever three-drug combination study performed based on the original 4D dose-response surface formed by the dose ranges of the three drugs.

  1. Locally linear embedding: dimension reduction of massive protostellar spectra

    NASA Astrophysics Data System (ADS)

    Ward, J. L.; Lumsden, S. L.

    2016-09-01

We present the results of the application of locally linear embedding (LLE) to reduce the dimensionality of dereddened and continuum-subtracted near-infrared spectra, using a combination of models and real spectra of massive protostars selected from the Red MSX Source survey database. A brief comparison is also made with two other dimension reduction techniques, principal component analysis (PCA) and Isomap, using the same set of spectra, as well as with a more advanced form of LLE, Hessian locally linear embedding. We find that whilst LLE certainly has its limitations, it significantly outperforms both PCA and Isomap in the classification of spectra based on the presence/absence of emission lines and provides a valuable tool for classification and analysis of large spectral data sets.
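The LLE-versus-PCA comparison described above is straightforward to reproduce with scikit-learn; the synthetic "spectra" below are an assumption standing in for the survey data (two latent parameters driving 25 channels):

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# synthetic stand-in for continuum-subtracted spectra: 200 "spectra" with 25
# channels that depend smoothly on two latent physical parameters
latent = rng.uniform(0.0, 1.0, size=(200, 2))
X = np.hstack([np.sin(2 * np.pi * k * latent[:, :1]) + k * latent[:, 1:]
               for k in range(1, 26)])

lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
emb_lle = lle.fit_transform(X)                   # non-linear dimension reduction
emb_pca = PCA(n_components=2).fit_transform(X)   # linear baseline for comparison
```

Classification quality of the two embeddings would then be compared on the labels of interest (e.g. presence/absence of emission lines).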

  2. Weighted functional linear regression models for gene-based association analysis.

    PubMed

    Belonogova, Nadezhda M; Svishcheva, Gulnara R; Wilson, James F; Campbell, Harry; Axenovich, Tatiana I

    2018-01-01

Functional linear regression models are effectively used in gene-based association analysis of complex traits. These models combine information about individual genetic variants, taking into account their positions and reducing the influence of noise and/or observation errors. To increase the power of methods where several differently informative components are combined, weights are introduced to give the advantage to more informative components. Allele-specific weights have been introduced to collapsing and kernel-based approaches to gene-based association analysis. Here we have for the first time introduced weights to functional linear regression models adapted for both independent and family samples. Using data simulated on the basis of GAW17 genotypes and weights defined by allele frequencies via the beta distribution, we demonstrated that type I errors correspond to declared values and that increasing the weights of causal variants allows the power of functional linear models to be increased. We applied the new method to real data on blood pressure from the ORCADES sample. Five of the six known genes with P < 0.1 in at least one analysis had lower P values with weighted models. Moreover, we found an association between diastolic blood pressure and the VMP1 gene (P = 8.18×10⁻⁶) when we used a weighted functional model. For this gene, the unweighted functional and weighted kernel-based models had P = 0.004 and 0.006, respectively. The new method has been implemented in the program package FREGAT, which is freely available at https://cran.r-project.org/web/packages/FREGAT/index.html.
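Weights "defined by allele frequencies via the beta distribution" are conventionally beta-density weights of the minor allele frequency; a sketch under that assumption (the Beta(1, 25) default is the common choice in kernel-based tests, not necessarily the parameters used in this paper):

```python
from scipy.stats import beta

def variant_weight(maf, a=1.0, b=25.0):
    """Beta-density weight for a variant with minor allele frequency `maf`.

    Rarer variants, presumed more likely to be causal, receive larger
    weight; the weight then multiplies that variant's contribution in
    the functional regression model.
    """
    return beta.pdf(maf, a, b)
```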

  3. An analysis of hypercritical states in elastic and inelastic systems

    NASA Astrophysics Data System (ADS)

    Kowalczk, Maciej

The author raises a wide range of problems whose common characteristic is an analysis of hypercritical states in elastic and inelastic systems. The article consists of two basic parts. The first part primarily discusses problems of modelling hypercritical states, while the second analyzes numerical methods (so-called continuation methods) used to solve non-linear problems. The original approaches for modelling hypercritical states found in this article include the combination of plasticity theory and an energy condition for cracking, accounting for the variability and cyclical nature of the forms of fracture of a brittle material under a die, and the combination of plasticity theory and a simplified description of the phenomenon of localization along a discontinuity line. The author presents analytical solutions of three non-linear problems for systems made of elastic/brittle/plastic and elastic/ideally plastic materials. The author proceeds to discuss the analytical basics of continuation methods and analyzes the significance of the parameterization of non-linear problems, provides a method for selecting control parameters based on an analysis of the rank of a rectangular matrix of a uniform system of increment equations, and also provides a new method for selecting an equilibrium path originating from a bifurcation point. The author provides a general outline of continuation methods based on an analysis of the rank of a matrix of a corrective system of equations. The author supplements his theoretical solutions with numerical solutions of non-linear problems for rod systems and problems of the plastic disintegration of a notched rectangular plastic plate.

  4. Finite element procedures for coupled linear analysis of heat transfer, fluid and solid mechanics

    NASA Technical Reports Server (NTRS)

    Sutjahjo, Edhi; Chamis, Christos C.

    1993-01-01

Coupled finite element formulations for fluid mechanics, heat transfer, and solid mechanics are derived from the conservation laws for energy, mass, and momentum. To model the physics of interactions among the participating disciplines, the linearized equations are coupled by combining domain and boundary coupling procedures. An iterative numerical solution strategy is presented to solve the equations, with the partitioning of temporal discretization implemented.

  5. Design and experimental validation of Unilateral Linear Halbach magnet arrays for single-sided magnetic resonance.

    PubMed

    Bashyam, Ashvin; Li, Matthew; Cima, Michael J

    2018-07-01

    Single-sided NMR has the potential for broad utility and has found applications in healthcare, materials analysis, food quality assurance, and the oil and gas industry. These sensors require a remote, strong, uniform magnetic field to perform high sensitivity measurements. We demonstrate a new permanent magnet geometry, the Unilateral Linear Halbach, that combines design principles from "sweet-spot" and linear Halbach magnets to achieve this goal through more efficient use of magnetic flux. We perform sensitivity analysis using numerical simulations to produce a framework for Unilateral Linear Halbach design and assess tradeoffs between design parameters. Additionally, the use of hundreds of small, discrete magnets within the assembly allows for a tunable design, improved robustness to variability in magnetization strength, and increased safety during construction. Experimental validation using a prototype magnet shows close agreement with the simulated magnetic field. The Unilateral Linear Halbach magnet increases the sensitivity, portability, and versatility of single-sided NMR. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Design and experimental validation of Unilateral Linear Halbach magnet arrays for single-sided magnetic resonance

    NASA Astrophysics Data System (ADS)

    Bashyam, Ashvin; Li, Matthew; Cima, Michael J.

    2018-07-01

    Single-sided NMR has the potential for broad utility and has found applications in healthcare, materials analysis, food quality assurance, and the oil and gas industry. These sensors require a remote, strong, uniform magnetic field to perform high sensitivity measurements. We demonstrate a new permanent magnet geometry, the Unilateral Linear Halbach, that combines design principles from "sweet-spot" and linear Halbach magnets to achieve this goal through more efficient use of magnetic flux. We perform sensitivity analysis using numerical simulations to produce a framework for Unilateral Linear Halbach design and assess tradeoffs between design parameters. Additionally, the use of hundreds of small, discrete magnets within the assembly allows for a tunable design, improved robustness to variability in magnetization strength, and increased safety during construction. Experimental validation using a prototype magnet shows close agreement with the simulated magnetic field. The Unilateral Linear Halbach magnet increases the sensitivity, portability, and versatility of single-sided NMR.

  7. Predictive inference for best linear combination of biomarkers subject to limits of detection.

    PubMed

    Coolen-Maturi, Tahani

    2017-08-15

Measuring the accuracy of diagnostic tests is crucial in many application areas including medicine, machine learning and credit scoring. The receiver operating characteristic (ROC) curve is a useful tool to assess the ability of a diagnostic test to discriminate between two classes or groups. In practice, multiple diagnostic tests or biomarkers are combined to improve diagnostic accuracy. Often, biomarker measurements are undetectable either below or above the so-called limits of detection (LoD). In this paper, nonparametric predictive inference (NPI) for best linear combination of two or more biomarkers subject to limits of detection is presented. NPI is a frequentist statistical method that is explicitly aimed at using few modelling assumptions, enabled through the use of lower and upper probabilities to quantify uncertainty. The NPI lower and upper bounds for the ROC curve subject to limits of detection are derived, where the objective function to maximize is the area under the ROC curve. In addition, the paper discusses the effect of restriction on the linear combination's coefficients on the analysis. Examples are provided to illustrate the proposed method. Copyright © 2017 John Wiley & Sons, Ltd.
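The core optimization, choosing a linear combination of two biomarkers to maximize the area under the ROC curve, can be sketched directly (this is a plain empirical-AUC grid search for illustration; it does not implement the NPI lower/upper bounds or the limit-of-detection handling of the paper):

```python
import numpy as np

def empirical_auc(pos, neg):
    """Mann-Whitney estimate of the area under the ROC curve."""
    pos, neg = np.asarray(pos)[:, None], np.asarray(neg)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

def best_linear_combination(x_pos, x_neg, n_angles=181):
    """Grid-search the direction (cos a, sin a) combining two biomarkers
    that maximizes the empirical AUC. Returns (best_auc, weights)."""
    best_auc, best_w = -1.0, None
    for a in np.linspace(0.0, np.pi, n_angles):
        w = np.array([np.cos(a), np.sin(a)])
        auc = empirical_auc(x_pos @ w, x_neg @ w)
        auc = max(auc, 1.0 - auc)  # orientation of the score is arbitrary
        if auc > best_auc:
            best_auc, best_w = auc, w
    return best_auc, best_w
```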

  8. [A novel method of multi-channel feature extraction combining multivariate autoregression and multiple-linear principal component analysis].

    PubMed

    Wang, Jinjia; Zhang, Yanna

    2015-02-01

Brain-computer interface (BCI) systems identify brain signals by extracting features from them. In view of the limitations of the autoregressive-model feature extraction method and of traditional principal component analysis in dealing with multichannel signals, this paper presents a multichannel feature extraction method that combines the multivariate autoregressive (MVAR) model with multiple-linear principal component analysis (MPCA), used for magnetoencephalography (MEG) and electroencephalography (EEG) signal recognition. Firstly, we calculated the MVAR model coefficient matrix of the MEG/EEG signals using this method, and then reduced the dimensionality using MPCA. Finally, we recognized brain signals with a Bayes classifier. The key innovation of our investigation is that we extended the traditional single-channel feature extraction method to the multi-channel case. We then carried out experiments using the data groups IV-III and IV-I. The experimental results proved that the method proposed in this paper is feasible.
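The first stage, fitting MVAR coefficient matrices to a multichannel signal, can be sketched with ordinary least squares; this is a generic MVAR fit for illustration, not the paper's implementation:

```python
import numpy as np

def fit_mvar(X, p):
    """Least-squares fit of a p-th order multivariate AR model to a
    (T, C) multichannel signal; returns coefficient matrices A_1..A_p,
    each (C, C). In a pipeline like the paper's, these coefficients
    form the raw feature tensor that MPCA then reduces."""
    T, C = X.shape
    Y = X[p:]                                                 # one-step targets
    Z = np.hstack([X[p - k:T - k] for k in range(1, p + 1)])  # lagged regressors
    A, *_ = np.linalg.lstsq(Z, Y, rcond=None)                 # shape (p*C, C)
    return A.T.reshape(C, p, C).transpose(1, 0, 2)
```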

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jing, Yaqi; Meng, Qinghao, E-mail: qh-meng@tju.edu.cn; Qi, Peifeng

An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction combining feature selection with feature extraction was proposed. The feature selection method used 8 feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method, and the dimension of the feature space was reduced to 12. Classification of Chinese liquors was performed using back propagation artificial neural network (BP-ANN), linear discriminant analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, which was higher than that of LDA and BP-ANN. Finally, the classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier; the classification rates were 98.75% and 100%, respectively.

  10. On the repeated measures designs and sample sizes for randomized controlled trials.

    PubMed

    Tango, Toshiro

    2016-04-01

For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis-of-covariance-type analysis using a pre-defined pair of "pre-post" data, in which the pre- (baseline) data are used as a covariate for adjustment together with other covariates. The major design issue is then to calculate the sample size, i.e., the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but also on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying likelihood-based ignorable analyses under the missing-at-random assumption and (2) it may lead to a reduction in sample size compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Analysis of Instantaneous Attractive-Normal Force and Vertical Vibration Control of Combined-Levitation-and-Propulsion SLIM Vehicle

    NASA Astrophysics Data System (ADS)

    Yoshida, Takashi

A combined-levitation-and-propulsion single-sided linear induction motor (SLIM) vehicle can be levitated without any additional levitation system. When the vehicle runs, the attractive-normal force varies depending on the phase of the primary current because of the short primary end effect. The ripple of the attractive-normal force causes vertical vibration of the vehicle. In this paper, the instantaneous attractive-normal force is analyzed using the space harmonic analysis method. Based on this analysis, a vertical vibration control is proposed. The validity of the proposed control method is verified by numerical simulation.

  12. Scanning electron microscopy combined with image processing technique: Analysis of microstructure, texture and tenderness in Semitendinous and Gluteus Medius bovine muscles.

    PubMed

    Pieniazek, Facundo; Messina, Valeria

    2016-11-01

In this study the effect of freeze drying on the microstructure, texture, and tenderness of Semitendinous and Gluteus Medius bovine muscles was analyzed applying Scanning Electron Microscopy combined with image analysis. Samples were analyzed by Scanning Electron Microscopy at different magnifications (250, 500, and 1,000×). Texture parameters were measured with a texture analyzer and by image analysis, and tenderness by Warner-Bratzler shear force. Significant differences (p < 0.05) were obtained for image and instrumental texture features. A linear trend with a linear correlation was applied for instrumental and image features. Image texture features calculated from the Gray Level Co-occurrence Matrix (homogeneity, contrast, entropy, correlation and energy) at 1,000× in both muscles had high correlations with instrumental features (chewiness, hardness, cohesiveness, and springiness). Tenderness showed a positive correlation in both muscles with image features (energy and homogeneity). Combining Scanning Electron Microscopy with image analysis can be a useful tool to analyze quality parameters in meat. SCANNING 38:727-734, 2016. © 2016 Wiley Periodicals, Inc.

  13. Data analytics using canonical correlation analysis and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Rickman, Jeffrey M.; Wang, Yan; Rollett, Anthony D.; Harmer, Martin P.; Compson, Charles

    2017-07-01

A canonical correlation analysis is a generic parametric model used in the statistical analysis of data involving interrelated or interdependent input and output variables. It is especially useful in data analytics as a dimensional reduction strategy that simplifies a complex, multidimensional parameter space by identifying a relatively few combinations of variables that are maximally correlated. One shortcoming of the canonical correlation analysis, however, is that it provides only a linear combination of variables that maximizes these correlations. With this in mind, we describe here a versatile, Monte Carlo-based methodology that is useful in identifying non-linear functions of the variables that lead to strong input/output correlations. We demonstrate that our approach leads to a substantial enhancement of correlations, as illustrated by two experimental applications of substantial interest to the materials science community, namely: (1) determining the interdependence of processing and microstructural variables associated with doped polycrystalline aluminas, and (2) relating microstructural descriptors to the electrical and optoelectronic properties of thin-film solar cells based on CuInSe2 absorbers. Finally, we describe how this approach facilitates experimental planning and process control.

  14. TWO-STAGE FRAGMENTATION FOR CLUSTER FORMATION: ANALYTICAL MODEL AND OBSERVATIONAL CONSIDERATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, Nicole D.; Basu, Shantanu, E-mail: nwityk@uwo.ca, E-mail: basu@uwo.ca

    2012-12-10

Linear analysis of the formation of protostellar cores in planar magnetic interstellar clouds shows that molecular clouds exhibit a preferred length scale for collapse that depends on the mass-to-flux ratio and neutral-ion collision time within the cloud. We extend this linear analysis to the context of clustered star formation. By combining the results of the linear analysis with a realistic ionization profile for the cloud, we find that a molecular cloud may evolve through two fragmentation events in the evolution toward the formation of stars. Our model suggests that the initial fragmentation into clumps occurs for a transcritical cloud on parsec scales while the second fragmentation can occur for transcritical and supercritical cores on subparsec scales. Comparison of our results with several star-forming regions (Perseus, Taurus, Pipe Nebula) shows support for a two-stage fragmentation model.

  15. Composite-Material Point-Stress Analysis

    NASA Technical Reports Server (NTRS)

    Spears, F., S.

    1982-01-01

    PSANAL computes composite-laminate elastic and thermal properties and allowable load levels for any combination of applied membrane and bending loads occurring at a point. Basic linear orthotropic stress/ strain relationships and standard composite-laminate theory formulas are utilized.

  16. The Hindmarsh-Rose neuron model: bifurcation analysis and piecewise-linear approximations.

    PubMed

    Storace, Marco; Linaro, Daniele; de Lange, Enno

    2008-09-01

This paper provides a global picture of the bifurcation scenario of the Hindmarsh-Rose model. A combination of simulations and numerical continuation is used to unfold the complex bifurcation structure. The bifurcation analysis is carried out by varying two bifurcation parameters, and evidence is given that the structure that is found is universal and appears for all combinations of bifurcation parameters. The information about the organizing principles and bifurcation diagrams is then used to compare the dynamics of the model with that of a piecewise-linear approximation, customized for circuit implementation. A good match between the dynamical behaviors of the models is found. These results can be used both to design a circuit implementation of the Hindmarsh-Rose model mimicking the diversity of neural responses and as guidelines to predict the behavior of the model, as well as of its circuit implementation, as a function of parameters. (c) 2008 American Institute of Physics.
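The model under study is the standard three-variable Hindmarsh-Rose system, which can be simulated directly; the parameter values and initial conditions below are conventional textbook choices, not the specific bifurcation-parameter settings explored in the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# standard Hindmarsh-Rose parameters; I is the applied current
a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x_rest, I = 0.001, 4.0, -1.6, 2.0

def hindmarsh_rose(t, u):
    x, y, z = u                      # membrane potential, fast and slow variables
    dx = y - a * x**3 + b * x**2 - z + I
    dy = c - d * x**2 - y
    dz = r * (s * (x - x_rest) - z)  # slow adaptation current
    return [dx, dy, dz]

sol = solve_ivp(hindmarsh_rose, (0.0, 1000.0), [-1.6, -10.0, 0.0], max_step=0.1)
```

Sweeping two of these parameters (e.g. `I` and `r`) over a grid and classifying the resulting trajectories is the simulation half of the paper's simulation-plus-continuation approach.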

  17. Algebraic Functions of H-Functions with Specific Dependency Structure.

    DTIC Science & Technology

    1984-05-01

…a study of its characteristic function. Such analysis is reproduced in books by Springer (17), Anderson (23), Feller (34, 35), Mood and Graybill (52…). The following linearity property for expectations of jointly distributed random variables is derived. Theorem 1.1: If X and Y are real random variables… appear in American Journal of Mathematical and Management Science. 13. Mathai, A.M., and R.K. Saxena, "On linear combinations of stochastic variables…"

  18. Combining 1D and 2D linear discriminant analysis for palmprint recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Ji, Hongbing; Wang, Lei; Lin, Lin

    2011-11-01

In this paper, a novel feature extraction method for palmprint recognition, termed Two-dimensional Combined Discriminant Analysis (2DCDA), is proposed. By connecting the adjacent rows of an image sequentially, the obtained new covariance matrices retain useful information about local geometric structures in the image that is discarded by 2DLDA. In this way, 2DCDA combines LDA and 2DLDA for a promising recognition accuracy, while the number of coefficients of its projection matrix is lower than that of other two-dimensional methods. Experimental results on the CASIA palmprint database demonstrate the effectiveness of the proposed method.

  19. Information Processing Capacity of Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Dambre, Joni; Verstraeten, David; Schrauwen, Benjamin; Massar, Serge

    2012-07-01

    Many dynamical systems, both natural and artificial, are stimulated by time-dependent external signals, somehow processing the information contained therein. We demonstrate how to quantify the different modes in which information can be processed by such systems and combine them to define the computational capacity of a dynamical system. This is bounded by the number of linearly independent state variables of the dynamical system, with equality if the system obeys the fading memory condition. It can be interpreted as the total number of linearly independent functions of its stimuli the system can compute. Our theory combines concepts from machine learning (reservoir computing), system modeling, stochastic processes, and functional analysis. We illustrate our theory by numerical simulations for the logistic map, a recurrent neural network, and a two-dimensional reaction diffusion system, uncovering universal trade-offs between the non-linearity of the computation and the system's short-term memory.
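    The capacity measure described above lends itself to a compact numerical sketch. The following is a minimal illustration under assumed settings (a 20-unit tanh reservoir driven by i.i.d. uniform input), not the authors' implementation: it sums the squared correlations of optimal linear readouts that reconstruct delayed copies of the input, a total that the theory bounds by the number of state variables.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, warm = 20, 5000, 100

# Assumed echo-state-style reservoir with fading memory (spectral radius < 1).
W = rng.normal(size=(N, N))
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-0.5, 0.5, size=N)
u = rng.uniform(-1, 1, size=T)          # i.i.d. input signal

x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

# Capacity contribution of each delay function z_k(t) = u(t - k): squared
# correlation between the target and its best linear readout from the state.
S = states[warm:]
cap = 0.0
for k in range(1, 30):
    target = u[warm - k:T - k]
    beta, *_ = np.linalg.lstsq(S, target, rcond=None)
    cap += np.corrcoef(S @ beta, target)[0, 1] ** 2
```

    Summed over a complete set of orthogonal target functions, this total cannot exceed N = 20; the linear-memory portion computed here is only one part of that budget.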

  20. Analytical optimal pulse shapes obtained with the aid of genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guerrero, Rubén D., E-mail: rdguerrerom@unal.edu.co; Arango, Carlos A.; Reyes, Andrés

    2015-09-28

    We propose a methodology to design optimal pulses for achieving quantum optimal control on molecular systems. Our approach constrains pulse shapes to linear combinations of a fixed number of experimentally relevant pulse functions. Quantum optimal control is obtained by maximizing a multi-target fitness function using genetic algorithms. As a first application of the methodology, we generated an optimal pulse that successfully maximized the yield on a selected dissociation channel of a diatomic molecule. Our pulse is obtained as a linear combination of linearly chirped pulse functions. Data recorded along the evolution of the genetic algorithm contained important information regarding the interplay between radiative and diabatic processes. We performed a principal component analysis on these data to retrieve the most relevant processes along the optimal path. Our proposed methodology could be useful for performing quantum optimal control on more complex systems by employing a wider variety of pulse shape functions.
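    The search strategy (coefficients of a fixed pulse basis optimized by a genetic algorithm) can be sketched with a toy objective. Everything below is a stand-in: the Gaussian-envelope chirped basis, the target field, and the least-squares fitness replace the paper's quantum-dynamical yield calculation.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(-1, 1, 200)

# Hypothetical basis of linearly chirped pulses (quadratic phase = linear chirp).
chirp_rates = [-20.0, -10.0, 0.0, 10.0, 20.0]
basis = np.array([np.exp(-4 * t**2) * np.cos(50 * t + b * t**2)
                  for b in chirp_rates])

# Toy fitness: match a target field built from known coefficients.
c_target = np.array([0.1, 0.7, 0.0, 0.2, 0.0])
target = c_target @ basis

def fitness(c):
    return -np.sum((c @ basis - target) ** 2)

# Plain genetic algorithm: elitist selection, uniform crossover, Gaussian mutation.
pop = rng.normal(size=(40, len(chirp_rates)))
for gen in range(200):
    fit = np.array([fitness(c) for c in pop])
    elite = pop[np.argsort(fit)[-10:]]               # 10 fittest survive unchanged
    children = []
    for _ in range(30):
        p1 = elite[rng.integers(10)]
        p2 = elite[rng.integers(10)]
        mask = rng.random(len(chirp_rates)) < 0.5    # uniform crossover
        child = np.where(mask, p1, p2) + rng.normal(scale=0.05,
                                                    size=len(chirp_rates))
        children.append(child)
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(c) for c in pop])]
```

    Because the basis functions overlap, the recovered coefficients need not match `c_target` exactly; only the synthesized field is driven toward the target.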

  1. Information Processing Capacity of Dynamical Systems

    PubMed Central

    Dambre, Joni; Verstraeten, David; Schrauwen, Benjamin; Massar, Serge

    2012-01-01

    Many dynamical systems, both natural and artificial, are stimulated by time dependent external signals, somehow processing the information contained therein. We demonstrate how to quantify the different modes in which information can be processed by such systems and combine them to define the computational capacity of a dynamical system. This is bounded by the number of linearly independent state variables of the dynamical system, equaling it if the system obeys the fading memory condition. It can be interpreted as the total number of linearly independent functions of its stimuli the system can compute. Our theory combines concepts from machine learning (reservoir computing), system modeling, stochastic processes, and functional analysis. We illustrate our theory by numerical simulations for the logistic map, a recurrent neural network, and a two-dimensional reaction diffusion system, uncovering universal trade-offs between the non-linearity of the computation and the system's short-term memory. PMID:22816038

  2. Design, Optimization, and Evaluation of Integrally-Stiffened Al-2139 Panel with Curved Stiffeners

    NASA Technical Reports Server (NTRS)

    Havens, David; Shiyekar, Sandeep; Norris, Ashley; Bird, R. Keith; Kapania, Rakesh K.; Olliffe, Robert

    2011-01-01

    A curvilinear stiffened panel was designed, manufactured, and tested in the Combined Load Test Fixture at NASA Langley Research Center. The panel is representative of a large wing engine pylon rib and was optimized for minimum mass subjected to three combined load cases. The optimization included constraints on web buckling, material yielding, crippling or local stiffener failure, and damage tolerance using a new analysis tool named EBF3PanelOpt. Testing was performed for the critical combined compression-shear loading configuration. The panel was loaded beyond initial buckling, and strains and out-of-plane displacements were extracted from a total of 20 strain gages and 6 linear variable displacement transducers. The VIC-3D system was utilized to obtain full field displacements/strains in the stiffened side of the panel. The experimental data were compared with the strains and out-of-plane deflections from a high fidelity nonlinear finite element analysis. The experimental data were also compared with linear elastic finite element results of the panel/test-fixture assembly. Overall, the panel buckled very near to the predicted load in the web regions.

  3. Study on static and dynamic characteristics of moving magnet linear compressors

    NASA Astrophysics Data System (ADS)

    Chen, N.; Tang, Y. J.; Wu, Y. N.; Chen, X.; Xu, L.

    2007-09-01

    With the development of high-strength NdFeB magnetic material, moving magnet linear compressors have been gradually introduced in the fields of refrigeration and cryogenic engineering, especially in Stirling and pulse tube cryocoolers. This paper presents simulation and experimental investigations on the static and dynamic characteristics of a moving magnet linear motor and a moving magnet linear compressor. Both equivalent magnetic circuits and finite element approaches have been used to model the moving magnet linear motor. Subsequently, the force and equilibrium characteristics of the linear motor have been predicted and verified by detailed static experimental analyses. In combination with a harmonic analysis, experimental investigations were conducted on a prototype of a moving magnet linear compressor. A voltage-stroke relationship, the effect of charging pressure on the performance and dynamic frequency response characteristics are investigated. Finally, the method to identify optimal points of the linear compressor has been described, which is indispensable to the design and operation of moving magnet linear compressors.

  4. Effects of non-tidal atmospheric loading on a Kalman filter-based terrestrial reference frame

    NASA Astrophysics Data System (ADS)

    Abbondanza, C.; Altamimi, Z.; Chin, T. M.; Collilieux, X.; Dach, R.; Heflin, M. B.; Gross, R. S.; König, R.; Lemoine, F. G.; MacMillan, D. S.; Parker, J. W.; van Dam, T. M.; Wu, X.

    2013-12-01

    The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS global networks used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, the effect of non-tidal atmospheric loading (NTAL) corrections on the TRF is assessed adopting a Remove/Restore approach: (i) Focusing on the a-posteriori approach, the NTAL model derived from the National Center for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations. (ii) Adopting a Kalman-filter based approach, a linear TRF is estimated combining the 4 SG solutions free from NTAL displacements. (iii) Linear fits to the NTAL displacements removed at step (i) are restored to the linear reference frame estimated at (ii). The velocity fields of the (standard) linear reference frame in which the NTAL model has not been removed and the one in which the model has been removed/restored are compared and discussed.

  5. Close Combat Missile Methodology Study

    DTIC Science & Technology

    2010-10-14

    Modeling: Industrial Applications of DEX.” Informatica 23 (1999): 487-491. Bohanec, Marko, Blaz Zupan, and Vladislav Rajkovic. “Applications of...Lisec. “Multi-attribute Decision Analysis in GIS: Weighted Linear Combination and Ordered Weighted Averaging.” Informatica 33 (1999): 459-474
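    The weighted linear combination (WLC) rule cited above reduces to a dot product of normalized criterion scores with a weight vector. A minimal sketch with illustrative numbers (the options, criteria, and weights are invented, not taken from the cited studies):

```python
import numpy as np

# Criterion scores for three candidate options (rows) on four attributes
# (columns), already normalized to [0, 1]; values are illustrative only.
scores = np.array([[0.9, 0.4, 0.7, 0.2],
                   [0.5, 0.8, 0.6, 0.9],
                   [0.3, 0.6, 0.9, 0.5]])
weights = np.array([0.4, 0.3, 0.2, 0.1])   # decision-maker weights, sum to 1

wlc = scores @ weights                      # weighted linear combination score
best = int(np.argmax(wlc))                  # option with the highest score
```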

  6. Analytical and numerical analysis of charge carriers extracted by linearly increasing voltage in a metal-insulator-semiconductor structure relevant to bulk heterojunction organic solar cells

    NASA Astrophysics Data System (ADS)

    Yumnam, Nivedita; Hirwa, Hippolyte; Wagner, Veit

    2017-12-01

    Analysis of charge extraction by linearly increasing voltage is conducted on metal-insulator-semiconductor capacitors in a structure relevant to organic solar cells. For this analysis, an analytical model is developed and used to determine the conductivity of the active layer. Numerical simulations of the transient current were performed to confirm the applicability of our analytical model and of other analytical models existing in the literature. Our analysis is applied to poly(3-hexylthiophene) (P3HT) : phenyl-C61-butyric acid methyl ester (PCBM), which allows the electron and hole mobilities to be determined independently. A combination of experimental data analysis and numerical simulations reveals the effect of trap states on the transient current and shows where this contribution is crucial for data analysis.

  7. Development and Integration of an Advanced Stirling Convertor Linear Alternator Model for a Tool Simulating Convertor Performance and Creating Phasor Diagrams

    NASA Technical Reports Server (NTRS)

    Metscher, Jonathan F.; Lewandowski, Edward J.

    2013-01-01

    A simple model of the Advanced Stirling Convertors (ASC) linear alternator and an AC bus controller has been developed and combined with a previously developed thermodynamic model of the convertor for a more complete simulation and analysis of the system performance. The model was developed using Sage, a 1-D thermodynamic modeling program that now includes electro-magnetic components. The convertor, consisting of a free-piston Stirling engine combined with a linear alternator, has sufficiently sinusoidal steady-state behavior to allow for phasor analysis of the forces and voltages acting in the system. A MATLAB graphical user interface (GUI) has been developed to interface with the Sage software for simplified use of the ASC model, calculation of forces, and automated creation of phasor diagrams. The GUI allows the user to vary convertor parameters while fixing different input or output parameters and observe the effect on the phasor diagrams or system performance. The new ASC model and GUI help create a better understanding of the relationship between the electrical component voltages and mechanical forces. This allows better insight into the overall convertor dynamics and performance.

  8. Development of a probabilistic analysis methodology for structural reliability estimation

    NASA Technical Reports Server (NTRS)

    Torng, T. Y.; Wu, Y.-T.

    1991-01-01

    The novel probabilistic analysis method presented for assessing structural reliability combines fast convolution with an efficient structural reliability analysis. After identifying the most important point of a limit state, it establishes a quadratic performance function, transforms the quadratic function into a linear one, and applies fast convolution. The method is applicable to problems requiring computer-intensive structural analysis. Five illustrative examples of the method's application are given.

  9. A modal aeroelastic analysis scheme for turbomachinery blading. M.S. Thesis - Case Western Reserve Univ. Final Report

    NASA Technical Reports Server (NTRS)

    Smith, Todd E.

    1991-01-01

    An aeroelastic analysis is developed which has general application to all types of axial-flow turbomachinery blades. The approach is based on linear modal analysis, where the blade's dynamic response is represented as a linear combination of contributions from each of its in-vacuum free vibrational modes. A compressible linearized unsteady potential theory is used to model the flow over the oscillating blades. The two-dimensional unsteady flow is evaluated along several stacked axisymmetric strips along the span of the airfoil. The unsteady pressures at the blade surface are integrated to result in the generalized force acting on the blade due to simple harmonic motions. The unsteady aerodynamic forces are coupled to the blade normal modes in the frequency domain using modal analysis. An iterative eigenvalue problem is solved to determine the stability of the blade when the unsteady aerodynamic forces are included in the analysis. The approach is demonstrated by applying it to a high-energy subsonic turbine blade from a rocket engine turbopump power turbine. The results indicate that this turbine could undergo flutter in an edgewise mode of vibration.
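    The modal representation used above (dynamic response as a linear combination of free-vibration modes) can be sketched on a toy two-degree-of-freedom system; the mass and stiffness values are illustrative only, not blade properties.

```python
import numpy as np
from scipy.linalg import eigh

# Toy 2-DOF system (illustrative values).
M = np.diag([1.0, 2.0])                          # mass matrix
K = np.array([[300.0, -100.0],
              [-100.0,  100.0]])                 # stiffness matrix

# Free-vibration modes: K phi = w^2 M phi; eigh returns mass-normalized modes.
w2, Phi = eigh(K, M)
freqs = np.sqrt(w2)                              # natural frequencies (rad/s)

# Modal coordinates decouple the system: Phi^T M Phi = I, Phi^T K Phi = diag(w2).
Kmodal = Phi.T @ K @ Phi

# A static load expanded as a linear combination of the modes.
F = np.array([1.0, 0.0])
q = (Phi.T @ F) / w2                             # modal amplitudes
u = Phi @ q                                      # equals the direct solve K u = F
```

    The same expansion, with generalized aerodynamic forces on the right-hand side, underlies the frequency-domain coupling described in the abstract.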

  10. Classical Testing in Functional Linear Models.

    PubMed

    Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab

    2016-01-01

    We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications.
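    The reduction described above (functional covariate to principal component scores to standard linear model) is easy to sketch numerically. Below is a minimal illustration with simulated curves, not the authors' procedure: the F-test on the score regression stands in for the functional test of no association.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, K = 200, 50, 5
t = np.linspace(0, 1, m)

# Simulated functional covariates: random combinations of five sine functions.
basis = np.array([np.sin((k + 1) * np.pi * t) for k in range(5)])
coefs = rng.normal(size=(n, 5))
X = coefs @ basis

# Scalar response with a true association: y_i ~ integral X_i(t) beta(t) dt.
beta_t = np.sin(np.pi * t)
y = X @ beta_t / m + rng.normal(scale=0.1, size=n)

# Functional PCA: principal component scores of the centered curves via SVD.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:K].T

# Standard linear model on the scores; F-test of H0: no association.
Z = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
rss_full = np.sum((y - Z @ coef) ** 2)
rss_null = np.sum((y - y.mean()) ** 2)
F = ((rss_null - rss_full) / K) / (rss_full / (n - K - 1))
```

    With a genuine association and low noise, F is far into the rejection region; under the null it would follow approximately an F(K, n-K-1) distribution.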

  11. Classical Testing in Functional Linear Models

    PubMed Central

    Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab

    2016-01-01

    We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications. PMID:28955155

  12. Discriminant forest classification method and system

    DOEpatents

    Chen, Barry Y.; Hanley, William G.; Lemmond, Tracy D.; Hiller, Lawrence J.; Knapp, David A.; Mugge, Marshall J.

    2012-11-06

    A hybrid machine learning methodology and system for classification that combines classical random forest (RF) methodology with discriminant analysis (DA) techniques to provide enhanced classification capability. A DA technique which uses feature measurements of an object to predict its class membership, such as linear discriminant analysis (LDA) or Andersen-Bahadur linear discriminant technique (AB), is used to split the data at each node in each of its classification trees to train and grow the trees and the forest. When training is finished, a set of n DA-based decision trees of a discriminant forest is produced for use in predicting the classification of new samples of unknown class.

  13. Revision of the Malagasy Camponotus edmondi species group (Hymenoptera, Formicidae, Formicinae): integrating qualitative morphology and multivariate morphometric analysis.

    PubMed

    Rakotonirina, Jean Claude; Csősz, Sándor; Fisher, Brian L

    2016-01-01

    The Malagasy Camponotus edmondi species group is revised based on both qualitative morphological traits and multivariate analysis of continuous morphometric data. To minimize the effect of the scaling properties of diverse traits due to worker caste polymorphism, and to achieve the desired near-linearity of data, morphometric analyses were done only on minor workers. The majority of traits exhibit broken scaling on head size, dividing Camponotus workers into two discrete subcastes, minors and majors. This broken scaling prevents applying algorithms that use linear combinations of the data to the entire dataset; hence only minor workers were analyzed statistically. The elimination of major workers resulted in linearity, and the data met the required assumptions. However, morphometric ratios for the subsets of minor and major workers were used in species descriptions and redefinitions. Prior species hypotheses and the goodness of clusters were tested on raw data by confirmatory linear discriminant analysis. Due to the small sample size available for some species, a factor known to reduce statistical reliability, hypotheses generated by exploratory analyses were tested with extreme care and species delimitations were inferred via the combined evidence of both qualitative (morphology and biology) and quantitative data. Altogether, fifteen species are recognized, of which 11 are new to science: Camponotus alamaina sp. n., Camponotus androy sp. n., Camponotus bevohitra sp. n., Camponotus galoko sp. n., Camponotus matsilo sp. n., Camponotus mifaka sp. n., Camponotus orombe sp. n., Camponotus tafo sp. n., Camponotus tratra sp. n., Camponotus varatra sp. n., and Camponotus zavo sp. n. Four species are redescribed: Camponotus echinoploides Forel, Camponotus edmondi André, Camponotus ethicus Forel, and Camponotus robustus Roger. Camponotus edmondi ernesti Forel, syn. n. is synonymized under Camponotus edmondi.
This revision also includes an identification key to species for both minor and major castes, information on geographic distribution and biology, taxonomic discussions, and descriptions of intraspecific variation. Traditional taxonomy and multivariate morphometric analysis are independent sources of information which, in combination, allow more precise species delimitation. Moreover, quantitative characters included in identification keys improve accuracy of determination in difficult cases.

  14. Revision of the Malagasy Camponotus edmondi species group (Hymenoptera, Formicidae, Formicinae): integrating qualitative morphology and multivariate morphometric analysis

    PubMed Central

    Rakotonirina, Jean Claude; Csősz, Sándor; Fisher, Brian L.

    2016-01-01

    The Malagasy Camponotus edmondi species group is revised based on both qualitative morphological traits and multivariate analysis of continuous morphometric data. To minimize the effect of the scaling properties of diverse traits due to worker caste polymorphism, and to achieve the desired near-linearity of data, morphometric analyses were done only on minor workers. The majority of traits exhibit broken scaling on head size, dividing Camponotus workers into two discrete subcastes, minors and majors. This broken scaling prevents applying algorithms that use linear combinations of the data to the entire dataset; hence only minor workers were analyzed statistically. The elimination of major workers resulted in linearity, and the data met the required assumptions. However, morphometric ratios for the subsets of minor and major workers were used in species descriptions and redefinitions. Prior species hypotheses and the goodness of clusters were tested on raw data by confirmatory linear discriminant analysis. Due to the small sample size available for some species, a factor known to reduce statistical reliability, hypotheses generated by exploratory analyses were tested with extreme care and species delimitations were inferred via the combined evidence of both qualitative (morphology and biology) and quantitative data. Altogether, fifteen species are recognized, of which 11 are new to science: Camponotus alamaina sp. n., Camponotus androy sp. n., Camponotus bevohitra sp. n., Camponotus galoko sp. n., Camponotus matsilo sp. n., Camponotus mifaka sp. n., Camponotus orombe sp. n., Camponotus tafo sp. n., Camponotus tratra sp. n., Camponotus varatra sp. n., and Camponotus zavo sp. n. Four species are redescribed: Camponotus echinoploides Forel, Camponotus edmondi André, Camponotus ethicus Forel, and Camponotus robustus Roger. Camponotus edmondi ernesti Forel, syn. n. is synonymized under Camponotus edmondi. 
This revision also includes an identification key to species for both minor and major castes, information on geographic distribution and biology, taxonomic discussions, and descriptions of intraspecific variation. Traditional taxonomy and multivariate morphometric analysis are independent sources of information which, in combination, allow more precise species delimitation. Moreover, quantitative characters included in identification keys improve accuracy of determination in difficult cases. PMID:28050160

  15. Integrated approach to multimodal media content analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1999-12-01

    In this work, we present a system for the automatic segmentation, indexing and retrieval of audiovisual data based on the combination of audio, visual and textual content analysis. The video stream is demultiplexed into audio, image and caption components. Then, a semantic segmentation of the audio signal based on audio content analysis is conducted, and each segment is indexed as one of the basic audio types. The image sequence is segmented into shots based on visual information analysis, and keyframes are extracted from each shot. Meanwhile, keywords are detected from the closed caption. Index tables are designed for both linear and non-linear access to the video. Experiments show that the proposed methods for multimodal media content analysis are effective and that the integrated framework achieves satisfactory results for video information filtering and retrieval.

  16. Chaos as an intermittently forced linear system.

    PubMed

    Brunton, Steven L; Brunton, Bingni W; Proctor, Joshua L; Kaiser, Eurika; Kutz, J Nathan

    2017-05-30

    Understanding the interplay of order and disorder in chaos is a central challenge in modern quantitative science. Approximate linear representations of nonlinear dynamics have long been sought, driving considerable interest in Koopman theory. We present a universal, data-driven decomposition of chaos as an intermittently forced linear system. This work combines delay embedding and Koopman theory to decompose chaotic dynamics into a linear model in the leading delay coordinates with forcing by low-energy delay coordinates; this is called the Hankel alternative view of Koopman (HAVOK) analysis. This analysis is applied to the Lorenz system and real-world examples including Earth's magnetic field reversal and measles outbreaks. In each case, forcing statistics are non-Gaussian, with long tails corresponding to rare intermittent forcing that precedes switching and bursting phenomena. The forcing activity demarcates coherent phase space regions where the dynamics are approximately linear from those that are strongly nonlinear. The huge amount of data generated in fields like neuroscience or finance calls for effective strategies that mine data to reveal underlying dynamics. Here Brunton et al. develop a data-driven technique to analyze chaotic systems and predict their dynamics in terms of a forced linear model.
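    A stripped-down version of the HAVOK pipeline (delay embedding of a scalar measurement, SVD, linear regression of the delay-coordinate derivatives) can be sketched as follows; the window length, rank, and step size are illustrative choices, not the paper's settings.

```python
import numpy as np

# Integrate the Lorenz system with RK4 (standard parameters).
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, nsteps = 0.01, 20000
s = np.array([1.0, 1.0, 1.0])
xs = np.empty(nsteps)
for i in range(nsteps):
    k1 = lorenz(s)
    k2 = lorenz(s + 0.5 * dt * k1)
    k3 = lorenz(s + 0.5 * dt * k2)
    k4 = lorenz(s + dt * k3)
    s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    xs[i] = s[0]

# Hankel matrix of the scalar measurement; its SVD yields delay coordinates v.
q, r = 100, 15                                     # delay window, retained rank
H = np.array([xs[i:i + q] for i in range(nsteps - q)]).T
_, _, Vt = np.linalg.svd(H, full_matrices=False)
v = Vt[:r].T                                       # delay coordinates over time

# Regress d/dt of the first r-1 coordinates on all r: the last coordinate
# plays the role of the intermittent forcing in the HAVOK model.
dv = np.gradient(v[:, :r - 1], dt, axis=0)
A, *_ = np.linalg.lstsq(v, dv, rcond=None)
resid = dv - v @ A
r2 = 1 - resid.var(axis=0) / dv.var(axis=0)        # fit quality per coordinate
```

    The leading delay coordinates are well described by the linear model; the low-energy coordinates carry the non-Gaussian forcing signal emphasized in the abstract.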

  17. Linear stability analysis of scramjet unstart

    NASA Astrophysics Data System (ADS)

    Jang, Ik; Nichols, Joseph; Moin, Parviz

    2015-11-01

    We investigate the bifurcation structure of unstart and restart events in a dual-mode scramjet using the Reynolds-averaged Navier-Stokes equations. The scramjet of interest (HyShot II, Laurence et al., AIAA2011-2310) operates at a free-stream Mach number of approximately 8, and the length of the combustor chamber is 300mm. A heat-release model is applied to mimic the combustion process. Pseudo-arclength continuation with Newton-Raphson iteration is used to calculate multiple solution branches. Stability analysis based on linearized dynamics about the solution curves reveals a metric that optimally forewarns unstart. By combining direct and adjoint eigenmodes, structural sensitivity analysis suggests strategies for unstart mitigation, including changing the isolator length. This work is supported by DOE/NNSA and AFOSR.
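    Pseudo-arclength continuation itself is generic and can be illustrated on a toy fold, f(x, lam) = lam - x**2, where natural continuation in lam fails at the fold but the arclength formulation passes around it. This is a schematic of the algorithm only, not the RANS-based scramjet computation.

```python
import numpy as np

# Toy fold problem: f(x, lam) = lam - x**2 has a fold at (0, 0).
def f(x, lam):
    return lam - x**2

def jac(x, lam):
    return np.array([-2.0 * x, 1.0])     # [df/dx, df/dlam]

def step(y, t, ds, tol=1e-10):
    """One predictor-corrector step of pseudo-arclength continuation."""
    y_pred = y + ds * t                  # tangent predictor
    y_new = y_pred.copy()
    for _ in range(20):                  # Newton corrector on augmented system
        r = np.array([f(*y_new), t @ (y_new - y_pred)])
        if np.linalg.norm(r) < tol:
            break
        J = np.vstack([jac(*y_new), t])
        y_new = y_new - np.linalg.solve(J, r)
    Jx = jac(*y_new)
    t_new = np.array([Jx[1], -Jx[0]])    # nullspace of the Jacobian row
    t_new /= np.linalg.norm(t_new)
    if t_new @ t < 0:                    # keep orientation along the branch
        t_new = -t_new
    return y_new, t_new

y = np.array([1.0, 1.0])                 # start on the upper branch
t = np.array([-1.0, -2.0]) / np.sqrt(5.0)   # tangent pointing toward the fold
branch = [y]
for _ in range(60):
    y, t = step(y, t, ds=0.1)
    branch.append(y)
branch = np.array(branch)                # passes around the fold onto x < 0
```

    Arclength parameterization keeps the augmented Jacobian nonsingular at the fold, which is exactly where continuation in lam alone breaks down.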

  18. Analytical and numerical study of the electro-osmotic annular flow of viscoelastic fluids.

    PubMed

    Ferrás, L L; Afonso, A M; Alves, M A; Nóbrega, J M; Pinho, F T

    2014-04-15

    In this work we present semi-analytical solutions for the electro-osmotic annular flow of viscoelastic fluids modeled by the Linear and Exponential PTT models. The viscoelastic fluid flows in the axial direction between two concentric cylinders under the combined influences of electrokinetic and pressure forcings. The analysis invokes the Debye-Hückel approximation and includes the limit case of pure electro-osmotic flow. The solution is valid for both no slip and slip velocity at the walls and the chosen slip boundary condition is the linear Navier slip velocity model. The combined effects of fluid rheology, electro-osmotic and pressure gradient forcings on the fluid velocity distribution are also discussed. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Stochastic modeling of macrodispersion in unsaturated heterogeneous porous media. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, T.C.J.

    1995-02-01

    Spatial heterogeneity of geologic media leads to uncertainty in predicting both flow and transport in the vadose zone. In this work an efficient and flexible, combined analytical-numerical Monte Carlo approach is developed for the analysis of steady-state flow and transient transport processes in highly heterogeneous, variably saturated porous media. The approach is also used for the investigation of the validity of linear, first order analytical stochastic models. With the Monte Carlo analysis accurate estimates of the ensemble conductivity, head, velocity, and concentration mean and covariance are obtained; the statistical moments describing displacement of solute plumes, solute breakthrough at a compliance surface, and time of first exceedance of a given solute flux level are analyzed; and the cumulative probability density functions for solute flux across a compliance surface are investigated. The results of the Monte Carlo analysis show that for very heterogeneous flow fields, and particularly in anisotropic soils, the linearized, analytical predictions of soil water tension and soil moisture flux become erroneous. Analytical, linearized Lagrangian transport models also overestimate both the longitudinal and the transverse spreading of the mean solute plume in very heterogeneous soils and in dry soils. A combined analytical-numerical conditional simulation algorithm is also developed to estimate the impact of in-situ soil hydraulic measurements on reducing the uncertainty of concentration and solute flux predictions.

  20. The Efficiency of Split Panel Designs in an Analysis of Variance Model

    PubMed Central

    Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider the efficiency of split panel designs in analysis of variance models, that is, determining the optimal proportion of the cross-section series in the overall sample so as to minimize the variances of best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain a manageable expression for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of the interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to one based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of split panel design given a budget, and transform the problem into a constrained nonlinear integer program. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985 in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  1. Preliminary Analysis of an Oscillating Surge Wave Energy Converter with Controlled Geometry: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom, Nathan; Lawson, Michael; Yu, Yi-Hsiang

    The aim of this paper is to present a novel wave energy converter device concept that is being developed at the National Renewable Energy Laboratory. The proposed concept combines an oscillating surge wave energy converter with active control surfaces. These active control surfaces allow for the device geometry to be altered, which leads to changes in the hydrodynamic properties. The device geometry will be controlled on a sea state time scale and combined with wave-to-wave power-take-off control to maximize power capture, increase capacity factor, and reduce design loads. The paper begins with a traditional linear frequency domain analysis of the device performance. Performance sensitivity to foil pitch angle, the number of activated foils, and foil cross section geometry is presented to illustrate the current design decisions; however, it is understood from previous studies that modeling of current oscillating wave energy converter designs requires the consideration of nonlinear hydrodynamics and viscous drag forces. In response, a nonlinear model is presented that highlights the shortcomings of the linear frequency domain analysis and increases the precision in predicted performance.

  2. Application of Exactly Linearized Error Transport Equations to AIAA CFD Prediction Workshops

    NASA Technical Reports Server (NTRS)

    Derlaga, Joseph M.; Park, Michael A.; Rallabhandi, Sriram

    2017-01-01

    The computational fluid dynamics (CFD) prediction workshops sponsored by the AIAA have created invaluable opportunities in which to discuss the predictive capabilities of CFD in areas in which it has struggled, e.g., cruise drag, high-lift, and sonic boom prediction. While there are many factors that contribute to disagreement between simulated and experimental results, such as modeling or discretization error, quantifying the errors contained in a simulation is important for those who make decisions based on the computational results. The linearized error transport equations (ETE) combined with a truncation error estimate is a method to quantify one source of errors. The ETE are implemented with a complex-step method to provide an exact linearization with minimal source code modifications to CFD and multidisciplinary analysis methods. The equivalency of adjoint and linearized ETE functional error correction is demonstrated. Uniformly refined grids from a series of AIAA prediction workshops demonstrate the utility of ETE for multidisciplinary analysis with a connection between estimated discretization error and (resolved or under-resolved) flow features.
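    The complex-step method referenced above is a general-purpose way to obtain machine-precision derivatives without subtractive cancellation, which is what makes the ETE linearization exact. A self-contained illustration on a scalar function (not the CFD code itself):

```python
import numpy as np

def f(x):
    return np.exp(x) * np.sin(x)

def complex_step(func, x, h=1e-30):
    # Perturb along the imaginary axis: no subtraction of nearly equal
    # numbers, so there is no cancellation error even for tiny h.
    return np.imag(func(x + 1j * h)) / h

x0 = 0.7
exact = np.exp(x0) * (np.sin(x0) + np.cos(x0))   # analytic derivative
cs = complex_step(f, x0)
fd = (f(x0 + 1e-8) - f(x0)) / 1e-8               # forward difference, for contrast
```

    The complex-step result matches the analytic derivative to machine precision, while the forward difference is limited to roughly square-root precision; applying the same perturbation through a code path yields its exact linearization.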

  3. Synchrotron speciation data for zero-valent iron nanoparticles

    EPA Pesticide Factsheets

    This data set encompasses a complete analysis of synchrotron speciation data for 5 iron nanoparticle samples (P1, P2, P3, S1, S2) and metallic iron, to include linear combination fitting results (Table 6 and Figure 9) and ab-initio extended X-ray absorption fine structure spectroscopy fitting (Figure 10 and Table 7). Table 6: Linear combination fitting of the XAS data for the 5 commercial nZVI/ZVI products tested. Species proportions are presented as percentages. Goodness of fit is indicated by the chi^2 value. Figure 9: Normalised Fe K-edge k3-weighted EXAFS of the 5 commercial nZVI/ZVI products tested. Dotted lines show the best 4-component linear combination fit of reference spectra. Figure 10: Fourier transformed radial distribution functions (RDFs) of the five samples and an iron metal foil. The black lines in Fig. 10 represent the sample data and the red dotted curves represent the non-linear fitting results of the EXAFS data. Table 7: Coordination parameters of Fe in the samples. This dataset is associated with the following publication: Chekli, L., B. Bayatsarmadi, R. Sekine, B. Sarkar, A. Maoz Shen, K. Scheckel, W. Skinner, R. Naidu, H. Shon, E. Lombi, and E. Donner. Analytical Characterisation of Nanoscale Zero-Valent Iron: A Methodological Review. ANALYTICA CHIMICA ACTA. Elsevier Science Ltd, New York, NY, USA, 903: 13-35, (2016).
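
    The linear combination fitting reported in Table 6 models a measured spectrum as a weighted sum of reference spectra, with the residual giving the chi^2 goodness of fit. A minimal sketch on synthetic Gaussian "spectra" (all names and numbers below are hypothetical, not taken from the dataset), using plain least squares; production LCF codes additionally constrain the species fractions to be non-negative and to sum to one:

```python
import numpy as np

# Sketch of linear combination fitting (LCF) on synthetic Gaussian "spectra".
energy = np.linspace(7100.0, 7200.0, 200)      # synthetic Fe K-edge energy grid (eV)

def reference(center, width):
    """Stand-in for a normalised reference spectrum."""
    return np.exp(-0.5 * ((energy - center) / width) ** 2)

refs = np.column_stack([reference(7120.0, 5.0), reference(7135.0, 8.0)])
true_fracs = np.array([0.6, 0.4])              # hypothetical species proportions
measured = refs @ true_fracs                   # noise-free synthetic sample spectrum

# Ordinary least squares recovers the mixing fractions; the summed squared
# residual plays the role of the chi^2 fit statistic.
fracs, *_ = np.linalg.lstsq(refs, measured, rcond=None)
chi2 = float(np.sum((measured - refs @ fracs) ** 2))
```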

  4. Liquid contrabands classification based on energy dispersive X-ray diffraction and hybrid discriminant analysis

    NASA Astrophysics Data System (ADS)

    YangDai, Tianyi; Zhang, Li

    2016-02-01

    Energy dispersive X-ray diffraction (EDXRD) combined with hybrid discriminant analysis (HDA) has been utilized for classifying the liquid materials for the first time. The XRD spectra of 37 kinds of liquid contrabands and daily supplies were obtained using an EDXRD test bed facility. The unique spectra of different samples reveal XRD's capability to distinguish liquid contrabands from daily supplies. In order to create a system to detect liquid contrabands, the diffraction spectra were subjected to HDA which is the combination of principal components analysis (PCA) and linear discriminant analysis (LDA). Experiments based on the leave-one-out method demonstrate that HDA is a practical method with higher classification accuracy and lower noise sensitivity than the other methods in this application. The study shows the great capability and potential of the combination of XRD and HDA for liquid contrabands classification.
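
    The HDA pipeline (PCA for dimensionality reduction followed by LDA for class separation) can be sketched as below. The data are synthetic stand-ins for diffraction spectra, not the 37 measured samples, and a two-class Fisher discriminant stands in for the multi-class LDA used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for diffraction spectra: two classes in 10 dimensions
X0 = rng.normal(0.0, 1.0, (30, 10))
X1 = rng.normal(1.5, 1.0, (30, 10))
X = np.vstack([X0, X1])
y = np.array([0] * 30 + [1] * 30)

# PCA step: project the centered data onto the leading principal components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T                               # keep 3 PCs

# LDA step (two-class Fisher discriminant) in the reduced PCA space
m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)  # within-class scatter
w = np.linalg.solve(Sw, m1 - m0)                # discriminant direction
scores = Z @ w
threshold = 0.5 * (scores[y == 0].mean() + scores[y == 1].mean())
accuracy = float(np.mean((scores > threshold).astype(int) == y))
```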

  5. Combination of Thin Lenses--A Computer Oriented Method.

    ERIC Educational Resources Information Center

    Flerackers, E. L. M.; And Others

    1984-01-01

    Suggests a method treating geometric optics using a microcomputer to do the calculations of image formation. Calculations are based on the connection between the composition of lenses and the mathematics of fractional linear equations. Logic of the analysis and an example problem are included. (JM)

  6. Sex differences in the fetal heart rate variability indices of twins.

    PubMed

    Tendais, Iva; Figueiredo, Bárbara; Gonçalves, Hernâni; Bernardes, João; Ayres-de-Campos, Diogo; Montenegro, Nuno

    2015-03-01

    To evaluate the differences in linear and complex heart rate dynamics in twin pairs according to fetal sex combination [male-female (MF), male-male (MM), and female-female (FF)]. Fourteen twin pairs (6 MF, 3 MM, and 5 FF) were monitored between 31 and 36.4 weeks of gestation. Twenty-six fetal heart rate (FHR) recordings of both twins were simultaneously acquired and analyzed with a system for computerized analysis of cardiotocograms. Linear and nonlinear FHR indices were calculated. Overall, MM twins presented higher intrapair average in linear indices than the other pairs, whereas FF twins showed higher sympathetic-vagal balance. MF twins exhibited higher intrapair average in entropy indices and MM twins presented lower entropy values than FF twins considering the (automatically selected) threshold rLu. MM twin pairs showed higher intrapair differences in linear heart rate indices than MF and FF twins, whereas FF twins exhibited lower intrapair differences in entropy indices. The results of this exploratory study suggest that twins have sex-specific differences in linear and nonlinear indices of FHR. MM twins expressed signs of a more active autonomic nervous system and MF twins showed the most active complexity control system. These results suggest that fetal sex combination should be taken into consideration when performing detailed evaluation of the FHR in twins.

  7. Time Series Analysis and Forecasting of Wastewater Inflow into Bandar Tun Razak Sewage Treatment Plant in Selangor, Malaysia

    NASA Astrophysics Data System (ADS)

    Abunama, Taher; Othman, Faridah

    2017-06-01

    Analysing the fluctuations of wastewater inflow rates in sewage treatment plants (STPs) is essential to guarantee sufficient treatment of wastewater before discharging it to the environment. The main objectives of this study are to statistically analyze and forecast the wastewater inflow rates into the Bandar Tun Razak STP in Kuala Lumpur, Malaysia. A time series analysis of three years' weekly influent data (156 weeks) was conducted using the Auto-Regressive Integrated Moving Average (ARIMA) model. Various combinations of ARIMA orders (p, d, q) were tried to select the best-fitting model, which was then used to forecast the wastewater inflow rates. Linear regression analysis was applied to test the correlation between the observed and predicted influents. The ARIMA (3, 1, 3) model was selected for its highest significant R-square and lowest normalized Bayesian Information Criterion (BIC) value, and accordingly the wastewater inflow rates were forecasted for an additional 52 weeks. The linear regression analysis between the observed and predicted values of the wastewater inflow rates showed a positive linear correlation with a coefficient of 0.831.
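
    As a sketch of the model-fitting step, the snippet below fits only the autoregressive part of an ARIMA(3, 1, q) model: the series is differenced once and an AR(3) is estimated by ordinary least squares on synthetic weekly inflows. Full ARIMA estimation, including the MA terms and BIC-based order selection, would normally use a statistics package; the inflow series here is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic weekly inflow series: trend + annual seasonality + noise
# (a stand-in for the 156 weeks of influent data)
t = np.arange(156)
inflow = 1000 + 2.0 * t + 50 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 1.0, 156)

# The "I" in ARIMA(p, 1, q): difference the series once, then fit AR(3) by OLS
d = np.diff(inflow)
p = 3
Xlag = np.column_stack([d[p - 1 - k : len(d) - 1 - k] for k in range(p)])
ylag = d[p:]
A = np.column_stack([np.ones(len(ylag)), Xlag])   # intercept + 3 lags
coef, *_ = np.linalg.lstsq(A, ylag, rcond=None)

# One-step-ahead in-sample predictions, checked by linear correlation
pred = A @ coef
r = float(np.corrcoef(pred, ylag)[0, 1])
```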

  8. Tracing and separating plasma components causing matrix effects in hydrophilic interaction chromatography-electrospray ionization mass spectrometry.

    PubMed

    Ekdahl, Anja; Johansson, Maria C; Ahnoff, Martin

    2013-04-01

    Matrix effects on electrospray ionization were investigated for plasma samples analysed by hydrophilic interaction chromatography (HILIC) in gradient elution mode, and HILIC columns of different chemistries were tested for separation of plasma components and model analytes. By combining mass spectral data with post-column infusion traces, the following components of protein-precipitated plasma were identified and found to have a significant effect on ionization: urea, creatinine, phosphocholine, lysophosphocholine, sphingomyelin, sodium ion, chloride ion, choline and proline betaine. The observed effect on ionization depended on both the matrix component and the analyte. The separation of identified plasma components and model analytes on eight columns was compared using pair-wise linear correlation analysis and principal component analysis (PCA). Large changes in selectivity could be obtained by a change of column, while smaller changes were seen when the mobile phase buffer was changed from ammonium formate pH 3.0 to ammonium acetate pH 4.5. While results from PCA and linear correlation analysis were largely in accord, linear correlation analysis was judged to be more straightforward to carry out and interpret.

  9. Dynamic System Coupler Program (DYSCO 4.1). Volume 1. Theoretical Manual

    DTIC Science & Technology

    1989-01-01

    present analysis is as follows: 1. Triplet X, Y, Z represents an inertia frame, R. The R system coordinates are the rotor shaft axes when there is... small perturbation analysis. 2.5 3-D MODAL STRUCTURE - CFM3 A three-dimensional structure is represented as a linear combination of orthogonal modes... Includes rotor blade damage modeling, eigen analysis development, general time history solution development, frequency domain solution development

  10. Combining Biomarkers Linearly and Nonlinearly for Classification Using the Area Under the ROC Curve

    PubMed Central

    Fong, Youyi; Yin, Shuxin; Huang, Ying

    2016-01-01

    In biomedical studies, it is often of interest to classify/predict a subject’s disease status based on a variety of biomarker measurements. A commonly used classification criterion is based on AUC - Area under the Receiver Operating Characteristic Curve. Many methods have been proposed to optimize approximated empirical AUC criteria, but there are two limitations to the existing methods. First, most methods are only designed to find the best linear combination of biomarkers, which may not perform well when there is strong nonlinearity in the data. Second, many existing linear combination methods use gradient-based algorithms to find the best marker combination, which often result in sub-optimal local solutions. In this paper, we address these two problems by proposing a new kernel-based AUC optimization method called Ramp AUC (RAUC). This method approximates the empirical AUC loss function with a ramp function, and finds the best combination by a difference of convex functions algorithm. We show that as a linear combination method, RAUC leads to a consistent and asymptotically normal estimator of the linear marker combination when the data is generated from a semiparametric generalized linear model, just as the Smoothed AUC method (SAUC). Through simulation studies and real data examples, we demonstrate that RAUC out-performs SAUC in finding the best linear marker combinations, and can successfully capture nonlinear pattern in the data to achieve better classification performance. We illustrate our method with a dataset from a recent HIV vaccine trial. PMID:27058981
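
    The empirical AUC these methods optimize is the Mann-Whitney statistic. A minimal sketch on synthetic biomarkers, using an equal-weight combination rather than an optimized RAUC or SAUC solution, shows how a linear marker combination is scored against a single marker:

```python
import numpy as np

rng = np.random.default_rng(2)
# Two synthetic biomarkers; cases (label 1) shifted relative to controls (label 0)
n = 200
y = np.repeat([0, 1], n)
X = np.vstack([rng.normal(0.0, 1.0, (n, 2)),
               rng.normal([0.8, 0.8], 1.0, (n, 2))])

def empirical_auc(score, label):
    """Mann-Whitney estimate of the area under the ROC curve."""
    pos, neg = score[label == 1], score[label == 0]
    diff = pos[:, None] - neg[None, :]
    return float(np.mean((diff > 0) + 0.5 * (diff == 0)))

auc_single = empirical_auc(X[:, 0], y)                  # one marker alone
auc_combo = empirical_auc(X @ np.array([0.5, 0.5]), y)  # linear combination
```

RAUC and SAUC search over the combination weights to maximize this criterion; here the weights are simply fixed for illustration.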

  11. The Raman spectrum character of skin tumor induced by UVB

    NASA Astrophysics Data System (ADS)

    Wu, Shulian; Hu, Liangjun; Wang, Yunxia; Li, Yongzeng

    2016-03-01

    In our study, the skin canceration process induced by UVB was analyzed from the perspective of the tissue spectrum. A home-made Raman spectral system with a millimeter-order excitation laser spot size, combined with multivariate statistical analysis, was used to monitor skin changes caused by UVB irradiation, and its discrimination performance was evaluated. Raman scattering signals of SCC and normal skin were acquired, and spectral differences in the Raman spectra were revealed. Linear discriminant analysis (LDA) based on principal component analysis (PCA) was employed to generate diagnostic algorithms for classifying skin SCC versus normal tissue. The results indicated that Raman spectroscopy combined with PCA-LDA demonstrates good potential for improving the diagnosis of skin cancers.

  12. Meshless analysis of shear deformable shells: the linear model

    NASA Astrophysics Data System (ADS)

    Costa, Jorge C.; Tiago, Carlos M.; Pimenta, Paulo M.

    2013-10-01

    This work develops a kinematically linear shell model departing from a consistent nonlinear theory. The initial geometry is mapped from a flat reference configuration by a stress-free finite deformation, after which, the actual shell motion takes place. The model maintains the features of a complete stress-resultant theory with Reissner-Mindlin kinematics based on an inextensible director. A hybrid displacement variational formulation is presented, where the domain displacements and kinematic boundary reactions are independently approximated. The resort to a flat reference configuration allows the discretization using 2-D Multiple Fixed Least-Squares (MFLS) on the domain. The consistent definition of stress resultants and consequent plane stress assumption led to a neat formulation for the analysis of shells. The consistent linear approximation, combined with MFLS, made possible efficient computations with a desired continuity degree, leading to smooth results for the displacement, strain and stress fields, as shown by several numerical examples.

  13. Analysis of facial motion patterns during speech using a matrix factorization algorithm

    PubMed Central

    Lucero, Jorge C.; Munhall, Kevin G.

    2008-01-01

    This paper presents an analysis of facial motion during speech to identify linearly independent kinematic regions. The data consists of three-dimensional displacement records of a set of markers located on a subject’s face while producing speech. A QR factorization with column pivoting algorithm selects a subset of markers with independent motion patterns. The subset is used as a basis to fit the motion of the other facial markers, which determines facial regions of influence of each of the linearly independent markers. Those regions constitute kinematic “eigenregions” whose combined motion produces the total motion of the face. Facial animations may be generated by driving the independent markers with collected displacement records. PMID:19062866
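
    The marker-selection step can be sketched with a greedy column-pivoting loop, the same criterion QR factorization with column pivoting uses: repeatedly pick the column with the largest residual norm, then deflate it. The trajectories below are synthetic rank-3 data standing in for the facial-marker displacement records:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic marker displacement records: 3 independent motion patterns
# drive 8 markers (columns), mimicking linearly dependent facial regions.
T, k = 100, 3
basis = rng.normal(size=(T, k))            # independent motions
mixing = rng.normal(size=(k, 8))
D = basis @ mixing                         # 8 marker trajectories, rank 3

def pivoted_selection(A, r):
    """Greedy column pivoting: pick the column with the largest
    residual norm, then remove its direction from the remainder."""
    R = A.astype(float).copy()
    chosen = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        chosen.append(j)
        q = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(q, q @ R)            # deflate selected direction
    return chosen

subset = pivoted_selection(D, 3)
# The chosen markers span the motion space: fitting all markers from the
# subset leaves essentially zero residual.
coeffs, *_ = np.linalg.lstsq(D[:, subset], D, rcond=None)
max_residual = float(np.max(np.abs(D[:, subset] @ coeffs - D)))
```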

  14. Stability and time-domain analysis of the dispersive tristability in microresonators under modal coupling

    NASA Astrophysics Data System (ADS)

    Dumeige, Yannick; Féron, Patrice

    2011-10-01

    Coupled nonlinear resonators have potential applications for the integration of multistable photonic devices. The dynamic properties of two coupled-mode nonlinear microcavities made of Kerr material are studied by linear stability analysis. Using a suitable combination of the modal coupling rate and the frequency detuning, it is possible to obtain configurations where a hysteresis loop is included inside other bistable cycles. We show that a single resonator with two modes both linearly and nonlinearly coupled via the cross-Kerr effect can have a multistable behavior. This could be implemented in semiconductor nonlinear whispering-gallery-mode microresonators under modal coupling for all optical signal processing or ternary optical logic applications.

  15. Meta-analysis of thirty-two case-control and two ecological radon studies of lung cancer.

    PubMed

    Dobrzynski, Ludwik; Fornalski, Krzysztof W; Reszczynska, Joanna

    2018-03-01

    A re-analysis has been carried out of thirty-two case-control and two ecological studies concerning the influence of radon, a radioactive gas, on the risk of lung cancer. Three mathematically simplest dose-response relationships (models) were tested: constant (zero health effect), linear, and parabolic (linear-quadratic). Health effect end-points reported in the analysed studies are odds ratios or relative risk ratios, related either to morbidity or mortality. In our preliminary analysis, we show that the results of dose-response fitting are qualitatively (within uncertainties, given as error bars) the same, whichever of these health effect end-points are applied. Therefore, we deemed it reasonable to aggregate all response data into the so-called Relative Health Factor and jointly analysed such mixed data, to obtain better statistical power. In the second part of our analysis, robust Bayesian and classical methods of analysis were applied to this combined dataset. In this part of our analysis, we selected different subranges of radon concentrations. In view of substantial differences between the methodology used by the authors of case-control and ecological studies, the mathematical relationships (models) were applied mainly to the thirty-two case-control studies. The degree to which the two ecological studies, analysed separately, affect the overall results when combined with the thirty-two case-control studies, has also been evaluated. In all, as a result of our meta-analysis of the combined cohort, we conclude that the analysed data concerning radon concentrations below ~1000 Bq/m3 (~20 mSv/year of effective dose to the whole body) do not support the thesis that radon may be a cause of any statistically significant increase in lung cancer incidence.
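
    The three candidate dose-response models are nested polynomials, so the quadratic fit can never be worse (in residual terms) than the linear, nor the linear worse than the constant; model choice therefore rests on penalized or Bayesian criteria, as in the study. A sketch on hypothetical zero-effect data, not the published study data:

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical Relative Health Factor vs radon concentration (Bq/m^3):
# generated flat (no effect) with noise, then fit by the three candidate models.
x = np.linspace(50, 1000, 30)
y = 1.0 + rng.normal(0, 0.05, x.size)

def sse(deg):
    """Sum of squared residuals for a polynomial model of given degree."""
    c = np.polyfit(x, y, deg)
    return float(np.sum((y - np.polyval(c, x)) ** 2))

sse_constant = sse(0)    # constant (zero health effect)
sse_linear = sse(1)      # linear no-threshold style model
sse_quadratic = sse(2)   # linear-quadratic model
```

Because the models are nested, the raw residuals alone always favor the quadratic; that is why the study's comparison of the models needs uncertainty-aware (e.g. Bayesian) analysis rather than residual fit alone.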

  16. Non-Linear Concentration-Response Relationships between Ambient Ozone and Daily Mortality.

    PubMed

    Bae, Sanghyuk; Lim, Youn-Hee; Kashima, Saori; Yorifuji, Takashi; Honda, Yasushi; Kim, Ho; Hong, Yun-Chul

    2015-01-01

    Ambient ozone (O3) concentration has been reported to be significantly associated with mortality. However, the linearity of the relationship and the presence of a threshold have been controversial. The aim of the present study was to examine the concentration-response relationship and threshold of the association between ambient O3 concentration and non-accidental mortality in 13 Japanese and Korean cities from 2000 to 2009. We selected Japanese and Korean cities with populations of over 1 million. We constructed Poisson regression models adjusting for daily mean temperature, daily mean PM10, humidity, time trend, season, year, day of the week, holidays, and yearly population. The association between O3 concentration and mortality was examined using linear, spline, and linear-threshold models. The thresholds were estimated for each city by constructing linear-threshold models. We also examined the city-combined association using a generalized additive mixed model. The mean O3 concentration did not differ greatly between Korea and Japan (26.2 ppb and 24.2 ppb, respectively). Seven of the 13 cities showed better fits for the spline model than for the linear model, supporting a non-linear relationship between O3 concentration and mortality. All 7 of these cities showed J- or U-shaped associations suggesting the existence of thresholds. City-specific thresholds ranged from 11 to 34 ppb. The city-combined analysis also showed a non-linear association with a threshold around 30-40 ppb. We observed non-linear concentration-response relationships, with thresholds, between daily mean ambient O3 concentration and the daily number of non-accidental deaths in Japanese and Korean cities.

  17. Identification of Variables Associated with Group Separation in Descriptive Discriminant Analysis: Comparison of Methods for Interpreting Structure Coefficients

    ERIC Educational Resources Information Center

    Finch, Holmes

    2010-01-01

    Discriminant Analysis (DA) is a tool commonly used for differentiating among 2 or more groups based on 2 or more predictor variables. DA works by finding 1 or more linear combinations of the predictors that yield maximal difference among the groups. One common goal of researchers using DA is to characterize the nature of group difference by…

  18. Development of a nearshore oscillating surge wave energy converter with variable geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom, N. M.; Lawson, M. J.; Yu, Y. H.

    This paper presents an analysis of a novel wave energy converter concept that combines an oscillating surge wave energy converter (OSWEC) with control surfaces. The control surfaces allow for a variable device geometry that enables the hydrodynamic properties to be adapted with respect to structural loading, absorption range, and power-take-off capability. The device geometry is adjusted on a sea-state-to-sea-state time scale and combined with wave-to-wave manipulation of the power take-off (PTO) to provide greater control over the capture efficiency, capacity factor, and design loads. This work begins with a sensitivity study of the hydrodynamic coefficients with respect to device width, support structure thickness, and geometry. A linear frequency-domain analysis is used to evaluate device performance in terms of absorbed power, foundation loads, and PTO torque. Because previous OSWEC studies have shown the need to include nonlinear hydrodynamics, a nonlinear model is also presented that includes a quadratic viscous damping torque, linearized via the Lorentz linearization. Inclusion of the quadratic viscous torque led to the construction of an optimization problem that incorporates motion and PTO constraints. Results from this study found that, when transitioning from moderate to large sea states, the novel OSWEC was capable of reducing structural loads while providing a near-constant power output.

  19. Interim analyses in 2 x 2 crossover trials.

    PubMed

    Cook, R J

    1995-09-01

    A method is presented for performing interim analyses in long term 2 x 2 crossover trials with serial patient entry. The analyses are based on a linear statistic that combines data from individuals observed for one treatment period with data from individuals observed for both periods. The coefficients in this linear combination can be chosen quite arbitrarily, but we focus on variance-based weights to maximize power for tests regarding direct treatment effects. The type I error rate of this procedure is controlled by utilizing the joint distribution of the linear statistics over analysis stages. Methods for performing power and sample size calculations are indicated. A two-stage sequential design involving simultaneous patient entry and a single between-period interim analysis is considered in detail. The power and average number of measurements required for this design are compared to those of the usual crossover trial. The results indicate that, while there is minimal loss in power relative to the usual crossover design in the absence of differential carry-over effects, the proposed design can have substantially greater power when differential carry-over effects are present. The two-stage crossover design can also lead to more economical studies in terms of the expected number of measurements required, due to the potential for early stopping. Attention is directed toward normally distributed responses.
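
    The variance-based weighting of the two data sources can be sketched directly: weighting each estimator inversely to its variance minimizes the variance of the combined linear statistic. The stage estimates and variances below are hypothetical numbers for illustration, not values from the paper:

```python
import numpy as np

# Two estimators of the same direct treatment effect: one from patients
# observed for one period only, one from patients observed for both periods.
est = np.array([0.42, 0.50])   # hypothetical stage estimates
var = np.array([0.04, 0.01])   # hypothetical variances of those estimates

# Inverse-variance weights (normalized to sum to one) maximize power
w = (1.0 / var) / np.sum(1.0 / var)
combined = float(w @ est)                   # combined linear statistic
combined_var = float(1.0 / np.sum(1.0 / var))
```

Note that the combined variance (1/125 = 0.008) is smaller than either input variance, which is the efficiency gain the inverse-variance choice delivers.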

  20. Combining angular differential imaging and accurate polarimetry with SPHERE/IRDIS to characterize young giant exoplanets

    NASA Astrophysics Data System (ADS)

    van Holstein, Rob G.; Snik, Frans; Girard, Julien H.; de Boer, Jozua; Ginski, C.; Keller, Christoph U.; Stam, Daphne M.; Beuzit, Jean-Luc; Mouillet, David; Kasper, Markus; Langlois, Maud; Zurlo, Alice; de Kok, Remco J.; Vigan, Arthur

    2017-09-01

    Young giant exoplanets emit infrared radiation that can be linearly polarized up to several percent. This linear polarization can trace: 1) the presence of atmospheric cloud and haze layers, 2) spatial structure, e.g. cloud bands and rotational flattening, 3) the spin axis orientation and 4) particle sizes and cloud top pressure. We introduce a novel high-contrast imaging scheme that combines angular differential imaging (ADI) and accurate near-infrared polarimetry to characterize self-luminous giant exoplanets. We implemented this technique at VLT/SPHERE-IRDIS and developed the corresponding observing strategies, the polarization calibration and the data-reduction approaches. The combination of ADI and polarimetry is challenging, because the field rotation required for ADI negatively affects the polarimetric performance. By combining ADI and polarimetry we can characterize planets that can be directly imaged with a very high signal-to-noise ratio. We use the IRDIS pupil-tracking mode and combine ADI and principal component analysis to reduce speckle noise. We take advantage of IRDIS' dual-beam polarimetric mode to eliminate differential effects that severely limit the polarimetric sensitivity (flat-fielding errors, differential aberrations and seeing), and thus further suppress speckle noise. To correct for instrumental polarization effects, we apply a detailed Mueller matrix model that describes the telescope and instrument and that has an absolute polarimetric accuracy <= 0.1%. Using this technique we have observed the planets of HR 8799 and the (sub-stellar) companion PZ Tel B. Unfortunately, we do not detect a polarization signal in a first analysis. We estimate preliminary 1σ upper limits on the degree of linear polarization of ˜ 1% and ˜ 0.1% for the planets of HR 8799 and PZ Tel B, respectively. The achieved sub-percent sensitivity and accuracy show that our technique has great promise for characterizing exoplanets through direct-imaging polarimetry.

  1. Mechanically fastened composite laminates subjected to combined bearing-bypass and shear loading

    NASA Technical Reports Server (NTRS)

    Madenci, Erdogan

    1993-01-01

    Bolts and rivets provide a means of load transfer in the construction of aircraft. However, they give rise to stress concentrations and are often the source and location of static and fatigue failures. Furthermore, fastener holes are prone to cracking during take-off and landing, and these cracks are the most common origin of structural failures in aircraft. Therefore, accurate determination of the contact stresses associated with such loaded holes in mechanically fastened joints is essential to reliable strength evaluation and failure prediction. As the laminate is loaded, a contact region of initially unknown extent develops between the fastener and the hole boundary; load is transferred through this region, which consists of slip and no-slip zones due to friction. The unknown contact stress distribution over the contact region between the pin and the composite laminate, material anisotropy, friction between the pin and the laminate, pin-hole clearance, combined bearing-bypass and shear loading, and the finite geometry of the laminate together result in a complex non-linear problem. In the case of bearing-bypass loading in compression, this non-linear problem is further complicated by the presence of dual contact regions. Previous research concerning the analysis of mechanical joints subjected to combined bearing-bypass and shear loading is non-existent. In the case of bearing-bypass loading only, except for the study conducted by Naik and Crews (1991), others employed the concept of superposition, which is not valid for this non-linear problem. Naik and Crews applied a linear finite element analysis with conditions along the pin-hole contact region specified as displacement constraint equations. The major shortcoming of this method is that the variation of the contact region as a function of the applied load must be known a priori. Also, their analysis is limited to symmetric geometry and material systems, and frictionless boundary conditions. Since the contact stress distribution and the contact region are not known a priori, they did not directly impose the boundary conditions appropriate for modelling the contact and non-contact regions between the fastener and the hole. Furthermore, finite element analysis is not suitable for iterative design calculations for optimizing laminate construction in the presence of fasteners under complex loading conditions. In this study, the solution method developed by Madenci and Ileri (1992a,b) has been extended to determine the contact stresses in mechanical joints under combined bearing-bypass and shear loading, and under bearing-bypass loading in compression resulting in dual contact regions.

  2. Time-Frequency Analysis of Non-Stationary Biological Signals with Sparse Linear Regression Based Fourier Linear Combiner.

    PubMed

    Wang, Yubo; Veluvolu, Kalyana C

    2017-06-14

    It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics. This necessitates the use of time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to an underlying phenomenon. This paper presents a new approach to analyze the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to the earlier designs, we first identified the sparsity imposed on the signal model in order to reformulate the model as a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy ratio metric is employed to quantify the spectral performance; results show that the proposed Sparse-BMFLC method has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), continuous wavelet transform (CWT) and the BMFLC Kalman smoother. Furthermore, the proposed method provides an overall 6.22% improvement in reconstruction error.
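
    The BMFLC signal model is a truncated Fourier series over a fixed frequency band. The sketch below builds the band-limited sine/cosine basis and estimates the combiner weights on a synthetic tone; the paper's actual contribution, the sparse (l1) convex reformulation, is replaced here by ordinary least squares for brevity, and all signal parameters are invented:

```python
import numpy as np

fs = 100.0                                      # sampling rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)
signal = 1.2 * np.sin(2 * np.pi * 7.0 * t + 0.3)  # band-limited test signal

# BMFLC basis: sines and cosines spanning a 5-10 Hz band at 0.5 Hz spacing
freqs = np.arange(5.0, 10.5, 0.5)
G = np.hstack([np.sin(2 * np.pi * np.outer(t, freqs)),
               np.cos(2 * np.pi * np.outer(t, freqs))])

# Least-squares estimate of the combiner weights (the cited method would
# instead solve a sparse regression by convex optimization)
w, *_ = np.linalg.lstsq(G, signal, rcond=None)
recon = G @ w

# Energy ratio metric: reconstructed energy over signal energy
energy_ratio = float(np.sum(recon ** 2) / np.sum(signal ** 2))
```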

  3. Imparting Motion to a Test Object Such as a Motor Vehicle in a Controlled Fashion

    NASA Technical Reports Server (NTRS)

    Southward, Stephen C. (Inventor); Reubush, Chandler (Inventor); Pittman, Bryan (Inventor); Roehrig, Kurt (Inventor); Gerard, Doug (Inventor)

    2014-01-01

    An apparatus imparts motion to a test object such as a motor vehicle in a controlled fashion. A base has mounted on it a linear electromagnetic motor having a first end and a second end, the first end being connected to the base. A pneumatic cylinder and piston combination have a first end and a second end, the first end connected to the base so that the pneumatic cylinder and piston combination is generally parallel with the linear electromagnetic motor. The second ends of the linear electromagnetic motor and pneumatic cylinder and piston combination being commonly linked to a mount for the test object. A control system for the linear electromagnetic motor and pneumatic cylinder and piston combination drives the pneumatic cylinder and piston combination to support a substantial static load of the test object and the linear electromagnetic motor to impart controlled motion to the test object.

  4. Manifold Learning in MR spectroscopy using nonlinear dimensionality reduction and unsupervised clustering.

    PubMed

    Yang, Guang; Raschke, Felix; Barrick, Thomas R; Howe, Franklyn A

    2015-09-01

    To investigate whether nonlinear dimensionality reduction improves unsupervised classification of ¹H MRS brain tumor data compared with a linear method. In vivo single-voxel ¹H magnetic resonance spectroscopy (55 patients) and ¹H magnetic resonance spectroscopy imaging (MRSI) (29 patients) data were acquired from histopathologically diagnosed gliomas. Data reduction using Laplacian eigenmaps (LE) or independent component analysis (ICA) was followed by k-means clustering or agglomerative hierarchical clustering (AHC) for unsupervised learning to assess tumor grade and for tissue type segmentation of MRSI data. An accuracy of 93% in classification of glioma grade II and grade IV, with 100% accuracy in distinguishing tumor and normal spectra, was obtained by LE with unsupervised clustering, but not with the combination of k-means and ICA. With ¹H MRSI data, LE provided a more linear distribution of data for cluster analysis and better cluster stability than ICA. LE combined with k-means or AHC provided 91% accuracy for classifying tumor grade and 100% accuracy for identifying normal tissue voxels. Color-coded visualization of normal brain, tumor core, and infiltration regions was achieved with LE combined with AHC. The LE method is promising for unsupervised clustering to separate brain and tumor tissue with automated color-coding for visualization of ¹H MRSI data after cluster analysis. © 2014 Wiley Periodicals, Inc.
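
    The Laplacian eigenmaps step can be sketched in a few lines: build a Gaussian affinity graph, form the normalized graph Laplacian, and embed with its leading non-trivial eigenvectors. The data below are synthetic stand-ins for MRS spectra, and thresholding the first non-trivial eigenvector stands in for running k-means on the embedding:

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic stand-in for MRS spectra: two well-separated tissue classes in 20-D
A = rng.normal(0.0, 0.3, (25, 20))
B = rng.normal(2.0, 0.3, (25, 20))
X = np.vstack([A, B])

# Laplacian eigenmaps: Gaussian affinity, symmetric normalized Laplacian
d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
W = np.exp(-d2 / np.median(d2))
deg = W.sum(axis=1)
L = np.eye(len(X)) - W / np.sqrt(np.outer(deg, deg))
vals, vecs = np.linalg.eigh(L)                  # ascending eigenvalues
fiedler = vecs[:, 1]                            # first non-trivial eigenvector

# Cluster assignment from the embedding (a stand-in for k-means on it)
labels = (fiedler > 0).astype(int)
true = np.array([0] * 25 + [1] * 25)
accuracy = float(max(np.mean(labels == true), np.mean(labels != true)))
```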

  5. Time-frequency analysis of band-limited EEG with BMFLC and Kalman filter for BCI applications

    PubMed Central

    2013-01-01

    Background Time-Frequency analysis of electroencephalogram (EEG) during different mental tasks received significant attention. As EEG is non-stationary, time-frequency analysis is essential to analyze brain states during different mental tasks. Further, the time-frequency information of EEG signal can be used as a feature for classification in brain-computer interface (BCI) applications. Methods To accurately model the EEG, band-limited multiple Fourier linear combiner (BMFLC), a linear combination of truncated multiple Fourier series models is employed. A state-space model for BMFLC in combination with Kalman filter/smoother is developed to obtain accurate adaptive estimation. By virtue of construction, BMFLC with Kalman filter/smoother provides accurate time-frequency decomposition of the bandlimited signal. Results The proposed method is computationally fast and is suitable for real-time BCI applications. To evaluate the proposed algorithm, a comparison with short-time Fourier transform (STFT) and continuous wavelet transform (CWT) for both synthesized and real EEG data is performed in this paper. The proposed method is applied to BCI Competition data IV for ERD detection in comparison with existing methods. Conclusions Results show that the proposed algorithm can provide optimal time-frequency resolution as compared to STFT and CWT. For ERD detection, BMFLC-KF outperforms STFT and BMFLC-KS in real-time applicability with low computational requirement. PMID:24274109

  6. Assessment of the Uniqueness of Wind Tunnel Strain-Gage Balance Load Predictions

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2016-01-01

    A new test was developed to assess the uniqueness of wind tunnel strain-gage balance load predictions that are obtained from regression models of calibration data. The test helps balance users to gain confidence in load predictions of non-traditional balance designs. It also makes it possible to better evaluate load predictions of traditional balances that are not used as originally intended. The test works for both the Iterative and Non-Iterative Methods that are used in the aerospace testing community for the prediction of balance loads. It is based on the hypothesis that the total number of independently applied balance load components must always match the total number of independently measured bridge outputs or bridge output combinations. This hypothesis is supported by a control volume analysis of the inputs and outputs of a strain-gage balance. It is concluded from the control volume analysis that the loads and bridge outputs of a balance calibration data set must separately be tested for linear independence because it cannot always be guaranteed that a linearly independent load component set will result in linearly independent bridge output measurements. Simple linear math models for the loads and bridge outputs in combination with the variance inflation factor are used to test for linear independence. A highly unique and reversible mapping between the applied load component set and the measured bridge output set is guaranteed to exist if the maximum variance inflation factor of both sets is less than the literature-recommended threshold of five. Data from the calibration of a six-component force balance is used to illustrate the application of the new test to real-world data.
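
    The linear-independence test described above rests on the variance inflation factor. A minimal sketch (with illustrative random data, not balance calibration data) computes VIFs by regressing each column on the others and checks them against the threshold of five:

```python
import numpy as np

def vif(X):
    """Variance inflation factor of each column of X."""
    Xc = X - X.mean(0)
    out = []
    for j in range(X.shape[1]):
        y = Xc[:, j]
        A = np.delete(Xc, j, axis=1)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # regress col j on rest
        resid = y - A @ coef
        r2 = 1.0 - resid @ resid / (y @ y)
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(1)
loads = rng.normal(size=(200, 3))          # independent load components
ok = vif(loads)
# append a near-copy of the first column: a nearly dependent load set
bad = vif(np.column_stack([loads, loads[:, 0] + 0.01 * rng.normal(size=200)]))
assert ok.max() < 5.0                      # passes the suggested threshold
assert bad.max() > 5.0                     # near-collinear set fails it
```

    The same check applies symmetrically to the bridge-output set, which is the point of the paper's two-sided test.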

  7. A method for simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xi; Huang, Xiaobiao

    2016-05-13

    Here, we propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. The fitting results are used for lattice correction. Finally, the method has been successfully demonstrated on the NSLS-II storage ring.
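
    A toy version of the mode-isolation step: simulated TbT BPM readings are linear combinations of one betatron mode's sin/cos pair, and a matrix factorization recovers the temporal mode and its tune. SVD is used here as a simple stand-in for the paper's ICA, and the tune, BPM count, and noise level are assumed:

```python
import numpy as np

nturns, nbpm = 1024, 12
turns = np.arange(nturns)
nu = 0.22                                  # assumed fractional betatron tune
# sin/cos pair of a single betatron normal mode
s = np.vstack([np.sin(2 * np.pi * nu * turns),
               np.cos(2 * np.pi * nu * turns)])
rng = np.random.default_rng(2)
mix = rng.normal(size=(nbpm, 2))           # BPM-dependent amplitude/phase
data = mix @ s + 0.01 * rng.normal(size=(nbpm, nturns))

# factor the TbT data matrix; the leading temporal mode is the betatron mode
U, S, Vt = np.linalg.svd(data, full_matrices=False)
mode = Vt[0]

# tune = location of the FFT peak of the extracted mode
spec = np.abs(np.fft.rfft(mode))
tune = np.argmax(spec) / nturns
assert abs(tune - nu) < 0.01
```

    In the real method, the betatron amplitudes and phase advances per BPM (here encoded in `mix`) are what get fitted against the lattice model.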

  8. A method for simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xi; Huang, Xiaobiao

    2016-08-01

    We propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. Furthermore, the fitting results are used for lattice correction. Our method has been successfully demonstrated on the NSLS-II storage ring.

  9. A method for simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xi; Huang, Xiaobiao

    2016-08-01

    We propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. The fitting results are used for lattice correction. The method has been successfully demonstrated on the NSLS-II storage ring.

  10. The use of ERTS/LANDSAT imagery in relation to airborne remote sensing for terrain analysis in western Queensland, Australia

    NASA Technical Reports Server (NTRS)

    Cole, M. M.; Wen-Jones, S. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. Series of linears were identified on the March imagery of the Lady Annie-Mt. Gordon fault zone area. The series with a WSW-ENE orientation, which is normal to the major structural units, and several linears with a NNW-SSE orientation appear to be particularly important. Copper mineralization is known at several localities where these linears are intersected by faults. Automated outputs were generated using supervised methods, with training sets selected by visual recognition of spectral signatures on the color composites obtained from combinations of MSS bands 4, 5, and 7 projected through appropriate filters.

  11. A Combined Pharmacokinetic and Radiologic Assessment of Dynamic Contrast-Enhanced Magnetic Resonance Imaging Predicts Response to Chemoradiation in Locally Advanced Cervical Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Semple, Scott; Harry, Vanessa N.; Parkin, David E.

    2009-10-01

    Purpose: To investigate the combination of pharmacokinetic and radiologic assessment of dynamic contrast-enhanced magnetic resonance imaging (MRI) as an early response indicator in women receiving chemoradiation for advanced cervical cancer. Methods and Materials: Twenty women with locally advanced cervical cancer were included in a prospective cohort study. Dynamic contrast-enhanced MRI was carried out before chemoradiation, after 2 weeks of therapy, and at the conclusion of therapy using a 1.5-T MRI scanner. Radiologic assessment of uptake parameters was obtained from the resultant intensity curves. Pharmacokinetic analysis using a multicompartment model was also performed. General linear modeling was used to combine radiologic and pharmacokinetic parameters, which were correlated with eventual response as determined by change in MRI tumor size and conventional clinical response. A subgroup of 11 women underwent repeat pretherapy MRI to test pharmacokinetic reproducibility. Results: Pretherapy radiologic parameters and pharmacokinetic Ktrans correlated with response (p < 0.01). General linear modeling demonstrated that a combination of radiologic and pharmacokinetic assessments before therapy was able to predict more than 88% of the variance of response. Reproducibility of pharmacokinetic modeling was confirmed. Conclusions: A combination of radiologic assessment with pharmacokinetic modeling applied to dynamic MRI before the start of chemoradiation improves the predictive power of either by more than 20%. The potential improvements in therapy response prediction using this type of combined analysis of dynamic contrast-enhanced MRI may aid in the development of more individualized, effective therapy regimens for this patient group.

  12. Abdominal girth, vertebral column length, and spread of spinal anesthesia in 30 minutes after plain bupivacaine 5 mg/mL.

    PubMed

    Zhou, Qing-he; Xiao, Wang-pin; Shen, Ying-yan

    2014-07-01

    The spread of spinal anesthesia is highly unpredictable. In patients with increased abdominal girth and short stature, a greater cephalad spread after a fixed amount of subarachnoidally administered plain bupivacaine is often observed. We hypothesized that there is a strong correlation between abdominal girth/vertebral column length and cephalad spread. Age, weight, height, body mass index, abdominal girth, and vertebral column length were recorded for 114 patients. The L3-L4 interspace was entered, and 3 mL of 0.5% plain bupivacaine was injected into the subarachnoid space. The cephalad spread (loss of temperature sensation and loss of pinprick discrimination) was assessed 30 minutes after intrathecal injection. Linear regression analysis was performed for age, weight, height, body mass index, abdominal girth, vertebral column length, and the spread of spinal anesthesia, and the combined linear contribution of age up to 55 years, weight, height, abdominal girth, and vertebral column length was tested by multiple regression analysis. Linear regression analysis showed that there was a significant univariate correlation between each of the 6 patient characteristics evaluated and the spread of spinal anesthesia (all P < 0.039), except for age and loss of temperature sensation (P > 0.068). Multiple regression analysis showed that abdominal girth and vertebral column length were the key determinants of spinal anesthesia spread (both P < 0.0001), whereas age, weight, and height could be omitted without changing the results (all P > 0.059, all 95% confidence limits < 0.372). Multiple regression analysis revealed that the combination of a patient's 5 general characteristics, especially abdominal girth and vertebral column length, had a high predictive value for the spread of spinal anesthesia after a given dose of plain bupivacaine.
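
    The multiple-regression step can be sketched with synthetic data; the coefficients and predictor distributions below are invented for illustration and are not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 114
girth = rng.normal(85, 10, n)       # hypothetical abdominal girth (cm)
column = rng.normal(70, 5, n)       # hypothetical vertebral column length (cm)
# assumed ground truth: spread rises with girth, falls with column length
spread = 10 + 0.08 * girth - 0.05 * column + rng.normal(0, 0.3, n)

# ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), girth, column])
beta, *_ = np.linalg.lstsq(X, spread, rcond=None)
assert abs(beta[1] - 0.08) < 0.02   # girth coefficient recovered
assert abs(beta[2] + 0.05) < 0.02   # column-length coefficient recovered
```

    With real data one would also inspect p-values and confidence limits for each coefficient, as the study does.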

  13. Substituting values for censored data from Texas, USA, reservoirs inflated and obscured trends in analyses commonly used for water quality target development.

    PubMed

    Grantz, Erin; Haggard, Brian; Scott, J Thad

    2018-06-12

    We calculated four median datasets (chlorophyll a, Chl a; total phosphorus, TP; and transparency) using multiple approaches to handling censored observations, including substituting fractions of the quantification limit (QL; dataset 1 = 1QL, dataset 2 = 0.5QL) and statistical methods for censored datasets (datasets 3-4), for approximately 100 Texas, USA reservoirs. Trend analyses of differences between dataset 1 and dataset 3 medians indicated that percent difference increased linearly above thresholds in percent censored data (%Cen). This relationship was extrapolated to estimate medians for site-parameter combinations with %Cen > 80%, which were combined with dataset 3 as dataset 4. Changepoint analysis of Chl a- and transparency-TP relationships indicated threshold differences up to 50% between datasets. Recursive analysis identified secondary thresholds in dataset 4. Threshold differences show that information introduced via substitution or missing due to limitations of statistical methods biased values, underestimated error, and inflated the strength of TP thresholds identified in datasets 1-3. Analysis of covariance identified differences in linear regression models relating transparency-TP between datasets 1, 2, and the more statistically robust datasets 3-4. Study findings identify high-risk scenarios for biased analytical outcomes when using substitution. These include a high probability of median overestimation when %Cen > 50-60% for a single QL, or when %Cen is as low as 16% for multiple QLs. Changepoint analysis was uniquely vulnerable to substitution effects when using medians from sites with %Cen > 50%. Linear regression analysis was less sensitive to substitution and missing data effects, but differences in model parameters for transparency cannot be discounted and could be magnified by log-transformation of the variables.
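
    A small sketch of the substitution bias the study describes: with more than half the observations censored at a single quantification limit, substituting 1QL overestimates the median (hypothetical lognormal data, not the Texas reservoir values):

```python
import numpy as np

rng = np.random.default_rng(4)
true = rng.lognormal(mean=0.0, sigma=1.0, size=200)   # hypothetical TP values
ql = np.quantile(true, 0.6)                           # QL censoring 60% of data
censored = true < ql

d1 = np.where(censored, 1.0 * ql, true)   # dataset 1: substitute 1QL
d2 = np.where(censored, 0.5 * ql, true)   # dataset 2: substitute 0.5QL

# with %Cen > 50%, the 1QL substitution inflates the median:
# more than half the values collapse onto QL itself
assert np.median(d1) > np.median(true)
assert np.median(d1) > np.median(d2)
```

    Survival-analysis style estimators (as in datasets 3-4) avoid committing to either arbitrary substitute value.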

  14. Is 3D true non linear traveltime tomography reasonable ?

    NASA Astrophysics Data System (ADS)

    Herrero, A.; Virieux, J.

    2003-04-01

    The data sets requiring 3D analysis tools in the context of seismic exploration (both onshore and offshore experiments) or natural seismicity (micro-seismicity surveys or post-event measurements) are more and more numerous. Classical linearized tomographies and also earthquake localisation codes need an accurate 3D background velocity model. However, if the medium is complex and a priori information is not available, a 1D analysis is not able to provide an adequate background velocity image. Moreover, the design of the acquisition layouts is often intrinsically 3D and renders even 2D approaches difficult, especially in natural seismicity cases. Thus, the solution relies on the use of a 3D true non-linear approach, which allows one to explore the model space and to identify an optimal velocity image. The problem then becomes a practical one, and its feasibility depends on the available computing resources (memory and time). In this presentation, we show that facing a 3D traveltime tomography problem with an extensive non-linear approach, combining fast traveltime estimators based on level set methods with optimisation techniques such as a multiscale strategy, is feasible. Moreover, because management of inhomogeneous inversion parameters is more straightforward in a non-linear approach, we describe how to perform a joint non-linear inversion for the seismic velocities and the source locations.

  15. Quantifying and visualizing variations in sets of images using continuous linear optimal transport

    NASA Astrophysics Data System (ADS)

    Kolouri, Soheil; Rohde, Gustavo K.

    2014-03-01

    Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information mined in the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which a linear combination of images leads to a visually meaningful image. This enables us to apply linear geometric data analysis techniques such as principal component analysis and linear discriminant analysis in the linearly embedded space and visualize the most prominent modes, as well as the most discriminant modes, of variation in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images utilizing each image at full resolution, which is not possible with existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen-stained liver tissues in order to investigate the major visual differences in chromatin distribution of Fetal-Type Hepatoblastoma (FHB) cells compared to normal cells.
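
    In one dimension the optimal transport map between two samples reduces to pairing sorted values, and the displacement field against a fixed reference serves as the linear embedding. A minimal sketch with Gaussian samples (all distributions are assumed for illustration; real images require the full 2-D continuous OT machinery):

```python
import numpy as np

rng = np.random.default_rng(9)
ref = np.sort(rng.normal(0, 1, 1000))      # reference "template" sample
img = np.sort(rng.normal(2, 1.5, 1000))    # target sample to embed

# the monotone pairing of sorted values is the 1-D OT map;
# the displacement field against the reference is the linear embedding
disp = img - ref

# the mean displacement recovers the shift between the distributions
assert abs(disp.mean() - 2.0) < 0.3
```

    In the embedded (displacement) space, averaging or interpolating `disp` vectors yields valid intermediate distributions, which is what makes PCA/LDA modes visually meaningful there.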

  16. Quantitative structure-activity relationship study of P2X7 receptor inhibitors using combination of principal component analysis and artificial intelligence methods.

    PubMed

    Ahmadi, Mehdi; Shahlaei, Mohsen

    2015-01-01

    P2X7 antagonist activity for a set of 49 molecules of the P2X7 receptor antagonists, derivatives of purine, was modeled with the aid of chemometric and artificial intelligence techniques. The activity of these compounds was estimated by means of a combination of principal component analysis (PCA), as a well-known data reduction method, genetic algorithm (GA), as a variable selection technique, and artificial neural network (ANN), as a non-linear modeling method. First, a linear regression combined with PCA (principal component regression) was applied to model the structure-activity relationships, and afterwards a combination of PCA and an ANN algorithm was employed to accurately predict the biological activity of the P2X7 antagonists. PCA preserves as much of the information as possible contained in the original data set. The seven PCs most relevant to the studied activity were selected as inputs to the ANN by an efficient variable selection method, GA. The best computational neural network model was a fully-connected, feed-forward model with 7-7-1 architecture. The developed ANN model was fully evaluated by different validation techniques, including internal and external validation, and chemical applicability domain. All validations showed that the constructed quantitative structure-activity relationship model is robust and satisfactory.

  17. Quantitative structure–activity relationship study of P2X7 receptor inhibitors using combination of principal component analysis and artificial intelligence methods

    PubMed Central

    Ahmadi, Mehdi; Shahlaei, Mohsen

    2015-01-01

    P2X7 antagonist activity for a set of 49 molecules of the P2X7 receptor antagonists, derivatives of purine, was modeled with the aid of chemometric and artificial intelligence techniques. The activity of these compounds was estimated by means of a combination of principal component analysis (PCA), as a well-known data reduction method, genetic algorithm (GA), as a variable selection technique, and artificial neural network (ANN), as a non-linear modeling method. First, a linear regression combined with PCA (principal component regression) was applied to model the structure–activity relationships, and afterwards a combination of PCA and an ANN algorithm was employed to accurately predict the biological activity of the P2X7 antagonists. PCA preserves as much of the information as possible contained in the original data set. The seven PCs most relevant to the studied activity were selected as inputs to the ANN by an efficient variable selection method, GA. The best computational neural network model was a fully-connected, feed-forward model with 7-7-1 architecture. The developed ANN model was fully evaluated by different validation techniques, including internal and external validation, and chemical applicability domain. All validations showed that the constructed quantitative structure–activity relationship model is robust and satisfactory. PMID:26600858

  18. Under which climate and soil conditions the plant productivity-precipitation relationship is linear or nonlinear?

    PubMed

    Ye, Jian-Sheng; Pei, Jiu-Ying; Fang, Chao

    2018-03-01

    Understanding under which climate and soil conditions the plant productivity-precipitation relationship is linear or nonlinear is useful for accurately predicting the response of ecosystem function to global environmental change. Using long-term (2000-2016) net primary productivity (NPP)-precipitation datasets derived from satellite observations, we identify >5600 pixels in the Northern Hemisphere landmass that fit either linear or nonlinear temporal NPP-precipitation relationships. Differences in climate (precipitation, radiation, ratio of actual to potential evapotranspiration, temperature) and soil factors (nitrogen, phosphorus, organic carbon, field capacity) between the linear and nonlinear types are evaluated. Our analysis shows that both linear and nonlinear types exhibit similar interannual precipitation variabilities and occurrences of extreme precipitation. Permutational multivariate analysis of variance suggests that the linear and nonlinear types differ significantly with regard to radiation, ratio of actual to potential evapotranspiration, and soil factors. The nonlinear type possesses lower radiation and/or fewer soil nutrients than the linear type, suggesting that the nonlinear type features a higher degree of limitation by resources other than precipitation. This study identifies several factors that limit the responses of plant productivity to changes in precipitation, causing a nonlinear NPP-precipitation pattern. Precipitation manipulation and modeling experiments should be combined with changes in other climate and soil factors to better predict the response of plant productivity under future climate. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Analysis of the quality of image data acquired by the LANDSAT-4 thematic mapper and multispectral scanners

    NASA Technical Reports Server (NTRS)

    Colwell, R. N. (Principal Investigator)

    1983-01-01

    The geometric quality of the TM and MSS film products was evaluated by making selective photo measurements such as scale, linear, and area determinations, and by measuring the coordinates of known features on both the film products and map products and then relating these paired observations using a standard linear least-squares regression approach. Quantitative interpretation tests are described which evaluate the quality and utility of the TM film products and various band combinations for detecting and identifying important forest and agricultural features.

  20. Free torsional vibrations of tapered cantilever I-beams

    NASA Astrophysics Data System (ADS)

    Rao, C. Kameswara; Mirza, S.

    1988-08-01

    Torsional vibration characteristics of linearly tapered cantilever I-beams have been studied by using the Galerkin finite element method. A third degree polynomial is assumed for the angle of twist. The analysis presented is valid for long beams and includes the effect of warping. The individual as well as combined effects of linear tapers in the width of the flanges and the depth of the web on the torsional vibration of cantilever I-beams are investigated. Numerical results generated for various values of taper ratios are presented in graphical form.

  1. A unified development of several techniques for the representation of random vectors and data sets

    NASA Technical Reports Server (NTRS)

    Bundick, W. T.

    1973-01-01

    Linear vector space theory is used to develop a general representation of a set of data vectors or random vectors by linear combinations of orthonormal vectors such that the mean squared error of the representation is minimized. The orthonormal vectors are shown to be the eigenvectors of an operator. The general representation is applied to several specific problems involving the use of the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions; and the common properties of these representations are developed.
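
    The optimality property stated above (the eigenvectors of the covariance operator minimize the mean squared error of a k-term representation) can be checked numerically; the data dimensions and basis sizes below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)
# data with strongly anisotropic variance across coordinates
X = rng.normal(size=(500, 8)) * np.array([5, 3, 1, .5, .3, .2, .1, .05])
Xc = X - X.mean(0)

# eigenvectors of the covariance operator = optimal orthonormal basis
C = Xc.T @ Xc / len(Xc)
vals, vecs = np.linalg.eigh(C)
top = vecs[:, ::-1][:, :2]                    # two leading eigenvectors

def mse(B):
    """Mean squared error of projecting onto the columns of orthonormal B."""
    return ((Xc - Xc @ B @ B.T) ** 2).mean()

# any other orthonormal pair does worse than the eigenvector pair
Q, _ = np.linalg.qr(rng.normal(size=(8, 2)))  # arbitrary orthonormal pair
assert mse(top) < mse(Q)
```

    This is exactly the common core of the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions that the paper develops.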

  2. Efficient loads analyses of Shuttle-payloads using dynamic models with linear or nonlinear interfaces

    NASA Technical Reports Server (NTRS)

    Spanos, P. D.; Cao, T. T.; Hamilton, D. A.; Nelson, D. A. R.

    1989-01-01

    An efficient method for the load analysis of Shuttle-payload systems with linear or nonlinear attachment interfaces is presented which allows the kinematics of the interface degrees of freedom at a given time to be evaluated without calculating the combined system modal representation of the Space Shuttle and its payload. For the case of a nonlinear dynamic model, an iterative procedure is employed to converge the nonlinear terms of the equations of motion to reliable values. Results are presented for a Shuttle abort landing event.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, J.D.; Woan, G.

    Data from the Laser Interferometer Space Antenna (LISA) is expected to be dominated by frequency noise from its lasers. However, the noise from any one laser appears more than once in the data and there are combinations of the data that are insensitive to this noise. These combinations, called time delay interferometry (TDI) variables, have received careful study and point the way to how LISA data analysis may be performed. Here we approach the problem from the direction of statistical inference, and show that these variables are a direct consequence of a principal component analysis of the problem. We present a formal analysis for a simple LISA model and show that there are eigenvectors of the noise covariance matrix that do not depend on laser frequency noise. Importantly, these orthogonal basis vectors correspond to linear combinations of TDI variables. As a result we show that the likelihood function for source parameters using LISA data can be based on TDI combinations of the data without loss of information.

  4. Probabilistic finite elements for transient analysis in nonlinear continua

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Belytschko, T.; Mani, A.

    1985-01-01

    The probabilistic finite element method (PFEM), which is a combination of finite element methods and second-moment analysis, is formulated for linear and nonlinear continua with inhomogeneous random fields. Analogous to the discretization of the displacement field in finite element methods, the random field is also discretized. The formulation is simplified by transforming the correlated variables to a set of uncorrelated variables through an eigenvalue orthogonalization. Furthermore, it is shown that a reduced set of the uncorrelated variables is sufficient for the second-moment analysis. Based on the linear formulation of the PFEM, the method is then extended to transient analysis in nonlinear continua. The accuracy and efficiency of the method are demonstrated by application to a one-dimensional, elastic/plastic wave propagation problem. The moments calculated compare favorably with those obtained by Monte Carlo simulation. Also, the procedure is amenable to implementation in deterministic FEM-based computer programs.

  5. Principal component analysis and linear discriminant analysis of multi-spectral autofluorescence imaging data for differentiating basal cell carcinoma and healthy skin

    NASA Astrophysics Data System (ADS)

    Chernomyrdin, Nikita V.; Zaytsev, Kirill I.; Lesnichaya, Anastasiya D.; Kudrin, Konstantin G.; Cherkasova, Olga P.; Kurlov, Vladimir N.; Shikunova, Irina A.; Perchik, Alexei V.; Yurchenko, Stanislav O.; Reshetov, Igor V.

    2016-09-01

    In the present paper, the ability to differentiate basal cell carcinoma (BCC) and healthy skin by combining multi-spectral autofluorescence imaging, principal component analysis (PCA), and linear discriminant analysis (LDA) has been demonstrated. For this purpose, an experimental setup, which includes excitation and detection branches, has been assembled. The excitation branch utilizes a mercury arc lamp equipped with a 365-nm narrow-linewidth excitation filter, a beam homogenizer, and a mechanical chopper. The detection branch employs a set of bandpass filters with central wavelengths of λ = 400, 450, 500, and 550 nm, and a digital camera. The setup has been used to study three samples of freshly excised BCC. PCA and LDA have been implemented to analyze the multi-spectral fluorescence imaging data. The results of this pilot study highlight the advantages of the proposed imaging technique for skin cancer diagnosis.
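
    A minimal sketch of the LDA step on synthetic 4-band intensities; the class means, spreads, and band values are invented stand-ins for the multi-spectral autofluorescence data:

```python
import numpy as np

rng = np.random.default_rng(7)
# hypothetical per-pixel intensities in 4 spectral bands for two classes
healthy = rng.normal([1.0, 0.8, 0.6, 0.4], 0.1, size=(50, 4))
bcc     = rng.normal([0.6, 1.0, 0.4, 0.7], 0.1, size=(50, 4))

# Fisher linear discriminant: w proportional to Sw^-1 (m1 - m2)
m1, m2 = healthy.mean(0), bcc.mean(0)
Sw = np.cov(healthy.T) + np.cov(bcc.T)      # within-class scatter
w = np.linalg.solve(Sw, m1 - m2)

# classify by thresholding the 1-D projection midway between class means
thr = 0.5 * ((healthy @ w).mean() + (bcc @ w).mean())
correct = ((healthy @ w) > thr).sum() + ((bcc @ w) <= thr).sum()
acc = correct / 100
assert acc > 0.95
```

    A PCA step would normally precede this to reduce band correlations before the discriminant is fitted, mirroring the paper's PCA-then-LDA pipeline.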

  6. Temporal Stability of GPS Transmitter Group Delay Variations.

    PubMed

    Beer, Susanne; Wanninger, Lambert

    2018-05-29

    The code observable of global navigation satellite systems (GNSS) is influenced by group delay variations (GDV) of transmitter and receiver antennas. For the Global Positioning System (GPS), the variations can sum up to 1 m in the ionosphere-free linear combination and thus can significantly affect precise code applications. The contribution of the GPS transmitters can amount to 0.8 m peak-to-peak over the entire nadir angle range. To verify the assumption of their time-invariance, we determined daily individual satellite GDV for GPS transmitter antennas over a period of more than two years. Dual-frequency observations of globally distributed reference stations and their multipath combination form the basis for our analysis. The resulting GPS GDV are stable on the level of a few centimeters for C1, P2, and for the ionosphere-free linear combination. Our study reveals that the inconsistencies of the GDV of space vehicle number (SVN) 55 with respect to earlier studies are not caused by temporal instabilities, but are rather related to receiver properties.
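
    The ionosphere-free linear combination mentioned above removes the first-order ionospheric delay, which scales as 1/f^2, at the cost of amplifying instrumental effects such as GDV. A sketch with illustrative numbers (the range and delay values are made up; only the L1/L2 frequencies are real GPS constants):

```python
f1, f2 = 1575.42e6, 1227.60e6          # GPS L1 / L2 frequencies (Hz)
a1 = f1**2 / (f1**2 - f2**2)           # combination coefficients
a2 = -f2**2 / (f1**2 - f2**2)

rho = 20200e3                          # illustrative geometric range (m)
iono = 5.0                             # illustrative L1 ionospheric delay (m)
p1 = rho + iono                        # code observables: delay ~ 1/f^2
p2 = rho + iono * (f1 / f2) ** 2

p_if = a1 * p1 + a2 * p2               # ionosphere-free combination
assert abs(p_if - rho) < 1e-6          # ionospheric term cancels exactly
```

    Note that a1 is about 2.55 and a2 about -1.55, which is why centimeter-level GDV on each frequency can sum to much larger errors in the combination.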

  7. HYDRORECESSION: A toolbox for streamflow recession analysis

    NASA Astrophysics Data System (ADS)

    Arciniega, S.

    2015-12-01

    Streamflow recession curves are hydrological signatures that allow studying the relationship between groundwater storage and baseflow and/or low flows at the catchment scale. Recent studies have shown that streamflow recession analysis can be quite sensitive to the combination of different models, extraction techniques, and parameter estimation methods. In order to better characterize streamflow recession curves, new methodologies combining multiple approaches have been recommended. The HYDRORECESSION toolbox, presented here, is a Matlab graphical user interface developed to analyse streamflow recession time series, with tools for parameterizing linear and nonlinear storage-outflow relationships through four of the most useful recession models (Maillet, Boussinesq, Coutagne, and Wittenberg). The toolbox includes four parameter-fitting techniques (linear regression, lower envelope, data binning, and mean squared error) and three different methods to extract hydrograph recession segments (Vogel, Brutsaert, and Aksoy). In addition, the toolbox has a module that separates the baseflow component from the observed hydrograph using the inverse reservoir algorithm. Potential applications provided by HYDRORECESSION include model parameter analysis, hydrological regionalization and classification, baseflow index estimates, and catchment-scale recharge and low-flow modelling, among others. HYDRORECESSION is freely available for non-commercial and academic purposes.
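
    One of the model/fit combinations named above, the Maillet (linear reservoir) model fitted by linear regression on log discharge, can be sketched as follows (synthetic recession data with an assumed storage constant; not the toolbox's code):

```python
import numpy as np

t = np.arange(0, 30.0)                    # days since recession start
k_true = 12.0                             # assumed storage constant (days)
rng = np.random.default_rng(6)
# Maillet recession Q(t) = Q0 * exp(-t/k), with multiplicative noise
q = 8.0 * np.exp(-t / k_true) * np.exp(rng.normal(0, 0.02, t.size))

# linear reservoir implies ln Q = ln Q0 - t/k: fit by linear regression
slope, intercept = np.polyfit(t, np.log(q), 1)
k_fit = -1.0 / slope
assert abs(k_fit - k_true) < 1.0          # storage constant recovered
```

    The nonlinear models (Boussinesq, Coutagne, Wittenberg) replace the exponential with power-law storage-outflow forms and need nonlinear or envelope fitting instead.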

  8. Development of automated extraction method of biliary tract from abdominal CT volumes based on local intensity structure analysis

    NASA Astrophysics Data System (ADS)

    Koga, Kusuto; Hayashi, Yuichiro; Hirose, Tomoaki; Oda, Masahiro; Kitasaka, Takayuki; Igami, Tsuyoshi; Nagino, Masato; Mori, Kensaku

    2014-03-01

    In this paper, we propose an automated biliary tract extraction method for abdominal CT volumes. The biliary tract is the path by which bile is transported from the liver to the duodenum. No extraction method has been reported for the automated extraction of the biliary tract from common contrast CT volumes. Our method consists of three steps: (1) extraction of extrahepatic bile duct (EHBD) candidate regions, (2) extraction of intrahepatic bile duct (IHBD) candidate regions, and (3) combination of these candidate regions. The IHBD has linear structures, and intensities of the IHBD are low in CT volumes. We use a dark linear structure enhancement (DLSE) filter based on a local intensity structure analysis method using the eigenvalues of the Hessian matrix for the IHBD candidate region extraction. The EHBD region is extracted using a thresholding process and a connected component analysis. In the combination process, we connect the IHBD candidate regions to each EHBD candidate region and select a bile duct region from the connected candidate regions. We applied the proposed method to 22 CT volumes. The average Dice coefficient of the extraction results was 66.7%.

  9. Effect of preventive zinc supplementation on linear growth in children under 5 years of age in developing countries: a meta-analysis of studies for input to the lives saved tool

    PubMed Central

    2011-01-01

    Introduction Zinc plays an important role in cellular growth, cellular differentiation and metabolism. The results of previous meta-analyses evaluating effect of zinc supplementation on linear growth are inconsistent. We have updated and evaluated the available evidence according to Grading of Recommendations, Assessment, Development and Evaluation (GRADE) criteria and tried to explain the difference in results of the previous reviews. Methods A literature search was done on PubMed, Cochrane Library, IZiNCG database and WHO regional data bases using different terms for zinc and linear growth (height). Data were abstracted in a standardized form. Data were analyzed in two ways i.e. weighted mean difference (effect size) and pooled mean difference for absolute increment in length in centimeters. Random effect models were used for these pooled estimates. We have given our recommendations for effectiveness of zinc supplementation in the form of absolute increment in length (cm) in zinc supplemented group compared to control for input to Live Saves Tool (LiST). Results There were thirty six studies assessing the effect of zinc supplementation on linear growth in children < 5 years from developing countries. In eleven of these studies, zinc was given in combination with other micronutrients (iron, vitamin A, etc). The final effect size after pooling all the data sets (zinc ± iron etc) showed a significant positive effect of zinc supplementation on linear growth [Effect size: 0.13 (95% CI 0.04, 0.21), random model] in the developing countries. A subgroup analysis by excluding those data sets where zinc was supplemented in combination with iron showed a more pronounced effect of zinc supplementation on linear growth [Weighed mean difference 0.19 (95 % CI 0.08, 0.30), random model]. 
A subgroup analysis from studies that reported actual increase in length (cm) showed that a dose of 10 mg zinc/day for a duration of 24 weeks led to a net gain of 0.37 (±0.25) cm in the zinc-supplemented group compared to placebo. This estimate is recommended for inclusion in the Lives Saved Tool (LiST) model. Conclusions Zinc supplementation has a significant positive effect on linear growth, especially when administered alone, and should be included in national strategies to reduce stunting in children < 5 years of age in developing countries. PMID:21501440
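The random-effects pooling described above can be sketched with the standard DerSimonian-Laird procedure: estimate the between-study variance from Cochran's Q, then recompute the weighted mean. The effect sizes and variances below are hypothetical illustrations, not the review's data.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes under a random-effects model
    (DerSimonian-Laird estimate of between-study variance tau^2)."""
    w = [1.0 / v for v in variances]                     # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical standardized mean differences and variances from 5 trials
effects = [0.10, 0.25, 0.05, 0.30, 0.15]
variances = [0.01, 0.02, 0.015, 0.025, 0.01]
est, ci = dersimonian_laird(effects, variances)
```

When heterogeneity is low (Q below its degrees of freedom), tau² collapses to zero and the estimate reduces to the fixed-effect mean.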

  10. Method for factor analysis of GC/MS data

    DOEpatents

    Van Benthem, Mark H; Kotula, Paul G; Keenan, Michael R

    2012-09-11

    The method of the present invention provides a fast, robust, and automated multivariate statistical analysis of gas chromatography/mass spectroscopy (GC/MS) data sets. The method can involve systematic elimination of undesired, saturated peak masses to yield data that follow a linear, additive model. The cleaned data can then be subjected to a combination of PCA and orthogonal factor rotation followed by refinement with MCR-ALS to yield highly interpretable results.

  11. Novel hybrid linear stochastic with non-linear extreme learning machine methods for forecasting monthly rainfall in a tropical climate.

    PubMed

    Zeynoddin, Mohammad; Bonakdari, Hossein; Azari, Arash; Ebtehaj, Isa; Gharabaghi, Bahram; Riahi Madavar, Hossein

    2018-09-15

    A novel hybrid approach is presented that can more accurately predict monthly rainfall in a tropical climate by integrating a linear stochastic model with a powerful non-linear extreme learning machine method. This new hybrid method was evaluated under four general scenarios. In the first scenario, the modeling process is initiated without preprocessing the input data, as a base case. In the other three scenarios, one-step and two-step procedures are utilized to make the model predictions more precise. These scenarios are based on combinations of stationarization techniques (i.e., differencing, seasonal and non-seasonal standardization, and spectral analysis) and normality transforms (i.e., Box-Cox, John and Draper, Yeo and Johnson, Johnson, Box-Cox-Mod, log, log standard, and Manly). In scenario 2, a one-step scenario, the stationarization methods are employed as preprocessing approaches. In scenarios 3 and 4, different combinations of normality transforms and stationarization methods are considered as preprocessing techniques. In total, 61 sub-scenarios are evaluated, resulting in 11,013 models (10,785 linear, 4 nonlinear, and 224 hybrid). The uncertainty of the linear, nonlinear and hybrid models is examined by the Monte Carlo technique. The best preprocessing technique is the sequential application of the Johnson normality transform and seasonal standardization (R2 = 0.99; RMSE = 0.6; MAE = 0.38; RMSRE = 0.1; MARE = 0.06; UI = 0.03; UII = 0.05). The results of the uncertainty analysis indicated the good performance of the proposed technique (d-factor = 0.27; 95PPU = 83.57). Moreover, the results of the proposed methodology were compared with an evolutionary hybrid of the adaptive neuro-fuzzy inference system with the firefly algorithm (ANFIS-FFA), demonstrating that the new hybrid methods outperformed the ANFIS-FFA method. Copyright © 2018 Elsevier Ltd. All rights reserved.
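Seasonal standardization, one of the stationarization techniques named above, can be sketched in a few lines: standardize each calendar month of the series to zero mean and unit variance. The monthly series below is synthetic, not the study's rainfall data.

```python
import statistics

def seasonal_standardize(series, period=12):
    """Stationarize a monthly series by standardizing each calendar
    month (position mod `period`) to zero mean and unit variance."""
    groups = {k: [x for i, x in enumerate(series) if i % period == k]
              for k in range(period)}
    mean = {k: statistics.mean(v) for k, v in groups.items()}
    # Guard against constant months (stdev 0) with a unit divisor
    std = {k: (statistics.stdev(v) if len(v) > 1 else 1.0) or 1.0
           for k, v in groups.items()}
    return [(x - mean[i % period]) / std[i % period]
            for i, x in enumerate(series)]

# Synthetic 4-year monthly series: a seasonal cycle plus a slow trend
series = [(i % 12) * 2.0 + 0.01 * i for i in range(48)]
z = seasonal_standardize(series)
```

After the transform each month's values average to zero, which removes the deterministic seasonal cycle before a linear stochastic model is fitted.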

  12. Sparse principal component analysis in medical shape modeling

    NASA Astrophysics Data System (ADS)

    Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus

    2006-03-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
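The simple-thresholding baseline that the article compares SPCA against can be sketched as: extract the leading principal component, zero loadings below a cutoff, and renormalize. The covariance matrix and threshold below are illustrative, not from the paper's shape data sets.

```python
def first_pc(cov, iters=200):
    """Leading eigenvector of a covariance matrix via power iteration."""
    n = len(cov)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def sparsify(loadings, threshold=0.2):
    """Simple-thresholding sparse PCA: zero small loadings, renormalize.
    Assumes at least one loading survives the threshold."""
    kept = [x if abs(x) >= threshold else 0.0 for x in loadings]
    norm = sum(x * x for x in kept) ** 0.5 or 1.0
    return [x / norm for x in kept]

# Toy covariance: one dominant variable with weak cross-correlations
cov = [[4.0, 0.4, 0.1],
       [0.4, 1.0, 0.1],
       [0.1, 0.1, 0.5]]
sv = sparsify(first_pc(cov))
```

The resulting mode loads on a subset of the original variables, which is exactly the interpretability gain the abstract describes.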

  13. Stationary-phase optimized selectivity liquid chromatography: development of a linear gradient prediction algorithm.

    PubMed

    De Beer, Maarten; Lynen, Fréderic; Chen, Kai; Ferguson, Paul; Hanna-Brown, Melissa; Sandra, Pat

    2010-03-01

    Stationary-phase optimized selectivity liquid chromatography (SOS-LC) is a tool in reversed-phase LC (RP-LC) to optimize the selectivity for a given separation by combining stationary phases in a multisegment column. The presently (commercially) available SOS-LC optimization procedure and algorithm are only applicable to isocratic analyses. Step gradient SOS-LC has been developed, but this is still not very elegant for the analysis of complex mixtures composed of components covering a broad hydrophobicity range. A linear gradient prediction algorithm has been developed allowing one to apply SOS-LC as a generic RP-LC optimization method. The algorithm allows operation in isocratic, stepwise, and linear gradient run modes. The features of SOS-LC in the linear gradient mode are demonstrated by means of a mixture of 13 steroids, whereby baseline separation is predicted and experimentally demonstrated.

  14. Exploration for fractured petroleum reservoirs using radar/Landsat merge combinations

    NASA Technical Reports Server (NTRS)

    Macdonald, H.; Waite, W.; Borengasser, M.; Tolman, D.; Elachi, C.

    1981-01-01

    Since fractures are commonly propagated upward and reflected at the earth's surface as subtle linears, detection of these surface features is extremely important in many phases of petroleum exploration and development. To document the usefulness of microwave analysis for petroleum exploration, the Arkansas part of the Arkoma basin is selected as a prime test site. The research plan involves comparing the aircraft microwave imagery and Landsat imagery in an area where significant subsurface borehole geophysical data are available. In the northern Arkoma basin, a positive correlation between the number of linears in a given area and production from cherty carbonate strata is found. In the southern part of the basin, little relationship is discernible between surface structure and gas production, and no correlation is found between gas productivity and linear proximity or linear density as determined from remote sensor data.

  15. Combination of dynamic Bayesian network classifiers for the recognition of degraded characters

    NASA Astrophysics Data System (ADS)

    Likforman-Sulem, Laurence; Sigelle, Marc

    2009-01-01

    We investigate in this paper the combination of DBN (Dynamic Bayesian Network) classifiers, either independent or coupled, for the recognition of degraded characters. The independent classifiers are a vertical HMM and a horizontal HMM whose observable outputs are the image columns and the image rows respectively. The coupled classifiers, presented in a previous study, associate the vertical and horizontal observation streams into single DBNs. The scores of the independent and coupled classifiers are then combined linearly at the decision level. We compare the different classifiers (independent, coupled, or linearly combined) on two tasks: the recognition of artificially degraded handwritten digits and the recognition of real degraded old printed characters. Our results show that coupled DBNs perform better on degraded characters than the linear combination of independent HMM scores. Our results also show that the best classifier is obtained by linearly combining the scores of the best coupled DBN and the best independent HMM.
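Decision-level linear fusion of classifier scores, as used above, reduces to a weighted sum per class followed by an arg-max. The per-class scores and weights below are hypothetical, not the paper's DBN/HMM outputs.

```python
def combine_scores(score_lists, weights):
    """Decision-level fusion: weighted sum of per-class scores from
    several classifiers; predict the arg-max class."""
    n_classes = len(score_lists[0])
    fused = [sum(w * s[c] for w, s in zip(weights, score_lists))
             for c in range(n_classes)]
    return fused.index(max(fused)), fused

# Hypothetical per-class scores from two classifiers over 3 classes
scores_vertical = [0.2, 0.5, 0.3]
scores_horizontal = [0.1, 0.6, 0.3]
label, fused = combine_scores([scores_vertical, scores_horizontal], [0.5, 0.5])
```

In practice the fusion weights themselves can be tuned on a validation set, which is how such linear combinations are usually calibrated.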

  16. Machine-learning-based calving prediction from activity, lying, and ruminating behaviors in dairy cattle.

    PubMed

    Borchers, M R; Chang, Y M; Proudfoot, K L; Wadsworth, B A; Stone, A E; Bewley, J M

    2017-07-01

    The objective of this study was to use automated activity, lying, and rumination monitors to characterize prepartum behavior and predict calving in dairy cattle. Data were collected from 20 primiparous and 33 multiparous Holstein dairy cattle from September 2011 to May 2013 at the University of Kentucky Coldstream Dairy. The HR Tag (SCR Engineers Ltd., Netanya, Israel) automatically collected neck activity and rumination data in 2-h increments. The IceQube (IceRobotics Ltd., South Queensferry, United Kingdom) automatically collected number of steps, lying time, standing time, number of transitions from standing to lying (lying bouts), and total motion, summed in 15-min increments. IceQube data were summed in 2-h increments to match HR Tag data. All behavioral data were collected for 14 d before the predicted calving date. Retrospective data analysis was performed using mixed linear models to examine behavioral changes by day in the 14 d before calving. Bihourly behavioral differences from baseline values over the 14 d before calving were also evaluated using mixed linear models. Changes in daily rumination time, total motion, lying time, and lying bouts occurred in the 14 d before calving. In the bihourly analysis, extreme values for all behaviors occurred in the final 24 h, indicating that the monitored behaviors may be useful in calving prediction. To determine whether technologies were useful at predicting calving, random forest, linear discriminant analysis, and neural network machine-learning techniques were constructed and implemented using R version 3.1.0 (R Foundation for Statistical Computing, Vienna, Austria). These methods were used on variables from each technology and all combined variables from both technologies. A neural network analysis that combined variables from both technologies at the daily level yielded 100.0% sensitivity and 86.8% specificity. 
A neural network analysis that combined variables from both technologies in bihourly increments was used to identify 2-h periods in the 8 h before calving with 82.8% sensitivity and 80.4% specificity. Changes in behavior and machine-learning alerts indicate that commercially marketed behavioral monitors may have calving prediction potential. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
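Sensitivity and specificity, the metrics reported above, come straight from a binary confusion matrix. The label vectors below are toy data, not the study's calving records.

```python
def sensitivity_specificity(actual, predicted):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP) for 0/1 labels."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: 1 = calving within the window, 0 = no calving
actual    = [1, 1, 1, 0, 0, 0, 0, 1]
predicted = [1, 1, 0, 0, 0, 1, 0, 1]
sens, spec = sensitivity_specificity(actual, predicted)
```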

  17. Comparison of two weighted integration models for the cueing task: linear and likelihood

    NASA Technical Reports Server (NTRS)

    Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.

    2003-01-01

    In a task in which the observer must detect a signal at two locations, presenting a precue that predicts the location of a signal leads to improved performance with a valid cue (signal location matches the cue), compared to an invalid cue (signal location does not match the cue). The cue validity effect has often been explained with a limited capacity attentional mechanism improving the perceptual quality at the cued location. Alternatively, the cueing effect can also be explained by unlimited capacity models that assume a weighted combination of noisy responses across the two locations. We compare two weighted integration models, a linear model and a sum of weighted likelihoods model based on a Bayesian observer. While qualitatively these models are similar, quantitatively they predict different cue validity effects as the signal-to-noise ratios (SNR) increase. To test these models, 3 observers performed a cued discrimination task of Gaussian targets with an 80% valid precue across a broad range of SNRs. Analysis of a limited capacity attentional switching model was also included and rejected. The sum of weighted likelihoods model best described the psychophysical results, suggesting that human observers approximate a weighted combination of likelihoods, and not a weighted linear combination.
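A minimal sketch of the two decision rules compared above, assuming unit-variance Gaussian responses at the two locations: a weighted linear sum versus a sum of weighted Gaussian likelihood ratios. The cue weight w and signal strength d are illustrative values, not the paper's fitted parameters.

```python
import math

def linear_decision(x_cued, x_uncued, w=0.8):
    """Weighted linear combination of the noisy responses at two locations."""
    return w * x_cued + (1 - w) * x_uncued

def likelihood_decision(x_cued, x_uncued, d=1.0, w=0.8):
    """Sum of weighted likelihood ratios (Bayesian-observer style): each
    location contributes exp(d*x - d^2/2), the Gaussian likelihood ratio
    for a signal of strength d, weighted by the cue validity."""
    lr = lambda x: math.exp(d * x - d * d / 2.0)
    return w * lr(x_cued) + (1 - w) * lr(x_uncued)
```

Because the likelihood ratio is exponential in the response, the two rules diverge as d (the SNR) grows, which is why the models are distinguishable at high SNR.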

  18. Landsat test of diffuse reflectance models for aquatic suspended solids measurement

    NASA Technical Reports Server (NTRS)

    Munday, J. C., Jr.; Alfoldi, T. T.

    1979-01-01

    Landsat radiance data were used to test mathematical models relating diffuse reflectance to aquatic suspended solids concentration. Digital CCT data for Landsat passes over the Bay of Fundy, Nova Scotia, were analyzed on a General Electric Co. Image 100 multispectral analysis system. Three data sets were studied separately and together in all combinations, with and without solar angle correction. Statistical analysis and chromaticity analysis show that a nonlinear relationship between Landsat radiance and suspended solids concentration is better at curve-fitting than a linear relationship. In particular, the quasi-single-scattering diffuse reflectance model developed by Gordon and coworkers is corroborated. The Gordon model applied to 33 points of MSS 5 data combined from three dates produced r = 0.98.

  19. Development of parallel line analysis criteria for recombinant adenovirus potency assay and definition of a unit of potency.

    PubMed

    Ogawa, Yasushi; Fawaz, Farah; Reyes, Candice; Lai, Julie; Pungor, Erno

    2007-01-01

    Parameter settings of a parallel line analysis procedure were defined by applying statistical analysis procedures to the absorbance data from a cell-based potency bioassay for a recombinant adenovirus, Adenovirus 5 Fibroblast Growth Factor-4 (Ad5FGF-4). The parallel line analysis was performed with commercially available software, PLA 1.2. The software performs a Dixon outlier test on replicates of the absorbance data, performs linear regression analysis to define the linear region of the absorbance data, and tests parallelism between the linear regions of standard and sample. The width of the Fiducial limit, expressed as a percent of the measured potency, was developed as a criterion for rejection of assay data and significantly improved the reliability of the assay results. With the linear range-finding criteria of the software set to a minimum of 5 consecutive dilutions and best statistical outcome, and in combination with the Fiducial limit width acceptance criterion of <135%, 13% of the assay results were rejected. With these criteria applied, the assay was found to be linear over the range of 0.25 to 4 relative potency units, defined as the potency of the sample normalized to the potency of the Ad5FGF-4 standard containing 6 × 10^6 adenovirus particles/mL. The overall precision of the assay was estimated to be 52%. Without the application of the Fiducial limit width criterion, the assay results were not linear over the range, and an overall precision of 76% was calculated from the data. An absolute unit of potency for the assay was defined by using the parallel line analysis procedure as the amount of Ad5FGF-4 that results in an absorbance value that is 121% of the average absorbance readings of the wells containing cells not infected with the adenovirus.
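Parallel line analysis itself can be sketched as fitting a common slope to the standard and sample log-dose response lines; the horizontal offset between the two parallel lines gives the log relative potency, i.e. RP = 10^((a_sample - a_standard)/b). The data below are synthetic (a sample exactly twice as potent as the standard), not Ad5FGF-4 results.

```python
import math
import statistics

def relative_potency(log_dose, y_std, y_sample):
    """Parallel-line analysis: fit a pooled common slope to the standard
    and sample log10(dose)-response lines; the horizontal offset between
    the parallel lines gives the log relative potency."""
    def centered(xs, ys):
        xm, ym = statistics.mean(xs), statistics.mean(ys)
        sxy = sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
        sxx = sum((x - xm) ** 2 for x in xs)
        return xm, ym, sxy, sxx
    xm1, ym1, sxy1, sxx1 = centered(log_dose, y_std)
    xm2, ym2, sxy2, sxx2 = centered(log_dose, y_sample)
    b = (sxy1 + sxy2) / (sxx1 + sxx2)        # pooled common slope
    a1, a2 = ym1 - b * xm1, ym2 - b * xm2    # per-line intercepts
    return 10 ** ((a2 - a1) / b)             # relative potency

# Synthetic dose-response data: the sample is exactly 2x as potent
log_dose = [math.log10(d) for d in (1, 2, 4, 8)]
y_std = [1 + 2 * x for x in log_dose]
y_sample = [1 + 2 * (x + math.log10(2)) for x in log_dose]
rp = relative_potency(log_dose, y_std, y_sample)
```

A real implementation (as in PLA) additionally tests whether the two slopes are statistically parallel before reporting a potency.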

  20. Clustering performance comparison using K-means and expectation maximization algorithms.

    PubMed

    Jung, Yong Gyu; Kang, Min Soo; Heo, Jun

    2014-11-14

    Clustering is an important means of data mining based on separating data categories by similar features. Unlike the classification algorithm, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, the logistic regression analysis is applied to EM clusters and the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.
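A minimal one-dimensional K-means, one of the two clustering algorithms compared above (the data are toy values, not the red-wine set):

```python
def kmeans(points, k, iters=50):
    """Plain K-means on 1-D data: assign each point to its nearest
    centroid, then move each centroid to its cluster mean."""
    centroids = points[:k]                      # naive initialization
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[j].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans([1.0, 1.1, 0.9, 10.0, 10.2, 9.8], 2)
```

EM generalizes this hard assignment to soft, probability-weighted assignments under a Gaussian mixture, which is the key difference the paper exploits.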

  1. Multi-flexible-body analysis for application to wind turbine control design

    NASA Astrophysics Data System (ADS)

    Lee, Donghoon

    The objective of the present research is to build a theoretical and computational framework for the aeroelastic analysis of flexible rotating systems, with special application to wind turbine control design. The methodology is based on the integration of Kane's approach for the analysis of the multi-rigid-body subsystem and a mixed finite element method for the analysis of the flexible-body subsystem. The combined analysis is then strongly coupled with an aerodynamic model based on Blade Element Momentum theory for the inflow model. The unified framework from the analysis of the subsystems is represented, in symbolic form, as a set of nonlinear ordinary differential equations with time-variant, periodic coefficients, which describe the aeroelastic behavior of the whole system. The framework can be directly applied to control design due to its symbolic characteristics. Solution procedures for the equations are presented for the study of nonlinear simulation, the periodic steady-state solution, and the Floquet stability of the system linearized about the steady-state solution. Finally the linear periodic system equation can be obtained with both system and control matrices as explicit functions of time, which is directly applicable to control design. The structural model is validated by comparison of its results with those from other software, some of which is commercial. The stability of the system linearized about the periodic steady-state solution differs from that obtained about a constant steady-state solution, which has been conventional in the field of wind turbine dynamics. Parametric studies are performed on a wind turbine model with various pitch angles, precone angles, and rotor speeds. Combined with composite material, their effects on wind turbine aeroelastic stability are investigated. 
Finally it is suggested that the aeroelastic stability analysis and control design for the whole system is crucial for the design of wind turbines, and the present research breaks new ground in the ability to treat the issue.

  2. Stability and time-domain analysis of the dispersive tristability in microresonators under modal coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumeige, Yannick; Feron, Patrice

    Coupled nonlinear resonators have potential applications for the integration of multistable photonic devices. The dynamic properties of two coupled-mode nonlinear microcavities made of Kerr material are studied by linear stability analysis. Using a suitable combination of the modal coupling rate and the frequency detuning, it is possible to obtain configurations where a hysteresis loop is included inside other bistable cycles. We show that a single resonator with two modes both linearly and nonlinearly coupled via the cross-Kerr effect can have a multistable behavior. This could be implemented in semiconductor nonlinear whispering-gallery-mode microresonators under modal coupling for all-optical signal processing or ternary optical logic applications.

  3. Financial Distress Prediction using Linear Discriminant Analysis and Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Santoso, Noviyanti; Wibowo, Wahyu

    2018-03-01

    Financial distress is an early stage before bankruptcy. Bankruptcies caused by financial distress can be seen in the financial statements of the company. The ability to predict financial distress has become an important research topic because it can provide early warning for the company. In addition, predicting financial distress is also beneficial for investors and creditors. This research builds prediction models of financial distress for industrial companies in Indonesia by comparing the performance of Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM) combined with a variable selection technique. The results show that the prediction model based on hybrid Stepwise-SVM obtains a better balance among fitting ability, generalization ability and model stability than the other models.

  4. An analysis of a nonlinear instability in the implementation of a VTOL control system

    NASA Technical Reports Server (NTRS)

    Weber, J. M.

    1982-01-01

    The contributions to nonlinear behavior and unstable response of the model following yaw control system of a VTOL aircraft during hover were determined. The system was designed as a state rate feedback implicit model follower that provided yaw rate command/heading hold capability and used combined full authority parallel and limited authority series servo actuators to generate an input to the yaw reaction control system of the aircraft. Both linear and nonlinear system models, as well as describing function linearization techniques were used to determine the influence on the control system instability of input magnitude and bandwidth, series servo authority, and system bandwidth. Results of the analysis describe stability boundaries as a function of these system design characteristics.

  5. Dispersive optical solitons and modulation instability analysis of Schrödinger-Hirota equation with spatio-temporal dispersion and Kerr law nonlinearity

    NASA Astrophysics Data System (ADS)

    Inc, Mustafa; Aliyu, Aliyu Isa; Yusuf, Abdullahi; Baleanu, Dumitru

    2018-01-01

    This paper obtains the dark, bright, dark-bright or combined optical and singular solitons to the perturbed nonlinear Schrödinger-Hirota equation (SHE) with spatio-temporal dispersion (STD) and Kerr law nonlinearity in optical fibers. The integration algorithm is the Sine-Gordon equation method (SGEM). Furthermore, the modulation instability (MI) analysis of the equation is studied based on the standard linear-stability analysis, and the MI gain spectrum is obtained.

  6. A method for the analysis of nonlinearities in aircraft dynamic response to atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Sidwell, K.

    1976-01-01

    An analytical method is developed which combines the equivalent linearization technique for the analysis of the response of nonlinear dynamic systems with the amplitude modulated random process (Press model) for atmospheric turbulence. The method is initially applied to a bilinear spring system. The analysis of the response shows good agreement with exact results obtained by the Fokker-Planck equation. The method is then applied to an example of control-surface displacement limiting in an aircraft with a pitch-hold autopilot.

  7. Spectral analysis and multigrid preconditioners for two-dimensional space-fractional diffusion equations

    NASA Astrophysics Data System (ADS)

    Moghaderi, Hamid; Dehghan, Mehdi; Donatelli, Marco; Mazza, Mariarosa

    2017-12-01

    Fractional diffusion equations (FDEs) are a mathematical tool used for describing some special diffusion phenomena arising in many different applications like porous media and computational finance. In this paper, we focus on a two-dimensional space-FDE problem discretized by means of a second order finite difference scheme obtained as combination of the Crank-Nicolson scheme and the so-called weighted and shifted Grünwald formula. By fully exploiting the Toeplitz-like structure of the resulting linear system, we provide a detailed spectral analysis of the coefficient matrix at each time step, both in the case of constant and variable diffusion coefficients. Such a spectral analysis has a very crucial role, since it can be used for designing fast and robust iterative solvers. In particular, we employ the obtained spectral information to define a Galerkin multigrid method based on the classical linear interpolation as grid transfer operator and damped-Jacobi as smoother, and to prove the linear convergence rate of the corresponding two-grid method. The theoretical analysis suggests that the proposed grid transfer operator is strong enough for working also with the V-cycle method and the geometric multigrid. On this basis, we introduce two computationally favourable variants of the proposed multigrid method and we use them as preconditioners for Krylov methods. Several numerical results confirm that the resulting preconditioning strategies still keep a linear convergence rate.

  8. EEG and MEG data analysis in SPM8.

    PubMed

    Litvak, Vladimir; Mattout, Jérémie; Kiebel, Stefan; Phillips, Christophe; Henson, Richard; Kilner, James; Barnes, Gareth; Oostenveld, Robert; Daunizeau, Jean; Flandin, Guillaume; Penny, Will; Friston, Karl

    2011-01-01

    SPM is free and open source software written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp-maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis for which there are several variants dealing with evoked responses, steady state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox, making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and build custom analysis tools using powerful graphical user interface (GUI) and batching tools.

  9. EEG and MEG Data Analysis in SPM8

    PubMed Central

    Litvak, Vladimir; Mattout, Jérémie; Kiebel, Stefan; Phillips, Christophe; Henson, Richard; Kilner, James; Barnes, Gareth; Oostenveld, Robert; Daunizeau, Jean; Flandin, Guillaume; Penny, Will; Friston, Karl

    2011-01-01

    SPM is free and open source software written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp-maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis for which there are several variants dealing with evoked responses, steady state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox, making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and build custom analysis tools using powerful graphical user interface (GUI) and batching tools. PMID:21437221

  10. Combined chemometric analysis of (1)H NMR, (13)C NMR and stable isotope data to differentiate organic and conventional milk.

    PubMed

    Erich, Sarah; Schill, Sandra; Annweiler, Eva; Waiblinger, Hans-Ulrich; Kuballa, Thomas; Lachenmeier, Dirk W; Monakhova, Yulia B

    2015-12-01

    The increased sales of organically produced food create a strong need for analytical methods which could authenticate organic and conventional products. Combined chemometric analysis of (1)H NMR and (13)C NMR spectroscopy data, stable-isotope data (IRMS) and α-linolenic acid content (gas chromatography) was used to differentiate organic and conventional milk. In total, 85 raw, pasteurized and ultra-heat treated (UHT) milk samples (52 organic and 33 conventional) were collected between August 2013 and May 2014. The carbon isotope ratios of milk protein and milk fat as well as the α-linolenic acid content of these samples were determined. Additionally, the milk fat was analyzed by (1)H and (13)C NMR spectroscopy. The chemometric analysis of the combined data (IRMS, GC, NMR) resulted in more precise authentication of German raw and retail milk, with a considerably increased classification rate of 95% compared to 81% for NMR and 90% for IRMS using linear discriminant analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Differentiation of Organically and Conventionally Grown Tomatoes by Chemometric Analysis of Combined Data from Proton Nuclear Magnetic Resonance and Mid-infrared Spectroscopy and Stable Isotope Analysis.

    PubMed

    Hohmann, Monika; Monakhova, Yulia; Erich, Sarah; Christoph, Norbert; Wachter, Helmut; Holzgrabe, Ulrike

    2015-11-04

    Because the basic suitability of proton nuclear magnetic resonance spectroscopy ((1)H NMR) to differentiate organic versus conventional tomatoes was recently proven, the approach to optimize (1)H NMR classification models (comprising overall 205 authentic tomato samples) by including additional data of isotope ratio mass spectrometry (IRMS, δ(13)C, δ(15)N, and δ(18)O) and mid-infrared (MIR) spectroscopy was assessed. Both individual and combined analytical methods ((1)H NMR + MIR, (1)H NMR + IRMS, MIR + IRMS, and (1)H NMR + MIR + IRMS) were examined using principal component analysis (PCA), partial least squares discriminant analysis (PLS-DA), linear discriminant analysis (LDA), and common components and specific weight analysis (ComDim). With regard to classification abilities, fused data of (1)H NMR + MIR + IRMS yielded better validation results (ranging between 95.0 and 100.0%) than individual methods ((1)H NMR, 91.3-100%; MIR, 75.6-91.7%), suggesting that the combined examination of analytical profiles enhances authentication of organically produced tomatoes.

  12. Assessing the Utility of Compound Trait Estimates of Narrow Personality Traits.

    PubMed

    Credé, Marcus; Harms, Peter D; Blacksmith, Nikki; Wood, Dustin

    2016-01-01

    It has been argued that approximations of narrow traits can be made through linear combinations of broad traits such as the Big Five personality traits. Indeed, Hough and Ones (2001) used a qualitative analysis of scale content to arrive at a taxonomy of how Big Five traits might be combined to approximate various narrow traits. However, the utility of such compound trait approximations has yet to be established beyond specific cases such as integrity and customer service orientation. Using data from the Eugene-Springfield Community Sample (Goldberg, 2008), we explore the ability of linear composites of scores on Big Five traits to approximate scores on 127 narrow trait measures from 5 well-known non-Big-Five omnibus measures of personality. Our findings indicate that individuals' standing on more than 30 narrow traits can be well estimated from 3 different types of linear composites of scores on Big Five traits without a substantial sacrifice in criterion validity. We discuss theoretical accounts for why such relationships exist as well as the theoretical and practical implications of these findings for researchers and practitioners.
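The linear-composite idea above can be sketched as a fixed weighted sum of Big Five z-scores, evaluated by correlating the composite with the narrow-trait score. All weights and scores below are invented for illustration, not estimates from the Eugene-Springfield sample.

```python
import math

def composite(big_five, weights):
    """Approximate a narrow trait as a fixed linear composite of Big Five
    z-scores (in practice the weights come from a reference sample)."""
    return sum(w * z for w, z in zip(weights, big_five))

def pearson_r(xs, ys):
    """Pearson correlation between two score vectors."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical Big Five z-scores (O, C, E, A, N) for six people
people = [
    [0.5, 1.2, -0.3, 0.1, -0.8],
    [-1.0, 0.2, 0.9, -0.4, 0.3],
    [0.1, -0.7, 1.5, 0.6, -0.2],
    [1.3, 0.4, -1.1, 0.2, 0.9],
    [-0.6, -1.3, 0.2, 1.0, 0.5],
    [0.8, 0.9, 0.4, -0.9, -1.2],
]
weights = [0.1, 0.6, 0.3, 0.0, -0.4]   # invented composite weights
estimates = [composite(p, weights) for p in people]
# A synthetic "narrow trait": the composite plus small perturbations
narrow = [e + d for e, d in
          zip(estimates, [0.05, -0.04, 0.02, -0.03, 0.01, 0.04])]
r = pearson_r(estimates, narrow)
```

The correlation r plays the role of the criterion-validity check the article performs for each narrow trait.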

  13. A blood pressure monitor with robust noise reduction system under linear cuff inflation and deflation.

    PubMed

    Usuda, Takashi; Kobayashi, Naoki; Takeda, Sunao; Kotake, Yoshifumi

    2010-01-01

    We have developed a non-invasive blood pressure monitor that can measure blood pressure quickly and robustly. The monitor combines two measurement modes: linear inflation and linear deflation. The inflation mode achieves faster measurement through a rapid inflation rate; the deflation mode achieves robust noise reduction. When there is neither noise nor arrhythmia, the inflation mode provides a precise, quick, and comfortable measurement. If the inflation mode fails to calculate an appropriate blood pressure because of body movement or arrhythmia, the monitor switches automatically to the deflation mode and measures blood pressure using digital signal processing techniques such as wavelet analysis, filter banks, and filtering combined with FFT and inverse FFT. The inflation mode succeeded in 2440 of 3099 measurements (79%) in an operating room and a rehabilitation room. The newly designed blood pressure monitor provides the fastest measurement for patients with normal circulation and robust measurement for patients with body movement or severe arrhythmia. The fast measurement method also improves patient comfort.

  14. Establishing a Spinal Injury Criterion for Military Seats

    DTIC Science & Technology

    1997-01-01

    Table represents 54 trials (18 [Phase I] + 36 [Phase II]); "combined effects" of Delta V, Gpk, and ATD size. A General Linear Model (GLM) analysis...5th percentile male ATD would not have complied with the tolerance criterion under the higher impulse severity levels (i.e., 20 and 30 Gpk). Similarly, the

  15. Difference-Equation/Flow-Graph Circuit Analysis

    NASA Technical Reports Server (NTRS)

    Mcvey, I. M.

    1988-01-01

    Numerical technique enables rapid, approximate analyses of electronic circuits containing linear and nonlinear elements. Practiced in variety of computer languages on large and small computers; for circuits simple enough, programmable hand calculators used. Although some combinations of circuit elements make numerical solutions diverge, enables quick identification of divergence and correction of circuit models to make solutions converge.
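
    A minimal sketch of the difference-equation idea (not the record's own programs): the step response of an RC low-pass filter computed by forward differences, which converges only while the step satisfies dt < 2RC. Component values are illustrative:

```python
# Forward-difference (Euler) step response of an RC low-pass filter.
R, C = 1.0e3, 1.0e-6      # 1 kOhm, 1 uF -> time constant R*C = 1 ms
dt = 1.0e-5               # time step; the iteration diverges if dt > 2*R*C
vin = 5.0                 # step input, volts
v = 0.0                   # capacitor voltage, volts
trace = []
for _ in range(1000):     # simulate 10 ms (10 time constants)
    v += dt / (R * C) * (vin - v)   # difference form of dv/dt = (vin - v)/(R*C)
    trace.append(v)
```

After ten time constants the computed voltage has settled to the input value; choosing dt above the stability bound makes the same recurrence diverge, which is the quick divergence check the record describes.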

  16. Optimization of an electromagnetic linear actuator using a network and a finite element model

    NASA Astrophysics Data System (ADS)

    Neubert, Holger; Kamusella, Alfred; Lienig, Jens

    2011-03-01

    Model-based design optimization leads to robust solutions only if the statistical deviations of design, load and ambient parameters from their nominal values are considered. We describe an optimization methodology that treats these deviations as stochastic variables, for an exemplary electromagnetic actuator used to drive a Braille printer. A combined model simulates the dynamic behavior of the actuator and its non-linear load. It consists of a dynamic network model and a stationary magnetic finite element (FE) model. The network model utilizes lookup tables of the magnetic force and the flux linkage computed by the FE model. After a sensitivity analysis using design of experiments (DoE) methods and a nominal optimization based on gradient methods, a robust design optimization is performed. Selected design variables are included in the form of their density functions. In order to reduce the computational effort, we use response surfaces instead of the combined system model in all stochastic analysis steps, so that Monte Carlo simulations can be applied. As a result, we found an optimum system design meeting our requirements with regard to function and reliability.
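
    The response-surface shortcut can be sketched with a one-variable stand-in for the FE/network model (the function, tolerance and requirement threshold below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for an expensive FE/network simulation: actuator
# stroke as a quadratic function of a single design variable x.
def expensive_model(x):
    return 1.0 + 0.8 * x - 0.3 * x**2

# Step 1: a few deterministic runs on a DoE grid.
x_doe = np.linspace(-1.0, 1.0, 7)
y_doe = expensive_model(x_doe)

# Step 2: quadratic response surface fitted by least squares.
coeff = np.polyfit(x_doe, y_doe, 2)
surrogate = np.poly1d(coeff)

# Step 3: Monte Carlo on the cheap surrogate; the design variable scatters
# around its nominal value according to a normal density (tolerances).
x_mc = rng.normal(loc=0.2, scale=0.1, size=100_000)
y_mc = surrogate(x_mc)
yield_fraction = float(np.mean(y_mc > 1.0))   # share meeting a requirement
```

The hundred thousand surrogate evaluations cost almost nothing, which is exactly why the response surface replaces the combined system model in the stochastic steps.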

  17. Modal characteristics of a simplified brake rotor model using semi-analytical Rayleigh Ritz method

    NASA Astrophysics Data System (ADS)

    Zhang, F.; Cheng, L.; Yam, L. H.; Zhou, L. M.

    2006-10-01

    This paper focuses on the modal characteristics of a brake rotor used in automotive disc brake systems. The brake rotor is modeled as a combined structure comprising an annular plate connected to a segment of cylindrical shell by distributed artificial springs. Modal analysis shows the existence of three types of modes for the combined structure, depending on the involvement of each substructure. A decomposition technique is proposed, allowing each mode of the combined structure to be decomposed into a linear combination of the individual substructure modes. It is shown that the decomposition coefficients provide a direct and systematic means to carry out modal classification and quantification.

  18. Multiple-region directed functional connectivity based on phase delays.

    PubMed

    Goelman, Gadi; Dan, Rotem

    2017-03-01

    Network analysis is increasingly advancing the field of neuroimaging. Neural networks are generally constructed from pairwise interactions with an assumption of linear relations between them. Here, a high-order statistical framework to calculate directed functional connectivity among multiple regions, using wavelet analysis and spectral coherence, is presented. The mathematical expression for 4 regions was derived and used to characterize a quartet of regions as a linear, combined (nonlinear), or disconnected network. Phase delays between regions were used to obtain the network's temporal hierarchy and directionality. The validity of the mathematical derivation, along with the effects of coupling strength and noise on its outcomes, was studied by computer simulations of the Kuramoto model. The simulations demonstrated correct directionality for a large range of coupling strengths and low sensitivity to Gaussian noise compared with pairwise coherences. The analysis was applied to resting-state fMRI data of 40 healthy young subjects to characterize the ventral visual system, the motor system and the default mode network (DMN). It was shown that the ventral visual system was predominantly composed of linear networks, while the motor system and the DMN were composed of combined (nonlinear) networks. The ventral visual system exhibits its known temporal hierarchy, the motor system exhibits center ↔ out hierarchy, and the DMN has dorsal ↔ ventral and anterior ↔ posterior organizations. The analysis can be applied in different disciplines, such as seismology or economics, and to a variety of brain data including stimulus-driven fMRI, electrophysiology, EEG, and MEG, thus opening new horizons in brain research. Hum Brain Mapp 38:1374-1386, 2017. © 2016 Wiley Periodicals, Inc.
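
    The core phase-delay idea can be sketched for just two signals: the phase of the cross-spectrum at the oscillation frequency recovers the lag, and its sign gives the directionality (the frequency and delay below are arbitrary choices, not values from the study):

```python
import numpy as np

fs = 1000.0                       # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)   # 1 s record
f0 = 5.0                          # oscillation frequency (an exact DFT bin)
delay = 0.02                      # region B lags region A by 20 ms

a = np.sin(2 * np.pi * f0 * t)
b = np.sin(2 * np.pi * f0 * (t - delay))

# Phase of the cross-spectrum at f0: a positive angle means a leads b.
k = int(round(f0 * len(t) / fs))  # DFT bin index of f0
cross = np.fft.rfft(a)[k] * np.conj(np.fft.rfft(b)[k])
estimated_delay = np.angle(cross) / (2 * np.pi * f0)
```

Unambiguous recovery needs the phase difference to stay within ±π at f0; the multi-region framework of the record builds its temporal hierarchy from exactly such pairwise phase relations.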

  19. Power calculations for likelihood ratio tests for offspring genotype risks, maternal effects, and parent-of-origin (POO) effects in the presence of missing parental genotypes when unaffected siblings are available.

    PubMed

    Rampersaud, E; Morris, R W; Weinberg, C R; Speer, M C; Martin, E R

    2007-01-01

    Genotype-based likelihood-ratio tests (LRT) of association that examine maternal and parent-of-origin effects have been previously developed in the framework of log-linear and conditional logistic regression models. In the situation where parental genotypes are missing, the expectation-maximization (EM) algorithm has been incorporated in the log-linear approach to allow incomplete triads to contribute to the LRT. We present an extension to this model which we call the Combined_LRT that incorporates additional information from the genotypes of unaffected siblings to improve assignment of incompletely typed families to mating type categories, thereby improving inference of missing parental data. Using simulations involving a realistic array of family structures, we demonstrate the validity of the Combined_LRT under the null hypothesis of no association and provide power comparisons under varying levels of missing data and using sibling genotype data. We demonstrate the improved power of the Combined_LRT compared with the family-based association test (FBAT), another widely used association test. Lastly, we apply the Combined_LRT to a candidate gene analysis in Autism families, some of which have missing parental genotypes. We conclude that the proposed log-linear model will be an important tool for future candidate gene studies, for many complex diseases where unaffected siblings can often be ascertained and where epigenetic factors such as imprinting may play a role in disease etiology.
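
    The mechanics of a likelihood-ratio test can be sketched with a deliberately simplified binomial example, not the log-linear/EM machinery of the record (the counts are hypothetical):

```python
import math

# Toy setting: counts of a candidate allele transmitted to affected
# offspring (hypothetical numbers). H0: transmission probability is 0.5.
k, n = 65, 100

def binom_loglik(p, k, n):
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

p_hat = k / n                                  # MLE under the alternative
lrt = 2.0 * (binom_loglik(p_hat, k, n) - binom_loglik(0.5, k, n))

# The statistic is asymptotically chi-squared with 1 df; the 5% critical
# value is 3.841.
reject = lrt > 3.841
```

The Combined_LRT follows the same twice-the-log-likelihood-difference pattern, with the likelihood built from family mating-type categories and missing parental genotypes handled by EM.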

  20. Modelling interactions of acid–base balance and respiratory status in the toxicity of metal mixtures in the American oyster Crassostrea virginica

    PubMed Central

    Macey, Brett M.; Jenny, Matthew J.; Williams, Heidi R.; Thibodeaux, Lindy K.; Beal, Marion; Almeida, Jonas S.; Cunningham, Charles; Mancia, Annalaura; Warr, Gregory W.; Burge, Erin J.; Holland, A. Fred; Gross, Paul S.; Hikima, Sonomi; Burnett, Karen G.; Burnett, Louis; Chapman, Robert W.

    2010-01-01

    Heavy metals, such as copper, zinc and cadmium, represent some of the most common and serious pollutants in coastal estuaries. In the present study, we used a combination of linear and artificial neural network (ANN) modelling to detect and explore interactions among low-dose mixtures of these heavy metals and their impacts on fundamental physiological processes in tissues of the Eastern oyster, Crassostrea virginica. Animals were exposed to Cd (0.001–0.400 µM), Zn (0.001–3.059 µM) or Cu (0.002–0.787 µM), either alone or in combination for 1 to 27 days. We measured indicators of acid–base balance (hemolymph pH and total CO2), gas exchange (Po2), immunocompetence (total hemocyte counts, numbers of invasive bacteria), antioxidant status (glutathione, GSH), oxidative damage (lipid peroxidation; LPx), and metal accumulation in the gill and the hepatopancreas. Linear analysis showed that oxidative membrane damage from tissue accumulation of environmental metals was correlated with impaired acid–base balance in oysters. ANN analysis revealed interactions of metals with hemolymph acid–base chemistry in predicting oxidative damage that were not evident from linear analyses. These results highlight the usefulness of machine learning approaches, such as ANNs, for improving our ability to recognize and understand the effects of subacute exposure to contaminant mixtures. PMID:19958840

  1. Bayesian inference on risk differences: an application to multivariate meta-analysis of adverse events in clinical trials.

    PubMed

    Chen, Yong; Luo, Sheng; Chu, Haitao; Wei, Peng

    2013-05-01

    Multivariate meta-analysis is useful in combining evidence from independent studies which involve several comparisons among groups based on a single outcome. For binary outcomes, the commonly used statistical models for multivariate meta-analysis are multivariate generalized linear mixed effects models, which assume that risks, after some transformation, follow a multivariate normal distribution with possible correlations. In this article, we consider an alternative model for multivariate meta-analysis where the risks are modeled by the multivariate beta distribution proposed by Sarmanov (1966). This model has several attractive features compared with the conventional multivariate generalized linear mixed effects models, including a simple likelihood function, no need to specify a link function, and a closed-form expression for the distribution functions of study-specific risk differences. We investigate the finite sample performance of this model by simulation studies and illustrate its use with an application to multivariate meta-analysis of adverse events of tricyclic antidepressant treatment in clinical trials.

  2. Linear and nonlinear subspace analysis of hand movements during grasping.

    PubMed

    Cui, Phil Hengjun; Visell, Yon

    2014-01-01

    This study investigated nonlinear patterns of coordination, or synergies, underlying whole-hand grasping kinematics. Prior research has shed considerable light on roles played by such coordinated degrees-of-freedom (DOF), illuminating how motor control is facilitated by structural and functional specializations in the brain, peripheral nervous system, and musculoskeletal system. However, existing analyses suppose that the patterns of coordination can be captured by means of linear analyses, as linear combinations of nominally independent DOF. In contrast, hand kinematics is itself highly nonlinear in nature. To address this discrepancy, we sought to determine whether nonlinear synergies might serve to more accurately and efficiently explain human grasping kinematics than is possible with linear analyses. We analyzed motion capture data acquired from the hands of individuals as they grasped an array of common objects, using four of the most widely used linear and nonlinear dimensionality reduction algorithms. We compared the results using a recently developed algorithm-agnostic quality measure, which enabled us to assess the quality of the dimensionality reductions by the extent to which local neighborhood information in the data was preserved. Although qualitative inspection of the data suggested that nonlinear correlations between kinematic variables were present, we found that linear modeling, in the form of Principal Component Analysis, could perform better than any of the nonlinear techniques we applied.
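
    A minimal sketch of the linear analysis that won out, assuming synthetic data in place of the motion-capture recordings: PCA via SVD on joint angles generated from two latent synergies (all dimensions and noise levels are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "hand kinematics": 200 samples of 15 joint angles generated as
# noisy mixtures of only 2 latent coordination patterns (synergies).
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 15))
angles = latent @ mixing + 0.05 * rng.normal(size=(200, 15))

# PCA via SVD of the centered data matrix.
centered = angles - angles.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)          # variance ratio per component

two_pc_share = explained[:2].sum()       # variance captured by two PCs
```

When the data really are linear mixtures, two principal components capture essentially all the variance, which is the pattern the study found to rival the nonlinear methods.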

  3. The receiver operational characteristic for binary classification with multiple indices and its application to the neuroimaging study of Alzheimer's disease.

    PubMed

    Wu, Xia; Li, Juan; Ayutyanont, Napatkamon; Protas, Hillary; Jagust, William; Fleisher, Adam; Reiman, Eric; Yao, Li; Chen, Kewei

    2013-01-01

    Given a single index, the receiver operational characteristic (ROC) curve analysis is routinely utilized for characterizing performances in distinguishing two conditions/groups in terms of sensitivity and specificity. Given the availability of multiple data sources (referred to as multi-indices), such as multimodal neuroimaging data sets, cognitive tests, and clinical ratings and genomic data in Alzheimer’s disease (AD) studies, the single-index-based ROC underutilizes all available information. For a long time, a number of algorithmic/analytic approaches combining multiple indices have been widely used to simultaneously incorporate multiple sources. In this study, we propose an alternative for combining multiple indices using logical operations, such as “AND,” “OR,” and “at least n” (where n is an integer), to construct multivariate ROC (multiV-ROC) and characterize the sensitivity and specificity statistically associated with the use of multiple indices. With and without the “leave-one-out” cross-validation, we used two data sets from AD studies to showcase the potentially increased sensitivity/specificity of the multiV-ROC in comparison to the single-index ROC and linear discriminant analysis (an analytic way of combining multi-indices). We conclude that, for the data sets we investigated, the proposed multiV-ROC approach is capable of providing a natural and practical alternative with improved classification accuracy as compared to univariate ROC and linear discriminant analysis.
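
    The logical combination rules can be sketched directly: with binary calls from k hypothetical indices, "OR" is "at least 1", "AND" is "at least k", and sensitivity and specificity follow by counting (the calls below are invented, not AD data):

```python
import numpy as np

# Hypothetical binary calls (1 = "abnormal") from three indices, for six
# patients and six controls. Rows: indices; columns: subjects.
patients = np.array([[1, 1, 0, 1, 1, 1],
                     [1, 0, 1, 1, 1, 0],
                     [0, 1, 1, 1, 0, 1]])
controls = np.array([[0, 0, 1, 0, 0, 1],
                     [0, 1, 0, 0, 0, 0],
                     [0, 0, 0, 1, 0, 1]])

def at_least(calls, n):
    """Logical 'at least n of the k indices are abnormal' rule."""
    return calls.sum(axis=0) >= n

# "OR" is at_least(..., 1); "AND" is at_least(..., calls.shape[0]).
sens = at_least(patients, 2).mean()          # sensitivity on patients
spec = 1.0 - at_least(controls, 2).mean()    # specificity on controls
```

Sweeping n from 1 to k traces out the operating points that the multiV-ROC characterizes statistically.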

  4. The Receiver Operational Characteristic for Binary Classification with Multiple Indices and Its Application to the Neuroimaging Study of Alzheimer’s Disease

    PubMed Central

    Wu, Xia; Li, Juan; Ayutyanont, Napatkamon; Protas, Hillary; Jagust, William; Fleisher, Adam; Reiman, Eric; Yao, Li; Chen, Kewei

    2014-01-01

    Given a single index, the receiver operational characteristic (ROC) curve analysis is routinely utilized for characterizing performances in distinguishing two conditions/groups in terms of sensitivity and specificity. Given the availability of multiple data sources (referred to as multi-indices), such as multimodal neuroimaging data sets, cognitive tests, and clinical ratings and genomic data in Alzheimer’s disease (AD) studies, the single-index-based ROC underutilizes all available information. For a long time, a number of algorithmic/analytic approaches combining multiple indices have been widely used to simultaneously incorporate multiple sources. In this study, we propose an alternative for combining multiple indices using logical operations, such as “AND,” “OR,” and “at least n” (where n is an integer), to construct multivariate ROC (multiV-ROC) and characterize the sensitivity and specificity statistically associated with the use of multiple indices. With and without the “leave-one-out” cross-validation, we used two data sets from AD studies to showcase the potentially increased sensitivity/specificity of the multiV-ROC in comparison to the single-index ROC and linear discriminant analysis (an analytic way of combining multi-indices). We conclude that, for the data sets we investigated, the proposed multiV-ROC approach is capable of providing a natural and practical alternative with improved classification accuracy as compared to univariate ROC and linear discriminant analysis. PMID:23702553

  5. Three estimates of the association between linear growth failure and cognitive ability.

    PubMed

    Cheung, Y B; Lam, K F

    2009-09-01

    To compare three estimators of the association between growth stunting, as measured by height-for-age Z-score, and cognitive ability in children, and to examine the extent to which statistical adjustment for covariates is useful for removing confounding due to socio-economic status. Three estimators for panel data, namely the random-effects, within-cluster and between-cluster estimators, were used to estimate the association in a survey of 1105 pairs of siblings who were assessed for anthropometry and cognition. Furthermore, a 'combined' model was formulated to simultaneously provide the within- and between-cluster estimates. The random-effects and between-cluster estimators showed a strong association between linear growth and cognitive ability, even after adjustment for a range of socio-economic variables. In contrast, the within-cluster estimator showed a much more modest association: for every increase of one Z-score in linear growth, cognitive ability increased by about 0.08 standard deviation (P < 0.001). The combined model verified that the between-cluster estimate was significantly larger than the within-cluster estimate (P = 0.004). Residual confounding by socio-economic circumstances may explain a substantial proportion of the observed association between linear growth and cognition in studies that attempt to control the confounding by means of multivariable regression analysis. The within-cluster estimator provides more convincing and modest results about the strength of the association.
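
    The contrast between the estimators can be sketched on synthetic sibling pairs with a shared family-level confounder (all effect sizes below are invented, not those of the study):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic sibling pairs: a family-level factor (e.g. socio-economic
# status) raises both height-for-age z (haz) and cognition, while the true
# within-family effect of growth on cognition is set to 0.1.
n_fam = 2000
ses = rng.normal(size=n_fam)                        # shared family factor
haz = ses[:, None] + rng.normal(size=(n_fam, 2))    # two siblings per family
cog = 0.1 * haz + 0.5 * ses[:, None] + rng.normal(scale=0.3, size=(n_fam, 2))

# Within-cluster estimator: sibling differences cancel everything shared
# by the family, isolating the within-family slope.
d_haz = haz[:, 0] - haz[:, 1]
d_cog = cog[:, 0] - cog[:, 1]
within = np.polyfit(d_haz, d_cog, 1)[0]

# Between-cluster estimator: family means absorb the confounder and so
# overstate the association.
between = np.polyfit(haz.mean(axis=1), cog.mean(axis=1), 1)[0]
```

The within-cluster slope recovers the generating effect while the between-cluster slope is inflated by the shared factor, mirroring the pattern the study reports.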

  6. Local numerical modelling of ultrasonic guided waves in linear and nonlinear media

    NASA Astrophysics Data System (ADS)

    Packo, Pawel; Radecki, Rafal; Kijanka, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz; Leamy, Michael J.

    2017-04-01

    Nonlinear ultrasonic techniques provide improved damage sensitivity compared to linear approaches. The combination of attractive properties of guided waves, such as Lamb waves, with unique features of higher harmonic generation provides great potential for characterization of incipient damage, particularly in plate-like structures. Nonlinear ultrasonic structural health monitoring techniques use interrogation signals at frequencies other than the excitation frequency to detect changes in structural integrity. Signal processing techniques used in non-destructive evaluation are frequently supported by modeling and numerical simulations in order to facilitate problem solution. This paper discusses known and newly-developed local computational strategies for simulating elastic waves, and attempts characterization of their numerical properties in the context of linear and nonlinear media. A hybrid numerical approach combining advantages of the Local Interaction Simulation Approach (LISA) and Cellular Automata for Elastodynamics (CAFE) is proposed for unique treatment of arbitrary strain-stress relations. The iteration equations of the method are derived directly from physical principles employing stress and displacement continuity, leading to an accurate description of the propagation in arbitrarily complex media. Numerical analysis of guided wave propagation, based on the newly developed hybrid approach, is presented and discussed in the paper for linear and nonlinear media. Comparisons to Finite Elements (FE) are also discussed.

  7. Nonlinear aeroservoelastic analysis of a controlled multiple-actuated-wing model with free-play

    NASA Astrophysics Data System (ADS)

    Huang, Rui; Hu, Haiyan; Zhao, Yonghui

    2013-10-01

    In this paper, the effects of structural nonlinearity due to free-play in both the leading-edge and trailing-edge outboard control surfaces on the linear flutter control system are analyzed for an aeroelastic model of a three-dimensional multiple-actuated wing. The free-play nonlinearities in the control surfaces are modeled theoretically using the fictitious mass approach. The nonlinear aeroelastic equations of the presented model can be divided into nine sub-linear modal-based aeroelastic equations according to the different combinations of deflections of the leading-edge and trailing-edge outboard control surfaces. The nonlinear aeroelastic responses can be computed based on these sub-linear aeroelastic systems. To demonstrate the effects of nonlinearity on the linear flutter control system, a single-input single-output controller and a multi-input multi-output controller are designed based on unconstrained optimization techniques. The numerical results indicate that the free-play nonlinearity can lead to either limit cycle oscillations or divergent motions when the linear control system is implemented.

  8. Differential Lipid Profiles of Normal Human Brain Matter and Gliomas by Positive and Negative Mode Desorption Electrospray Ionization – Mass Spectrometry Imaging

    PubMed Central

    Pirro, Valentina; Hattab, Eyas M.; Cohen-Gadol, Aaron A.; Cooks, R. Graham

    2016-01-01

    Desorption electrospray ionization mass spectrometry (DESI-MS) imaging was used to analyze unmodified human brain tissue sections from 39 subjects sequentially in the positive and negative ionization modes. Acquisition of both MS polarities allowed a more complete analysis of the human brain tumor lipidome, as some phospholipids ionize preferentially in the positive and others in the negative ion mode. Normal brain parenchyma, comprised of grey matter and white matter, was differentiated from glioma using positive and negative ion mode DESI-MS lipid profiles with the aid of principal component analysis along with linear discriminant analysis. Principal component-linear discriminant analysis of the positive mode lipid profiles was able to distinguish grey matter, white matter, and glioma with an average sensitivity of 93.2% and specificity of 96.6%, while the negative mode lipid profiles had an average sensitivity of 94.1% and specificity of 97.4%. The positive and negative mode lipid profiles provided complementary information. Principal component-linear discriminant analysis of the combined positive and negative mode lipid profiles, via data fusion, resulted in approximately the same average sensitivity (94.7%) and specificity (97.6%) as the positive and negative modes used individually. However, the two modes complemented each other by improving the sensitivity and specificity of all classes (grey matter, white matter, and glioma) beyond 90% when used in combination. Further principal component analysis using the fused data resulted in the subgrouping of glioma into two groups associated with grey and white matter, respectively, a separation not apparent in the principal component analysis scores plots of the separate positive and negative mode data. The interrelationship of tumor cell percentage and the lipid profiles is discussed, along with how such a measure could be used to assess residual tumor at surgical margins. PMID:27658243

  9. Experimental and numerical analysis of pre-compressed masonry walls in two-way-bending with second order effects

    NASA Astrophysics Data System (ADS)

    Milani, Gabriele; Olivito, Renato S.; Tralli, Antonio

    2014-10-01

    The buckling behavior of slender unreinforced masonry (URM) walls subjected to axial compression and out-of-plane lateral loads is investigated through a combined experimental and numerical homogenized approach. After a preliminary analysis performed on a unit cell meshed by means of elastic finite elements (FEs) and non-linear interfaces, the macroscopic moment-curvature diagrams so obtained are implemented at the structural level, discretizing the masonry by means of rigid triangular elements and non-linear interfaces. The non-linear incremental response of the structure is computed by a specific quadratic programming routine. In parallel, a wide experimental campaign is conducted on walls in two-way bending, with the double aim of validating the numerical model and investigating the behavior of walls that may not be reduced to simple cantilevers or simply supported beams. The panels investigated are dry-joint, scaled square walls simply supported at the base and on one vertical edge, exhibiting the classical Rondelet's mechanism. The results obtained are compared with those provided by the numerical model.

  10. Higher symmetries and exact solutions of linear and nonlinear Schrödinger equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fushchych, W.I.; Nikitin, A.G.

    1997-11-01

    A new approach for the analysis of partial differential equations is developed which is characterized by a simultaneous use of higher and conditional symmetries. Higher symmetries of the Schrödinger equation with an arbitrary potential are investigated. Nonlinear determining equations for potentials are solved using reductions to Weierstrass, Painlevé, and Riccati forms. Algebraic properties of higher order symmetry operators are analyzed. Combinations of higher and conditional symmetries are used to generate families of exact solutions of linear and nonlinear Schrödinger equations. © 1997 American Institute of Physics.

  11. Simple Procedure to Compute the Inductance of a Toroidal Ferrite Core from the Linear to the Saturation Regions

    PubMed Central

    Salas, Rosa Ana; Pleite, Jorge

    2013-01-01

    We propose a specific procedure to compute the inductance of a toroidal ferrite core as a function of the excitation current. The study includes the linear, intermediate and saturation regions. The procedure combines the use of Finite Element Analysis in 2D and experimental measurements. Through the two dimensional (2D) procedure we are able to achieve convergence, a reduction of computational cost and equivalent results to those computed by three dimensional (3D) simulations. The validation is carried out by comparing 2D, 3D and experimental results. PMID:28809283

  12. Profiling of barrier capacitance and spreading resistance using a transient linearly increasing voltage technique.

    PubMed

    Gaubas, E; Ceponis, T; Kusakovskij, J

    2011-08-01

    A technique for the combined measurement of barrier capacitance and spreading resistance profiles using a linearly increasing voltage pulse is presented. The technique is based on the measurement and analysis of current transients, arising from the barrier and diffusion capacitances and the spreading resistance between a needle probe and the sample. To control the impact of deep traps on the barrier capacitance, steady-state bias illumination with infrared light was employed. Measurements of the spreading resistance and barrier capacitance profiles using a stepwise-positioned probe on cross-sectioned silicon pin diodes and pnp structures are presented.

  13. Application of a transonic potential flow code to the static aeroelastic analysis of three-dimensional wings

    NASA Technical Reports Server (NTRS)

    Whitlow, W., Jr.; Bennett, R. M.

    1982-01-01

    Since the aerodynamic theory is nonlinear, the method requires the coupling of two iterative processes - an aerodynamic analysis and a structural analysis. A full potential analysis code, FLO22, is combined with a linear structural analysis to yield aerodynamic load distributions on and deflections of elastic wings. This method was used to analyze an aeroelastically-scaled wind tunnel model of a proposed executive-jet transport wing and an aeroelastic research wing. The results are compared with the corresponding rigid-wing analyses, and some effects of elasticity on the aerodynamic loading are noted.

  14. Esophageal cancer detection based on tissue surface-enhanced Raman spectroscopy and multivariate analysis

    NASA Astrophysics Data System (ADS)

    Feng, Shangyuan; Lin, Juqiang; Huang, Zufang; Chen, Guannan; Chen, Weisheng; Wang, Yue; Chen, Rong; Zeng, Haishan

    2013-01-01

    The capability of silver-nanoparticle-based near-infrared surface-enhanced Raman scattering (SERS) spectroscopy, combined with principal component analysis (PCA) and linear discriminant analysis (LDA), to differentiate esophageal cancer tissue from normal tissue is presented. Significant differences in the Raman intensities of prominent SERS bands were observed between normal and cancer tissues. PCA-LDA multivariate analysis of the measured tissue SERS spectra achieved a diagnostic sensitivity of 90.9% and specificity of 97.8%. This exploratory study demonstrated great potential for developing label-free tissue SERS analysis into a clinical tool for esophageal cancer detection.

  15. Classification of emotional states from electrocardiogram signals: a non-linear approach based on hurst

    PubMed Central

    2013-01-01

    Background Identifying the emotional state is helpful in applications involving patients with autism and other intellectual disabilities, computer-based training, human-computer interaction, etc. Electrocardiogram (ECG) signals, being an activity of the autonomic nervous system (ANS), reflect the underlying true emotional state of a person. However, the performance of the various methods developed so far lacks accuracy, and more robust methods need to be developed to identify the emotional pattern associated with ECG signals. Methods Emotional ECG data were obtained from sixty participants by inducing the six basic emotional states (happiness, sadness, fear, disgust, surprise and neutral) using audio-visual stimuli. The non-linear feature 'Hurst' was computed using Rescaled Range Statistics (RRS) and Finite Variance Scaling (FVS) methods. New Hurst features were proposed by combining the existing RRS and FVS methods with Higher Order Statistics (HOS). The features were then classified using four classifiers: Bayesian classifier, regression tree, K-nearest neighbor and fuzzy K-nearest neighbor. Seventy percent of the features were used for training and thirty percent for testing the algorithm. Results Analysis of Variance (ANOVA) showed that Hurst and the proposed features were statistically significant (p < 0.001). Hurst computed using the RRS and FVS methods showed similar classification accuracy. The features obtained by combining FVS and HOS performed better, with maximum accuracies of 92.87% and 76.45% for classifying the six emotional states using random and subject-independent validation, respectively. Conclusions The results indicate that the combination of non-linear analysis and HOS tends to capture the finer emotional changes that can be seen in healthy ECG data. This work can be further fine-tuned to develop a real-time system. PMID:23680041
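
    A minimal rescaled-range (R/S) estimator of the Hurst exponent can be sketched as follows; for uncorrelated noise the log-log slope should come out near 0.5 (the window sizes are arbitrary choices, and this omits the bias corrections and HOS extensions of the record):

```python
import numpy as np

rng = np.random.default_rng(4)

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    """Hurst exponent via Rescaled Range Statistics: the slope of
    log(R/S) against log(window size)."""
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())      # cumulative deviations
            r = dev.max() - dev.min()          # range of the cumulative sum
            s = w.std()                        # standard deviation
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_n, log_rs, 1)[0]

h_white = hurst_rs(rng.normal(size=4096))      # expect a value near 0.5
```

Persistent signals push the slope above 0.5 and anti-persistent ones below it, which is what makes the exponent usable as a feature for classification.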

  16. Linear Combinations of Multiple Outcome Measures to Improve the Power of Efficacy Analysis ---Application to Clinical Trials on Early Stage Alzheimer Disease

    PubMed Central

    Xiong, Chengjie; Luo, Jingqin; Morris, John C; Bateman, Randall

    2018-01-01

    Modern clinical trials on Alzheimer disease (AD) focus on the early symptomatic stage or even the preclinical stage. Subtle disease progression at the early stages, however, poses a major challenge in designing such clinical trials. We propose a multivariate mixed model on repeated measures to model the disease progression over time on multiple efficacy outcomes, and derive the optimum weights to combine multiple outcome measures by minimizing the sample sizes to adequately power the clinical trials. A cross-validation simulation study is conducted to assess the accuracy for the estimated weights as well as the improvement in reducing the sample sizes for such trials. The proposed methodology is applied to the multiple cognitive tests from the ongoing observational study of the Dominantly Inherited Alzheimer Network (DIAN) to power future clinical trials in the DIAN with a cognitive endpoint. Our results show that the optimum weights to combine multiple outcome measures can be accurately estimated, and that compared to the individual outcomes, the combined efficacy outcome with these weights significantly reduces the sample size required to adequately power clinical trials. When applied to the clinical trial in the DIAN, the estimated linear combination of six cognitive tests can adequately power the clinical trial. PMID:29546251
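    The optimum-weight idea can be illustrated with a simplified two-outcome sketch (hypothetical effect and covariance values, not the authors' multivariate mixed-model derivation): for an effect vector delta and outcome covariance Sigma, weights proportional to Sigma^{-1} delta maximize the standardized effect of the combined endpoint, and the required sample size scales inversely with the square of that effect.

```python
import numpy as np

# Hypothetical treatment effects on two cognitive outcomes
delta = np.array([0.30, 0.20])
# Hypothetical covariance of the two outcomes
sigma = np.array([[1.0, 0.6],
                  [0.6, 1.0]])

# Weights proportional to Sigma^{-1} delta maximize (w'delta)/sqrt(w'Sigma w)
w = np.linalg.solve(sigma, delta)
w /= w.sum()  # normalize for readability (scale does not affect power)

def standardized_effect(w, delta, sigma):
    """Standardized effect of the linear combination w of the outcomes."""
    return (w @ delta) / np.sqrt(w @ sigma @ w)

eff_combined = standardized_effect(w, delta, sigma)
eff_first = standardized_effect(np.array([1.0, 0.0]), delta, sigma)
# Required n per arm scales as 1/effect^2, so a larger combined effect
# means a smaller trial.
```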

  17. Integrated Modeling Activities for the James Webb Space Telescope: Optical Jitter Analysis

    NASA Technical Reports Server (NTRS)

    Hyde, T. Tupper; Ha, Kong Q.; Johnston, John D.; Howard, Joseph M.; Mosier, Gary E.

    2004-01-01

    This is a continuation of a series of papers on the integrated modeling activities for the James Webb Space Telescope (JWST). Starting with the linear optical model discussed in part one, and using the optical sensitivities developed in part two, we now assess the optical image motion and wavefront errors from the structural dynamics. This is often referred to as "jitter" analysis. The optical model is combined with the structural model and the control models to create a linear structural/optical/control model. The largest jitter is due to spacecraft reaction wheel assembly disturbances, which are harmonic in nature and will excite spacecraft and telescope structural modes. The structural/optical response causes image quality degradation due to image motion (centroid error) as well as dynamic wavefront error. Jitter analysis results are used to predict imaging performance, improve the structural design, and evaluate the operational impact of the disturbance sources.

  18. Visualization of Global Sensitivity Analysis Results Based on a Combination of Linearly Dependent and Independent Directions

    NASA Technical Reports Server (NTRS)

    Davies, Misty D.; Gundy-Burlet, Karen

    2010-01-01

    A useful technique for the validation and verification of complex flight systems is Monte Carlo Filtering -- a global sensitivity analysis that tries to find the inputs and ranges that are most likely to lead to a subset of the outputs. A thorough exploration of the parameter space for complex integrated systems may require thousands of experiments and hundreds of controlled and measured variables. Tools for analyzing this space often have limitations caused by the numerical problems associated with high dimensionality and caused by the assumption of independence of all of the dimensions. To combat both of these limitations, we propose a technique that uses a combination of the original variables with the derived variables obtained during a principal component analysis.
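    The combination of original and derived directions can be sketched generically: principal component analysis of the experiment matrix yields linearly independent derived variables, which are then kept alongside the original (possibly dependent) inputs. The data and dimensions below are hypothetical; the actual Monte Carlo Filtering tooling is not shown.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical Monte Carlo experiment matrix: 500 runs x 4 input parameters,
# with two inputs strongly correlated (a dependence PCA can expose)
x = rng.standard_normal((500, 4))
x[:, 1] = 0.9 * x[:, 0] + 0.1 * x[:, 1]

# Principal component analysis via SVD of the centered data
xc = x - x.mean(axis=0)
u, s, vt = np.linalg.svd(xc, full_matrices=False)
scores = xc @ vt.T                 # derived, linearly independent variables

# Augment the original variables with the derived directions
augmented = np.hstack([x, scores])
explained = s**2 / np.sum(s**2)    # variance fraction per component
```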

  19. Robust, nonlinear, high angle-of-attack control design for a supermaneuverable vehicle

    NASA Technical Reports Server (NTRS)

    Adams, Richard J.

    1993-01-01

    High angle-of-attack flight control laws are developed for a supermaneuverable fighter aircraft. The methods of dynamic inversion and structured singular value synthesis are combined into an approach which addresses both the nonlinearity and robustness problems of flight at extreme operating conditions. The primary purpose of the dynamic inversion control elements is to linearize the vehicle response across the flight envelope. Structured singular value synthesis is used to design a dynamic controller which provides robust tracking to pilot commands. The resulting control system achieves desired flying qualities and guarantees a large margin of robustness to uncertainties for high angle-of-attack flight conditions. The results of linear simulation and structured singular value stability analysis are presented to demonstrate satisfaction of the design criteria. High fidelity nonlinear simulation results show that the combined dynamic inversion/structured singular value synthesis control law achieves a high level of performance in a realistic environment.

  20. Incremental harmonic balance method for predicting amplitudes of a multi-d.o.f. non-linear wheel shimmy system with combined Coulomb and quadratic damping

    NASA Astrophysics Data System (ADS)

    Zhou, J. X.; Zhang, L.

    2005-01-01

    Incremental harmonic balance (IHB) formulations are derived for general multiple degrees of freedom (d.o.f.) non-linear autonomous systems. These formulations are developed for a four-d.o.f. aircraft wheel shimmy system with combined Coulomb and velocity-squared damping. A multi-harmonic analysis is performed and amplitudes of limit cycles are predicted. Within a large range of parametric variations with respect to aircraft taxi velocity, the IHB method gives results of high accuracy at much lower computational cost than a parametric continuation method. In particular, the IHB method avoids the stiff problems emanating from numerical treatment of aircraft wheel shimmy system equations. The development is applicable to other vibration control systems that include commonly used dry friction devices or velocity-squared hydraulic dampers.

  1. Blocked Force and Loading Calculations for LaRC THUNDER Actuators

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F.

    2007-01-01

    An analytic approach is developed to predict the performance of LaRC Thunder actuators under load and under blocked conditions. The problem is treated with the Von Karman non-linear analysis combined with a simple Rayleigh-Ritz calculation. From this, shape and displacement under load combined with voltage are calculated. A method is found to calculate the blocked force vs voltage and spring force vs distance. It is found that under certain conditions, the blocked force and displacement are almost linear with voltage. It is also found that the spring force is multivalued and has at least one bifurcation point. This bifurcation point is where the device collapses under load and locks to a different bending solution. This occurs at a particular critical load. It is shown that this other bending solution has a reduced amplitude and is proportional to the original amplitude times the square of the aspect ratio.

  2. Otoacoustic emissions in the general adult population of Nord-Trøndelag, Norway: III. Relationships with pure-tone hearing thresholds.

    PubMed

    Engdahl, Bo; Tambs, Kristian; Borchgrevink, Hans M; Hoffman, Howard J

    2005-01-01

    This study aims to describe the association between otoacoustic emissions (OAEs) and pure-tone hearing thresholds (PTTs) in an unscreened adult population (N = 6415), to determine the efficiency by which TEOAEs and DPOAEs can identify ears with elevated PTTs, and to investigate whether a combination of DPOAE and TEOAE responses improves this performance. Associations were examined by linear regression analysis and ANOVA. Test performance was assessed by receiver operator characteristic (ROC) curves. The relation between OAEs and PTTs appeared curvilinear with a moderate degree of non-linearity. Combining DPOAEs and TEOAEs improved performance. Test performance depended on the cut-off thresholds defining elevated PTTs, with optimal values between 25 and 45 dB HL, depending on frequency and type of OAE measure. The unique constitution of the present large sample, which reflects the general adult population, makes these results applicable to population-based studies and screening programs.

  3. Aether: leveraging linear programming for optimal cloud computing in genomics.

    PubMed

    Luber, Jacob M; Tierney, Braden T; Cofer, Evan M; Patel, Chirag J; Kostic, Aleksandar D

    2018-05-01

    Across biology, we are seeing rapid developments in scale of data production without a corresponding increase in data analysis capabilities. Here, we present Aether (http://aether.kosticlab.org), an intuitive, easy-to-use, cost-effective and scalable framework that uses linear programming to optimally bid on and deploy combinations of underutilized cloud computing resources. Our approach simultaneously minimizes the cost of data analysis and provides an easy transition from users' existing HPC pipelines. Data utilized are available at https://pubs.broadinstitute.org/diabimmune and with EBI SRA accession ERP005989. Source code is available at (https://github.com/kosticlab/aether). Examples, documentation and a tutorial are available at http://aether.kosticlab.org. chirag_patel@hms.harvard.edu or aleksandar.kostic@joslin.harvard.edu. Supplementary data are available at Bioinformatics online.

  4. Restoring Low Sidelobe Antenna Patterns with Failed Elements in a Phased Array Antenna

    DTIC Science & Technology

    2016-02-01

    optimum low sidelobes are demonstrated in several examples. Index Terms — Array signal processing, beams, linear algebra, phased arrays, shaped... represented by a linear combination of low sidelobe beamformers with no failed elements, ’s, in a neighborhood around under the constraint that the linear... would expect that linear combinations of them in a neighborhood around would also have low sidelobes. The algorithms in this paper exploit this

  5. Non-linear analysis of wave propagation using transform methods and plates and shells using integral equations

    NASA Astrophysics Data System (ADS)

    Pipkins, Daniel Scott

    Two diverse topics of relevance in modern computational mechanics are treated. The first involves the modeling of linear and non-linear wave propagation in flexible lattice structures. The technique used combines the Laplace Transform with the Finite Element Method (FEM). The procedure is to transform the governing differential equations and boundary conditions into the transform domain where the FEM formulation is carried out. For linear problems, the transformed differential equations can be solved exactly, hence the method is exact. As a result, each member of the lattice structure is modeled using only one element. In the non-linear problem, the method is no longer exact. The approximation introduced is a spatial discretization of the transformed non-linear terms. The non-linear terms are represented in the transform domain by making use of the complex convolution theorem. A weak formulation of the resulting transformed non-linear equations yields a set of element level matrix equations. The trial and test functions used in the weak formulation correspond to the exact solution of the linear part of the transformed governing differential equation. Numerical results are presented for both linear and non-linear systems. The linear systems modeled are longitudinal and torsional rods and Bernoulli-Euler and Timoshenko beams. For non-linear systems, a viscoelastic rod and Von Karman type beam are modeled. The second topic is the analysis of plates and shallow shells undergoing finite deflections by the Field/Boundary Element Method. Numerical results are presented for two plate problems. The first is the bifurcation problem associated with a square plate having free boundaries which is loaded by four self-equilibrating corner forces. The results are compared to two existing numerical solutions of the problem which differ substantially.

  6. Embedding of multidimensional time-dependent observations.

    PubMed

    Barnard, J P; Aldrich, C; Gerber, M

    2001-10-01

    A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.
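    A delay-coordinate embedding in the sense of Takens can be sketched as follows. For brevity, the independent component analysis step is replaced by PCA whitening, a simpler stand-in that removes linear correlations among the embedded coordinates; the logistic map stands in for the chaotic processes of the case studies.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens delay-coordinate embedding of a scalar series x."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Scalar observation of a simple nonlinear (logistic) process
x = np.empty(2000)
x[0] = 0.4
for t in range(1999):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])

emb = delay_embed(x, dim=3, tau=1)

# Whiten the embedding so the coordinates are linearly independent
# (the paper uses ICA; PCA whitening only removes linear correlations)
ec = emb - emb.mean(axis=0)
u, s, vt = np.linalg.svd(ec, full_matrices=False)
white = ec @ vt.T / s * np.sqrt(len(ec))
```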

  7. Embedding of multidimensional time-dependent observations

    NASA Astrophysics Data System (ADS)

    Barnard, Jakobus P.; Aldrich, Chris; Gerber, Marius

    2001-10-01

    A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.

  8. Understanding Individual-Level Change through the Basis Functions of a Latent Curve Model

    ERIC Educational Resources Information Center

    Blozis, Shelley A.; Harring, Jeffrey R.

    2017-01-01

    Latent curve models have become a popular approach to the analysis of longitudinal data. At the individual level, the model expresses an individual's response as a linear combination of what are called "basis functions" that are common to all members of a population and weights that may vary among individuals. This article uses…

  9. Collision of a Ball with a Barbell and Related Impulse Problems

    ERIC Educational Resources Information Center

    Mungan, Carl E.

    2007-01-01

    The collision of a ball with the end of a barbell illustrates the combined conservation laws of linear and angular momentum. This paper considers the instructive but unfamiliar case where the ball's incident direction of travel makes an acute angle with the barbell's connecting rod. The analysis uses the coefficient of restitution generalized to…

  10. Performance of Nonlinear Finite-Difference Poisson-Boltzmann Solvers

    PubMed Central

    Cai, Qin; Hsieh, Meng-Juei; Wang, Jun; Luo, Ray

    2014-01-01

    We implemented and optimized seven finite-difference solvers for the full nonlinear Poisson-Boltzmann equation in biomolecular applications, including four relaxation methods, one conjugate gradient method, and two inexact Newton methods. The performance of the seven solvers was extensively evaluated with a large number of nucleic acids and proteins. Worth noting is the inexact Newton method in our analysis. We investigated the role of linear solvers in its performance by incorporating the incomplete Cholesky conjugate gradient and the geometric multigrid into its inner linear loop. We tailored and optimized both linear solvers for faster convergence rate. In addition, we explored strategies to optimize the successive over-relaxation method to reduce its convergence failures without too much sacrifice in its convergence rate. Specifically we attempted to adaptively change the relaxation parameter and to utilize the damping strategy from the inexact Newton method to improve the successive over-relaxation method. Our analysis shows that the nonlinear methods accompanied with a functional-assisted strategy, such as the conjugate gradient method and the inexact Newton method, can guarantee convergence in the tested molecules. Especially the inexact Newton method exhibits impressive performance when it is combined with highly efficient linear solvers that are tailored for its special requirement. PMID:24723843
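    The successive over-relaxation idea, including the role of the relaxation parameter, can be sketched on a small linear system (the nonlinear Poisson-Boltzmann setting adds an outer Newton loop and the damping strategy, neither shown here; the 1-D Laplacian below is just a classic test matrix):

```python
import numpy as np

def sor_solve(a, b, omega, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation for a @ x = b; returns (x, iterations)."""
    x = np.zeros_like(b)
    for it in range(1, max_iter + 1):
        for i in range(len(b)):
            sigma = a[i] @ x - a[i, i] * x[i]   # off-diagonal contribution
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / a[i, i]
        if np.linalg.norm(a @ x - b) < tol:
            return x, it
    return x, max_iter

# 1-D discrete Laplacian, a standard symmetric positive definite test case
n = 20
a = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

x_gs, it_gs = sor_solve(a, b, omega=1.0)    # omega = 1 is Gauss-Seidel
x_sor, it_sor = sor_solve(a, b, omega=1.7)  # over-relaxation converges faster
```

    A poorly chosen relaxation parameter slows or stalls convergence, which is why the abstract's adaptive choice of omega matters.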

  11. Determination of optimum values for maximizing the profit in bread production: Daily bakery Sdn Bhd

    NASA Astrophysics Data System (ADS)

    Muda, Nora; Sim, Raymond

    2015-02-01

    An integer programming problem is a mathematical optimization or feasibility program in which some or all of the variables are restricted to be integers. In many settings the term refers to integer linear programming (ILP), in which the objective function and the constraints (other than the integer constraints) are linear. ILP has many applications in industrial production, including job-shop modelling. A possible objective is to maximize the total production without exceeding the available resources. In some cases this can be expressed in terms of a linear program, but the variables must be constrained to be integer. ILP is concerned with the optimization of a linear function while satisfying a set of linear equality and inequality constraints and restrictions. It has been used to solve optimization problems in many industries, such as banking, nutrition, agriculture and baking. The main purpose of this study is to formulate the best combination of all ingredients in producing different types of bread at Daily Bakery in order to gain maximum profit. This study also focuses on the sensitivity analysis due to changes in the profit and the cost of each ingredient. The optimum result obtained from QM software is RM 65,377.29 per day. This study will benefit Daily Bakery and other similar industries: by formulating the ingredient make-up, they can easily determine their total profit from producing bread every day.
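    The ingredient-mix formulation can be sketched as a toy integer linear program. All figures below are hypothetical (the study's actual model was solved with QM software); with only two bread types and two resources, brute-force enumeration over the integer grid stands in for a branch-and-bound solver:

```python
# Toy ILP: choose integer batches of two bread types to maximize profit
# subject to flour and yeast availability (all figures hypothetical).
profit = {"white": 3.0, "wholemeal": 4.0}     # RM per batch
flour  = {"white": 2.0, "wholemeal": 3.0}     # kg per batch
yeast  = {"white": 1.0, "wholemeal": 1.0}     # units per batch
flour_avail, yeast_avail = 120.0, 50.0

best_profit, best_plan = -1.0, None
for n_white in range(0, 61):                  # upper bound from flour: 120/2
    for n_whole in range(0, 41):              # upper bound from flour: 120/3
        feasible = (
            flour["white"] * n_white + flour["wholemeal"] * n_whole <= flour_avail
            and yeast["white"] * n_white + yeast["wholemeal"] * n_whole <= yeast_avail
        )
        if feasible:
            p = profit["white"] * n_white + profit["wholemeal"] * n_whole
            if p > best_profit:
                best_profit, best_plan = p, (n_white, n_whole)
```

    A sensitivity analysis in the abstract's sense would rerun this with perturbed profit and cost coefficients and observe how the optimal plan shifts.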

  12. Linear combination methods to improve diagnostic/prognostic accuracy on future observations

    PubMed Central

    Kang, Le; Liu, Aiyi; Tian, Lili

    2014-01-01

    Multiple diagnostic tests or biomarkers can be combined to improve diagnostic accuracy. The problem of finding the optimal linear combinations of biomarkers to maximise the area under the receiver operating characteristic curve has been extensively addressed in the literature. The purpose of this article is threefold: (1) to provide an extensive review of the existing methods for biomarker combination; (2) to propose a new combination method, namely, the nonparametric stepwise approach; (3) to use leave-one-pair-out cross-validation method, instead of re-substitution method, which is overoptimistic and hence might lead to wrong conclusion, to empirically evaluate and compare the performance of different linear combination methods in yielding the largest area under receiver operating characteristic curve. A data set of Duchenne muscular dystrophy was analysed to illustrate the applications of the discussed combination methods. PMID:23592714
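    The core task, finding the linear combination of biomarkers that maximizes the empirical area under the ROC curve, can be sketched for two markers by a grid search over the combination direction (synthetic data; the article's nonparametric stepwise approach and cross-validation are not reproduced):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Empirical AUC: probability a diseased score exceeds a healthy one."""
    diff = scores_pos[:, None] - scores_neg[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

rng = np.random.default_rng(7)
# Two hypothetical biomarkers, modestly informative on their own
neg = rng.multivariate_normal([0.0, 0.0], [[1, 0.3], [0.3, 1]], size=200)
pos = rng.multivariate_normal([0.7, 0.9], [[1, 0.3], [0.3, 1]], size=200)

# Search combinations w = (cos t, sin t); scale does not change the AUC
best_auc, best_w = 0.0, None
for t in np.linspace(0.0, np.pi, 181):
    w = np.array([np.cos(t), np.sin(t)])
    a = auc(pos @ w, neg @ w)
    a = max(a, 1.0 - a)               # orientation-free
    if a > best_auc:
        best_auc, best_w = a, w

auc_m1 = auc(pos[:, 0], neg[:, 0])    # first marker alone
auc_m2 = auc(pos[:, 1], neg[:, 1])    # second marker alone
```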

  13. Score-moment combined linear discrimination analysis (SMC-LDA) as an improved discrimination method.

    PubMed

    Han, Jintae; Chung, Hoeil; Han, Sung-Hwan; Yoon, Moon-Young

    2007-01-01

    A new discrimination method called the score-moment combined linear discrimination analysis (SMC-LDA) has been developed and its performance has been evaluated using three practical spectroscopic datasets. The key concept of SMC-LDA was to use not only the score from principal component analysis (PCA), but also the moment of the spectrum, as inputs for LDA to improve discrimination. Along with conventional score, moment is used in spectroscopic fields as an effective alternative for spectral feature representation. Three different approaches were considered. Initially, the score generated from PCA was projected onto a two-dimensional feature space by maximizing Fisher's criterion function (conventional PCA-LDA). Next, the same procedure was performed using only moment. Finally, both score and moment were utilized simultaneously for LDA. To evaluate discrimination performances, three different spectroscopic datasets were employed: (1) infrared (IR) spectra of normal and malignant stomach tissue, (2) near-infrared (NIR) spectra of diesel and light gas oil (LGO) and (3) Raman spectra of Chinese and Korean ginseng. For each case, the best discrimination results were achieved when both score and moment were used for LDA (SMC-LDA). Since the spectral representation character of moment was different from that of score, inclusion of both score and moment for LDA provided more diversified and descriptive information.
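    The score-plus-moment idea can be sketched with synthetic "spectra": PCA scores and central moments of each spectrum are concatenated and fed to a two-class Fisher discriminant. This is a simplified illustration under invented data, not the published SMC-LDA code.

```python
import numpy as np

def spectral_moments(spec, k_max=3):
    """Mean plus central moments of a spectrum treated as a distribution."""
    chan = np.arange(spec.shape[-1])
    p = spec / spec.sum(axis=-1, keepdims=True)
    mean = (p * chan).sum(axis=-1, keepdims=True)
    moms = [mean]
    for k in range(2, k_max + 1):
        moms.append((p * (chan - mean) ** k).sum(axis=-1, keepdims=True))
    return np.hstack(moms)

def fisher_direction(xa, xb):
    """Fisher's linear discriminant direction for two classes."""
    sw = np.cov(xa.T) + np.cov(xb.T)          # within-class scatter
    ridge = 1e-6 * np.eye(sw.shape[0])        # keep the solve well-posed
    return np.linalg.solve(sw + ridge, xa.mean(0) - xb.mean(0))

rng = np.random.default_rng(3)
chan = np.arange(100)
# Two synthetic classes: Gaussian peaks at slightly different positions
class_a = np.exp(-0.5 * ((chan - 40) / 8.0) ** 2) + 0.05 * rng.random((60, 100))
class_b = np.exp(-0.5 * ((chan - 46) / 8.0) ** 2) + 0.05 * rng.random((60, 100))

# "Score": projection onto the top principal components of the pooled data
pooled = np.vstack([class_a, class_b])
pc = np.linalg.svd(pooled - pooled.mean(0), full_matrices=False)[2][:5]
feats_a = np.hstack([(class_a - pooled.mean(0)) @ pc.T, spectral_moments(class_a)])
feats_b = np.hstack([(class_b - pooled.mean(0)) @ pc.T, spectral_moments(class_b)])

w = fisher_direction(feats_a, feats_b)       # LDA on score + moment features
```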

  14. Component-based subspace linear discriminant analysis method for face recognition with one training sample

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Yuen, Pong C.; Chen, Wen-Sheng; Lai, J. H.

    2005-05-01

    Many face recognition algorithms/systems have been developed in the last decade and excellent performances have also been reported when there is a sufficient number of representative training samples. In many real-life applications such as passport identification, only one well-controlled frontal sample image is available for training. Under this situation, the performance of existing algorithms will degrade dramatically, or they may not even be applicable. We propose a component-based linear discriminant analysis (LDA) method to solve the one training sample problem. The basic idea of the proposed method is to construct local facial feature component bunches by moving each local feature region in four directions. In this way, we not only generate more samples with lower dimension than the original image, but also consider the face detection localization error while training. After that, we propose a subspace LDA method, which is tailor-made for a small number of training samples, for the local feature projection to maximize the discrimination power. Theoretical analysis and experiment results show that our proposed subspace LDA is efficient and overcomes the limitations in existing LDA methods. Finally, we combine the contributions of each local component bunch with a weighted combination scheme to draw the recognition decision. The FERET database is used for evaluating the proposed method and the results are encouraging.

  15. Relations between basic and specific motor abilities and player quality of young basketball players.

    PubMed

    Marić, Kristijan; Katić, Ratko; Jelicić, Mario

    2013-05-01

    Subjects from 5 first league clubs from Herzegovina were tested with the purpose of determining the relations of basic and specific motor abilities, as well as the effect of specific abilities on player efficiency in young basketball players (cadets). A battery of 12 tests assessing basic motor abilities and 5 specific tests assessing basketball efficiency were used on a sample of 83 basketball players. Two significant canonical correlations, i.e. linear combinations explained the relation between the set of twelve variables of basic motor space and five variables of situational motor abilities. Underlying the first canonical linear combination is the positive effect of the general motor factor, predominantly defined by jumping explosive power, movement speed of the arms, static strength of the arms and coordination, on specific basketball abilities: movement efficiency, the power of the overarm throw, shooting and passing precision, and the skill of handling the ball. The impact of basic motor abilities of precision and balance on specific abilities of passing and shooting precision and ball handling is underlying the second linear combination. The results of regression correlation analysis between the variable set of specific motor abilities and game efficiency have shown that the ability of ball handling has the largest impact on player quality in basketball cadets, followed by shooting precision and passing precision, and the power of the overarm throw.

  16. Stability indicating high performance thin-layer chromatographic method for simultaneous estimation of pantoprazole sodium and itopride hydrochloride in combined dosage form

    PubMed Central

    Bageshwar, Deepak; Khanvilkar, Vineeta; Kadam, Vilasrao

    2011-01-01

    A specific, precise and stability indicating high-performance thin-layer chromatographic method for simultaneous estimation of pantoprazole sodium and itopride hydrochloride in pharmaceutical formulations was developed and validated. The method employed TLC aluminium plates precoated with silica gel 60F254 as the stationary phase. The solvent system consisted of methanol:water:ammonium acetate; 4.0:1.0:0.5 (v/v/v). This system was found to give compact and dense spots for both itopride hydrochloride (Rf value of 0.55±0.02) and pantoprazole sodium (Rf value of 0.85±0.04). Densitometric analysis of both drugs was carried out in the reflectance–absorbance mode at 289 nm. The linear regression analysis data for the calibration plots showed a good linear relationship with R2=0.9988±0.0012 in the concentration range of 100–400 ng for pantoprazole sodium. Also, the linear regression analysis data for the calibration plots showed a good linear relationship with R2=0.9990±0.0008 in the concentration range of 200–1200 ng for itopride hydrochloride. The method was validated for specificity, precision, robustness and recovery. Statistical analysis proves that the method is repeatable and selective for the estimation of both the said drugs. As the method could effectively separate the drug from its degradation products, it can be employed as a stability indicating method. PMID:29403710

  17. Stability indicating high performance thin-layer chromatographic method for simultaneous estimation of pantoprazole sodium and itopride hydrochloride in combined dosage form.

    PubMed

    Bageshwar, Deepak; Khanvilkar, Vineeta; Kadam, Vilasrao

    2011-11-01

    A specific, precise and stability indicating high-performance thin-layer chromatographic method for simultaneous estimation of pantoprazole sodium and itopride hydrochloride in pharmaceutical formulations was developed and validated. The method employed TLC aluminium plates precoated with silica gel 60F254 as the stationary phase. The solvent system consisted of methanol:water:ammonium acetate; 4.0:1.0:0.5 (v/v/v). This system was found to give compact and dense spots for both itopride hydrochloride (Rf value of 0.55±0.02) and pantoprazole sodium (Rf value of 0.85±0.04). Densitometric analysis of both drugs was carried out in the reflectance-absorbance mode at 289 nm. The linear regression analysis data for the calibration plots showed a good linear relationship with R2=0.9988±0.0012 in the concentration range of 100-400 ng for pantoprazole sodium. Also, the linear regression analysis data for the calibration plots showed a good linear relationship with R2=0.9990±0.0008 in the concentration range of 200-1200 ng for itopride hydrochloride. The method was validated for specificity, precision, robustness and recovery. Statistical analysis proves that the method is repeatable and selective for the estimation of both the said drugs. As the method could effectively separate the drug from its degradation products, it can be employed as a stability indicating method.

  18. A Wavelet Support Vector Machine Combination Model for Singapore Tourist Arrival to Malaysia

    NASA Astrophysics Data System (ADS)

    Rafidah, A.; Shabri, Ani; Nurulhuda, A.; Suhaila, Y.

    2017-08-01

    In this study, a wavelet support vector machine model (WSVM) is proposed and applied to monthly Singapore tourist time series prediction. The WSVM model is a combination of wavelet analysis and the support vector machine (SVM). The study has two parts: in the first part we compare kernel functions, and in the second part we compare the developed model with the single SVM model. The results showed that the linear kernel function performed better than RBF, and that WSVM outperformed the single SVM model in forecasting monthly Singapore tourist arrivals to Malaysia.

  19. An analysis for the sound field produced by rigid wide chord dual rotation propellers of high solidity in compressible flow

    NASA Technical Reports Server (NTRS)

    Ramachandra, S. M.; Bober, L. J.

    1986-01-01

    An unsteady lifting surface theory for the counter-rotating propeller is presented using the linearized governing equations for the acceleration potential and representing the blades by a surface distribution of pulsating acoustic dipoles distributed according to a modified Birnbaum series. The Birnbaum series coefficients are determined by satisfying the surface tangency boundary conditions on the front and rear propeller blades. Expressions for the combined acoustic resonance modes of the front prop, the rear prop and the combination are also given.

  20. Beneficial Combination of Lacosamide with Retigabine in Experimental Animals: An Isobolographic Analysis.

    PubMed

    Luszczki, Jarogniew J; Zagaja, Mirosław; Miziak, Barbara; Kondrat-Wrobel, Maria W; Zaluska, Katarzyna; Wroblewska-Luczka, Paula; Adamczuk, Piotr; Czuczwar, Stanislaw J; Florek-Luszczki, Magdalena

    2018-01-01

    To isobolographically determine the types of interactions that occur between retigabine and lacosamide (LCM; two third-generation antiepileptic drugs) with respect to their anticonvulsant activity and acute adverse effects (sedation) in the maximal electroshock-induced seizures (MES) and chimney test (motor performance) in adult male Swiss mice. Type I isobolographic analysis for nonparallel dose-response effects for the combination of retigabine with LCM (at the fixed ratio of 1:1) in both the MES and chimney test in mice was performed. Brain concentrations of retigabine and LCM were measured by high-pressure liquid chromatography (HPLC) to characterize any pharmacokinetic interactions occurring when combining these drugs. Linear regression analysis revealed that retigabine had its dose-response effect line nonparallel to that of LCM in both the MES and chimney tests. The type I isobolographic analysis illustrated that retigabine combined with LCM (fixed ratio of 1:1) exerted an additive interaction in the mouse MES model and sub-additivity (antagonism) in the chimney test. With HPLC, retigabine and LCM did not mutually change their total brain concentrations, thereby confirming the pharmacodynamic nature of the interaction. LCM combined with retigabine possesses a beneficial preclinical profile (benefit index ranged from 2.07 to 2.50) and this 2-drug combination is worth recommending as a treatment plan for patients with pharmacoresistant epilepsy. © 2017 S. Karger AG, Basel.
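    The additivity test at the heart of isobolographic analysis can be sketched numerically. The ED50 values below are hypothetical (the published analysis uses type I isobolography for nonparallel dose-response lines, which is more involved): for a fixed-ratio 1:1 combination, the additive ED50 lies on the straight isobole joining the single-drug ED50s.

```python
# Hypothetical ED50 values (mg/kg) for each drug given alone
ed50_retigabine = 8.0
ed50_lacosamide = 6.0

# Additive ED50 for a fixed-ratio 1:1 combination: each drug
# contributes half of its own ED50 along the isobole.
ed50_additive = 0.5 * ed50_retigabine + 0.5 * ed50_lacosamide

def interaction_index(ed50_observed, ed50_additive):
    """< 1 suggests synergy, about 1 additivity, > 1 sub-additivity."""
    return ed50_observed / ed50_additive

# Hypothetical experimentally observed combination ED50s
idx_mes = interaction_index(7.1, ed50_additive)      # near 1: additive
idx_chimney = interaction_index(9.8, ed50_additive)  # above 1: sub-additive
```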

  1. Determining triple gauge boson couplings from Higgs data.

    PubMed

    Corbett, Tyler; Éboli, O J P; Gonzalez-Fraile, J; Gonzalez-Garcia, M C

    2013-07-05

    In the framework of effective Lagrangians with the SU(2)_L × U(1)_Y symmetry linearly realized, modifications of the couplings of the Higgs field to the electroweak gauge bosons are related to anomalous triple gauge couplings (TGCs). Here, we show that the analysis of the latest Higgs boson production data at the LHC and Tevatron gives rise to strong bounds on TGCs that are complementary to those from direct TGC analysis. We present the constraints on TGCs obtained by combining all available data on direct TGC studies and on Higgs production analysis.

  2. Advanced complex trait analysis.

    PubMed

    Gray, A; Stewart, I; Tenesa, A

    2012-12-01

    The Genome-wide Complex Trait Analysis (GCTA) software package can quantify the contribution of genetic variation to phenotypic variation for complex traits. However, as those datasets of interest continue to increase in size, GCTA becomes increasingly computationally prohibitive. We present an adapted version, Advanced Complex Trait Analysis (ACTA), demonstrating dramatically improved performance. We restructure the genetic relationship matrix (GRM) estimation phase of the code and introduce the highly optimized parallel Basic Linear Algebra Subprograms (BLAS) library combined with manual parallelization and optimization. We introduce the Linear Algebra PACKage (LAPACK) library into the restricted maximum likelihood (REML) analysis stage. For a test case with 8999 individuals and 279,435 single nucleotide polymorphisms (SNPs), we reduce the total runtime, using a compute node with two multi-core Intel Nehalem CPUs, from ∼17 h to ∼11 min. The source code is fully available under the GNU Public License, along with Linux binaries. For more information see http://www.epcc.ed.ac.uk/software-products/acta. a.gray@ed.ac.uk Supplementary data are available at Bioinformatics online.
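    The genetic relationship matrix (GRM) computation that ACTA hands off to optimized BLAS is, at heart, one dense matrix product over standardized genotypes. The sketch below uses small hypothetical dimensions and synthetic binomial genotypes; numpy delegates the product to whatever BLAS it is linked against.

```python
import numpy as np

rng = np.random.default_rng(5)
n_ind, n_snp = 200, 1000

# Synthetic genotypes: 0/1/2 copies of the minor allele per SNP
geno = rng.binomial(2, 0.3, size=(n_ind, n_snp)).astype(float)

# Standardize each SNP by its estimated allele frequency, then the GRM
# is Z Z' / m -- a single BLAS level-3 product over all SNPs.
freq = geno.mean(axis=0) / 2.0
z = (geno - 2.0 * freq) / np.sqrt(2.0 * freq * (1.0 - freq))
grm = z @ z.T / n_snp
```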

  3. Combining fixed effects and instrumental variable approaches for estimating the effect of psychosocial job quality on mental health: evidence from 13 waves of a nationally representative cohort study.

    PubMed

    Milner, Allison; Aitken, Zoe; Kavanagh, Anne; LaMontagne, Anthony D; Pega, Frank; Petrie, Dennis

    2017-06-23

    Previous studies suggest that poor psychosocial job quality is a risk factor for mental health problems, but they use conventional regression analytic methods that cannot rule out reverse causation, unmeasured time-invariant confounding and reporting bias. This study combines two quasi-experimental approaches to improve causal inference by better accounting for these biases: (i) linear fixed effects regression analysis and (ii) linear instrumental variable analysis. We extract 13 annual waves of national cohort data including 13 260 working-age (18-64 years) employees. The exposure variable is self-reported level of psychosocial job quality. The instruments used are two common workplace entitlements. The outcome variable is the Mental Health Inventory (MHI-5). We adjust for measured time-varying confounders. In the fixed effects regression analysis adjusted for time-varying confounders, a 1-point increase in psychosocial job quality is associated with a 1.28-point improvement in mental health on the MHI-5 scale (95% CI: 1.17, 1.40; P < 0.001). When the fixed effects analysis was combined with the instrumental variable analysis, a 1-point increase in psychosocial job quality was related to a 1.62-point improvement on the MHI-5 scale (95% CI: -0.24, 3.48; P = 0.088). Our quasi-experimental results provide evidence to confirm job stressors as risk factors for mental ill health using methods that improve causal inference. © The Author 2017. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  4. Meta-Analysis of Effect Sizes Reported at Multiple Time Points Using General Linear Mixed Model.

    PubMed

    Musekiwa, Alfred; Manda, Samuel O M; Mwambi, Henry G; Chen, Ding-Geng

    2016-01-01

    Meta-analysis of longitudinal studies combines effect sizes measured at pre-determined time points. The most common approach involves performing separate univariate meta-analyses at individual time points. This simplistic approach ignores dependence between longitudinal effect sizes, which might result in less precise parameter estimates. In this paper, we show how to conduct a meta-analysis of longitudinal effect sizes where we contrast different covariance structures for dependence between effect sizes, both within and between studies. We propose new combinations of covariance structures for the dependence between effect sizes and illustrate them with a practical example: a meta-analysis of 17 trials comparing postoperative treatments for a type of cancer, where survival is measured at 6, 12, 18 and 24 months post randomization. Although the results from this particular data set show the benefit of accounting for within-study serial correlation between effect sizes, simulations are required to confirm these results.
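
    For contrast, the "separate univariate meta-analyses" baseline that this paper improves on reduces, at each time point, to inverse-variance pooling that ignores correlation across time points. A minimal sketch with invented effect sizes (log hazard ratios) and variances:

```python
# Fixed-effect univariate pooling at a single time point: the simplistic
# baseline contrasted in the abstract. Numbers are hypothetical.

def pool_fixed(effects, variances):
    """Inverse-variance weighted mean of study effects and its variance."""
    weights = [1.0 / v for v in variances]
    wsum = sum(weights)
    est = sum(w * e for w, e in zip(weights, effects)) / wsum
    return est, 1.0 / wsum

# Log hazard ratios from three trials at the 6-month time point.
est, var = pool_fixed([-0.20, -0.35, -0.10], [0.04, 0.09, 0.02])
```

    The general linear mixed model of the paper instead stacks all time points into one vector and replaces the scalar weights with the inverse of a joint covariance matrix.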

  5. Fully automated screening of veterinary drugs in milk by turbulent flow chromatography and tandem mass spectrometry

    PubMed Central

    Stolker, Alida A. M.; Peters, Ruud J. B.; Zuiderent, Richard; DiBussolo, Joseph M.

    2010-01-01

    There is an increasing interest in screening methods for quick and sensitive analysis of various classes of veterinary drugs with limited sample pre-treatment. Turbulent flow chromatography in combination with tandem mass spectrometry has been applied for the first time as an efficient screening method in routine analysis of milk samples. Eight veterinary drugs, belonging to seven different classes, were selected for this study. After developing and optimising the method, parameters such as linearity, repeatability, matrix effects and carry-over were studied. The screening method was then tested in the routine analysis of 12 raw milk samples. Even without internal standards, the linearity of the method was found to be good in the concentration range of 50 to 500 µg/L. Regarding repeatability, RSDs below 12% were obtained for all analytes, with only a few exceptions. The limits of detection were between 0.1 and 5.2 µg/L, far below the maximum residue levels for milk set by the EU regulations. Although matrix effects (ion suppression or enhancement) were observed for all the analytes, the method proved useful for screening purposes because of its sensitivity, linearity and repeatability. Furthermore, when performing the routine analysis of the raw milk samples, no false positive or negative results were obtained. PMID:20379812

  6. Quantitative determination of multi markers in five varieties of Withania somnifera using ultra-high performance liquid chromatography with hybrid triple quadrupole linear ion trap mass spectrometer combined with multivariate analysis: Application to pharmaceutical dosage forms.

    PubMed

    Chandra, Preeti; Kannujia, Rekha; Saxena, Ankita; Srivastava, Mukesh; Bahadur, Lal; Pal, Mahesh; Singh, Bhim Pratap; Kumar Ojha, Sanjeev; Kumar, Brijesh

    2016-09-10

    An ultra-high performance liquid chromatography electrospray ionization tandem mass spectrometry method has been developed and validated for simultaneous quantification of six major bioactive compounds in five varieties of Withania somnifera in various plant parts (leaf, stem and root). The analysis was accomplished on a Waters ACQUITY UPLC BEH C18 column with linear gradient elution of water/formic acid (0.1%) and acetonitrile at a flow rate of 0.3 mL/min. The proposed method was validated with acceptable linearity (r², 0.9989-0.9998), precision (RSD, 0.16-2.01%), stability (RSD, 1.04-1.62%) and recovery (RSD ≤2.45%) under optimum conditions. The method was also successfully applied for the simultaneous determination of six marker compounds in twenty-six marketed formulations. Hierarchical cluster analysis and principal component analysis were applied to discriminate these twenty-six batches based on characteristics of the bioactive compounds. The results indicated that this method is advanced, rapid, sensitive and suitable to reveal the quality of Withania somnifera, and is also capable of performing quality evaluation of polyherbal formulations having similar markers/raw herbs. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2011-01-01

    An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis technique. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process if it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to a convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance is used to illustrate the application of the original and modified tare load prediction method. During the analysis of the data both the iterative and the non-iterative analysis technique were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.

  8. Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models

    PubMed Central

    Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong

    2015-01-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955

  9. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    PubMed

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. © 2015 WILEY PERIODICALS, INC.

  10. Assessing the impact of non-tidal atmospheric loading on a Kalman filter-based terrestrial reference frame

    NASA Astrophysics Data System (ADS)

    Abbondanza, Claudio; Altamimi, Zuheir; Chin, Toshio; Collilieux, Xavier; Dach, Rolf; Gross, Richard; Heflin, Michael; König, Rolf; Lemoine, Frank; Macmillan, Dan; Parker, Jay; van Dam, Tonie; Wu, Xiaoping

    2014-05-01

    The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, we assess the impact of non-tidal atmospheric loading (NTAL) corrections on the TRF computation. Focusing on the a posteriori approach, (i) the NTAL model derived from the National Centre for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations; (ii) adopting a Kalman filter-based approach, two distinct linear TRFs are estimated combining the 4 SG solutions with (corrected TRF solution) and without the NTAL displacements (standard TRF solution). Linear fits (offset and atmospheric velocity) of the NTAL displacements removed during step (i) are estimated accounting for the station position discontinuities introduced in the SG solutions and adopting different weighting strategies. The NTAL-derived (atmospheric) velocity fields are compared to those obtained from the TRF reductions during step (ii). The consistency between the atmospheric and the TRF-derived velocity fields is examined. We show how the presence of station position discontinuities in SG solutions degrades the agreement between the velocity fields and compare the effect of different weighting structures adopted while estimating the linear fits to the NTAL displacements.
Finally, we evaluate the effect of restoring the atmospheric velocities determined through the linear fits of the NTAL displacements to the single-technique linear reference frames obtained by stacking the standard SG SINEX files. Differences between the velocity fields obtained restoring the NTAL displacements and the standard stacked linear reference frames are discussed.

  11. Quantiles for Finite Mixtures of Normal Distributions

    ERIC Educational Resources Information Center

    Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.

    2006-01-01

    Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)
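
    The distinction the entry emphasizes can be made concrete: a linear combination of independent normal random variables is itself normal, so its quantiles come from a single normal distribution, whereas a finite mixture of normal densities generally has no closed-form quantiles and must be inverted numerically. A stdlib-only sketch with illustrative parameters:

```python
import math

def norm_cdf(x, mu=0.0, sigma=1.0):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def mixture_quantile(p, weights, mus, sigmas, lo=-50.0, hi=50.0):
    """Quantile of a finite normal mixture, by bisection on its CDF
    (the mixture CDF is the weighted sum of component CDFs)."""
    def cdf(x):
        return sum(w * norm_cdf(x, m, s)
                   for w, m, s in zip(weights, mus, sigmas))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# By contrast, aX + bY for independent normals X, Y is one normal with
# mean a*mu_x + b*mu_y and variance a^2*s_x^2 + b^2*s_y^2, so its
# quantiles need no numerical search.
q = mixture_quantile(0.5, [0.5, 0.5], [0.0, 4.0], [1.0, 1.0])
```

    For this symmetric 50/50 mixture of N(0,1) and N(4,1) the median is 2 by symmetry, which the bisection recovers.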

  12. Cortical sensorimotor alterations classify clinical phenotype and putative genotype of spasmodic dysphonia.

    PubMed

    Battistella, G; Fuertinger, S; Fleysher, L; Ozelius, L J; Simonyan, K

    2016-10-01

    Spasmodic dysphonia (SD), or laryngeal dystonia, is a task-specific isolated focal dystonia of unknown causes and pathophysiology. Although functional and structural abnormalities have been described in this disorder, the influence of its different clinical phenotypes and genotypes remains poorly understood, making it difficult to explain SD pathophysiology and to identify potential biomarkers. We used a combination of independent component analysis and linear discriminant analysis of resting-state functional magnetic resonance imaging data to investigate brain organization in different SD phenotypes (abductor versus adductor type) and putative genotypes (familial versus sporadic cases) and to characterize neural markers for genotype/phenotype categorization. We found abnormal functional connectivity within sensorimotor and frontoparietal networks in patients with SD compared with healthy individuals as well as phenotype- and genotype-distinct alterations of these networks, involving primary somatosensory, premotor and parietal cortices. The linear discriminant analysis achieved 71% accuracy classifying SD and healthy individuals using connectivity measures in the left inferior parietal and sensorimotor cortices. When categorizing between different forms of SD, the combination of measures from the left inferior parietal, premotor and right sensorimotor cortices achieved 81% discriminatory power between familial and sporadic SD cases, whereas the combination of measures from the right superior parietal, primary somatosensory and premotor cortices led to 71% accuracy in the classification of adductor and abductor SD forms. Our findings present the first effort to identify and categorize isolated focal dystonia based on its brain functional connectivity profile, which may have a potential impact on the future development of biomarkers for this rare disorder. © 2016 EAN.

  13. Cortical sensorimotor alterations classify clinical phenotype and putative genotype of spasmodic dysphonia

    PubMed Central

    Battistella, Giovanni; Fuertinger, Stefan; Fleysher, Lazar; Ozelius, Laurie J.; Simonyan, Kristina

    2017-01-01

    Background Spasmodic dysphonia (SD), or laryngeal dystonia, is a task-specific isolated focal dystonia of unknown causes and pathophysiology. Although functional and structural abnormalities have been described in this disorder, the influence of its different clinical phenotypes and genotypes remains poorly understood, making it difficult to explain SD pathophysiology and to identify potential biomarkers. Methods We used a combination of independent component analysis and linear discriminant analysis of resting-state functional MRI data to investigate brain organization in different SD phenotypes (abductor vs. adductor type) and putative genotypes (familial vs. sporadic cases) and to characterize neural markers for genotype/phenotype categorization. Results We found abnormal functional connectivity within sensorimotor and frontoparietal networks in SD patients compared to healthy individuals as well as phenotype- and genotype-distinct alterations of these networks, involving primary somatosensory, premotor and parietal cortices. The linear discriminant analysis achieved 71% accuracy classifying SD and healthy individuals using connectivity measures in the left inferior parietal and sensorimotor cortex. When categorizing between different forms of SD, the combination of measures from left inferior parietal, premotor and right sensorimotor cortices achieved 81% discriminatory power between familial and sporadic SD cases, whereas the combination of measures from the right superior parietal, primary somatosensory and premotor cortices led to 71% accuracy in the classification of adductor and abductor SD forms. Conclusions Our findings present the first effort to identify and categorize isolated focal dystonia based on its brain functional connectivity profile, which may have a potential impact on the future development of biomarkers for this rare disorder. PMID:27346568

  14. Identifying pleiotropic genes in genome-wide association studies from related subjects using the linear mixed model and Fisher combination function.

    PubMed

    Yang, James J; Williams, L Keoki; Buu, Anne

    2017-08-24

    A multivariate genome-wide association test is proposed for analyzing data on multivariate quantitative phenotypes collected from related subjects. The proposed method is a two-step approach. The first step models the association between the genotype and marginal phenotype using a linear mixed model. The second step uses the correlation between residuals of the linear mixed model to estimate the null distribution of the Fisher combination test statistic. The simulation results show that the proposed method controls the type I error rate and is more powerful than the marginal tests across different population structures (admixed or non-admixed) and relatedness (related or independent). The statistical analysis on the database of the Study of Addiction: Genetics and Environment (SAGE) demonstrates that applying the multivariate association test may facilitate identification of the pleiotropic genes contributing to the risk for alcohol dependence commonly expressed by four correlated phenotypes. This study proposes a multivariate method for identifying pleiotropic genes while adjusting for cryptic relatedness and population structure between subjects. The two-step approach is not only powerful but also computationally efficient even when the number of subjects and the number of phenotypes are both very large.
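
    The second-step combination can be sketched on its own. The block below computes Fisher's combination statistic from per-phenotype p-values and evaluates it against the chi-square distribution with 2k degrees of freedom, using the closed-form survival function available for even degrees of freedom. The paper's method additionally estimates a correlation-adjusted null distribution from the mixed-model residuals; this independence-based sketch omits that step, and the p-values are made up.

```python
import math

def fisher_combination(pvalues):
    """Fisher's method: X = -2 * sum(ln p_i) ~ chi-square with 2k df
    when the k tests are independent. For even df = 2k the chi-square
    survival function has the closed form
    exp(-x/2) * sum_{i<k} (x/2)^i / i!."""
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half, term, total = x / 2.0, 1.0, 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return x, math.exp(-half) * total

# Marginal p-values for four correlated phenotypes (illustrative numbers).
stat, p_combined = fisher_combination([0.01, 0.04, 0.20, 0.50])
```

    Note how the combined p-value (about 0.009) is smaller than any single marginal p-value here, which is the power gain the multivariate test exploits.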

  15. A distributed lag approach to fitting non-linear dose-response models in particulate matter air pollution time series investigations.

    PubMed

    Roberts, Steven; Martin, Michael A

    2007-06-01

    The majority of studies that have investigated the relationship between particulate matter (PM) air pollution and mortality have assumed a linear dose-response relationship and have used either a single-day's PM or a 2- or 3-day moving average of PM as the measure of PM exposure. Both of these modeling choices have come under scrutiny in the literature, the linear assumption because it does not allow for non-linearities in the dose-response relationship, and the use of the single- or multi-day moving average PM measure because it does not allow for differential PM-mortality effects spread over time. These two problems have been dealt with on a piecemeal basis, with non-linear dose-response models used in some studies and distributed lag models (DLMs) used in others. In this paper, we propose a method for investigating the shape of the PM-mortality dose-response relationship that combines a non-linear dose-response model with a DLM. This combined model will be shown to produce satisfactory estimates of the PM-mortality dose-response relationship in situations where non-linear dose-response models and DLMs alone do not; that is, the combined model did not systematically underestimate or overestimate the effect of PM on mortality. The combined model is applied to ten cities in the US and a pooled dose-response model was formed. When fitted with a change-point value of 60 µg/m³, the pooled model provides evidence for a positive association between PM and mortality. The combined model produced larger estimates for the effect of PM on mortality than when using a non-linear dose-response model or a DLM in isolation. For the combined model, the estimated percentage increases in mortality for PM concentrations of 25 and 75 µg/m³ were 3.3% and 5.4%, respectively. In contrast, the corresponding values from a DLM used in isolation were 1.2% and 3.5%, respectively.
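
    The combined model's regression design can be sketched: each row pairs a linear term with a hinge (change-point) term for the current and lagged PM values, so the fitted coefficients trace out a non-linear dose-response distributed over lags. The 60 µg/m³ change-point comes from the abstract; the lag length and PM series below are illustrative.

```python
# Design matrix for a distributed lag model with a change-point (hinge)
# dose-response basis, as in the combined model described above.
# The PM series and maximum lag are hypothetical.

CHANGE_POINT = 60.0  # ug/m^3, from the abstract

def hinge(x, c=CHANGE_POINT):
    """Piecewise-linear basis term: zero below the change-point."""
    return max(0.0, x - c)

def lagged_design(pm_series, max_lag):
    """Row t holds, for lags 0..max_lag, the linear term PM[t-l] and the
    hinge term max(0, PM[t-l] - c); rows start once all lags exist."""
    rows = []
    for t in range(max_lag, len(pm_series)):
        row = []
        for lag in range(max_lag + 1):
            x = pm_series[t - lag]
            row.extend([x, hinge(x)])
        rows.append(row)
    return rows

pm = [30.0, 55.0, 80.0, 40.0, 95.0, 60.0]   # daily PM concentrations
X = lagged_design(pm, max_lag=2)            # 4 rows, 3 lags x 2 basis terms
```

    Regressing daily mortality on these columns (with the usual time-series confounders) yields one slope below and one above the change-point at each lag, and summing across lags gives the overall dose-response curve.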

  16. Improving the Accuracy of Mapping Urban Vegetation Carbon Density by Combining Shadow Remove, Spectral Unmixing Analysis and Spatial Modeling

    NASA Astrophysics Data System (ADS)

    Qie, G.; Wang, G.; Wang, M.

    2016-12-01

    Mixed pixels and shadows due to buildings in urban areas impede accurate estimation and mapping of city vegetation carbon density. In most previous studies these factors are ignored, resulting in underestimation of city vegetation carbon density. In this study we present an integrated methodology to improve the accuracy of mapping city vegetation carbon density. First, we applied a linear shadow remove analysis (LSRA) on remotely sensed Landsat 8 images to reduce the shadow effects on carbon estimation. Second, we integrated a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR) and k-Nearest Neighbors (kNN), and applied and compared the integrated models on shadow-removed images to map vegetation carbon density. This methodology was examined in Shenzhen City of Southeast China. A data set from a total of 175 sample plots measured in 2013 and 2014 was used to train the models. The independent variables statistically significantly contributing to improving the fit of the models to the data and reducing the sum of squared errors were selected from a total of 608 variables derived from different image band combinations and transformations. The vegetation fraction from LSUA was then added into the models as an important independent variable. The estimates obtained were evaluated using a cross-validation method. Our results showed that higher accuracies were obtained from the integrated models compared with the ones using traditional methods that ignore the effects of mixed pixels and shadows. This study indicates that the integrated method has great potential for improving the accuracy of urban vegetation carbon density estimation. Key words: Urban vegetation carbon, shadow, spectral unmixing, spatial modeling, Landsat 8 images
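
    As a minimal illustration of the linear spectral unmixing step (LSUA), the sketch below recovers a vegetation fraction from a two-endmember linear mixing model by least squares. The study itself works with Landsat 8 bands and its own endmember set; the reflectance values here are invented.

```python
# Two-endmember linear spectral unmixing: model each pixel spectrum as
# f * vegetation + (1 - f) * soil and recover the vegetation fraction f
# by least squares (closed form for two endmembers). Illustrative
# reflectances, not Landsat 8 data.

def vegetation_fraction(pixel, veg, soil):
    """Least-squares f minimizing ||pixel - (f*veg + (1-f)*soil)||^2."""
    d = [v - s for v, s in zip(veg, soil)]
    num = sum((p - s) * dv for p, s, dv in zip(pixel, soil, d))
    den = sum(dv * dv for dv in d)
    return num / den

veg = [0.05, 0.08, 0.04, 0.50]       # green-vegetation endmember
soil = [0.20, 0.25, 0.30, 0.35]      # bare-soil endmember
pixel = [0.125, 0.165, 0.17, 0.425]  # an exact 50/50 mixture
f = vegetation_fraction(pixel, veg, soil)
```

    The resulting fraction image is what the study feeds into its regression and kNN carbon-density models as an additional predictor.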

  17. Functional linear models for association analysis of quantitative traits.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. 
© 2013 WILEY PERIODICALS, INC.

  18. Clinically Practical Approach for Screening of Low Muscularity Using Electronic Linear Measures on Computed Tomography Images in Critically Ill Patients.

    PubMed

    Avrutin, Egor; Moisey, Lesley L; Zhang, Roselyn; Khattab, Jenna; Todd, Emma; Premji, Tahira; Kozar, Rosemary; Heyland, Daren K; Mourtzakis, Marina

    2017-12-06

    Computed tomography (CT) scans performed during routine hospital care offer the opportunity to quantify skeletal muscle and predict mortality and morbidity in intensive care unit (ICU) patients. Existing methods of muscle cross-sectional area (CSA) quantification require specialized software, training, and time commitment that may not be feasible in a clinical setting. In this article, we explore a new screening method to identify patients with low muscle mass. We analyzed 145 scans of elderly ICU patients (≥65 years old) using a combination of measures obtained with a digital ruler, commonly found on hospital radiological software. The psoas and paraspinal muscle groups at the level of the third lumbar vertebra (L3) were evaluated by using 2 linear measures each and compared with an established method of CT image analysis of total muscle CSA in the L3 region. There was a strong association between linear measures of psoas and paraspinal muscle groups and total L3 muscle CSA (R² = 0.745, P < 0.001). Linear measures, age, and sex were included as covariates in a multiple logistic regression to predict those with low muscle mass; receiver operating characteristic (ROC) area under the curve (AUC) of the combined psoas and paraspinal linear index model was 0.920. Intraclass correlation coefficients (ICCs) were used to evaluate intrarater and interrater reliability, resulting in scores of 0.979 (95% CI: 0.940-0.992) and 0.937 (95% CI: 0.828-0.978), respectively. A digital ruler can reliably predict L3 muscle CSA, and these linear measures may be used to identify critically ill patients with low muscularity who are at risk for worse clinical outcomes. © 2017 American Society for Parenteral and Enteral Nutrition.

  19. Using color histograms and SPA-LDA to classify bacteria.

    PubMed

    de Almeida, Valber Elias; da Costa, Gean Bezerra; de Sousa Fernandes, David Douglas; Gonçalves Dias Diniz, Paulo Henrique; Brandão, Deysiane; de Medeiros, Ana Claudia Dantas; Véras, Germano

    2014-09-01

    In this work, a new approach is proposed to verify the differentiating characteristics of five bacteria (Escherichia coli, Enterococcus faecalis, Streptococcus salivarius, Streptococcus oralis, and Staphylococcus aureus) by using digital images obtained with a simple webcam and variable selection by the Successive Projections Algorithm associated with Linear Discriminant Analysis (SPA-LDA). In this sense, color histograms in the red-green-blue (RGB), hue-saturation-value (HSV), and grayscale channels and their combinations were used as input data, and statistically evaluated by using different multivariate classifiers (Soft Independent Modeling by Class Analogy (SIMCA), Principal Component Analysis-Linear Discriminant Analysis (PCA-LDA), Partial Least Squares Discriminant Analysis (PLS-DA) and Successive Projections Algorithm-Linear Discriminant Analysis (SPA-LDA)). The bacteria strains were cultivated in a nutritive blood agar base layer for 24 h by following the Brazilian Pharmacopoeia, maintaining the status of cell growth and the nature of nutrient solutions under the same conditions. The best result in classification was obtained by using RGB and SPA-LDA, which reached 94% and 100% classification accuracy in the training and test sets, respectively. This result is extremely positive from the viewpoint of routine clinical analyses, because it avoids bacterial identification based on phenotypic identification of the causative organism using Gram staining, culture, and biochemical tests. Therefore, the proposed method presents inherent advantages, promoting a simpler, faster, and low-cost alternative for bacterial identification.
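
    The color-histogram features feeding SPA-LDA can be sketched in a few lines: per-channel histograms of RGB pixel values, concatenated into one feature vector. This is a simplified stdlib version with made-up pixels, not the authors' image-processing pipeline.

```python
# Build a concatenated per-channel RGB histogram feature vector from a
# list of (r, g, b) pixels with values in 0..255. Illustrative only.

def rgb_histograms(pixels, bins=8):
    """Per-channel histograms, each normalized to sum to 1, concatenated
    into one feature vector of length 3 * bins."""
    width = 256 // bins
    hist = [[0] * bins for _ in range(3)]
    for px in pixels:
        for ch in range(3):
            hist[ch][min(px[ch] // width, bins - 1)] += 1
    n = float(len(pixels))
    return [count / n for channel in hist for count in channel]

pixels = [(255, 0, 0), (250, 10, 5), (0, 255, 0), (0, 0, 255)]
features = rgb_histograms(pixels)  # length 24 feature vector
```

    In the paper, vectors like this (per bacterial colony image) are what SPA then prunes to a small informative subset of bins before the LDA classification.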

  20. Properties of aircraft tire materials

    NASA Technical Reports Server (NTRS)

    Dodge, Richard N.; Clark, Samuel K.

    1988-01-01

    A summary is presented of measured elastomeric composite response suitable for linear structural and thermoelastic analysis in aircraft tires. Both real and loss properties are presented for a variety of operating conditions including the effects of temperature and frequency. Suitable micro-mechanics models are used for predictions of these properties for other material combinations and the applicability of laminate theory is discussed relative to measured values.

  1. Application of Stochastic Learning Theory to Elementary Arithmetic Exercises. Technical Report No. 302. Psychology and Education Series.

    ERIC Educational Resources Information Center

    Wagner, William J.

    The application of a linear learning model, which combines learning theory with a structural analysis of the exercises given to students, to an elementary mathematics curriculum is examined. Elementary arithmetic items taken by about 100 second-grade students on 26 weekly tests form the data base. Weekly predictions of group performance on…

  2. Forecasting Container Throughput at the Doraleh Port in Djibouti through Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Mohamed Ismael, Hawa; Vandyck, George Kobina

    The Doraleh Container Terminal (DCT) located in Djibouti has been noted as the most technologically advanced container terminal on the African continent. DCT's strategic location at the crossroads of the main shipping lanes connecting Asia, Africa, and Europe puts it in a unique position to provide important shipping services to vessels plying that route. This paper aims to forecast container throughput at the Doraleh Container Port in Djibouti through time series analysis. A selection of univariate forecasting models was used, namely the Triple Exponential Smoothing Model, the Grey Model, and the Linear Regression Model. Container throughput was forecast using these three models and their combination. The forecasting results of the three models and of the combination forecast were then compared using the commonly used evaluation criteria of Mean Absolute Deviation (MAD) and Mean Absolute Percentage Error (MAPE). The study found that the Linear Regression Model was the best method for forecasting container throughput, since its forecast error was the smallest. Based on the regression model, a ten (10) year forecast for container throughput at DCT has been made.
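
    The evaluation step above (fitting a trend model and scoring held-out years by MAD and MAPE) can be sketched in a few lines; the throughput figures below are invented, not DCT's actual traffic:

```python
def mad(actual, forecast):
    """Mean Absolute Deviation of the forecast errors."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

def linear_trend_forecast(series, horizon):
    """Ordinary least-squares trend line y = a + b*t, extrapolated `horizon` steps."""
    n = len(series)
    tbar = (n - 1) / 2
    ybar = sum(series) / n
    b = sum((t - tbar) * (y - ybar) for t, y in enumerate(series)) / \
        sum((t - tbar) ** 2 for t in range(n))
    a = ybar - b * tbar
    return [a + b * (n + h) for h in range(horizon)]

# Hypothetical annual container throughput (thousand TEU)
history = [220, 245, 262, 290, 311, 334, 360]
train, test = history[:5], history[5:]     # hold out the last two years
fc = linear_trend_forecast(train, len(test))
print([round(f, 1) for f in fc], round(mad(test, fc), 2), round(mape(test, fc), 2))
```

    In practice the same hold-out scores would be computed for the exponential smoothing, grey, and combination forecasts, and the model with the smallest error kept, which is how the linear regression model was selected above.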

  3. MEG and fMRI Fusion for Non-Linear Estimation of Neural and BOLD Signal Changes

    PubMed Central

    Plis, Sergey M.; Calhoun, Vince D.; Weisend, Michael P.; Eichele, Tom; Lane, Terran

    2010-01-01

    The combined analysis of magnetoencephalography (MEG)/electroencephalography and functional magnetic resonance imaging (fMRI) measurements can lead to improvement in the description of the dynamical and spatial properties of brain activity. In this paper we empirically demonstrate this improvement using simulated and recorded task related MEG and fMRI activity. Neural activity estimates were derived using a dynamic Bayesian network with continuous real valued parameters by means of a sequential Monte Carlo technique. In synthetic data, we show that MEG and fMRI fusion improves estimation of the indirectly observed neural activity and smooths tracking of the blood oxygenation level dependent (BOLD) response. In recordings of task related neural activity the combination of MEG and fMRI produces a result with greater signal-to-noise ratio, which confirms the expectation arising from the nature of the experiment. The highly non-linear model of the BOLD response poses a difficult inference problem for neural activity estimation; computational requirements are also high due to the time and space complexity. We show that joint analysis of the data improves the system's behavior by stabilizing the differential equations system and by requiring fewer computational resources. PMID:21120141

  4. Aether: leveraging linear programming for optimal cloud computing in genomics

    PubMed Central

    Luber, Jacob M; Tierney, Braden T; Cofer, Evan M; Patel, Chirag J

    2018-01-01

    Motivation: Across biology, we are seeing rapid developments in scale of data production without a corresponding increase in data analysis capabilities. Results: Here, we present Aether (http://aether.kosticlab.org), an intuitive, easy-to-use, cost-effective and scalable framework that uses linear programming to optimally bid on and deploy combinations of underutilized cloud computing resources. Our approach simultaneously minimizes the cost of data analysis and provides an easy transition from users’ existing HPC pipelines. Availability and implementation: Data utilized are available at https://pubs.broadinstitute.org/diabimmune and with EBI SRA accession ERP005989. Source code is available at https://github.com/kosticlab/aether. Examples, documentation and a tutorial are available at http://aether.kosticlab.org. Contact: chirag_patel@hms.harvard.edu or aleksandar.kostic@joslin.harvard.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29228186
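
    The core idea, treating instance selection as cost minimization under CPU and RAM constraints, can be illustrated on a toy catalogue. Exhaustive enumeration stands in for Aether's LP solver here, and the instance types, sizes, and spot prices are all invented:

```python
from itertools import product

# Hypothetical spot-instance catalogue: name -> (cores, ram_gb, price_per_hour)
catalogue = {"small": (4, 16, 0.12), "large": (16, 64, 0.40), "himem": (8, 128, 0.55)}
need_cores, need_ram = 64, 320    # aggregate requirements of the pipeline

def cheapest_mix(catalogue, need_cores, need_ram, max_each=10):
    """Brute-force the cheapest feasible instance mix. A real deployment would
    hand the same objective and constraints to an LP/ILP solver instead."""
    best = None
    names = list(catalogue)
    for counts in product(range(max_each + 1), repeat=len(names)):
        cores = sum(c * catalogue[n][0] for c, n in zip(counts, names))
        ram = sum(c * catalogue[n][1] for c, n in zip(counts, names))
        cost = sum(c * catalogue[n][2] for c, n in zip(counts, names))
        if cores >= need_cores and ram >= need_ram:
            if best is None or cost < best[0]:
                best = (cost, dict(zip(names, counts)))
    return best

cost, mix = cheapest_mix(catalogue, need_cores, need_ram)
print(round(cost, 2), mix)
```

    Note that the optimum mixes instance types rather than picking the single "best" one, which is exactly the kind of solution a linear program finds and a greedy per-type rule misses.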

  5. A dual estimate method for aeromagnetic compensation

    NASA Astrophysics Data System (ADS)

    Ma, Ming; Zhou, Zhijian; Cheng, Defu

    2017-11-01

    Scalar aeromagnetic surveys have played a vital role in prospecting. However, before the surveys’ aeromagnetic data can be analyzed, the aircraft’s magnetic interference must be removed. The extensively adopted linear model for aeromagnetic compensation is computationally efficient but faces an underfitting problem. On the other hand, the neural model proposed by Williams is more powerful at fitting but always suffers from an overfitting problem. This paper starts with an analysis of these two models and then proposes a dual estimate method that combines them to improve accuracy. This method is based on an unscented Kalman filter, but a gradient descent method is incorporated into the iteration so that the parameters of the linear model remain adjustable during flight. The noise caused by the neural model’s overfitting problem is suppressed by introducing observation noise.

  6. Techniques for the Enhancement of Linear Predictive Speech Coding in Adverse Conditions

    NASA Astrophysics Data System (ADS)

    Wrench, Alan A.

    Available from UMI in association with The British Library. Requires signed TDF. The Linear Prediction model was first applied to speech two and a half decades ago. Since then it has been the subject of intense research and continues to be one of the principal tools in the analysis of speech. Its mathematical tractability makes it a suitable subject for study, and its proven success in practical applications makes the study worthwhile. The model is known to be unsuited to speech corrupted by background noise. This has led many researchers to investigate ways of enhancing the speech signal prior to Linear Predictive analysis. In this thesis that body of work is extended. The chosen application is low bit-rate (2.4 kbits/sec) speech coding. For this task the performance of the Linear Prediction algorithm is crucial, because there is insufficient bandwidth to encode the error between the modelled speech and the original input. A review of the fundamentals of Linear Prediction and an independent assessment of the relative performance of methods of Linear Prediction modelling are presented. A new method is proposed which is fast and facilitates stability checking; however, its stability is shown to be unacceptably poor compared with existing methods. A novel supposition governing the positioning of the analysis frame relative to a voiced speech signal is proposed and supported by observation. The problem of coding noisy speech is examined. Four frequency domain speech processing techniques are developed and tested. These are: (i) Combined Order Linear Prediction Spectral Estimation; (ii) Frequency Scaling According to an Aural Model; (iii) Amplitude Weighting Based on Perceived Loudness; (iv) Power Spectrum Squaring. These methods are compared with the Recursive Linearised Maximum a Posteriori method. Following on from work done in the frequency domain, a time domain implementation of spectrum squaring is developed.
In addition, a new method of power spectrum estimation is developed based on the Minimum Variance approach. This new algorithm is shown to be closely related to Linear Prediction but produces slightly broader spectral peaks. Spectrum squaring is applied to both the new algorithm and standard Linear Prediction and their relative performance is assessed. (Abstract shortened by UMI.).

  7. Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations

    NASA Astrophysics Data System (ADS)

    Mitry, Mina

    Often, computationally expensive engineering simulations can impede the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
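
    A minimal linear-ROSM loop (snapshot PCA down to one mode, then radial-basis-function interpolation of the modal coefficient over the design parameter) might look like the sketch below; the "simulation" is a made-up 50-point profile driven by one parameter, not the thesis's aerodynamic data:

```python
import math

def power_iteration(cov, iters=200):
    """Leading eigenvector of a symmetric matrix by power iteration."""
    v = [1.0] * len(cov)
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in cov]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def simulate(alpha):
    # hypothetical expensive simulation: a 50-point profile driven by alpha
    return [math.sin(0.2 * i) * alpha + 0.1 * i for i in range(50)]

params = [0.5, 1.0, 1.5, 2.0, 2.5]                  # training designs (snapshots)
snaps = [simulate(a) for a in params]
mean = [sum(col) / len(snaps) for col in zip(*snaps)]
centered = [[x - m for x, m in zip(s, mean)] for s in snaps]
d = len(mean)
cov = [[sum(s[i] * s[j] for s in centered) / len(snaps) for j in range(d)]
       for i in range(d)]
mode = power_iteration(cov)                          # first POD/PCA mode
coeffs = [sum(c * v for c, v in zip(s, mode)) for s in centered]

# Gaussian RBF interpolation of the modal coefficient vs. the parameter
eps = 1.0
phi = lambda r: math.exp(-(eps * r) ** 2)
A = [[phi(abs(a - b)) for b in params] for a in params]
w = solve(A, coeffs)

def surrogate(alpha):
    c = sum(wi * phi(abs(alpha - a)) for wi, a in zip(w, params))
    return [m + c * v for m, v in zip(mean, mode)]

err = max(abs(p - q) for p, q in zip(surrogate(1.25), simulate(1.25)))
print(round(err, 4))
```

    Evaluating the surrogate costs a handful of kernel evaluations instead of a full simulation, which is the computational benefit the abstract reports; the kernel-PCA variant replaces the covariance step with a kernel matrix.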

  8. Suppression of stimulated Brillouin scattering in optical fibers using a linearly chirped diode laser.

    PubMed

    White, J O; Vasilyev, A; Cahill, J P; Satyan, N; Okusaga, O; Rakuljic, G; Mungan, C E; Yariv, A

    2012-07-02

    The output of high power fiber amplifiers is typically limited by stimulated Brillouin scattering (SBS). An analysis of SBS with a chirped pump laser indicates that a chirp of 2.5 × 10^15 Hz/s could raise, by an order of magnitude, the SBS threshold of a 20-m fiber. A diode laser with a constant output power and a linear chirp of 5 × 10^15 Hz/s has been previously demonstrated. In a low-power proof-of-concept experiment, the threshold for SBS in a 6-km fiber is increased by a factor of 100 with a chirp of 5 × 10^14 Hz/s. A linear chirp will enable straightforward coherent combination of multiple fiber amplifiers, with electronic compensation of path length differences on the order of 0.2 m.

  9. Effects of Initial Geometric Imperfections On the Non-Linear Response of the Space Shuttle Superlightweight Liquid-Oxygen Tank

    NASA Technical Reports Server (NTRS)

    Nemeth, Michael P.; Young, Richard D.; Collins, Timothy J.; Starnes, James H., Jr.

    2002-01-01

    The results of an analytical study of the elastic buckling and nonlinear behavior of the liquid-oxygen tank for the new Space Shuttle superlightweight external fuel tank are presented. Selected results that illustrate three distinctly different types of non-linear response phenomena for thin-walled shells which are subjected to combined mechanical and thermal loads are presented. These response phenomena consist of a bifurcation-type buckling response, a short-wavelength non-linear bending response and a non-linear collapse or "snap-through" response associated with a limit point. The effects of initial geometric imperfections on the response characteristics are emphasized. The results illustrate that the buckling and non-linear response of a geometrically imperfect shell structure subjected to complex loading conditions may not be adequately characterized by an elastic linear bifurcation buckling analysis, and that the traditional industry practice of applying a buckling-load knock-down factor can result in an ultraconservative design. Results are also presented that show that a fluid-filled shell can be highly sensitive to initial geometric imperfections, and that the use of a buckling-load knock-down factor is needed for this case.

  10. Regression Model Term Selection for the Analysis of Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred; Volden, Thomas R.

    2010-01-01

    The paper discusses the selection of regression model terms for the analysis of wind tunnel strain-gage balance calibration data. Different function class combinations are presented that may be used to analyze calibration data using either a non-iterative or an iterative method. The role of the intercept term in a regression model of calibration data is reviewed. In addition, useful algorithms and metrics originating from linear algebra and statistics are recommended that will help an analyst (i) to identify and avoid both linear and near-linear dependencies between regression model terms and (ii) to make sure that the selected regression model of the calibration data uses only statistically significant terms. Three different tests are suggested that may be used to objectively assess the predictive capability of the final regression model of the calibration data. These tests use both the original data points and regression model independent confirmation points. Finally, data from a simplified manual calibration of the Ames MK40 balance is used to illustrate the application of some of the metrics and tests to a realistic calibration data set.
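
    One of the recommended checks, flagging near-linear dependencies between candidate regression terms, can be illustrated with the variance inflation factor (VIF); the calibration-style data below are synthetic, and the term names are invented for the example:

```python
import random

def centered(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

def r_squared(y, x1, x2):
    """R^2 of regressing y on x1 and x2 (normal equations, 2x2 closed form)."""
    y, x1, x2 = centered(y), centered(x1), centered(x2)
    s11 = sum(a * a for a in x1)
    s22 = sum(a * a for a in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s1y = sum(a * b for a, b in zip(x1, y))
    s2y = sum(a * b for a, b in zip(x2, y))
    det = s11 * s22 - s12 * s12
    b1 = (s22 * s1y - s12 * s2y) / det
    b2 = (s11 * s2y - s12 * s1y) / det
    return (b1 * s1y + b2 * s2y) / sum(a * a for a in y)

random.seed(2)
# Two independent hypothetical load terms, and a candidate regression term
# that nearly duplicates their sum (a near-linear dependency)
N1 = [random.gauss(0, 1) for _ in range(100)]
N2 = [random.gauss(0, 1) for _ in range(100)]
N12 = [a + b + random.gauss(0, 0.01) for a, b in zip(N1, N2)]

vif = 1.0 / (1.0 - r_squared(N12, N1, N2))   # variance inflation factor of N12
print(vif > 10)   # a common rule of thumb flags terms with VIF > 10
```

    A term with a huge VIF adds almost no independent information and destabilizes the fitted coefficients, which is why such terms are dropped before the final calibration model is accepted.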

  11. Numerical simulation of the wave-induced non-linear bending moment of ships

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, J.; Wang, Z.; Gu, X.

    1995-12-31

    Ships traveling in moderate or rough seas may experience non-linear bending moments due to flare effect and slamming loads. The numerical simulation of the total wave-induced bending moment, contributed from both the wave frequency component induced by wave forces and the high frequency whipping component induced by slamming actions, is very important in predicting the responses and ensuring the safety of the ship in rough seas. The time simulation is also useful for the reliability analysis of ship girder strength. The present paper discusses four different methods for the numerical simulation of the wave-induced non-linear vertical bending moment of ships recently developed at CSSRC, including the hydroelastic integral-differential method (HID), the hydroelastic differential analysis method (HDA), the combined seakeeping and structural forced vibration method (CSFV), and the modified CSFV method (MCSFV). Numerical predictions are compared with the experimental results obtained from the elastic ship model test of the S-175 container ship in regular and irregular waves presented by Watanabe, Ueno, and Sawada (1989).

  12. Experimental and numerical analysis of pre-compressed masonry walls in two-way-bending with second order effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milani, Gabriele, E-mail: milani@stru.polimi.it; Olivito, Renato S.; Tralli, Antonio

    2014-10-06

    The buckling behavior of slender unreinforced masonry (URM) walls subjected to axial compression and out-of-plane lateral loads is investigated through a combined experimental and numerical homogenized approach. After a preliminary analysis performed on a unit cell meshed by means of elastic FEs and non-linear interfaces, the macroscopic moment-curvature diagrams so obtained are implemented at a structural level, discretizing the masonry by means of rigid triangular elements and non-linear interfaces. The non-linear incremental response of the structure is computed by a specific quadratic programming routine. In parallel, a wide experimental campaign is conducted on walls in two-way bending, with the double aim of both validating the numerical model and investigating the behavior of walls that may not be reduced to simple cantilevers or simply supported beams. The panels investigated are dry-joint square walls in scale, simply supported at the base and on a vertical edge, exhibiting the classical Rondelet’s mechanism. The results obtained are compared with those provided by the numerical model.

  13. Near-infrared confocal micro-Raman spectroscopy combined with PCA-LDA multivariate analysis for detection of esophageal cancer

    NASA Astrophysics Data System (ADS)

    Chen, Long; Wang, Yue; Liu, Nenrong; Lin, Duo; Weng, Cuncheng; Zhang, Jixue; Zhu, Lihuan; Chen, Weisheng; Chen, Rong; Feng, Shangyuan

    2013-06-01

    The diagnostic capability of using tissue intrinsic micro-Raman signals to obtain biochemical information from human esophageal tissue is presented in this paper. Near-infrared micro-Raman spectroscopy combined with multivariate analysis was applied for discrimination of esophageal cancer tissue from normal tissue samples. Micro-Raman spectroscopy measurements were performed on 54 esophageal cancer tissues and 55 normal tissues in the 400-1750 cm^-1 range. The mean Raman spectra showed significant differences between the two groups. Tentative assignments of the Raman bands in the measured tissue spectra suggested some changes in protein structure, a decrease in the relative amount of lactose, and increases in the percentages of tryptophan, collagen and phenylalanine content in esophageal cancer tissue as compared to those of a normal subject. The diagnostic algorithms based on principal component analysis (PCA) and linear discriminant analysis (LDA) achieved a diagnostic sensitivity of 87.0% and specificity of 70.9% for separating cancer from normal esophageal tissue samples. The result demonstrated that near-infrared micro-Raman spectroscopy combined with PCA-LDA analysis could be an effective and sensitive tool for identification of esophageal cancer.
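
    A compact sketch of the PCA-LDA pipeline, reduced to one principal component and a 1-D LDA-style threshold for brevity; the "spectra" below are synthetic Gaussian bands with class-dependent amplitude, not Raman data:

```python
import math
import random

def leading_pc(rows, iters=300):
    """First principal component via power iteration, using C^T(Cv) products
    so the full covariance matrix is never formed."""
    d = len(rows[0])
    mean = [sum(col) / len(rows) for col in zip(*rows)]
    C = [[x - m for x, m in zip(r, mean)] for r in rows]
    v = [1.0] * d
    for _ in range(iters):
        cv = [sum(ci * vi for ci, vi in zip(row, v)) for row in C]
        w = [sum(C[k][j] * cv[k] for k in range(len(C))) for j in range(d)]
        n = math.sqrt(sum(x * x for x in w))
        v = [x / n for x in w]
    return mean, v

random.seed(3)
def spectrum(cancer):
    # hypothetical intensities over 30 wavenumber bins: one band whose
    # amplitude differs between classes, plus noise
    peak = 12 if cancer else 8
    return [peak * math.exp(-((i - 15) / 4) ** 2) + random.gauss(0, 0.5)
            for i in range(30)]

labels = [0] * 20 + [1] * 20
train = [spectrum(c) for c in labels]
mean, pc = leading_pc(train)
score = lambda s: sum((x - m) * v for x, m, v in zip(s, mean, pc))

m0 = sum(score(s) for s, l in zip(train, labels) if l == 0) / 20
m1 = sum(score(s) for s, l in zip(train, labels) if l == 1) / 20
threshold = (m0 + m1) / 2                 # 1-D LDA boundary (equal variances)
classify = lambda s: int((score(s) - threshold) * (m1 - m0) > 0)

correct = sum(classify(spectrum(c)) == c for c in [0, 1] * 25)
print(correct, "/ 50")
```

    The real analysis keeps several principal components and fits LDA in that reduced space, but the structure (unsupervised compression, then a supervised linear boundary) is the same.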

  14. Noninvasive detection of nasopharyngeal carcinoma based on saliva proteins using surface-enhanced Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Lin, Xueliang; Lin, Duo; Ge, Xiaosong; Qiu, Sufang; Feng, Shangyuan; Chen, Rong

    2017-10-01

    The present study evaluated the capability of saliva analysis combining membrane protein purification with surface-enhanced Raman spectroscopy (SERS) for noninvasive detection of nasopharyngeal carcinoma (NPC). A rapid and convenient protein purification method based on cellulose acetate membrane was developed. A total of 659 high-quality SERS spectra were acquired from purified proteins extracted from the saliva samples of 170 patients with pathologically confirmed NPC and 71 healthy volunteers. Spectral analysis of those saliva protein SERS spectra revealed specific changes in some biochemical compositions, which were possibly associated with NPC transformation. Furthermore, principal component analysis combined with linear discriminant analysis (PCA-LDA) was utilized to analyze and classify the saliva protein SERS spectra from NPC and healthy subjects. Diagnostic sensitivity of 70.7%, specificity of 70.3%, and diagnostic accuracy of 70.5% could be achieved by PCA-LDA for NPC identification. These results show that this assay based on saliva protein SERS analysis holds promising potential for developing a rapid, noninvasive, and convenient clinical tool for NPC screening.

  15. Linearity and sex-specificity of impact force prediction during a fall onto the outstretched hand using a single-damper-model.

    PubMed

    Kawalilak, C E; Lanovaz, J L; Johnston, J D; Kontulainen, S A

    2014-09-01

    To assess the linearity and sex-specificity of damping coefficients used in a single-damper-model (SDM) when predicting impact forces during the worst-case falling scenario from fall heights up to 25 cm. Using 3-dimensional motion tracking and an integrated force plate, impact forces and impact velocities were assessed from 10 young adults (5 males; 5 females), falling from planted knees onto outstretched arms, from a random order of drop heights: 3, 5, 7, 10, 15, 20, and 25 cm. We assessed the linearity and sex-specificity between impact forces and impact velocities across all fall heights using an analysis of variance linearity test and linear regression, respectively. Significance was accepted at P<0.05. The association between impact forces and impact velocities up to 25 cm was linear (P=0.02). Damping coefficients appeared sex-specific (males: 627 Ns/m, R(2)=0.70; females: 421 Ns/m, R(2)=0.81; sexes combined: 532 Ns/m, R(2)=0.61). A linear damping coefficient used in the SDM proved valid for predicting impact forces from fall heights up to 25 cm. Results suggested the use of sex-specific damping coefficients when estimating impact force using the SDM and calculating the factor-of-risk for wrist fractures.
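
    The single-damper-model fit, F = c·v through the origin, reduces to one least-squares line per sex. The example below regenerates noisy forces from the reported coefficients, so only the 627 and 421 Ns/m values come from the abstract; the noise level and velocities are assumptions:

```python
import random

def damping_coefficient(velocities, forces):
    """Least-squares c in the single-damper model F = c * v (fit through origin)."""
    return sum(f * v for f, v in zip(forces, velocities)) / \
           sum(v * v for v in velocities)

random.seed(6)
heights = [0.03, 0.05, 0.07, 0.10, 0.15, 0.20, 0.25]      # drop heights (m)
v = [(2 * 9.81 * h) ** 0.5 for h in heights]              # impact velocities (m/s)

# Synthetic impact forces built from the reported sex-specific coefficients
males = [627 * vi + random.gauss(0, 30) for vi in v]
females = [421 * vi + random.gauss(0, 30) for vi in v]

c_m = damping_coefficient(v, males)
c_f = damping_coefficient(v, females)
print(round(c_m), round(c_f))
```

    Recovering coefficients close to the generating values illustrates why a single linear damping term suffices over this height range, and why pooling the sexes (the 532 Ns/m figure above) blurs two genuinely different slopes.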

  16. Atlas-guided volumetric diffuse optical tomography enhanced by generalized linear model analysis to image risk decision-making responses in young adults.

    PubMed

    Lin, Zi-Jing; Li, Lin; Cazzell, Mary; Liu, Hanli

    2014-08-01

    Diffuse optical tomography (DOT) is a variant of functional near infrared spectroscopy and has the capability of mapping or reconstructing three dimensional (3D) hemodynamic changes due to brain activity. Common methods used in DOT image analysis to define brain activation have limitations because the selection of the activation period is relatively subjective. General linear model (GLM)-based analysis can overcome this limitation. In this study, we combine atlas-guided 3D DOT image reconstruction with GLM-based analysis (i.e., voxel-wise GLM analysis) to investigate the brain activity that is associated with risk decision-making processes. Risk decision-making is an important cognitive process and thus is an essential topic in the field of neuroscience. The Balloon Analog Risk Task (BART) is a valid experimental model and has been commonly used to assess human risk-taking actions and tendencies while facing risks. We have used the BART paradigm with a blocked design to investigate brain activations in the prefrontal and frontal cortical areas during decision-making from 37 human participants (22 males and 15 females). Voxel-wise GLM analysis was performed after a human brain atlas template and a depth compensation algorithm were combined to form atlas-guided DOT images. In this work, we wish to demonstrate the value of using voxel-wise GLM analysis with DOT to image and study cognitive functions in response to risk decision-making. Results have shown significant hemodynamic changes in the dorsal lateral prefrontal cortex (DLPFC) during the active-choice mode and a different activation pattern between genders; these findings correlate well with published literature in functional magnetic resonance imaging (fMRI) and fNIRS studies. Copyright © 2014 The Authors. Human Brain Mapping Published by Wiley Periodicals, Inc.
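
    The voxel-wise GLM idea, regressing every voxel's time course on the task design and mapping the resulting betas, can be sketched with a boxcar regressor. The block timing, voxel names, and response amplitudes below are invented, not the study's parameters:

```python
import random

def ols_beta(y, x):
    """Slope of y regressed on x with an intercept (ordinary least squares)."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)

random.seed(4)
# Hypothetical block design: 10 s task / 10 s rest, 1 Hz sampling, 100 frames
task = [1 if (t // 10) % 2 else 0 for t in range(100)]

def voxel_timecourse(active):
    gain = 0.8 if active else 0.0   # hemodynamic amplitude in an active voxel
    return [gain * task[t] + random.gauss(0, 0.3) for t in range(100)]

# Fit the same GLM at every voxel; the beta map shows where the signal
# follows the task regressor
voxels = {"DLPFC": voxel_timecourse(True), "control": voxel_timecourse(False)}
betas = {name: ols_beta(tc, task) for name, tc in voxels.items()}
print({k: round(v, 2) for k, v in betas.items()})
```

    In the full analysis each beta would be tested against its standard error and the task regressor convolved with a hemodynamic response function, but the per-voxel regression is the heart of the method, and it removes the subjective choice of an "activation window".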

  17. A viscous flow analysis for the tip vortex generation process

    NASA Technical Reports Server (NTRS)

    Shamroth, S. J.; Briley, W. R.

    1979-01-01

    A three dimensional, forward-marching, viscous flow analysis is applied to the tip vortex generation problem. The equations include a streamwise momentum equation, a streamwise vorticity equation, a continuity equation, and a secondary flow stream function equation. The numerical method used combines a consistently split linearized scheme for parabolic equations with a scalar iterative ADI scheme for elliptic equations. The analysis is used to identify the source of the tip vortex generation process, as well as to obtain detailed flow results for a rectangular planform wing immersed in a high Reynolds number free stream at 6 degree incidence.

  18. First Stage of a Highly Reliable Reusable Launch System

    NASA Technical Reports Server (NTRS)

    Kloesel, Kurt J.; Pickrel, Jonathan B.; Sayles, Emily L.; Wright, Michael; Marriott, Darin; Holland, Leo; Kuznetsov, Stephen

    2009-01-01

    Electromagnetic launch assist has the potential to provide a highly reliable reusable first stage to a space access system infrastructure at a lower overall cost. This paper explores the benefits of a smaller system that adds the advantages of a high specific impulse air-breathing stage and supersonic launch speeds. The method of virtual specific impulse is introduced as a tool to emphasize the gains afforded by launch assist. Analysis shows launch assist can provide a 278-s virtual specific impulse for a first-stage solid rocket. Additional trajectory analysis demonstrates that a system composed of a launch-assisted first-stage ramjet plus a bipropellant second stage can provide a 48-percent gross lift-off weight reduction versus an all-rocket system. The combination of high-speed linear induction motors and ramjets is identified as the enabling technology, and benchtop prototypes are investigated. The high-speed response of a standard 60 Hz linear induction motor was tested with a pulse width modulated variable frequency drive to 150 Hz using a 10-lb load, achieving 150 mph. A 300-Hz stator-compensated linear induction motor was constructed and static-tested to 1900 lbf average. A matching ramjet design was developed for use on the 300-Hz linear induction motor.

  19. Classification of sodium MRI data of cartilage using machine learning.

    PubMed

    Madelin, Guillaume; Poidevin, Frederick; Makrymallis, Antonios; Regatte, Ravinder R

    2015-11-01

    To assess the possible utility of machine learning for classifying subjects with and subjects without osteoarthritis using sodium magnetic resonance imaging data. Support vector machine, k-nearest neighbors, naïve Bayes, discriminant analysis, linear regression, logistic regression, neural networks, decision tree, and tree bagging were tested. Sodium magnetic resonance imaging with and without fluid suppression by inversion recovery was acquired on the knee cartilage of 19 controls and 28 osteoarthritis patients. Sodium concentrations were measured in regions of interest in the knee for both acquisitions. Mean (MEAN) and standard deviation (STD) of these concentrations were measured in each region of interest, and the minimum, maximum, and mean of these two measurements were calculated over all regions of interest for each subject. The resulting 12 variables per subject were used as predictors for classification. Either Min [STD] alone, or in combination with Mean [MEAN] or Min [MEAN], all from fluid-suppressed data, were the best predictors with an accuracy >74%, mainly with linear logistic regression and linear support vector machine. Other good classifiers include discriminant analysis, linear regression, and naïve Bayes. Machine learning is a promising technique for classifying osteoarthritis patients and controls from sodium magnetic resonance imaging data. © 2014 Wiley Periodicals, Inc.
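
    A sketch of one of the listed classifiers, k-nearest neighbors scored by leave-one-out accuracy on two predictors. The subject data are simulated; only the 19/28 group sizes and the Min[STD]/Mean[MEAN] feature names echo the abstract, and the concentration values are invented:

```python
import random

def knn_predict(train, labels, x, k=3):
    """k-nearest-neighbor vote using squared Euclidean distance."""
    order = sorted(range(len(train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], x)))
    votes = [labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

random.seed(5)
def subject(oa):
    # hypothetical [Min[STD], Mean[MEAN]] sodium features; OA cartilage
    # tends toward lower sodium concentration
    return [random.gauss(60 - 12 * oa, 4), random.gauss(250 - 40 * oa, 15)]

labels = [0] * 19 + [1] * 28          # 19 controls, 28 OA patients
train = [subject(c) for c in labels]

# Leave-one-out accuracy, the usual estimate for a cohort this small
hits = 0
for i in range(len(train)):
    rest = train[:i] + train[i + 1:]
    rlab = labels[:i] + labels[i + 1:]
    hits += knn_predict(rest, rlab, train[i]) == labels[i]
accuracy = hits / len(train)
print(round(accuracy, 2))
```

    With features this cheap to compute, the same leave-one-out loop can be rerun for every classifier in the list above, which is essentially how the >74% accuracy comparison was made.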

  20. Analysis of lithology: Vegetation mixes in multispectral images

    NASA Technical Reports Server (NTRS)

    Adams, J. B.; Smith, M.; Adams, J. D.

    1982-01-01

    Discrimination and identification of lithologies from multispectral images is discussed. Rock/soil identification can be facilitated by removing the component of the signal in the images that is contributed by the vegetation. Mixing models were developed to predict the spectra of combinations of pure end members, and those models were refined using laboratory measurements of real mixtures. Models in use include a simple linear (checkerboard) mix, granular mixing, semi-transparent coatings, and combinations of the above. The use of interactive computer techniques that allow quick comparison of the spectrum of a pixel stack (in a multiband set) with laboratory spectra is discussed.
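
    The simple linear (checkerboard) mixing model can be inverted by least squares to recover end-member fractions, which is the step that lets the vegetation component be removed from a pixel. The six-band reflectance spectra below are invented for illustration:

```python
# Two hypothetical end-member spectra (reflectance in 6 bands)
rock = [0.30, 0.34, 0.38, 0.45, 0.50, 0.52]
veg = [0.05, 0.08, 0.04, 0.45, 0.48, 0.25]

def unmix(pixel, rock, veg):
    """Least-squares fraction f for the linear (checkerboard) model
    pixel = f * rock + (1 - f) * veg."""
    d = [r - v for r, v in zip(rock, veg)]
    num = sum((p - v) * di for p, v, di in zip(pixel, veg, d))
    return num / sum(di * di for di in d)

# Synthesize a 40% rock / 60% vegetation pixel and recover the fraction
mixed = [0.4 * r + 0.6 * v for r, v in zip(rock, veg)]
f = unmix(mixed, rock, veg)
residual = [m - (f * r + (1 - f) * v) for m, r, v in zip(mixed, rock, veg)]
print(round(f, 3))
```

    A large residual would indicate that the two chosen end members cannot explain the pixel, pointing to a nonlinear effect (granular mixing, coatings) or a missing end member, which is exactly what the refined models above address.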

  1. A meta-analysis of lasalocid effects on rumen measures, beef and dairy performance, and carcass traits in cattle.

    PubMed

    Golder, H M; Lean, I J

    2016-01-01

    The effects of lasalocid on rumen measures, beef and dairy performance, and carcass traits were evaluated using meta-analysis. Meta-regression was used to investigate sources of heterogeneity. Ten studies (20 comparisons) were used in the meta-analysis on rumen measures. Lasalocid increased total VFA and ammonia concentrations by 6.46 and 1.44 mM, respectively. Lasalocid increased propionate and decreased acetate and butyrate molar percentage (M%) by 4.62, 3.18, and 0.83%, respectively. Valerate M% and pH were not affected. Meta-regression found butyrate M% linearly increased with duration of lasalocid supplementation (DUR; P = 0.017). When >200 mg/d was fed, propionate and valerate M% were higher and acetate M% was lower (P = 0.042, P = 0.017, and P = 0.005, respectively). Beef performance was assessed using 31 studies (67 comparisons). Lasalocid increased ADG by 40 g/d, improved feed-to-gain ratio (F:G) by 410 g/kg, and improved feed efficiency (FE; a combined measure of G:F and the inverse of F:G). Lasalocid did not affect DMI, but heterogeneity in DMI was influenced by DUR (P = 0.004) and the linear effect of entry BW (P = 0.011). The combination of ≤100 vs. >100 d DUR and entry BW ≤275 vs. >275 kg showed that cattle ≤275 kg at entry fed lasalocid for >100 d had the lowest DMI. Heterogeneity of ADG was influenced by the linear effect of entry BW (P = 0.028) but not DUR. Combining entry BW ≤275 vs. >275 kg and DUR showed that cattle entering at >275 kg fed ≤100 d had the highest ADG. The FE (P = 0.025) and F:G (P = 0.015) linearly improved with dose, and entry BW >275 kg improved F:G (P = 0.038). Fourteen studies (25 comparisons) were used to assess carcass traits. Lasalocid increased HCW by 4.73 kg but not dressing percentage, mean fat cover, or marbling score. Heterogeneity of carcass traits was low and not affected by DUR or dose. Seven studies (11 comparisons) were used to assess dairy performance, but the study power was relatively low and the evidence base is limited.
Lasalocid decreased DMI in total mixed ration-fed cows by 0.89 kg/d but had no effect on milk yield, milk components, or component yields. Dose linearly decreased DMI (P = 0.049). The DUR did not affect heterogeneity of dairy measures. This work showed that lasalocid improved ADG, HCW, FE, and F:G for beef production. These findings may reflect improved energy efficiency from increased propionate M% and decreased acetate and butyrate M%. Large dairy studies are required for further evaluation of the effects of lasalocid on dairy performance.
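
    The pooling step behind such a meta-analysis can be sketched with the standard DerSimonian-Laird random-effects estimator; the per-comparison ADG effects and variances below are invented, not the paper's extracted data:

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with the DerSimonian-Laird tau^2."""
    w = [1 / v for v in variances]
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))   # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)   # between-study variance
    wr = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(wr, effects)) / sum(wr)
    se = (1 / sum(wr)) ** 0.5
    return pooled, se, tau2

# Hypothetical ADG differences (g/d, lasalocid minus control) from 5 comparisons
effects = [35, 52, 28, 61, 44]
variances = [90, 120, 80, 150, 100]
pooled, se, tau2 = dersimonian_laird(effects, variances)
print(round(pooled, 1), round(se, 1))
```

    A nonzero tau^2 is the heterogeneity that the paper's meta-regression then tries to explain with covariates such as dose, duration, and entry body weight.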

  2. Optimization of composite box-beam structures including effects of subcomponent interactions

    NASA Technical Reports Server (NTRS)

    Ragon, Scott A.; Guerdal, Zafer; Starnes, James H., Jr.

    1995-01-01

    Minimum mass designs are obtained for a simple box beam structure subject to bending, torque and combined bending/torque load cases. These designs are obtained subject to point strain and linear buckling constraints. The present work differs from previous efforts in that special attention is paid to including the effects of subcomponent panel interaction in the optimal design process. Two different approaches are used to impose the buckling constraints. When the global approach is used, buckling constraints are imposed on the global structure via a linear eigenvalue analysis. This approach allows the subcomponent panels to interact in a realistic manner. The results obtained using this approach are compared to results obtained using a traditional, less expensive approach, called the local approach. When the local approach is used, in-plane loads are extracted from the global model and used to impose buckling constraints on each subcomponent panel individually. In the global cases, it is found that there can be significant interaction between skin, spar, and rib design variables. This coupling is weak or nonexistent in the local designs. It is determined that weight savings of up to 7% may be obtained by using the global approach instead of the local approach to design these structures. Several of the designs obtained using the linear buckling analysis are subjected to a geometrically nonlinear analysis. For the designs which were subjected to bending loads, the innermost rib panel begins to collapse at less than half the intended design load and in a mode different from that predicted by linear analysis. The discrepancy between the predicted linear and nonlinear responses is attributed to the effects of the nonlinear rib crushing load, and the parameter which controls this rib collapse failure mode is shown to be the rib thickness. 
The rib collapse failure mode may be avoided by increasing the rib thickness above the value obtained from the (linear analysis based) optimizer. It is concluded that it would be necessary to include geometric nonlinearities in the design optimization process if the true optimum in this case were to be found.

  3. Efficacy of an Electromechanical Gait Trainer Poststroke in Singapore: A Randomized Controlled Trial.

    PubMed

    Chua, Joyce; Culpan, Jane; Menon, Edward

    2016-05-01

    To evaluate the longer-term effects of electromechanical gait trainers (GTs) combined with conventional physiotherapy on health status, function, and ambulation in people with subacute stroke in comparison with conventional physiotherapy given alone. Randomized controlled trial with intention-to-treat analysis. Community hospital in Singapore. Nonambulant individuals (N=106) recruited approximately 1 month poststroke. Both groups received 45 minutes of physiotherapy 6 times per week for 8 weeks as follows: the GT group received 20 minutes of GT training and 5 minutes of stance/gait training in contrast with 25 minutes of stance/gait training for the control group. Both groups completed 10 minutes of standing and 10 minutes of cycling. The primary outcome was the Functional Ambulation Category (FAC). Secondary outcomes were the Barthel Index (BI), gait speed and endurance, and Stroke Impact Scale (SIS). Measures were taken at baseline and 4, 8, 12, 24, and 48 weeks. Generalized linear model analysis showed significant improvement over time (independent of group) for the FAC, BI, and SIS physical and participation subscales. However, no significant group × time or group differences were observed for any of the outcome variables after generalized linear model analysis. The use of GTs combined with conventional physiotherapy can be as effective as conventional physiotherapy applied alone for people with subacute stroke. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  4. Application of Higuchi's fractal dimension from basic to clinical neurophysiology: A review.

    PubMed

    Kesić, Srdjan; Spasić, Sladjana Z

    2016-09-01

    For more than 20 years, Higuchi's fractal dimension (HFD), as a nonlinear method, has occupied an important place in the analysis of biological signals. The use of HFD has evolved from EEG and single neuron activity analysis to the most recent application in automated assessments of different clinical conditions. Our objective is to provide an updated review of the HFD method applied in basic and clinical neurophysiological research. This article summarizes and critically reviews a broad literature and major findings concerning the applications of HFD for measuring the complexity of neuronal activity during different neurophysiological conditions. The source of information used in this review comes from the PubMed, Scopus, Google Scholar and IEEE Xplore Digital Library databases. The review process substantiated the significance, advantages and shortcomings of HFD application within all key areas of basic and clinical neurophysiology. Therefore, the paper discusses HFD application alone, combined with other linear or nonlinear measures, or as a part of automated methods for analyzing neurophysiological signals. The speed, accuracy and cost of applying the HFD method for research and medical diagnosis make it stand out from the widely used linear methods. However, only a combination of HFD with other nonlinear methods ensures reliable and accurate analysis of a wide range of neurophysiological signals. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
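Higuchi's estimator itself is compact enough to sketch in a few lines of numpy. This is a generic implementation of the 1988 algorithm, with an arbitrary illustrative `kmax`; it is not code from any of the reviewed studies:

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Higuchi (1988) fractal dimension of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    N = x.size
    L = []
    for k in range(1, kmax + 1):
        Lm = []
        for m in range(k):
            n = (N - m - 1) // k              # points in this subsampled series
            idx = m + np.arange(n + 1) * k    # x[m], x[m+k], x[m+2k], ...
            length = np.abs(np.diff(x[idx])).sum()
            # Higuchi's normalisation for unequal subseries lengths
            Lm.append(length * (N - 1) / (n * k) / k)
        L.append(np.mean(Lm))
    k_vals = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(L), 1)
    return slope

print(round(higuchi_fd(np.arange(1000.0)), 2))  # ~1.0 for a straight line
```

A straight line yields a dimension near 1, while white noise approaches 2; this contrast between regular and irregular signals is what the complexity measure exploits.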

  5. Predictors of burnout among correctional mental health professionals.

    PubMed

    Gallavan, Deanna B; Newman, Jody L

    2013-02-01

This study focused on the experience of burnout among a sample of correctional mental health professionals. We examined the relationship of a linear combination of optimism, work-family conflict, and attitudes toward prisoners with two dimensions derived from the Maslach Burnout Inventory and the Professional Quality of Life Scale. Initially, three subscales from the Maslach Burnout Inventory and two subscales from the Professional Quality of Life Scale were subjected to principal components analysis with oblimin rotation in order to identify underlying dimensions among the subscales. This procedure resulted in two components accounting for approximately 75% of the variance (r = -.27). The first component was labeled Negative Experience of Work because it seemed to tap the experience of being emotionally spent, detached, and socially avoidant. The second component was labeled Positive Experience of Work and seemed to tap a sense of competence, success, and satisfaction in one's work. Two multiple regression analyses were subsequently conducted, in which Negative Experience of Work and Positive Experience of Work, respectively, were predicted from a linear combination of optimism, work-family conflict, and attitudes toward prisoners. In the first analysis, 44% of the variance in Negative Experience of Work was accounted for, with work-family conflict and optimism accounting for the most variance. In the second analysis, 24% of the variance in Positive Experience of Work was accounted for, with optimism and attitudes toward prisoners accounting for the most variance.

  6. Investigating the sex-related geometric variation of the human cranium.

    PubMed

    Bertsatos, Andreas; Papageorgopoulou, Christina; Valakos, Efstratios; Chovalopoulou, Maria-Eleni

    2018-01-29

Accurate sexing methods are of great importance in forensic anthropology since sex assessment is among the principal tasks when examining human skeletal remains. The present study explores a novel approach to assessing the most accurate metric traits of the human cranium for sex estimation, based on 80 ectocranial landmarks from 176 modern individuals of known age and sex from the Athens Collection. The purpose of the study is to identify those distance and angle measurements that can be most effectively used in sex assessment. Three-dimensional landmark coordinates were digitized with a Microscribe 3DX and analyzed in GNU Octave. An iterative linear discriminant analysis of all possible combinations of landmarks was performed for each unique set of the 3160 distances and 246,480 angles. Cross-validated correct classification rates as well as multivariate DFA on the top-performing variables identified 13 craniometric distances with over 85% classification accuracy and 7 angles with over 78%, as well as certain multivariate combinations yielding over 95%. Linear regression of these variables against the centroid size was used to assess their relation to the size of the cranium. In contrast to the use of generalized Procrustes analysis (GPA) and principal component analysis (PCA), which constitute the common analytical workflow for such data, our method, although computationally intensive, produced easily applicable discriminant functions of high accuracy, while at the same time exploring the maximum of cranial variability.
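The exhaustive subset search can be illustrated with a toy Fisher-criterion screen in numpy. The synthetic six-measurement dataset and the two-feature subsets below are invented for the sketch; the study itself ran cross-validated LDA in GNU Octave over thousands of distances and angles:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
# Invented data: 6 "measurements" per individual, two classes (sexes),
# with only measurements 0 and 1 actually discriminative.
n = 100
X0 = rng.normal(0.0, 1.0, (n, 6)); X0[:, :2] += 2.0
X1 = rng.normal(0.0, 1.0, (n, 6))

def fisher_score(a, b):
    """Ratio of between-class to within-class scatter for one feature subset."""
    d = a.mean(axis=0) - b.mean(axis=0)
    Sw = np.cov(a.T) + np.cov(b.T)
    return float(d @ np.linalg.solve(Sw, d))

# Exhaustive search over all 2-measurement subsets, as in an iterative LDA.
best = max(itertools.combinations(range(6), 2),
           key=lambda c: fisher_score(X0[:, list(c)], X1[:, list(c)]))
print(best)  # the informative pair (0, 1)
```

A full reproduction would replace the Fisher criterion with cross-validated discriminant classification accuracy, but the combinatorial search structure is the same.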

  7. Quantitative analysis of glycated albumin in serum based on ATR-FTIR spectrum combined with SiPLS and SVM.

    PubMed

    Li, Yuanpeng; Li, Fucui; Yang, Xinhao; Guo, Liu; Huang, Furong; Chen, Zhenqiang; Chen, Xingdan; Zheng, Shifu

    2018-08-05

A rapid quantitative analysis model for determining the glycated albumin (GA) content, based on attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopy combined with linear SiPLS and nonlinear SVM, has been developed. First, the reference GA content in human serum was determined by the GA enzymatic method, and the ATR-FTIR spectra of serum samples from a health-examination population were acquired. The spectral data of the whole mid-infrared region (4000-600 cm-1) and of GA's characteristic region (1800-800 cm-1) were used as the objects of quantitative analysis. Second, several preprocessing steps, including first derivative, second derivative, variable standardization and spectral normalization, were performed. Finally, quantitative regression models were established using SiPLS and SVM, respectively. The SiPLS modeling results are as follows: root mean square error of cross validation (RMSECV) = 0.523 g/L, calibration coefficient (RC) = 0.937, root mean square error of prediction (RMSEP) = 0.787 g/L, and prediction coefficient (RP) = 0.938. The SVM modeling results are as follows: RMSECV = 0.0048 g/L, RC = 0.998, RMSEP = 0.442 g/L, and RP = 0.916. The results indicate that model performance improved significantly after preprocessing and optimization of the characteristic regions, and that the modeling performance of the nonlinear SVM was considerably better than that of the linear SiPLS. Hence, the quantitative analysis model for GA in human serum based on ATR-FTIR combined with SiPLS and SVM is effective; it requires no sample pretreatment, is simple to operate, and is time-efficient, providing a rapid and accurate method for GA content determination. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Second Law of Thermodynamics Applied to Metabolic Networks

    NASA Technical Reports Server (NTRS)

    Nigam, R.; Liang, S.

    2003-01-01

We present a simple algorithm, based on linear programming, that combines Kirchhoff's flux and potential laws and applies them to metabolic networks to predict thermodynamically feasible reaction fluxes. These laws represent mass conservation and energy feasibility and are widely used in electrical circuit analysis. Formulating Kirchhoff's potential law around a reaction loop in terms of the null space of the stoichiometric matrix leads to a simple representation of the law of entropy that can be readily incorporated into traditional flux balance analysis without resorting to non-linear optimization. Our technique is new in that it can easily check the fluxes obtained from flux balance analysis for thermodynamic feasibility and modify them if they are infeasible so that they satisfy the law of entropy. We illustrate our method by applying it to the network dealing with the central metabolism of Escherichia coli. Due to its simplicity this algorithm will be useful in studying large-scale complex metabolic networks in the cells of different organisms.
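Flux balance analysis with a linear objective reduces to a standard linear program. Below is a toy two-reaction sketch using `scipy.optimize.linprog`; the network and bounds are invented for illustration, and the entropy/loop-law constraints described in the abstract are not included:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: v1 imports metabolite A, v2 converts A to biomass.
# Steady-state mass balance S @ v = 0, with S = [dA/dv1, dA/dv2].
S = np.array([[1.0, -1.0]])
c = [0.0, -1.0]                  # maximize v2  ->  minimize -v2
bounds = [(0.0, 10.0),           # uptake flux capped at 10
          (0.0, None)]           # biomass flux unbounded above
res = linprog(c, A_eq=S, b_eq=[0.0], bounds=bounds)
print(res.x)  # both fluxes driven to the uptake limit of 10
```

The loop-law extension would add constraints built from the null space of S, but the LP skeleton above is the part shared with traditional flux balance analysis.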

  9. The Analysis and Construction of Perfectly Matched Layers for the Linearized Euler Equations

    NASA Technical Reports Server (NTRS)

    Hesthaven, J. S.

    1997-01-01

We present a detailed analysis of a recently proposed perfectly matched layer (PML) method for the absorption of acoustic waves. The split set of equations is shown to be only weakly well-posed, and ill-posed under small low order perturbations. This analysis provides the explanation for the stability problems associated with the split field formulation and illustrates why applying a filter has a stabilizing effect. Utilizing recent results obtained within the context of electromagnetics, we develop strongly well-posed absorbing layers for the linearized Euler equations. The schemes are shown to be perfectly absorbing independent of frequency and angle of incidence of the wave in the case of a non-convecting mean flow. In the general case of a convecting mean flow, a number of techniques are combined to obtain absorbing layers exhibiting PML-like behavior. The efficacy of the proposed absorbing layers is illustrated through computation of benchmark problems in aero-acoustics.

  10. Sampling with poling-based flux balance analysis: optimal versus sub-optimal flux space analysis of Actinobacillus succinogenes.

    PubMed

    Binns, Michael; de Atauri, Pedro; Vlysidis, Anestis; Cascante, Marta; Theodoropoulos, Constantinos

    2015-02-18

Flux balance analysis is traditionally implemented to identify the maximum theoretical flux for some specified reaction and a single distribution of flux values for all the reactions present which achieve this maximum value. However, it is well known that the uncertainty in reaction networks due to branches, cycles and experimental errors results in a large number of combinations of internal reaction fluxes which can achieve the same optimal flux value. In this work, we have modified the applied linear objective of flux balance analysis to include a poling penalty function, which pushes each new set of reaction fluxes away from previously generated solutions. Repeated poling-based flux balance analysis generates a sample of different solutions (a characteristic set), which represents all the possible functionality of the reaction network. For the purpose of generating a relatively "small" characteristic set, our new method is shown to obtain higher coverage than competing methods under most conditions. The influence of the linear objective function on the sampling (the linear bias) constrains optimisation results to a subspace of optimal solutions, all producing the same maximal fluxes. Visualisation of reaction fluxes plotted against each other in two dimensions, with and without the linear bias, indicates the existence of correlations between fluxes. This method of sampling is applied to the organism Actinobacillus succinogenes for the production of succinic acid from glycerol. A new method of sampling for the generation of different flux distributions (sets of individual fluxes satisfying constraints on the steady-state mass balances of intermediates) has been developed using a relatively simple modification of flux balance analysis to include a poling penalty function inside the resulting optimisation objective function.
This new methodology can achieve a high coverage of the possible flux space and can be used with and without linear bias to show optimal versus sub-optimal solution spaces. Basic analysis of the Actinobacillus succinogenes system using sampling shows that in order to achieve the maximal succinic acid production CO₂ must be taken into the system. Solutions involving release of CO₂ all give sub-optimal succinic acid production.

  11. A simple method for identifying parameter correlations in partially observed linear dynamic models.

    PubMed

    Li, Pu; Vu, Quoc Dong

    2015-12-14

Parameter estimation represents one of the most significant challenges in systems biology. This is because biological models commonly contain a large number of parameters, among which there may be functional interrelationships, thus leading to the problem of non-identifiability. Although identifiability analysis has been extensively studied by analytical as well as numerical approaches, systematic methods for remedying practically non-identifiable models have rarely been investigated. We propose a simple method for identifying pairwise correlations and higher-order interrelationships of parameters in partially observed linear dynamic models. This is done by deriving the output sensitivity matrix and analyzing the linear dependencies of its columns. Consequently, analytical relations between the identifiability of the model parameters and the initial conditions as well as the input functions can be obtained. In the case of structural non-identifiability, identifiable combinations can be obtained by solving the resulting homogeneous linear equations. In the case of practical non-identifiability, experimental conditions (i.e. initial conditions and constant control signals) can be provided which are necessary for remedying the non-identifiability and unique parameter estimation. It is noted that the approach does not consider noisy data. In this way, the practical non-identifiability issue, which is common for linear biological models, can be remedied. Several linear compartment models, including an insulin receptor dynamics model, are taken to illustrate the application of the proposed approach. Both structural and practical identifiability of partially observed linear dynamic models can be clarified by the proposed method. The result of this method provides important information for experimental design to remedy the practical non-identifiability if applicable. The derivation of the method is straightforward, and thus the algorithm can be easily implemented into a software package.
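The core step, detecting linear dependence among the columns of the output sensitivity matrix, can be sketched with numpy. The toy model y(t) = p1·p2·t, in which only the product p1·p2 is identifiable, is an invented example rather than one of the paper's compartment models:

```python
import numpy as np

# Invented toy model y(t) = p1 * p2 * t: only the product p1*p2 is identifiable.
p1, p2 = 2.0, 3.0
t = np.linspace(0.1, 5.0, 50)
# Output sensitivity matrix: one column per parameter.
S = np.column_stack([p2 * t,    # dy/dp1
                     p1 * t])   # dy/dp2  -> columns are linearly dependent
sv = np.linalg.svd(S, compute_uv=False)
print(sv[-1] < 1e-10)           # True: a (near-)zero singular value flags non-identifiability
# The corresponding right-singular vector encodes the dependent direction:
# S @ v = 0 for v proportional to (p1, -p2).
_, _, Vt = np.linalg.svd(S)
v = Vt[-1]
print(abs(v[0] / v[1] + p1 / p2) < 1e-8)  # True
```

In the noise-free structural case the singular value is numerically zero; in the practical case one would instead look for singular values that are small relative to the largest.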

  12. On the Impact of a Quadratic Acceleration Term in the Analysis of Position Time Series

    NASA Astrophysics Data System (ADS)

    Bogusz, Janusz; Klos, Anna; Bos, Machiel Simon; Hunegnaw, Addisu; Teferle, Felix Norman

    2016-04-01

The analysis of Global Navigation Satellite System (GNSS) position time series generally assumes that each of the coordinate component series is described by the sum of a linear rate (velocity) and various periodic terms. The residuals, the deviations between the fitted model and the observations, are then a measure of the epoch-to-epoch scatter and have been used for the analysis of the stochastic character (noise) of the time series. Often the parameters of interest in GNSS position time series are the velocities and their associated uncertainties, which have to be determined with the highest reliability. It is clear that not all GNSS position time series follow this simple linear behaviour. Therefore, we have added an acceleration term in the form of a quadratic polynomial function to the model in order to better describe the non-linear motion in the position time series. This non-linear motion could be a response to purely geophysical processes, for example, elastic rebound of the Earth's crust due to ice mass loss in Greenland; artefacts due to deficiencies in bias mitigation models, for example, of the GNSS satellite and receiver antenna phase centres; or any combination thereof. In this study we have simulated 20 time series, each 23 years long, with different stochastic characteristics such as white, flicker or random walk noise. The noise amplitude was assumed to be 1 mm/y^(-κ/4). Then, we added the deterministic part consisting of a linear trend of 20 mm/y (which represents the averaged horizontal velocity) and accelerations ranging from -0.6 to +0.6 mm/y². For all these data we estimated the noise parameters with Maximum Likelihood Estimation (MLE) using the Hector software package without taking the non-linear term into account. In this way we set the benchmark to then investigate how the noise properties and velocity uncertainty may be affected by any un-modelled non-linear term. The velocities and their uncertainties versus the accelerations for different types of noise are determined. Furthermore, we have selected 40 globally distributed stations that have a clear non-linear behaviour from two different International GNSS Service (IGS) analysis centers: JPL (Jet Propulsion Laboratory) and BLT (British Isles continuous GNSS Facility and University of Luxembourg Tide Gauge Benchmark Monitoring (TIGA) Analysis Center). We obtained maximum accelerations of -1.8±1.2 mm/y² and -4.5±3.3 mm/y² for the horizontal and vertical components, respectively. The noise analysis tests have shown that the addition of the non-linear term has significantly whitened the power spectra of the position time series, i.e. shifted the spectral index from flicker towards white noise.
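Extending the deterministic model with an acceleration term is a small change to an ordinary least-squares fit. A noise-free numpy sketch follows; the 23-year span and 20 mm/y trend echo the study's setup, but the offset and acceleration values are invented, and the factor-of-2 convention for the quadratic coefficient is an assumption:

```python
import numpy as np

# Invented series: offset + velocity*t + (1/2)*acceleration*t^2, no noise.
t = np.arange(0.0, 23.0, 1.0 / 52)     # 23 years of weekly epochs
offset, vel, acc = 5.0, 20.0, -0.4     # mm, mm/y, mm/y^2
y = offset + vel * t + 0.5 * acc * t**2
# polyfit returns coefficients from the highest power down.
c2, c1, c0 = np.polyfit(t, y, 2)
print(round(2 * c2, 6), round(c1, 6))  # recovers acceleration and velocity
```

With coloured noise present, the study instead estimates the trend and noise parameters jointly by MLE (Hector); the quadratic design column is the same in both cases.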

  13. Effect of removing the common mode errors on linear regression analysis of noise amplitudes in position time series of a regional GPS network & a case study of GPS stations in Southern California

    NASA Astrophysics Data System (ADS)

    Jiang, Weiping; Ma, Jun; Li, Zhao; Zhou, Xiaohui; Zhou, Boye

    2018-05-01

The analysis of the correlations between the noise in different components of GPS stations has positive significance for those trying to obtain more accurate uncertainties of station velocities. Previous research into noise in GPS position time series focused mainly on single-component evaluation, which affects the acquisition of precise station positions, the velocity field, and its uncertainty. In this study, before and after removing the common-mode error (CME), we performed one-dimensional linear regression analysis of the noise amplitude vectors in different components of 126 GPS stations in Southern California with a combination of white noise, flicker noise, and random walk noise. The results show that, on the one hand, there are above-moderate degrees of correlation between the white noise amplitude vectors in all components of the stations both before and after removal of the CME, while the correlations between flicker noise amplitude vectors in horizontal and vertical components are enhanced from uncorrelated to moderately correlated by removing the CME. On the other hand, the significance tests show that all of the obtained linear regression equations, each of which represents a unique function of the noise amplitudes in two components, are of practical value after removing the CME. From the noise amplitude estimates in two components and the linear regression equations, more accurate noise amplitudes can be acquired in both components.

  14. Linear and non-linear Modified Gravity forecasts with future surveys

    NASA Astrophysics Data System (ADS)

    Casas, Santiago; Kunz, Martin; Martinelli, Matteo; Pettorino, Valeria

    2017-12-01

Modified Gravity theories generally affect the Poisson equation and the gravitational slip in an observable way that can be parameterized by two generic functions (η and μ) of time and space. We bin their time dependence in redshift and present forecasts on each bin for future surveys like Euclid. We consider both Galaxy Clustering and Weak Lensing surveys, showing the impact of the non-linear regime, with two different semi-analytical approximations. In addition to these future observables, we use a prior covariance matrix derived from the Planck observations of the Cosmic Microwave Background. In this work we neglect the information from the cross correlation of these observables, and treat them as independent. Our results show that η and μ in different redshift bins are significantly correlated, but including non-linear scales reduces or even eliminates the correlation, breaking the degeneracy between Modified Gravity parameters and the overall amplitude of the matter power spectrum. We further apply a Zero-phase Component Analysis and identify which combinations of the Modified Gravity parameter amplitudes, in different redshift bins, are best constrained by future surveys. We extend the analysis to two particular parameterizations of μ and η and consider, in addition to Euclid, also SKA1, SKA2 and DESI: we find in this case that future surveys will be able to constrain the current values of η and μ at the 2-5% level when using only linear scales (wavevector k < 0.15 h/Mpc), depending on the specific time parameterization; sensitivity improves to about 1% when non-linearities are included.
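Zero-phase Component Analysis amounts to whitening with the symmetric inverse square root of the covariance matrix. A minimal numpy sketch on invented correlated samples, standing in for parameter amplitudes in two redshift bins:

```python
import numpy as np

rng = np.random.default_rng(2)
# Invented correlated samples standing in for two parameter amplitudes.
C_true = np.array([[1.0, 0.8],
                   [0.8, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], C_true, size=10000)
# ZCA whitening: W = C^(-1/2), built from the eigendecomposition of the sample covariance.
C = np.cov(X.T)
w, V = np.linalg.eigh(C)
W = V @ np.diag(w ** -0.5) @ V.T
Z = X @ W.T
print(np.allclose(np.cov(Z.T), np.eye(2), atol=1e-8))  # True: decorrelated combinations
```

Unlike PCA, the symmetric choice of W keeps each whitened combination as close as possible to the original parameter, which is why it is used to read off which amplitude combinations are best constrained.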

  15. Control Surface Interaction Effects of the Active Aeroelastic Wing Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer

    2006-01-01

This paper presents results from testing the Active Aeroelastic Wing wind tunnel model in NASA Langley's Transonic Dynamics Tunnel. The wind tunnel test provided an opportunity to study aeroelastic system behavior under combined control surface deflections, testing for control surface interaction effects. Control surface interactions were observed in both static control surface actuation testing and dynamic control surface oscillation testing. The primary method of evaluating interactions was examination of the goodness of the linear superposition assumptions. Responses produced by independently actuating single control surfaces were combined and compared with those produced by simultaneously actuating and oscillating multiple control surfaces. Adjustments to the data were required to isolate the control surface influences. With dynamic data the task becomes more demanding, as both the amplitude and phase have to be considered in the data corrections. The goodness of static linear superposition was examined, and analysis of variance was used to evaluate significant factors influencing that goodness. The dynamic data showed interaction effects in both the aerodynamic measurements and the structural measurements.
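The superposition check at the heart of the analysis can be stated abstractly: for a linear system, the summed single-surface responses must equal the combined-deflection response, and any interaction term breaks that equality. A numpy sketch with an invented response matrix (the experiment itself used measured wind tunnel data):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical linear response model: rows = sensors, columns = control surfaces.
A = rng.normal(size=(4, 3))
d1 = np.array([1.0, 0.0, 0.0])   # deflect surface 1 alone
d2 = np.array([0.0, 2.0, 0.0])   # deflect surface 2 alone
# Linear system: summed single-surface responses equal the combined response.
print(np.allclose(A @ d1 + A @ d2, A @ (d1 + d2)))  # True

def resp(d):
    # Add an invented quadratic interaction between surfaces 1 and 2.
    return A @ d + 0.1 * d[0] * d[1]

print(np.allclose(resp(d1) + resp(d2), resp(d1 + d2)))  # False: interaction breaks superposition
```

The dynamic version of the check compares complex (amplitude and phase) responses rather than real vectors, but the superposition test is structurally identical.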

  16. Quantification of Liver Proton-Density Fat Fraction in a 7.1 Tesla Preclinical MR System: Impact of the Fitting Technique

    PubMed Central

    Mahlke, C; Hernando, D; Jahn, C; Cigliano, A; Ittermann, T; Mössler, A; Kromrey, ML; Domaska, G; Reeder, SB; Kühn, JP

    2016-01-01

    Purpose: To investigate the feasibility of estimating the proton-density fat fraction (PDFF) using a 7.1 Tesla magnetic resonance imaging (MRI) system and to compare the accuracy of liver fat quantification using different fitting approaches. Materials and Methods: Fourteen leptin-deficient ob/ob mice and eight intact controls were examined in a 7.1 Tesla animal scanner using a 3-dimensional six-echo chemical shift-encoded pulse sequence. Confounder-corrected PDFF was calculated using magnitude fitting (magnitude data alone) and combined fitting (complex and magnitude data). Differences between fitting techniques were compared using Bland-Altman analysis. In addition, PDFFs derived with both reconstructions were correlated with histopathological fat content and triglyceride mass fraction using linear regression analysis. Results: The PDFFs determined with both reconstructions correlated very strongly (r=0.91). However, a small mean bias between the reconstructions indicated divergent results (3.9%; CI 2.7%-5.1%). For both reconstructions, there was linear correlation with histopathology (combined fitting: r=0.61; magnitude fitting: r=0.64) and triglyceride content (combined fitting: r=0.79; magnitude fitting: r=0.70). Conclusion: Liver fat quantification using the PDFF derived from MRI performed at 7.1 Tesla is feasible. PDFF has strong correlations with histopathologically determined fat and with triglyceride content. However, small differences between PDFF reconstruction techniques may impair the robustness and reliability of the biomarker at 7.1 Tesla. PMID:27197806

  17. Neuromorphic log-domain silicon synapse circuits obey bernoulli dynamics: a unifying tutorial analysis

    PubMed Central

    Papadimitriou, Konstantinos I.; Liu, Shih-Chii; Indiveri, Giacomo; Drakakis, Emmanuel M.

    2014-01-01

    The field of neuromorphic silicon synapse circuits is revisited and a parsimonious mathematical framework able to describe the dynamics of this class of log-domain circuits in the aggregate and in a systematic manner is proposed. Starting from the Bernoulli Cell Formalism (BCF), originally formulated for the modular synthesis and analysis of externally linear, time-invariant logarithmic filters, and by means of the identification of new types of Bernoulli Cell (BC) operators presented here, a generalized formalism (GBCF) is established. The expanded formalism covers two new possible and practical combinations of a MOS transistor (MOST) and a linear capacitor. The corresponding mathematical relations codifying each case are presented and discussed through the tutorial treatment of three well-known transistor-level examples of log-domain neuromorphic silicon synapses. The proposed mathematical tool unifies past analysis approaches of the same circuits under a common theoretical framework. The speed advantage of the proposed mathematical framework as an analysis tool is also demonstrated by a compelling comparative circuit analysis example of high order, where the GBCF and another well-known log-domain circuit analysis method are used for the determination of the input-output transfer function of the high (4th) order topology. PMID:25653579

  18. Neuromorphic log-domain silicon synapse circuits obey bernoulli dynamics: a unifying tutorial analysis.

    PubMed

    Papadimitriou, Konstantinos I; Liu, Shih-Chii; Indiveri, Giacomo; Drakakis, Emmanuel M

    2014-01-01

The field of neuromorphic silicon synapse circuits is revisited and a parsimonious mathematical framework able to describe the dynamics of this class of log-domain circuits in the aggregate and in a systematic manner is proposed. Starting from the Bernoulli Cell Formalism (BCF), originally formulated for the modular synthesis and analysis of externally linear, time-invariant logarithmic filters, and by means of the identification of new types of Bernoulli Cell (BC) operators presented here, a generalized formalism (GBCF) is established. The expanded formalism covers two new possible and practical combinations of a MOS transistor (MOST) and a linear capacitor. The corresponding mathematical relations codifying each case are presented and discussed through the tutorial treatment of three well-known transistor-level examples of log-domain neuromorphic silicon synapses. The proposed mathematical tool unifies past analysis approaches of the same circuits under a common theoretical framework. The speed advantage of the proposed mathematical framework as an analysis tool is also demonstrated by a compelling comparative circuit analysis example of high order, where the GBCF and another well-known log-domain circuit analysis method are used for the determination of the input-output transfer function of the high (4th) order topology.

  19. Hekate: Software Suite for the Mass Spectrometric Analysis and Three-Dimensional Visualization of Cross-Linked Protein Samples

    PubMed Central

    2013-01-01

    Chemical cross-linking of proteins combined with mass spectrometry provides an attractive and novel method for the analysis of native protein structures and protein complexes. Analysis of the data however is complex. Only a small number of cross-linked peptides are produced during sample preparation and must be identified against a background of more abundant native peptides. To facilitate the search and identification of cross-linked peptides, we have developed a novel software suite, named Hekate. Hekate is a suite of tools that address the challenges involved in analyzing protein cross-linking experiments when combined with mass spectrometry. The software is an integrated pipeline for the automation of the data analysis workflow and provides a novel scoring system based on principles of linear peptide analysis. In addition, it provides a tool for the visualization of identified cross-links using three-dimensional models, which is particularly useful when combining chemical cross-linking with other structural techniques. Hekate was validated by the comparative analysis of cytochrome c (bovine heart) against previously reported data.1 Further validation was carried out on known structural elements of DNA polymerase III, the catalytic α-subunit of the Escherichia coli DNA replisome along with new insight into the previously uncharacterized C-terminal domain of the protein. PMID:24010795

  20. Copula Entropy coupled with Wavelet Neural Network Model for Hydrological Prediction

    NASA Astrophysics Data System (ADS)

    Wang, Yin; Yue, JiGuang; Liu, ShuGuang; Wang, Li

    2018-02-01

Artificial Neural Networks (ANNs) have been widely used in hydrological forecasting. In this paper an attempt has been made to find an alternative method for hydrological prediction by combining Copula Entropy (CE) with a Wavelet Neural Network (WNN). CE theory permits the calculation of mutual information (MI) to select input variables, which avoids the limitations of traditional linear correlation coefficient (LCC) analysis. Wavelet analysis can locate changes in the dynamical patterns of a sequence exactly, and, coupled with the strong non-linear fitting ability of ANNs, the WNN model was able to provide a good fit to the hydrological data. Finally, the hybrid model (CE+WNN) was applied to daily water levels of the Taihu Lake Basin and compared with CE-ANN, LCC-WNN and LCC-ANN models. Results showed that the hybrid model estimated the hydrograph properties better than the latter models.
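The advantage of MI over the linear correlation coefficient for input selection can be demonstrated with a toy histogram-based MI estimator. This is a stand-in for the paper's copula entropy computation, and the data are invented:

```python
import numpy as np

def mutual_info(x, y, bins=16):
    """Crude histogram estimate of mutual information (in nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
x1 = rng.uniform(-1.0, 1.0, 5000)
x2 = rng.uniform(-1.0, 1.0, 5000)
y = x1**2 + 0.1 * rng.normal(size=5000)   # depends on x1 only, and nonlinearly

# Linear correlation misses the x1 -> y dependence; MI does not.
print(abs(np.corrcoef(x1, y)[0, 1]) < 0.1)        # True
print(mutual_info(x1, y) > mutual_info(x2, y))    # True
```

Here `y` is a deterministic (plus small noise) function of `x1`, yet their linear correlation is near zero; an MI-based screen would correctly select `x1` as the model input while an LCC-based screen would not.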

  1. Theory of chromatic noise masking applied to testing linearity of S-cone detection mechanisms.

    PubMed

    Giulianini, Franco; Eskew, Rhea T

    2007-09-01

    A method for testing the linearity of cone combination of chromatic detection mechanisms is applied to S-cone detection. This approach uses the concept of mechanism noise, the noise as seen by a postreceptoral neural mechanism, to represent the effects of superposing chromatic noise components in elevating thresholds and leads to a parameter-free prediction for a linear mechanism. The method also provides a test for the presence of multiple linear detectors and off-axis looking. No evidence for multiple linear mechanisms was found when using either S-cone increment or decrement tests. The results for both S-cone test polarities demonstrate that these mechanisms combine their cone inputs nonlinearly.

  2. Analytical modeling and tolerance analysis of a linear variable filter for spectral order sorting.

    PubMed

    Ko, Cheng-Hao; Chang, Kuei-Ying; Huang, You-Min

    2015-02-23

    This paper proposes an innovative method to overcome the low production rate of current linear variable filter (LVF) fabrication. During the fabrication process, a commercial coater is combined with a local mask on a substrate. The proposed analytical thin film thickness model, which is based on the geometry of the commercial coater, is developed to more effectively calculate the profiles of LVFs. Thickness tolerance, LVF zone width, thin film layer structure, transmission spectrum and the effects of variations in critical parameters of the coater are analyzed. Profile measurements demonstrate the efficacy of local mask theory in the prediction of evaporation profiles with a high degree of accuracy.

  3. Wavelet packets for multi- and hyper-spectral imagery

    NASA Astrophysics Data System (ADS)

    Benedetto, J. J.; Czaja, W.; Ehler, M.; Flake, C.; Hirn, M.

    2010-01-01

    State of the art dimension reduction and classification schemes in multi- and hyper-spectral imaging rely primarily on the information contained in the spectral component. To better capture the joint spatial and spectral data distribution we combine the Wavelet Packet Transform with the linear dimension reduction method of Principal Component Analysis. Each spectral band is decomposed by means of the Wavelet Packet Transform and we consider a joint entropy across all the spectral bands as a tool to exploit the spatial information. Dimension reduction is then applied to the Wavelet Packets coefficients. We present examples of this technique for hyper-spectral satellite imaging. We also investigate the role of various shrinkage techniques to model non-linearity in our approach.

  4. Proposal for an astronaut mass measurement device for the Space Shuttle

    NASA Technical Reports Server (NTRS)

    Beyer, Neil; Lomme, Jon; Mccollough, Holly; Price, Bradford; Weber, Heidi

    1994-01-01

    For medical reasons, astronauts in space need to have their mass measured. Currently, this measurement is performed using a mass-spring system. The current system is large, inaccurate, and uncomfortable for the astronauts. NASA is looking for new, different, and preferably better ways to perform this measurement process. After careful analysis, our design team decided on a linear acceleration process. Within the process, four possible concept variants are put forth. Among these four variants, one is suggested over the others: a motor-winch system to linearly accelerate the astronaut. From acceleration and force measurements of the process combined with Newton's second law, the mass of an astronaut can be calculated.

  5. A computational procedure to analyze metal matrix laminates with nonlinear lamination residual strains

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Sullivan, T. L.

    1974-01-01

    An approximate computational procedure is described for the analysis of angleplied laminates with residual nonlinear strains. The procedure consists of a combination of linear composite mechanics and incremental linear laminate theory. The procedure accounts for initial nonlinear strains, unloading, and in-situ matrix orthotropic nonlinear behavior. The results obtained in applying the procedure to boron/aluminum angleplied laminates show that this is a convenient means to accurately predict the initial tangent properties of angleplied laminates in which the matrix has been strained nonlinearly by the lamination residual stresses. The initial tangent properties predicted by the procedure were in good agreement with measured data obtained from boron/aluminum angleplied laminates.

  6. Early Parallel Activation of Semantics and Phonology in Picture Naming: Evidence from a Multiple Linear Regression MEG Study

    PubMed Central

    Miozzo, Michele; Pulvermüller, Friedemann; Hauk, Olaf

    2015-01-01

    The time course of brain activation during word production has become an area of increasingly intense investigation in cognitive neuroscience. The predominant view has been that semantic and phonological processes are activated sequentially, at about 150 and 200–400 ms after picture onset. Although evidence from prior studies has been interpreted as supporting this view, these studies were arguably not ideally suited to detect early brain activation of semantic and phonological processes. We here used a multiple linear regression approach to magnetoencephalography (MEG) analysis of picture naming in order to investigate early effects of variables specifically related to visual, semantic, and phonological processing. This was combined with distributed minimum-norm source estimation and region-of-interest analysis. Brain activation associated with visual image complexity appeared in occipital cortex at about 100 ms after picture presentation onset. At about 150 ms, semantic variables became physiologically manifest in left frontotemporal regions. In the same latency range, we found an effect of phonological variables in the left middle temporal gyrus. Our results demonstrate that multiple linear regression analysis is sensitive to early effects of multiple psycholinguistic variables in picture naming. Crucially, our results suggest that access to phonological information might begin in parallel with semantic processing around 150 ms after picture onset. PMID:25005037

  7. Fruit and vegetable intake and risk of type 2 diabetes mellitus: meta-analysis of prospective cohort studies

    PubMed Central

    Li, Min; Fan, Yingli; Zhang, Xiaowei; Hou, Wenshang; Tang, Zhenyu

    2014-01-01

    Objective To clarify and quantify the potential dose–response association between the intake of fruit and vegetables and risk of type 2 diabetes. Design Meta-analysis and systematic review of prospective cohort studies. Data source Studies published before February 2014 identified through electronic searches using PubMed and Embase. Eligibility criteria for selecting studies Prospective cohort studies with relative risks and 95% CIs for type 2 diabetes according to the intake of fruit, vegetables, or fruit and vegetables. Results A total of 10 articles including 13 comparisons with 24 013 cases of type 2 diabetes and 434 342 participants were included in the meta-analysis. Evidence of curvilinear associations was seen between fruit and green leafy vegetables consumption and risk of type 2 diabetes (p=0.059 and p=0.036 for non-linearity, respectively). The summary relative risk of type 2 diabetes for an increase of 1 serving fruit consumed/day was 0.93 (95% CI 0.88 to 0.99) without heterogeneity among studies (p=0.477, I2=0%). For vegetables, the combined relative risk of type 2 diabetes for an increase of 1 serving consumed/day was 0.90 (95% CI 0.80 to 1.01) with moderate heterogeneity among studies (p=0.002, I2=66.5%). For green leafy vegetables, the summary relative risk of type 2 diabetes for an increase of 0.2 serving consumed/day was 0.87 (95% CI 0.81 to 0.93) without heterogeneity among studies (p=0.496, I2=0%). The combined estimates showed no significant benefits of increasing the consumption of fruit and vegetables combined. Conclusions Higher fruit or green leafy vegetables intake is associated with a significantly reduced risk of type 2 diabetes. PMID:25377009
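
    Summary relative risks like those above are typically obtained by inverse-variance pooling of per-study log relative risks. A minimal fixed-effect sketch follows; the three study RRs and confidence intervals are invented for illustration, not taken from this meta-analysis, and real analyses would also handle random effects and dose-response modelling.

```python
import math

def pool_relative_risks(rrs, ci_lows, ci_highs):
    """Fixed-effect inverse-variance pooling of log relative risks.

    Each study's standard error is recovered from its 95% CI:
    se = (ln(upper) - ln(lower)) / (2 * 1.96).
    Returns the pooled RR and its 95% CI.
    """
    weights, weighted_logs = [], []
    for rr, lo, hi in zip(rrs, ci_lows, ci_highs):
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2          # inverse-variance weight
        weights.append(w)
        weighted_logs.append(w * math.log(rr))
    log_pooled = sum(weighted_logs) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return (math.exp(log_pooled),
            math.exp(log_pooled - 1.96 * se_pooled),
            math.exp(log_pooled + 1.96 * se_pooled))

# hypothetical per-study RRs (per extra serving/day) with 95% CIs
rr, lo, hi = pool_relative_risks([0.90, 0.95, 0.93],
                                 [0.82, 0.88, 0.85],
                                 [0.99, 1.03, 1.02])
```

The pooled estimate necessarily falls inside the range of the study estimates, with a narrower CI than any single study.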

  8. Identification of Dyslipidemic Patients Attending Primary Care Clinics Using Electronic Medical Record (EMR) Data from the Canadian Primary Care Sentinel Surveillance Network (CPCSSN) Database.

    PubMed

    Aref-Eshghi, Erfan; Oake, Justin; Godwin, Marshall; Aubrey-Bassler, Kris; Duke, Pauline; Mahdavian, Masoud; Asghari, Shabnam

    2017-03-01

    The objective of this study was to define the optimal algorithm to identify patients with dyslipidemia using electronic medical records (EMRs). EMRs of patients attending primary care clinics in St. John's, Newfoundland and Labrador (NL), Canada during 2009-2010, were studied to determine the best algorithm for identification of dyslipidemia. Six algorithms containing three components, dyslipidemia ICD coding, lipid lowering medication use, and abnormal laboratory lipid levels, were tested against a gold standard, defined as the existence of any of the three criteria. Linear discriminant analysis and bootstrapping were performed following sensitivity/specificity testing and receiver operating characteristic (ROC) curve analysis. Two validating datasets, NL records of 2011-2014, and Canada-wide records of 2010-2012, were used to replicate the results. Relative to the gold standard, combining laboratory data together with lipid lowering medication consumption yielded the highest sensitivity (99.6%), NPV (98.1%), Kappa agreement (0.98), and area under the curve (AUC, 0.998). The linear discriminant analysis for this combination resulted in an error rate of 0.15 and an Eigenvalue of 1.99, and the bootstrapping led to AUC: 0.998, 95% confidence interval: 0.997-0.999, Kappa: 0.99. This algorithm in the first validating dataset yielded a sensitivity of 97%, Negative Predictive Value (NPV) = 83%, Kappa = 0.88, and AUC = 0.98. These figures for the second validating data set were 98%, 93%, 0.95, and 0.99, respectively. Combining laboratory data with lipid lowering medication consumption within the EMR is the best algorithm for detecting dyslipidemia. These results can generate standardized information systems for dyslipidemia and other chronic disease investigations using EMRs.
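
    The validation metrics reported above (sensitivity, specificity, NPV, Cohen's kappa) are all derived from the confusion matrix of the algorithm against the gold standard. A minimal sketch with invented 0/1 labels:

```python
def diagnostic_metrics(algorithm, gold):
    """Sensitivity, specificity, NPV and Cohen's kappa of a case-finding
    algorithm against a gold standard (both given as 0/1 labels)."""
    tp = sum(1 for a, g in zip(algorithm, gold) if a == 1 and g == 1)
    tn = sum(1 for a, g in zip(algorithm, gold) if a == 0 and g == 0)
    fp = sum(1 for a, g in zip(algorithm, gold) if a == 1 and g == 0)
    fn = sum(1 for a, g in zip(algorithm, gold) if a == 0 and g == 1)
    n = tp + tn + fp + fn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    npv = tn / (tn + fn)
    observed = (tp + tn) / n
    # chance agreement from the marginal prevalences
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (observed - expected) / (1 - expected)
    return sens, spec, npv, kappa

# toy labels (invented): the algorithm misses 1 case and over-calls 1 non-case
gold      = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
algorithm = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
sens, spec, npv, kappa = diagnostic_metrics(algorithm, gold)
```

Here tp=4, fn=1, fp=1, tn=4, so each proportion is 0.8 and kappa is 0.6; the study's near-perfect figures correspond to a nearly diagonal confusion matrix.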

  9. The effect of choosing three different C factor formulae derived from NDVI on a fully raster-based erosion modelling

    NASA Astrophysics Data System (ADS)

    Sulistyo, Bambang

    2016-11-01

    The research was aimed at studying the effect of choosing three different C factor formulae derived from NDVI on a fully raster-based erosion model of the USLE using remote sensing data and GIS techniques. The method was to analyse all factors affecting erosion such that all data were in raster form: the R, K, LS, C and P factors. The monthly R factor was evaluated using the formula developed by Abdurachman. The K factor was determined using a modified formula used by the Ministry of Forestry, based on soil samples taken in the field. The LS factor was derived from a Digital Elevation Model. The three C factors used were all derived from NDVI and were developed by Suriyaprasit (non-linear) and by Sulistyo (linear and non-linear). The P factor was derived from the combination of slope data and landcover classification interpreted from Landsat 7 ETM+. A further analysis was the creation of a map of bulk density, used to convert the erosion unit. To assess model accuracy, validation was done by applying statistical analysis and by comparing Emodel with Eactual. A threshold value of ≥ 0.80 or ≥ 80% was chosen as the criterion. The results showed that all Emodel runs using the three C factor formulae had coefficients of correlation of > 0.8. Analysis of variance showed significant differences between Emodel and Eactual when using the C factor formulae developed by Suriyaprasit and by Sulistyo (non-linear). Among the three formulae, only the Emodel using the C factor formula developed by Sulistyo (linear) reached an accuracy of 81.13%, while the others reached only 56.02% (Sulistyo, non-linear) and 4.70% (Suriyaprasit), respectively.

  10. Comparative analysis of linear motor geometries for Stirling coolers

    NASA Astrophysics Data System (ADS)

    R, Rajesh V.; Kuzhiveli, Biju T.

    2017-12-01

    Compared to rotary motor driven Stirling coolers, linear motor coolers are characterized by small volume and long life, making them more suitable for space and military applications. The motor design and operational characteristics have a direct effect on the operation of the cooler. In this perspective, ample scope exists in understanding the behavioural description of linear motor systems. In the present work, the authors compare and analyze different moving magnet linear motor geometries to finalize the most favourable one for Stirling coolers. The required axial force in the linear motors is generated by the interaction of magnetic fields of a current carrying coil and that of a permanent magnet. The compact size, commercial availability of permanent magnets and low weight requirement of the system are quite a few constraints for the design. The finite element analysis performed using Maxwell software serves as the basic tool to analyze the magnet movement, flux distribution in the air gap and the magnetic saturation levels on the core. A number of material combinations are investigated for core before finalizing the design. The effect of varying the core geometry on the flux produced in the air gap is also analyzed. The electromagnetic analysis of the motor indicates that the permanent magnet height ought to be taken in such a way that it is under the influence of electromagnetic field of current carrying coil as well as the outer core in the balanced position. This is necessary so that sufficient amount of thrust force is developed by efficient utilisation of the air gap flux density. Also, the outer core ends need to be designed to facilitate enough room for the magnet movement under the operating conditions.

  11. Ranking Forestry Investments With Parametric Linear Programming

    Treesearch

    Paul A. Murphy

    1976-01-01

    Parametric linear programming is introduced as a technique for ranking forestry investments under multiple constraints; it combines the advantages of simple ranking and linear programming as capital budgeting tools.

  12. Genetic overlap between diagnostic subtypes of ischemic stroke.

    PubMed

    Holliday, Elizabeth G; Traylor, Matthew; Malik, Rainer; Bevan, Steve; Falcone, Guido; Hopewell, Jemma C; Cheng, Yu-Ching; Cotlarciuc, Ioana; Bis, Joshua C; Boerwinkle, Eric; Boncoraglio, Giorgio B; Clarke, Robert; Cole, John W; Fornage, Myriam; Furie, Karen L; Ikram, M Arfan; Jannes, Jim; Kittner, Steven J; Lincz, Lisa F; Maguire, Jane M; Meschia, James F; Mosley, Thomas H; Nalls, Mike A; Oldmeadow, Christopher; Parati, Eugenio A; Psaty, Bruce M; Rothwell, Peter M; Seshadri, Sudha; Scott, Rodney J; Sharma, Pankaj; Sudlow, Cathie; Wiggins, Kerri L; Worrall, Bradford B; Rosand, Jonathan; Mitchell, Braxton D; Dichgans, Martin; Markus, Hugh S; Levi, Christopher; Attia, John; Wray, Naomi R

    2015-03-01

    Despite moderate heritability, the phenotypic heterogeneity of ischemic stroke has hampered gene discovery, motivating analyses of diagnostic subtypes with reduced sample sizes. We assessed evidence for a shared genetic basis among the 3 major subtypes: large artery atherosclerosis (LAA), cardioembolism, and small vessel disease (SVD), to inform potential cross-subtype analyses. Analyses used genome-wide summary data for 12 389 ischemic stroke cases (including 2167 LAA, 2405 cardioembolism, and 1854 SVD) and 62 004 controls from the Metastroke consortium. For 4561 cases and 7094 controls, individual-level genotype data were also available. Genetic correlations between subtypes were estimated using linear mixed models and polygenic profile scores. Meta-analysis of a combined LAA-SVD phenotype (4021 cases and 51 976 controls) was performed to identify shared risk alleles. High genetic correlation was identified between LAA and SVD using linear mixed models (rg=0.96, SE=0.47, P=9×10(-4)) and profile scores (rg=0.72; 95% confidence interval, 0.52-0.93). Between LAA and cardioembolism and SVD and cardioembolism, correlation was moderate using linear mixed models but not significantly different from zero for profile scoring. Joint meta-analysis of LAA and SVD identified strong association (P=1×10(-7)) for single nucleotide polymorphisms near the opioid receptor μ1 (OPRM1) gene. Our results suggest that LAA and SVD, which have been hitherto treated as genetically distinct, may share a substantial genetic component. Combined analyses of LAA and SVD may increase power to identify small-effect alleles influencing shared pathophysiological processes. © 2015 American Heart Association, Inc.

  13. The seasonal response of the Held-Suarez climate model to prescribed ocean temperature anomalies. II - Dynamical analysis

    NASA Technical Reports Server (NTRS)

    Phillips, T. J.

    1984-01-01

    The heating associated with equatorial, subtropical, and midlatitude ocean temperature anomalies in the Held-Suarez climate model is analyzed. The local and downstream response to the anomalies is analyzed, first by examining the seasonal variation in heating associated with each ocean temperature anomaly, and then by combining knowledge of the heating with linear dynamical theory in order to develop a more comprehensive explanation of the seasonal variation in local and downstream atmospheric response to each anomaly. The extent to which the linear theory of propagating waves can assist the interpretation of the remote cross-latitudinal response of the model to the ocean temperature anomalies is considered. Alternative hypotheses that attempt to avoid the contradictions inherent in a strict application of linear theory are investigated, and the impact of sampling errors on the assessment of statistical significance is also examined.

  14. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    NASA Astrophysics Data System (ADS)

    Blanco, A.; Rodriguez, R.; Martinez-Maranon, I.

    2014-03-01

    Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify it according to its sex. Colour measurements were performed on gonads extracted from Mackerel females and males (fresh and defrosted) to obtain differences between sexes. Several linear and non-linear classifiers, such as Support Vector Machines (SVM), k Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA), can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect accurately the sample proximities. Classifiers based on non-Euclidean dissimilarities misclassify different sets of patterns. We combine different kinds of dissimilarity-based classifiers. Diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm improves on classifiers based on a single dissimilarity.
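
    Diagonal Linear Discriminant Analysis, one of the baseline classifiers mentioned, reduces to comparing variance-scaled distances to the class means, ignoring between-feature covariance. A minimal sketch on invented two-feature colour data (this is the DLDA baseline only, not the authors' dissimilarity ensemble):

```python
import random

def dlda_fit(X, y):
    """Diagonal LDA: per-class means plus a pooled per-feature variance
    (the off-diagonal covariance terms are ignored)."""
    classes = sorted(set(y))
    d = len(X[0])
    means = {}
    for c in classes:
        rows = [x for x, yi in zip(X, y) if yi == c]
        means[c] = [sum(col) / len(rows) for col in zip(*rows)]
    # pooled within-class variance for each feature
    var = [0.0] * d
    for x, yi in zip(X, y):
        for j in range(d):
            var[j] += (x[j] - means[yi][j]) ** 2
    var = [v / (len(X) - len(classes)) + 1e-12 for v in var]
    return means, var

def dlda_predict(x, means, var):
    # assign to the class with the smallest variance-scaled distance
    def score(c):
        return sum((x[j] - means[c][j]) ** 2 / var[j] for j in range(len(x)))
    return min(means, key=score)

random.seed(1)
# two invented colour features per gonad sample, one cluster per sex
X = ([[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(50)] +
     [[random.gauss(4, 1), random.gauss(4, 1)] for _ in range(50)])
y = ["male"] * 50 + ["female"] * 50
means, var = dlda_fit(X, y)
accuracy = sum(dlda_predict(x, means, var) == yi
               for x, yi in zip(X, y)) / len(X)
```

A dissimilarity-based variant would replace the Euclidean term inside `score` with an alternative dissimilarity, which is where the paper's ensemble diversity comes from.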

  15. Sampling schemes and parameter estimation for nonlinear Bernoulli-Gaussian sparse models

    NASA Astrophysics Data System (ADS)

    Boudineau, Mégane; Carfantan, Hervé; Bourguignon, Sébastien; Bazot, Michael

    2016-06-01

    We address the sparse approximation problem in the case where the data are approximated by the linear combination of a small number of elementary signals, each of these signals depending non-linearly on additional parameters. Sparsity is explicitly expressed through a Bernoulli-Gaussian hierarchical model in a Bayesian framework. Posterior mean estimates are computed using Markov Chain Monte Carlo algorithms. We generalize the partially marginalized Gibbs sampler proposed in the linear case in [1], and build a hybrid Hastings-within-Gibbs algorithm in order to account for the nonlinear parameters. All model parameters are then estimated in an unsupervised procedure. The resulting method is evaluated on a sparse spectral analysis problem. It is shown to converge more efficiently than the classical joint estimation procedure, with only a slight increase of the computational cost per iteration, consequently reducing the global cost of the estimation procedure.
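
    Data from a Bernoulli-Gaussian sparse model can be simulated directly: each atom is active with small probability, and active atoms get Gaussian amplitudes. A minimal sketch with invented sinusoidal atoms standing in for the non-linearly parameterised elementary signals (this only generates data from the prior; it does not implement the Gibbs sampler):

```python
import math
import random

random.seed(3)
n, k_grid = 200, 50
lam, sigma_a, sigma_noise = 0.1, 1.0, 0.05

# candidate non-linear parameters: a grid of normalised frequencies
freqs = [0.5 * (i + 1) / k_grid for i in range(k_grid)]
# Bernoulli indicators: each atom is active with probability lam
q = [1 if random.random() < lam else 0 for _ in range(k_grid)]
# Gaussian amplitudes on the active atoms only
amp = [random.gauss(0, sigma_a) if qi else 0.0 for qi in q]

t = [i / n for i in range(n)]
# observed data: sparse linear combination of the atoms plus noise
y = [sum(a * math.sin(2 * math.pi * f * n * ti) for a, f in zip(amp, freqs))
     + random.gauss(0, sigma_noise) for ti in t]
```

A sampler for this model alternates between updating the indicator/amplitude pairs and, in the hybrid scheme, Hastings moves on the non-linear frequency parameters.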

  16. Identification of pesticide varieties by testing microalgae using Visible/Near Infrared Hyperspectral Imaging technology

    NASA Astrophysics Data System (ADS)

    Shao, Yongni; Jiang, Linjun; Zhou, Hong; Pan, Jian; He, Yong

    2016-04-01

    In our study, the feasibility of using visible/near infrared hyperspectral imaging technology to detect the changes of the internal components of Chlorella pyrenoidosa, so as to determine the varieties of pesticides (such as butachlor, atrazine and glyphosate) at three concentrations (0.6 mg/L, 3 mg/L, 15 mg/L), was investigated. Three models (partial least squares discriminant analysis combined with full wavelengths, FW-PLSDA; partial least squares discriminant analysis combined with the competitive adaptive reweighted sampling algorithm, CARS-PLSDA; linear discriminant analysis combined with regression coefficients, RC-LDA) were built from the hyperspectral data of Chlorella pyrenoidosa to find which model produced the best result. The RC-LDA model, which achieved an average correct classification rate of 97.0%, was superior to FW-PLSDA (72.2%) and CARS-PLSDA (84.0%), and it proved that visible/near infrared hyperspectral imaging could be a rapid and reliable technique to identify pesticide varieties. It also proved that microalgae can be a very promising medium to indicate characteristics of pesticides.

  17. Effect of electric potential and current on mandibular linear measurements in cone beam CT.

    PubMed

    Panmekiate, S; Apinhasmit, W; Petersson, A

    2012-10-01

    The purpose of this study was to compare mandibular linear distances measured from cone beam CT (CBCT) images produced by different radiographic parameter settings (peak kilovoltage and milliampere value). 20 cadaver hemimandibles with edentulous ridges posterior to the mental foramen were embedded in clear resin blocks and scanned by a CBCT machine (CB MercuRay(TM); Hitachi Medico Technology Corp., Chiba-ken, Japan). The radiographic parameters comprised four peak kilovoltage settings (60 kVp, 80 kVp, 100 kVp and 120 kVp) and two milliampere settings (10 mA and 15 mA). A 102.4 mm field of view was chosen. Each hemimandible was scanned 8 times with 8 different parameter combinations resulting in 160 CBCT data sets. On the cross-sectional images, six linear distances were measured. To assess the intraobserver variation, the 160 data sets were remeasured after 2 weeks. The measurement precision was calculated using Dahlberg's formula. With the same peak kilovoltage, the measurements yielded by different milliampere values were compared using the paired t-test. With the same milliampere value, the measurements yielded by different peak kilovoltage were compared using analysis of variance. A significant difference was considered when p < 0.05. Measurement precision varied from 0.03 mm to 0.28 mm. No significant differences in the distances were found among the different radiographic parameter combinations. Based upon the specific machine in the present study, low peak kilovoltage and milliampere value might be used for linear measurements in the posterior mandible.

  18. Combining linear polarization spectroscopy and the Representative Layer Theory to measure the Beer-Lambert law absorbance of highly scattering materials.

    PubMed

    Gobrecht, Alexia; Bendoula, Ryad; Roger, Jean-Michel; Bellon-Maurel, Véronique

    2015-01-01

    Visible and Near Infrared (Vis-NIR) Spectroscopy is a powerful non destructive analytical method used to analyze major compounds in bulk materials and products and requiring no sample preparation. It is widely used in routine analysis and also in-line in industries, in-vivo with biomedical applications or in-field for agricultural and environmental applications. However, highly scattering samples subvert Beer-Lambert law's linear relationship between spectral absorbance and the concentrations. Instead of spectral pre-processing, which is commonly used by Vis-NIR spectroscopists to mitigate the scattering effect, we put forward an optical method, based on Polarized Light Spectroscopy, to improve the absorbance signal measurement on highly scattering samples. This method selects the part of the signal which is less impacted by scattering. The resulting signal is combined in the Absorption/Remission function defined in Dahm's Representative Layer Theory to compute an absorbance signal fulfilling Beer-Lambert's law, i.e. being linearly related to the concentration of the chemicals composing the sample. The underpinning theories have been experimentally evaluated on scattering samples in liquid form and in powdered form. The method produced more accurate spectra, and the Pearson's coefficient assessing the linearity between the absorbance spectra and the concentration of the added dye improved from 0.94 to 0.99 for liquid samples and from 0.84 to 0.97 for powdered samples. Copyright © 2014 Elsevier B.V. All rights reserved.
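
    The linearity criterion used above, Pearson's correlation between absorbance and dye concentration, can be sketched as follows. The concentrations and the two response curves are invented for illustration; the point is only that a scatter-distorted response correlates less well with concentration than a corrected, Beer-Lambert-like one.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# invented dye concentrations (arbitrary units)
conc = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
# raw absorbance: scattering bends the response away from Beer-Lambert
raw = [0.9 * c ** 0.7 for c in conc]
# scatter-corrected absorbance: close to proportional to concentration
corrected = [0.9 * c + 0.01 for c in conc]

r_raw = pearson_r(conc, raw)
r_corrected = pearson_r(conc, corrected)
```

In the paper the same comparison is made on measured spectra, with r improving from 0.94 to 0.99 (liquids) and 0.84 to 0.97 (powders).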

  19. Toward the improvement in fetal monitoring during labor with the inclusion of maternal heart rate analysis.

    PubMed

    Gonçalves, Hernâni; Pinto, Paula; Silva, Manuela; Ayres-de-Campos, Diogo; Bernardes, João

    2016-04-01

    Fetal heart rate (FHR) monitoring is used routinely in labor, but conventional methods have a limited capacity to detect fetal hypoxia/acidosis. An exploratory study was performed on the simultaneous assessment of maternal heart rate (MHR) and FHR variability, to evaluate their evolution during labor and their capacity to detect newborn acidemia. MHR and FHR were simultaneously recorded in 51 singleton term pregnancies during the last two hours of labor and compared with newborn umbilical artery blood (UAB) pH. Linear/nonlinear indices were computed separately for MHR and FHR. Interaction between MHR and FHR was quantified through the same indices on FHR-MHR and through their correlation and cross-entropy. Univariate and bivariate statistical analysis included nonparametric confidence intervals and statistical tests, receiver operating characteristic curves and linear discriminant analysis. Progression of labor was associated with a significant increase in most MHR and FHR linear indices, whereas entropy indices decreased. FHR alone and in combination with MHR as FHR-MHR evidenced the highest auROC values for prediction of fetal acidemia, with 0.76 and 0.88 for the UAB pH thresholds 7.20 and 7.15, respectively. The inclusion of MHR on bivariate analysis achieved sensitivity and specificity values of nearly 100 and 89.1%, respectively. These results suggest that simultaneous analysis of MHR and FHR may improve the identification of fetal acidemia compared with FHR alone, namely during the last hour of labor.
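
    The auROC values above summarise how well an index separates acidemic from non-acidemic newborns; the area under the ROC curve can be computed directly with the Mann-Whitney rank-sum identity. A minimal sketch with invented index scores (not data from the study):

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity:
    the probability that a randomly chosen case scores higher than a
    randomly chosen control (ties count one half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# invented index values: higher score = more abnormal tracing
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
acidemia = [1, 1, 1, 0, 1, 0, 0, 0]   # 1 = newborn acidemia
auc = auroc(scores, acidemia)
```

Here one acidemic newborn is out-scored by one non-acidemic one, giving 15 of 16 pairwise "wins" and an auROC of 0.9375; perfect separation gives 1.0.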

  20. Models for electricity market efficiency and bidding strategy analysis

    NASA Astrophysics Data System (ADS)

    Niu, Hui

    This dissertation studies models for the analysis of market efficiency and bidding behaviors of market participants in electricity markets. Simulation models are developed to estimate how transmission and operational constraints affect the competitive benchmark and market prices based on submitted bids. This research contributes to the literature in three aspects. First, transmission and operational constraints, which have been neglected in most empirical literature, are considered in the competitive benchmark estimation model. Second, the effects of operational and transmission constraints on market prices are estimated through two models based on the submitted bids of market participants. Third, these models are applied to analyze the efficiency of the Electric Reliability Council Of Texas (ERCOT) real-time energy market by simulating its operations for the time period from January 2002 to April 2003. The characteristics and available information for the ERCOT market are considered. In electricity markets, electric firms compete through both spot market bidding and bilateral contract trading. A linear asymmetric supply function equilibrium (SFE) model with transmission constraints is proposed in this dissertation to analyze the bidding strategies with forward contracts. The research contributes to the literature in several aspects. First, we combine forward contracts, transmission constraints, and multi-period strategy (an obligation for firms to bid consistently over an extended time horizon such as a day or an hour) into the linear asymmetric supply function equilibrium framework. As an ex-ante model, it can provide qualitative insights into firms' behaviors. Second, the bidding strategies related to Transmission Congestion Rights (TCRs) are discussed by interpreting TCRs as linear combination of forwards. 
Third, the model is a general one in the sense that there is no limitation on the number of firms and scale of the transmission network, which can have asymmetric linear marginal cost structures. In addition to theoretical analysis, we apply our model to simulate the ERCOT real-time market from January 2002 to April 2003. The effects of forward contracts on the ERCOT market are evaluated through the results. It is shown that the model is able to capture features of bidding behavior in the market.

  1. A refinement of the combination equations for evaporation

    USGS Publications Warehouse

    Milly, P.C.D.

    1991-01-01

    Most combination equations for evaporation rely on a linear expansion of the saturation vapor-pressure curve around the air temperature. Because the temperature at the surface may differ from this temperature by several degrees, and because the saturation vapor-pressure curve is nonlinear, this approximation leads to a certain degree of error in those evaporation equations. It is possible, however, to introduce higher-order polynomial approximations for the saturation vapor-pressure curve and to derive a family of explicit equations for evaporation, having any desired degree of accuracy. Under the linear approximation, the new family of equations for evaporation reduces, in particular cases, to the combination equations of H. L. Penman (Natural evaporation from open water, bare soil and grass, Proc. R. Soc. London, Ser. A193, 120-145, 1948) and of subsequent workers. Comparison of the linear and quadratic approximations leads to a simple approximate expression for the error associated with the linear case. Equations based on the conventional linear approximation consistently underestimate evaporation, sometimes by a substantial amount. © 1991 Kluwer Academic Publishers.
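
    The gain from a quadratic over a linear expansion of the saturation vapor-pressure curve can be illustrated numerically. This sketch uses the Magnus form of e_s(T) with finite-difference derivatives and an assumed 5 °C air-surface temperature difference; the specific numbers are illustrative, not from the paper.

```python
import math

def e_sat(t):
    """Saturation vapor pressure (kPa) at temperature t (deg C), Magnus form."""
    return 0.6108 * math.exp(17.27 * t / (t + 237.3))

def taylor_estimate(t_air, t_surf, order):
    """Approximate e_sat(t_surf) by a Taylor expansion around t_air.

    Derivatives are taken by central finite differences; order 1 is the
    classical linearisation behind Penman-type combination equations.
    """
    h = 1e-3
    d1 = (e_sat(t_air + h) - e_sat(t_air - h)) / (2 * h)
    est = e_sat(t_air) + d1 * (t_surf - t_air)
    if order >= 2:
        d2 = (e_sat(t_air + h) - 2 * e_sat(t_air) + e_sat(t_air - h)) / h ** 2
        est += 0.5 * d2 * (t_surf - t_air) ** 2
    return est

t_air, t_surf = 20.0, 25.0          # surface 5 degC warmer than the air
exact = e_sat(t_surf)
err_linear = abs(taylor_estimate(t_air, t_surf, 1) - exact)
err_quadratic = abs(taylor_estimate(t_air, t_surf, 2) - exact)
```

Because e_s(T) is convex, the linear estimate always falls below the true value when the surface is warmer than the air, which is the systematic underestimation the abstract describes; the quadratic term removes most of that bias.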

  2. Protein linear indices of the 'macromolecular pseudograph alpha-carbon atom adjacency matrix' in bioinformatics. Part 1: prediction of protein stability effects of a complete set of alanine substitutions in Arc repressor.

    PubMed

    Marrero-Ponce, Yovani; Medina-Marrero, Ricardo; Castillo-Garit, Juan A; Romero-Zaldivar, Vicente; Torrens, Francisco; Castro, Eduardo A

    2005-04-15

    A novel approach to bio-macromolecular design from a linear algebra point of view is introduced. A protein's total (whole protein) and local (one or more amino acid) linear indices are a new set of bio-macromolecular descriptors of relevance to protein QSAR/QSPR studies. These amino-acid level biochemical descriptors are based on the calculation of linear maps on Rn[f k(xmi):Rn-->Rn] in canonical basis. These bio-macromolecular indices are calculated from the kth power of the macromolecular pseudograph alpha-carbon atom adjacency matrix. Total linear indices are linear functionals on Rn. That is, the kth total linear indices are linear maps from Rn to the scalar R[f k(xm):Rn-->R]. Thus, the kth total linear indices are calculated by summing the amino-acid linear indices of all amino acids in the protein molecule. A study of the protein stability effects for a complete set of alanine substitutions in the Arc repressor illustrates this approach. A quantitative model that discriminates near wild-type stability alanine mutants from the reduced-stability ones in a training series was obtained. This model permitted the correct classification of 97.56% (40/41) and 91.67% (11/12) of proteins in the training and test set, respectively. It shows a high Matthews correlation coefficient (MCC=0.952) for the training set and an MCC=0.837 for the external prediction set. Additionally, canonical regression analysis corroborated the statistical quality of the classification model (Rcanc=0.824). This analysis was also used to compute biological stability canonical scores for each Arc alanine mutant. On the other hand, the linear piecewise regression model compared favorably with respect to the linear regression one on predicting the melting temperature (tm) of the Arc alanine mutants. The linear model explains almost 81% of the variance of the experimental tm (R=0.90 and s=4.29) and the leave-one-out (LOO) PRESS statistics evidenced its predictive ability (q2=0.72 and scv=4.79). 
Moreover, the TOMOCOMD-CAMPS method produced a linear piecewise regression (R=0.97) between protein backbone descriptors and tm values for alanine mutants of the Arc repressor. A break-point value of 51.87 degrees C characterized two mutant clusters and coincided perfectly with the experimental scale. For this reason, we can use the linear discriminant analysis and piecewise models in combination to classify and predict the stability of the mutant Arc homodimers. These models also permitted the interpretation of the driving forces of such folding process, indicating that topologic/topographic protein backbone interactions control the stability profile of wild-type Arc and its alanine mutants.
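    The core calculation described above (kth power of the adjacency matrix applied to a property vector, then summed into a scalar) can be sketched numerically. This is a toy illustration, not the paper's actual descriptors: the 4-residue chain and property values below are invented.

```python
import numpy as np

# Hypothetical sketch of kth-order linear indices: M is the adjacency matrix
# of the macromolecular pseudograph (alpha-carbon connectivity) and x holds
# amino-acid property values (all numbers here are made up for illustration).
def local_linear_indices(M, x, k):
    """kth amino-acid (local) linear indices: rows of M^k applied to x."""
    return np.linalg.matrix_power(M, k) @ x

def total_linear_index(M, x, k):
    """kth total linear index: sum of the local indices (a linear functional R^n -> R)."""
    return local_linear_indices(M, x, k).sum()

# Toy 4-residue chain A-B-C-D
M = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 2.0, 1.5, 0.5])   # illustrative property values

print(total_linear_index(M, x, 0))   # k=0: M^0 = I, so just the property sum
print(total_linear_index(M, x, 2))
```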

  3. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    PubMed Central

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in the literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set), are studied in this work and their geometric relationships are discussed. For uncertainty in the left-hand side, right-hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models, and applications in refinery production planning and batch process scheduling problems are presented. PMID:21935263
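    For the simplest of these uncertainty sets, the interval (box) set, the robust counterpart of a linear constraint has a closed form: each uncertain coefficient is pushed to its worst-case interval endpoint. The sketch below uses invented coefficients and assumes nonnegative variables (so the worst case is simply the upper interval bound); it is an illustration of the idea, not the paper's formulations.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative sketch (all numbers made up): maximize 3*x1 + 2*x2 subject to
# a1*x1 + a2*x2 <= 10, where each a_j lies in [abar_j - d_j, abar_j + d_j].
# With x >= 0, the interval-set robust counterpart tightens the row to
# (abar + d) @ x <= b  (each coefficient at its worst-case endpoint).
abar = np.array([2.0, 1.0])
d = np.array([0.5, 0.25])        # interval half-widths
b = np.array([10.0])
c = np.array([3.0, 2.0])

# linprog minimizes, so negate c to maximize
nominal = linprog(-c, A_ub=[abar], b_ub=b, bounds=[(0, None)] * 2, method="highs")
robust = linprog(-c, A_ub=[abar + d], b_ub=b, bounds=[(0, None)] * 2, method="highs")

print(-nominal.fun, -robust.fun)  # robust objective is never better than nominal
```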

  4. Simultaneous determination of penicillin G salts by infrared spectroscopy: Evaluation of combining orthogonal signal correction with radial basis function-partial least squares regression

    NASA Astrophysics Data System (ADS)

    Talebpour, Zahra; Tavallaie, Roya; Ahmadi, Seyyed Hamid; Abdollahpour, Assem

    2010-09-01

    In this study, a new method for the simultaneous determination of penicillin G salts in a pharmaceutical mixture via FT-IR spectroscopy combined with chemometrics was investigated. The mixture of penicillin G salts is a complex system due to the similar analytical characteristics of its components. Partial least squares (PLS) and radial basis function-partial least squares (RBF-PLS) were used to develop the linear and nonlinear relations between spectra and components, respectively. The orthogonal signal correction (OSC) preprocessing method was used to correct for unexpected information, such as spectral overlapping and scattering effects. In order to compare the influence of OSC on the PLS and RBF-PLS models, the optimal linear (PLS) and nonlinear (RBF-PLS) models based on conventional and OSC-preprocessed spectra were established and compared. The obtained results demonstrated that OSC clearly enhanced the performance of both the RBF-PLS and PLS calibration models. Also, where the relation between spectra and components was partly nonlinear, the OSC-RBF-PLS model gave more satisfactory results than the OSC-PLS model, which indicated that OSC was helpful in removing extrinsic deviations from linearity without eliminating nonlinear information related to the components. The chemometric models were tested on an external dataset and finally applied to the analysis of a commercialized injection product of penicillin G salts.

  5. Galerkin finite difference Laplacian operators on isolated unstructured triangular meshes by linear combinations

    NASA Technical Reports Server (NTRS)

    Baumeister, Kenneth J.

    1990-01-01

    The Galerkin weighted residual technique using linear triangular weight functions is employed to develop finite difference formulae in Cartesian coordinates for the Laplacian operator on isolated unstructured triangular grids. The weighted residual coefficients associated with the weak formulation of the Laplacian operator along with linear combinations of the residual equations are used to develop the algorithm. The algorithm was tested for a wide variety of unstructured meshes and found to give satisfactory results.

  6. Optimization benefits analysis in production process of fabrication components

    NASA Astrophysics Data System (ADS)

    Prasetyani, R.; Rafsanjani, A. Y.; Rimantho, D.

    2017-12-01

    The determination of an optimal number of product combinations is important. The main problem at the part and service department of PT. United Tractors Pandu Engineering (PT. UTPE) is the optimization of the combination of fabrication component products (known as Liner Plate), which influences the profit that the company will obtain. The Liner Plate is a fabrication component that serves as a protector of the core structure of heavy-duty attachments, such as the HD Vessel, HD Bucket, HD Shovel, and HD Blade. Liner plate sales from January to December 2016 fluctuated, with no direct conclusion about the optimal production of such fabrication components. The optimal product combination can be achieved by calculating and plotting the amount of production output and input appropriately. The method used in this study is linear programming with primal, dual, and sensitivity analysis, using QM software for Windows, to obtain the optimal combination of fabrication components. At the optimal combination of components, PT. UTPE gains a profit increase of Rp. 105,285,000.00, for a total of Rp. 3,046,525,000.00 per month, with a combined production total of 71 units per product variant per month.
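    A product-mix model of the kind described can be set up in a few lines with an LP solver. The profits and capacities below are invented for illustration, not PT. UTPE's data.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical product-mix sketch: choose monthly output x of three
# liner-plate variants to maximize profit subject to machine-hour and
# plate-material limits (all numbers invented).
profit = np.array([40.0, 55.0, 30.0])          # profit per unit
A = np.array([[2.0, 3.0, 1.0],                 # machine hours per unit
              [4.0, 5.0, 2.0]])                # kg of plate per unit
limits = np.array([240.0, 500.0])              # monthly capacities

# linprog minimizes, so negate the profit vector to maximize
res = linprog(-profit, A_ub=A, b_ub=limits,
              bounds=[(0, None)] * 3, method="highs")
print(res.x, -res.fun)  # optimal mix and total profit
```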

  7. Using linear programming to minimize the cost of nurse personnel.

    PubMed

    Matthews, Charles H

    2005-01-01

    Nursing personnel costs make up a major portion of most hospital budgets. This report evaluates and optimizes the utility of the nurse personnel at the Internal Medicine Outpatient Clinic of Wake Forest University Baptist Medical Center. Linear programming (LP) was employed to determine the effective combination of nurses that would allow all weekly clinic tasks to be covered at the lowest possible cost to the department. Linear programming is available in standard spreadsheet software: the operator establishes the variables to be optimized and then enters a series of constraints, each of which has an impact on the ultimate outcome. The application is therefore able to quantify and stratify the nurses necessary to execute the tasks. With the report, a specific sensitivity analysis can be performed to assess just how sensitive the outcome is to adding or deleting a nurse to or from the payroll. The nurse employee cost structure in this study consisted of five certified nurse assistants (CNA), three licensed practical nurses (LPN), and five registered nurses (RN). The LP revealed that the outpatient clinic should staff four RNs, three LPNs, and four CNAs to have 95 percent confidence of covering nurse demand on the floor. This combination of nurses would enable the clinic to: 1. Reduce annual staffing costs by 16 percent; 2. Make each level of nurse optimally productive by focusing on tasks specific to their expertise; 3. Assign accountability more efficiently as the nurses adhere to their specific duties; and 4. Ultimately provide a competitive advantage to the clinic as it relates to nurse employee and patient satisfaction. Linear programming can be used to solve capacity problems for just about any staffing situation, provided the model is indeed linear.
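    The staffing formulation can be sketched as a cost-minimizing LP with coverage constraints. All hours, costs, and requirements below are invented for illustration; the clinic's actual data and constraints were not published in this abstract.

```python
from scipy.optimize import linprog

# Hedged sketch of the staffing idea (numbers invented, not the clinic's):
# minimize weekly payroll cost subject to covering required task-hours.
# Columns: [CNA, LPN, RN]; rows: weekly hours each nurse type contributes
# to three task categories. Rows and bounds are negated so that the
# "at least" requirements fit linprog's A_ub @ x <= b_ub convention.
cost = [600.0, 800.0, 1100.0]          # weekly cost per nurse
A = [[-20.0, -10.0,  -5.0],            # basic-care hours  (>= 160 required)
     [ -5.0, -20.0, -10.0],            # medication hours  (>= 120 required)
     [  0.0,  -5.0, -25.0]]            # assessment hours  (>= 100 required)
b = [-160.0, -120.0, -100.0]

res = linprog(cost, A_ub=A, b_ub=b, bounds=[(0, None)] * 3, method="highs")
print(res.x, res.fun)   # fractional staff counts; round up for a real roster
```

A real roster needs integer staff counts; the continuous solution here is the usual LP relaxation that sensitivity analysis is run against.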

  8. UFVA, A Combined Linear and Nonlinear Factor Analysis Program Package for Chemical Data Evaluation.

    DTIC Science & Technology

    1980-11-01

    that one cluster consists of the monoterpenes and Isoprene; the second is of the sesquiterpenes. Compound 8 (Caryophyllene) should therefore belong to...two clusters very clearly (Fig. 6). Figure 6 The very similar fragmentation pattern of Isoprene and the monoterpenes is reflected by their close...13 of another set of 13 terpene components. These are Isoprene, four monoterpenes (Myrcene, Menthol, Camphene, Umbellulone), four sesquiterpenes

  9. Theoretical relationship between vibration transmissibility and driving-point response functions of the human body.

    PubMed

    Dong, Ren G; Welcome, Daniel E; McDowell, Thomas W; Wu, John Z

    2013-11-25

    The relationship between the vibration transmissibility and driving-point response functions (DPRFs) of the human body is important for understanding vibration exposures of the system and for developing valid models. This study identified their theoretical relationship and demonstrated that the sum of the DPRFs can be expressed as a linear combination of the transmissibility functions of the individual mass elements distributed throughout the system. The relationship is verified using several human vibration models. This study also clarified the requirements for reliably quantifying transmissibility values used as references for calibrating the system models. As an example application, this study used the developed theory to perform a preliminary analysis of the method for calibrating models using both vibration transmissibility and DPRFs. The results of the analysis show that the combined method can theoretically result in a unique and valid solution of the model parameters, at least for linear systems. However, the validation of the method itself does not guarantee the validation of the calibrated model, because the validation of the calibration also depends on the model structure and the reliability and appropriate representation of the reference functions. The basic theory developed in this study is also applicable to the vibration analyses of other structures.

  10. Characterization of the lateral distribution of fluorescent lipid in binary-constituent lipid monolayers by principal component analysis.

    PubMed

    Sugár, István P; Zhai, Xiuhong; Boldyrev, Ivan A; Molotkovsky, Julian G; Brockman, Howard L; Brown, Rhoderick E

    2010-01-01

    Lipid lateral organization in binary-constituent monolayers consisting of fluorescent and nonfluorescent lipids has been investigated by acquiring multiple emission spectra during measurement of each force-area isotherm. The emission spectra reflect BODIPY-labeled lipid surface concentration and lateral mixing with different nonfluorescent lipid species. Using principal component analysis (PCA) each spectrum could be approximated as the linear combination of only two principal vectors. One point on a plane could be associated with each spectrum, where the coordinates of the point are the coefficients of the linear combination. Points belonging to the same lipid constituents and experimental conditions form a curve on the plane, where each point belongs to a different mole fraction. The location and shape of the curve reflects the lateral organization of the fluorescent lipid mixed with a specific nonfluorescent lipid. The method provides massive data compression that preserves and emphasizes key information pertaining to lipid distribution in different lipid monolayer phases. Collectively, the capacity of PCA for handling large spectral data sets, the nanoscale resolution afforded by the fluorescence signal, and the inherent versatility of monolayers for characterization of lipid lateral interactions enable significantly enhanced resolution of lipid lateral organizational changes induced by different lipid compositions.
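    The two-principal-vector approximation described above can be reproduced on synthetic spectra: PCA of mixtures of two basis spectra yields two coefficients per spectrum, i.e., one point on a plane. This is a sketch with invented Gaussian bands, not the paper's BODIPY emission data.

```python
import numpy as np

# Synthetic demonstration: if each emission spectrum is (approximately) a
# linear combination of two principal vectors, two PCA coefficients place
# it as a point on a plane. Band shapes and positions are invented.
wavelengths = np.linspace(500, 600, 50)
v1 = np.exp(-((wavelengths - 520) / 8.0) ** 2)   # "monomer-like" band
v2 = np.exp(-((wavelengths - 570) / 12.0) ** 2)  # "dimer-like" band

# Spectra at increasing fluorophore mole fraction: mixtures of v1 and v2
fracs = np.linspace(0.0, 1.0, 11)
spectra = np.outer(1 - fracs, v1) + np.outer(fracs, v2)

X = spectra - spectra.mean(axis=0)               # center before PCA
U, s, Vt = np.linalg.svd(X, full_matrices=False)
coords = U[:, :2] * s[:2]                        # 2-D coefficients per spectrum

# Rank-2 reconstruction is essentially exact for these two-component mixtures
recon = coords @ Vt[:2] + spectra.mean(axis=0)
print(np.max(np.abs(recon - spectra)))
```

Plotting `coords` traces out the mole-fraction curve on the plane, which is the representation the paper analyzes.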

  11. A data-driven approach for evaluating multi-modal therapy in traumatic brain injury

    PubMed Central

    Haefeli, Jenny; Ferguson, Adam R.; Bingham, Deborah; Orr, Adrienne; Won, Seok Joon; Lam, Tina I.; Shi, Jian; Hawley, Sarah; Liu, Jialing; Swanson, Raymond A.; Massa, Stephen M.

    2017-01-01

    Combination therapies targeting multiple recovery mechanisms have the potential for additive or synergistic effects, but experimental design and analyses of multimodal therapeutic trials are challenging. To address this problem, we developed a data-driven approach to integrate and analyze raw source data from separate pre-clinical studies and evaluated interactions between four treatments following traumatic brain injury. Histologic and behavioral outcomes were measured in 202 rats treated with combinations of an anti-inflammatory agent (minocycline), a neurotrophic agent (LM11A-31), and physical therapy consisting of assisted exercise with or without botulinum toxin-induced limb constraint. Data was curated and analyzed in a linked workflow involving non-linear principal component analysis followed by hypothesis testing with a linear mixed model. Results revealed significant benefits of the neurotrophic agent LM11A-31 on learning and memory outcomes after traumatic brain injury. In addition, modulations of LM11A-31 effects by co-administration of minocycline and by the type of physical therapy applied reached statistical significance. These results suggest a combinatorial effect of drug and physical therapy interventions that was not evident by univariate analysis. The study designs and analytic techniques applied here form a structured, unbiased, internally validated workflow that may be applied to other combinatorial studies, both in animals and humans. PMID:28205533

  12. A data-driven approach for evaluating multi-modal therapy in traumatic brain injury.

    PubMed

    Haefeli, Jenny; Ferguson, Adam R; Bingham, Deborah; Orr, Adrienne; Won, Seok Joon; Lam, Tina I; Shi, Jian; Hawley, Sarah; Liu, Jialing; Swanson, Raymond A; Massa, Stephen M

    2017-02-16

    Combination therapies targeting multiple recovery mechanisms have the potential for additive or synergistic effects, but experimental design and analyses of multimodal therapeutic trials are challenging. To address this problem, we developed a data-driven approach to integrate and analyze raw source data from separate pre-clinical studies and evaluated interactions between four treatments following traumatic brain injury. Histologic and behavioral outcomes were measured in 202 rats treated with combinations of an anti-inflammatory agent (minocycline), a neurotrophic agent (LM11A-31), and physical therapy consisting of assisted exercise with or without botulinum toxin-induced limb constraint. Data was curated and analyzed in a linked workflow involving non-linear principal component analysis followed by hypothesis testing with a linear mixed model. Results revealed significant benefits of the neurotrophic agent LM11A-31 on learning and memory outcomes after traumatic brain injury. In addition, modulations of LM11A-31 effects by co-administration of minocycline and by the type of physical therapy applied reached statistical significance. These results suggest a combinatorial effect of drug and physical therapy interventions that was not evident by univariate analysis. The study designs and analytic techniques applied here form a structured, unbiased, internally validated workflow that may be applied to other combinatorial studies, both in animals and humans.

  13. A Comparison of Multivariable Control Design Techniques for a Turbofan Engine Control

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Watts, Stephen R.

    1995-01-01

    This paper compares two previously published design procedures for two different multivariable control design techniques for application to a linear engine model of a jet engine. The two multivariable control design techniques compared were the Linear Quadratic Gaussian with Loop Transfer Recovery (LQG/LTR) and the H-Infinity synthesis. The two control design techniques were used with specific previously published design procedures to synthesize controls which would provide equivalent closed loop frequency response for the primary control loops while assuring adequate loop decoupling. The resulting controllers were then reduced in order to minimize the programming and data storage requirements for a typical implementation. The reduced order linear controllers designed by each method were combined with the linear model of an advanced turbofan engine and the system performance was evaluated for the continuous linear system. Included in the performance analysis are the resulting frequency and transient responses as well as actuator usage and rate capability for each design method. The controls were also analyzed for robustness with respect to structured uncertainties in the unmodeled system dynamics. The two controls were then compared for performance capability and hardware implementation issues.

  14. Higher parity is associated with increased risk of Type 2 diabetes mellitus in women: A linear dose-response meta-analysis of cohort studies.

    PubMed

    Guo, Peng; Zhou, Quan; Ren, Lei; Chen, Yu; Hui, Yue

    2017-01-01

    The goal of this study is to investigate the association between higher parity and the risk of type 2 diabetes mellitus (T2DM) in women and to quantify the potential dose-response relation. We searched the MEDLINE and EMBASE electronic databases for related cohort studies up to March 10th, 2016. Summary rate ratios (RRs) and 95% confidence intervals (CIs) for T2DM with at least 3 categories of exposure were eligible. A random-effects dose-response analysis procedure was used to study the relations between them. After screening a total of 13,647 published studies, only 7 cohort studies (9,394 incident cases and 286,840 female participants) were found to be eligible for this meta-analysis. In the category analysis, the pooled RR for the highest number of parity vs. the lowest was 1.42 (95% CI: 1.17-1.72, I² = 71.5%, P-heterogeneity = 0.002, power = 0.99). In the dose-response analysis, a noticeable linear dose-risk relation was found between parity and T2DM (P for nonlinearity = 0.942). For every live-birth increase in parity, the combined RR was 1.06 (95% CI: 1.02-1.09, I² = 84.3%, P-heterogeneity = 0.003, power = 0.99). Subgroup and sensitivity analyses yielded similar results. No publication bias was found in the results. This meta-analysis suggests that higher parity and the risk of T2DM show a linear relationship in women. Copyright © 2017 Elsevier Inc. All rights reserved.
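    The per-live-birth pooling step is, at heart, inverse-variance weighting of log-RRs. A minimal fixed-effect sketch with invented study inputs (the meta-analysis itself used a random-effects procedure, and its study-level data are not reproduced here):

```python
import numpy as np

# Fixed-effect inverse-variance pooling sketch (per-live-birth log-RRs and
# standard errors invented for illustration, not the actual study data).
log_rr = np.log(np.array([1.04, 1.08, 1.05, 1.09]))
se = np.array([0.02, 0.03, 0.025, 0.04])

w = 1.0 / se**2                                  # inverse-variance weights
pooled = np.sum(w * log_rr) / np.sum(w)          # pooled log-RR
pooled_se = np.sqrt(1.0 / np.sum(w))
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"pooled RR = {np.exp(pooled):.3f} (95% CI {np.exp(lo):.3f}-{np.exp(hi):.3f})")
```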

  15. [Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].

    PubMed

    Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong

    2015-11-01

    With the fast development of remote sensing technology, combining forest inventory sample plot data with remotely sensed images has become a widely used method to map forest carbon density. However, the existence of mixed pixels often impedes the improvement of forest carbon density mapping, especially when low spatial resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to estimate forest carbon density. Linear spectral mixture analysis with and without constraints, and nonlinear spectral mixture analysis, were compared to derive the fractions of different land use and land cover (LULC) types. Then, a sequential Gaussian co-simulation algorithm with and without the fraction images from the spectral mixture analyses was employed to estimate the forest carbon density of Hunan Province. Results showed that 1) constrained linear spectral mixture analysis, with a mean RMSE of 0.002, estimated the fractions of LULC types more accurately than unconstrained linear and nonlinear spectral mixture analyses; 2) integrating the spectral mixture analysis model and the sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density from 74.1% to 81.5% and decreased the RMSE from 7.26 to 5.18; and 3) the mean forest carbon density for the province was 30.06 t·hm^-2, ranging from 0.00 to 67.35 t·hm^-2. This implies that spectral mixture analysis has great potential to increase the estimation accuracy of forest carbon density at regional and global levels.
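    A constrained linear spectral mixture analysis of the kind compared above can be sketched with a nonnegativity constraint and a (softly enforced) sum-to-one constraint. The endmember spectra and pixel below are synthetic, not MODIS data; the heavily weighted ones-row is one common way to impose sum-to-one within a nonnegative least-squares solver.

```python
import numpy as np
from scipy.optimize import nnls

# Sketch of fully constrained linear spectral unmixing (synthetic data):
# pixel spectrum = sum_k f_k * endmember_k with f_k >= 0 and sum(f_k) = 1.
rng = np.random.default_rng(1)
E = rng.random((8, 3))                   # 8 bands, 3 LULC endmembers (invented)
f_true = np.array([0.6, 0.3, 0.1])
pixel = E @ f_true

w = 1e3                                  # weight on the sum-to-one row
E_aug = np.vstack([E, w * np.ones(3)])   # append heavily weighted ones-row
p_aug = np.append(pixel, w * 1.0)
f_est, _ = nnls(E_aug, p_aug)            # nnls enforces nonnegativity

print(f_est)   # close to [0.6, 0.3, 0.1]
```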

  16. Reliable Quantitative Mineral Abundances of the Martian Surface using THEMIS

    NASA Astrophysics Data System (ADS)

    Smith, R. J.; Huang, J.; Ryan, A. J.; Christensen, P. R.

    2013-12-01

    The following presents a proof of concept that, given quality data, Thermal Emission Imaging System (THEMIS) data can be used to derive reliable quantitative mineral abundances of the Martian surface using a limited mineral library. The THEMIS instrument aboard the Mars Odyssey spacecraft is a multispectral thermal infrared imager with a spatial resolution of 100 m/pixel. The relatively high spatial resolution along with global coverage makes THEMIS datasets powerful tools for comprehensive fine-scale petrologic analyses. However, the spectral resolution of THEMIS is limited to 8 surface-sensitive bands between 6.8 and 14.0 μm with an average bandwidth of ~1 μm, which complicates atmosphere-surface separation and spectral analysis. This study utilizes the atmospheric correction methods of both Bandfield et al. [2004] and Ryan et al. [2013] joined with the iterative linear deconvolution technique pioneered by Huang et al. [in review] in order to derive fine-scale quantitative mineral abundances of the Martian surface. In general, it can be assumed that surface emissivity combines in a linear fashion in the thermal infrared (TIR) wavelengths, such that the emitted energy is proportional to the areal percentage of the minerals present. TIR spectra are unmixed using a set of linear equations involving an endmember library of lab-measured mineral spectra. The number of endmembers allowed in a spectral library is restricted to n-1 (where n = the number of spectral bands of an instrument), preserving one band for blackbody. Spectral analysis of THEMIS data is thus limited to seven endmembers. This study attempts to show that this limitation does not prohibit the derivation of meaningful spectral analyses from THEMIS data.
Our study selects THEMIS stamps from a region of Mars that is well characterized in the TIR by the higher spectral resolution, lower spatial resolution Thermal Emission Spectrometer (TES) instrument (143 bands at 10 cm^-1 sampling and 3x5 km pixels). Multiple atmospheric corrections are performed for one image using the methods of Bandfield et al. [2004] and Ryan et al. [2013]. 7x7 pixel areas were selected, averaged, and compared using each atmospherically corrected image to ensure consistency. Corrections that provided reliable data were then used for spectral analyses. Linear deconvolution is performed using an iterative spectral analysis method [Huang et al., in review] that takes an endmember spectral library and creates mineral combinations based on prescribed mineral group selections. The script then performs a spectral mixture analysis on each surface spectrum using all possible mineral combinations and reports the best modeled fit to the measured spectrum. Here we present initial results from Syrtis Planum, where multiple atmospherically corrected THEMIS images were deconvolved to produce similar spectral analysis results, within the detection limit of the instrument. THEMIS mineral abundances are comparable to TES-derived abundances. References: Bandfield, J.L., et al. [2004], JGR, 109, E10008; Huang, J., et al., JGR, in review; Ryan, A.J., et al. [2013], AGU Fall Meeting.
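    The iterative deconvolution idea (fit every admissible endmember combination by least squares and keep the best-fitting one) can be sketched on a synthetic library. This is a toy stand-in for the Huang et al. method, not their script; the library is random, and the toy uses 3-endmember subsets where the THEMIS case allows up to seven.

```python
import numpy as np
from itertools import combinations

# Synthetic library: 8 THEMIS-like bands, 12 lab spectra (invented).
rng = np.random.default_rng(2)
library = rng.random((8, 12))
true_idx = (0, 4, 9)
measured = library[:, true_idx] @ np.array([0.5, 0.3, 0.2])

# Try every subset of the library, unmix by least squares, keep the best RMS fit.
best = None
for subset in combinations(range(library.shape[1]), 3):
    A = library[:, subset]
    coef, *_ = np.linalg.lstsq(A, measured, rcond=None)
    rms = np.sqrt(np.mean((A @ coef - measured) ** 2))
    if best is None or rms < best[0]:
        best = (rms, subset, coef)

print(best[1], best[2])   # recovered endmember subset and abundances
```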

  17. Optical solitons, explicit solutions and modulation instability analysis with second-order spatio-temporal dispersion

    NASA Astrophysics Data System (ADS)

    Inc, Mustafa; Isa Aliyu, Aliyu; Yusuf, Abdullahi; Baleanu, Dumitru

    2017-12-01

    This paper obtains the dark, bright, dark-bright or combined optical and singular solitons to the nonlinear Schrödinger equation (NLSE) with group velocity dispersion coefficient and second-order spatio-temporal dispersion coefficient, which arises in photonics and waveguide optics and in optical fibers. The integration algorithm is the sine-Gordon equation method (SGEM). Furthermore, the explicit solutions of the equation are derived by considering the power series solutions (PSS) theory and the convergence of the solutions is guaranteed. Lastly, the modulation instability analysis (MI) is studied based on the standard linear-stability analysis and the MI gain spectrum is obtained.

  18. Electrophysiological correlates of figure-ground segregation directly reflect perceptual saliency.

    PubMed

    Straube, Sirko; Grimsen, Cathleen; Fahle, Manfred

    2010-03-05

    In a figure identification task, we investigated the influence of different visual cue configurations (spatial frequency, orientation, or a combination of both) on the human EEG. Combining psychophysics with ERP and time-frequency analysis, we show that the neural response at about 200 ms reflects perceptual saliency rather than physical cue contrast. Increasing saliency caused (i) a negative shift of the posterior P2 coinciding with a power decrease in the posterior theta band and (ii) an amplitude and latency increase of the posterior P3. We demonstrate that visual cues interact to form a percept that is non-linearly related to the physical figure-ground properties.

  19. Simulation of herbicide degradation in different soils by use of Pedo-transfer functions (PTF) and non-linear kinetics.

    PubMed

    von Götz, N; Richter, O

    1999-03-01

    The degradation behaviour of bentazone in 14 different soils was examined at constant temperature and moisture conditions. Two soils were examined at different temperatures. On the basis of these data the influence of soil properties and temperature on degradation was assessed and modelled. Pedo-transfer functions (PTF) in combination with a linear and a non-linear model were found suitable to describe the bentazone degradation in the laboratory as related to soil properties. The linear PTF can be combined with a rate related to the temperature to account for both soil property and temperature influence at the same time.
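    A minimal sketch of first-order degradation with an Arrhenius-type temperature dependence follows; the rate-law form is standard, but the parameter values are invented, not fitted bentazone constants from the paper.

```python
import numpy as np

# First-order degradation with a temperature-dependent rate (Arrhenius form).
# Parameter values are illustrative assumptions, not the paper's fits.
R = 8.314          # gas constant, J/(mol*K)
Ea = 60_000.0      # activation energy, J/mol (assumed)
A = 5.0e8          # pre-exponential factor, 1/day (assumed)

def rate(T_kelvin):
    """First-order rate constant k(T) = A * exp(-Ea / (R*T))."""
    return A * np.exp(-Ea / (R * T_kelvin))

def concentration(c0, T_kelvin, t_days):
    """Residual concentration under first-order kinetics: c0 * exp(-k*t)."""
    return c0 * np.exp(-rate(T_kelvin) * t_days)

k10, k20 = rate(283.15), rate(293.15)
print(k20 / k10)    # roughly 2-3x faster per 10 K for this Ea (a typical Q10)
print(concentration(100.0, 293.15, np.log(2) / k20))  # half-life check
```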

  20. Quantum processing by remote quantum control

    NASA Astrophysics Data System (ADS)

    Qiang, Xiaogang; Zhou, Xiaoqi; Aungskunsiri, Kanin; Cable, Hugo; O'Brien, Jeremy L.

    2017-12-01

    Client-server models enable computations to be hosted remotely on quantum servers. We present a novel protocol for realizing this task, with practical advantages when using technology feasible in the near term. Client tasks are realized as linear combinations of operations implemented by the server, where the linear coefficients are hidden from the server. We report on an experimental demonstration of our protocol using linear optics, which realizes linear combination of two single-qubit operations by a remote single-qubit control. In addition, we explain when our protocol can remain efficient for larger computations, as well as some ways in which privacy can be maintained using our protocol.
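    The central idea, a client task expressed as a linear combination of operations implemented by the server, can be illustrated numerically. This is a toy matrix sketch of the mathematics, not the linear-optics protocol or its hiding mechanism.

```python
import numpy as np

# Toy sketch: apply U = a*U1 + b*U2 to a qubit state and renormalize, since a
# generic linear combination of unitaries is not itself unitary. The choice of
# Pauli operators and coefficients here is illustrative.
X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-X
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli-Z

a, b = 1.0, 1.0
U = a * X + b * Z                # X + Z is proportional to the Hadamard gate
psi = np.array([1.0, 0.0], dtype=complex)       # |0>

out = U @ psi
out = out / np.linalg.norm(out)                 # renormalize
print(out)                                      # (|0> + |1>)/sqrt(2)
```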

  1. Combining multiple imputation and meta-analysis with individual participant data

    PubMed Central

    Burgess, Stephen; White, Ian R; Resche-Rigon, Matthieu; Wood, Angela M

    2013-01-01

    Multiple imputation is a strategy for the analysis of incomplete data such that the impact of the missingness on the power and bias of estimates is mitigated. When data from multiple studies are collated, we can propose both within-study and multilevel imputation models to impute missing data on covariates. It is not clear how to choose between imputation models or how to combine imputation and inverse-variance weighted meta-analysis methods. This is especially important as often different studies measure data on different variables, meaning that we may need to impute data on a variable which is systematically missing in a particular study. In this paper, we consider a simulation analysis of sporadically missing data in a single covariate with a linear analysis model and discuss how the results would be applicable to the case of systematically missing data. We find in this context that ensuring the congeniality of the imputation and analysis models is important to give correct standard errors and confidence intervals. For example, if the analysis model allows between-study heterogeneity of a parameter, then we should incorporate this heterogeneity into the imputation model to maintain the congeniality of the two models. In an inverse-variance weighted meta-analysis, we should impute missing data and apply Rubin's rules at the study level prior to meta-analysis, rather than meta-analyzing each of the multiple imputations and then combining the meta-analysis estimates using Rubin's rules. We illustrate the results using data from the Emerging Risk Factors Collaboration. PMID:23703895
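    The recommended ordering, applying Rubin's rules at the study level and only then meta-analyzing, can be sketched with invented per-imputation estimates (the Emerging Risk Factors Collaboration data are not reproduced here):

```python
import numpy as np

def rubin(estimates, variances):
    """Pool M imputed-data estimates with Rubin's rules."""
    est = np.asarray(estimates, float)
    var = np.asarray(variances, float)
    m = len(est)
    qbar = est.mean()                            # pooled point estimate
    within = var.mean()                          # within-imputation variance
    between = est.var(ddof=1)                    # between-imputation variance
    total = within + (1 + 1 / m) * between       # Rubin's total variance
    return qbar, total

# Invented inputs: per study, estimates/variances from M=3 imputations.
# Apply Rubin's rules within each study FIRST, then meta-analyze.
studies = [([0.52, 0.48, 0.50], [0.010, 0.011, 0.009]),
           ([0.40, 0.44, 0.42], [0.020, 0.018, 0.022])]

pooled = [rubin(e, v) for e, v in studies]
w = np.array([1 / t for _, t in pooled])         # inverse-variance weights
theta = sum(wi * q for wi, (q, _) in zip(w, pooled)) / w.sum()
se = np.sqrt(1 / w.sum())
print(theta, se)
```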

  2. Differential Entropy Preserves Variational Information of Near-Infrared Spectroscopy Time Series Associated With Working Memory.

    PubMed

    Keshmiri, Soheil; Sumioka, Hidenobu; Yamazaki, Ryuji; Ishiguro, Hiroshi

    2018-01-01

    Neuroscience research shows a growing interest in the application of Near-Infrared Spectroscopy (NIRS) to the analysis and decoding of the brain activity of human subjects. Given the correlation observed between the Blood Oxygen Level Dependent (BOLD) responses exhibited by functional Magnetic Resonance Imaging (fMRI) time series and the hemoglobin oxy/deoxygenation captured by NIRS, linear models play a central role in these applications. This, in turn, results in the adoption of feature extraction strategies that are well suited to data exhibiting a high degree of linearity, namely the slope and the mean, as well as their combination, to summarize the informational content of NIRS time series. In this article, we demonstrate that these features are inefficient in capturing the variational information of NIRS data, limiting the reliability and adequacy of the conclusions drawn from them. Alternatively, we propose the linear estimate of the differential entropy of these time series as a natural representation of such information. We provide evidence for our claim through comparative analysis of the application of these features to NIRS data from several working memory tasks as well as naturalistic conversational stimuli.
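    Under a Gaussian assumption, the linear estimate of differential entropy reduces to a function of the sample variance, h = 0.5·ln(2πeσ²). The sketch below uses synthetic channels, not NIRS recordings:

```python
import numpy as np

# Linear (Gaussian) estimate of differential entropy as a time-series feature:
# for a Gaussian signal, h = 0.5 * ln(2*pi*e*sigma^2), so the sample variance
# of a channel summarizes its variational content. Signals here are synthetic.
def gaussian_entropy(x):
    var = np.var(x, ddof=1)
    return 0.5 * np.log(2 * np.pi * np.e * var)

rng = np.random.default_rng(3)
flat = rng.normal(0.0, 0.1, 2000)     # low-variability channel
varied = rng.normal(0.0, 1.0, 2000)   # high-variability channel

# Both channels have (roughly) the same mean and zero slope, yet the
# entropy feature cleanly separates them.
print(gaussian_entropy(flat), gaussian_entropy(varied))
```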

  3. Predictive models reduce talent development costs in female gymnastics.

    PubMed

    Pion, Johan; Hohmann, Andreas; Liu, Tianbiao; Lenoir, Matthieu; Segers, Veerle

    2017-04-01

    This retrospective study focuses on the comparison of different predictive models based on the results of a talent identification test battery for female gymnasts. We studied to what extent these models have the potential to optimise selection procedures and at the same time reduce talent development costs in female artistic gymnastics. The dropout rate of 243 female elite gymnasts was investigated, 5 years after talent selection, using linear (discriminant analysis) and non-linear predictive models (Kohonen feature maps and multilayer perceptron). The coaches classified 51.9% of the participants correctly. Discriminant analysis improved the correct classification rate to 71.6%, while the non-linear technique of Kohonen feature maps reached 73.7% correctness. Application of the multilayer perceptron even classified 79.8% of the gymnasts correctly. The combination of different predictive models for talent selection can avoid deselection of high-potential female gymnasts. The selection procedure based upon the different statistical analyses results in a 33.3% decrease in costs, because the pool of selected athletes can be reduced to 92 instead of 138 gymnasts (as selected by the coaches). Reduction of the costs allows the limited resources to be fully invested in the high-potential athletes.

  4. Brain-heart linear and nonlinear dynamics during visual emotional elicitation in healthy subjects.

    PubMed

    Valenza, G; Greco, A; Gentili, C; Lanata, A; Toschi, N; Barbieri, R; Sebastiani, L; Menicucci, D; Gemignani, A; Scilingo, E P

    2016-08-01

    This study investigates brain-heart dynamics during visual emotional elicitation in healthy subjects through linear and nonlinear coupling measures of the EEG spectrogram and instantaneous heart rate estimates. To this end, affective pictures covering different combinations of arousal and valence levels, gathered from the International Affective Picture System, were administered to twenty-two healthy subjects. Time-varying maps of cortical activation were obtained through EEG spectral analysis, whereas the associated instantaneous heartbeat dynamics was estimated using inhomogeneous point-process linear models. Brain-heart linear and nonlinear coupling was estimated through the Maximal Information Coefficient (MIC), considering EEG time-varying spectra and point-process estimates defined in the time and frequency domains. As a proof of concept, we here show preliminary results for EEG oscillations in the θ band (4-8 Hz), which is known in the literature to be involved in emotional processes. MIC highlighted significant arousal-dependent changes, mediated by the prefrontal cortex interplay and especially occurring at intermediate arousal levels. Furthermore, the lowest and highest arousal elicitations were not associated with significant brain-heart coupling changes in response to pleasant/unpleasant stimuli.

  5. A way around the Nyquist lag

    NASA Astrophysics Data System (ADS)

    Penland, C.

    2017-12-01

    One way to test for the linearity of a multivariate system is to apply Linear Inverse Modeling (LIM) to a multivariate time series. LIM yields an estimated operator by combining a lagged covariance matrix with the contemporaneous covariance matrix. If the underlying dynamics are linear, the resulting dynamical description should not depend on the particular lag at which the lagged covariance matrix is estimated. This test is known as the "tau test." The tau test is severely compromised if the lag at which the analysis is performed is approximately half the period of an internal oscillation: in that case, the test fails even though the dynamics are actually linear. Thus, until now, the tau test has only been possible for lags smaller than this "Nyquist lag." In this poster, we investigate the use of Hilbert transforms as a way to avoid the problems associated with Nyquist lags. By augmenting the data with dimensions orthogonal to those spanning the original system, information that would be inaccessible to LIM in its original form can be sampled.
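
    A minimal sketch of the operator estimate behind the tau test, on a toy linear system (the operator, lags, and step sizes are assumptions, not the poster's analysis):

```python
import numpy as np
from scipy.linalg import logm

# For linear dynamics dx/dt = L x + noise, the lagged covariance obeys
# C(tau) = expm(L*tau) @ C(0), so the operator can be recovered from
# any lag: L_hat = logm(C(tau) @ inv(C(0))) / tau.  Lag-independence of
# L_hat is the basis of the tau test.
rng = np.random.default_rng(1)
L_true = np.array([[-0.5, 0.3], [-0.3, -0.5]])   # stable, oscillatory
dt, n = 0.01, 200_000
x = np.zeros(2)
X = np.empty((n, 2))
for i in range(n):                                # Euler-Maruyama steps
    x = x + dt * (L_true @ x) + np.sqrt(dt) * rng.standard_normal(2)
    X[i] = x

def lim_operator(X, lag, dt):
    m = len(X) - lag
    C0 = X[:m].T @ X[:m] / m                      # contemporaneous cov
    Ct = X[lag:].T @ X[:m] / m                    # lagged covariance
    return logm(Ct @ np.linalg.inv(C0)).real / (lag * dt)

# Both lags sit well below the "Nyquist lag" (half the internal
# oscillation period), so the two estimates should agree.
L50 = lim_operator(X, 50, dt)
L100 = lim_operator(X, 100, dt)
```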

  6. Global GNSS processing based on the raw observation approach

    NASA Astrophysics Data System (ADS)

    Strasser, Sebastian; Zehentner, Norbert; Mayer-Gürr, Torsten

    2017-04-01

    Many global navigation satellite system (GNSS) applications, e.g. Precise Point Positioning (PPP), require high-quality GNSS products, such as precise GNSS satellite orbits and clocks. These products are routinely determined by analysis centers of the International GNSS Service (IGS). The current processing methods of the analysis centers make use of the ionosphere-free linear combination to reduce the ionospheric influence. Some of the analysis centers also form observation differences, in general double-differences, to eliminate several additional error sources. The raw observation approach is a new GNSS processing approach that was developed at Graz University of Technology for kinematic orbit determination of low Earth orbit (LEO) satellites and subsequently adapted to global GNSS processing in general. This new approach offers some benefits compared to well-established approaches, such as a straightforward incorporation of new observables due to the avoidance of observation differences and linear combinations. This becomes especially important in view of the changing GNSS landscape, with two new systems, the European system Galileo and the Chinese system BeiDou, currently in deployment. GNSS products generated at Graz University of Technology using the raw observation approach currently comprise precise GNSS satellite orbits and clocks, station positions and clocks, code and phase biases, and Earth rotation parameters. To evaluate the new approach, products generated using the Global Positioning System (GPS) constellation and observations from the global IGS station network are compared to those of the IGS analysis centers. The comparisons show that the products generated at Graz University of Technology are of similar quality to the products determined by the IGS analysis centers. This confirms that the raw observation approach is applicable to global GNSS processing.
Some areas requiring further work have been identified, enabling future improvements of the method.
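
    For contrast with the raw observation approach, the classical ionosphere-free combination can be sketched for GPS L1/L2 code observations (synthetic ranges and delays; only the two carrier frequencies are real constants):

```python
# Classical ionosphere-free combination -- the linear combination that
# the raw observation approach avoids forming.  Observation values
# below are synthetic, for illustration only.
F1, F2 = 1575.42e6, 1227.60e6       # GPS L1/L2 carrier frequencies (Hz)

def iono_free(p1, p2):
    """First-order ionospheric delay scales as 1/f^2, so this
    frequency-weighted difference of two code observations removes it."""
    return (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)

true_range = 22_000_000.0           # metres (synthetic)
iono_l1 = 4.0                       # ionospheric delay on L1 (synthetic)
p1 = true_range + iono_l1
p2 = true_range + iono_l1 * (F1 / F2) ** 2   # delay scales as 1/f^2
```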

  7. Synergistic effects of arsenic trioxide combined with ascorbic acid in human osteosarcoma MG-63 cells: a systems biology analysis.

    PubMed

    Huang, X C; Maimaiti, X Y M; Huang, C W; Zhang, L; Li, Z B; Chen, Z G; Gao, X; Chen, T Y

    2014-01-01

    To further understand the synergistic mechanism of As2O3 and ascorbic acid (AA) in human osteosarcoma MG-63 cells by systems biology analysis, MG-63 cells were treated with As2O3 (1 µmol/L), AA (62.5 µmol/L), or the combination (1 µmol/L As2O3 plus 62.5 µmol/L AA). Dynamic morphological characteristics were recorded by the Cell-IQ system, and growth rates were calculated. An Illumina beadchip assay was used to analyze differentially expressed genes (DEGs) in the different groups. Synergistic effects on DEGs were analyzed by a mixture linear model and a singular value decomposition model. KEGG pathway annotations and GO enrichment analysis were performed to identify the pathways involved in the synergistic effects. We captured 1987 DEGs in combination-treated MG-63 cells. The FAT1 gene was significantly upregulated in all three groups; as an important tumor suppressor analogue, it is a promising drug target. Meanwhile, the HIST1H2BD gene, which has been found to be upregulated in prostatic cancer, was markedly downregulated in both the As2O3 monotherapy group and the combined therapy group. These two genes might play critical roles in the synergistic effects of AA and As2O3, although the exact mechanism needs further investigation. KEGG pathway analysis showed that many DEGs were related to tight junctions, and GO analysis also indicated that DEGs in the combination-treated cells were enriched in the occluding junction, apical junction complex, cell junction, and tight junction categories. AA potentiates the efficacy of As2O3 in MG-63 cells, and systems biology analysis revealed the synergistic effect on the DEGs.

  8. Boxing and mixed martial arts: preliminary traumatic neuromechanical injury risk analyses from laboratory impact dosage data.

    PubMed

    Bartsch, Adam J; Benzel, Edward C; Miele, Vincent J; Morr, Douglas R; Prakash, Vikas

    2012-05-01

    In spite of ample literature pointing to rotational and combined impact dosage as key contributors to head and neck injury, boxing and mixed martial arts (MMA) padding is still designed primarily to reduce linear acceleration of the cranium. The objectives of this study were to quantify preliminary linear and rotational head impact dosage for selected boxing and MMA padding in response to hook punches; to compute theoretical skull, brain, and neck injury risk metrics; and to statistically compare the protective effect of various glove and head padding conditions. An instrumented Hybrid III 50th percentile anthropomorphic test device (ATD) was struck in 54 pendulum impacts replicating hook punches at low (27-29 J) and high (54-58 J) energy. Five padding combinations were examined: unpadded (control), MMA glove-unpadded head, boxing glove-unpadded head, unpadded pendulum-boxing headgear, and boxing glove-boxing headgear. A total of 17 injury risk parameters were measured or calculated. All padding conditions reduced linear impact dosage. Other parameters significantly decreased, significantly increased, or were unaffected depending on padding condition. Of the real-world conditions (MMA glove-bare head, boxing glove-bare head, and boxing glove-headgear), the boxing glove-headgear condition showed the most meaningful reduction in most of the parameters. In equivalent impacts, the MMA glove-bare head condition induced higher rotational dosage than the boxing glove-bare head condition. Finite element analysis indicated a risk of brain strain injury in spite of the significant reduction of linear impact dosage. In the replicated hook punch impacts, all padding conditions reduced linear but not rotational impact dosage. Head and neck dosage theoretically accumulates fastest in MMA and boxing bouts without protective headgear. The boxing glove-headgear condition provided the best overall reduction in impact dosage.
More work is needed to develop improved protective padding to minimize linear and rotational impact dosage and develop next-generation standards for head and neck injury risk.

  9. Features in visual search combine linearly

    PubMed Central

    Pramod, R. T.; Arun, S. P.

    2014-01-01

    Single features such as line orientation and length are known to guide visual search, but relatively little is known about how multiple features combine in search. To address this question, we investigated how search for targets differing in multiple features (intensity, length, orientation) from the distracters is related to searches for targets differing in each of the individual features. We tested race models (based on reaction times) and co-activation models (based on reciprocals of reaction times) for their ability to predict multiple feature searches. Multiple feature searches were best accounted for by a co-activation model in which feature information combined linearly (r = 0.95). This result agrees with the classic finding that these features are separable, i.e., subjective dissimilarity ratings sum linearly. We then replicated the classical finding that the length and width of a rectangle are integral features, in other words, that they combine nonlinearly in visual search. However, to our surprise, upon including aspect ratio as an additional feature, length and width combined linearly, and this model outperformed all other models. Thus, length and width of a rectangle became separable when considered together with aspect ratio. This finding predicts that searches involving shapes with identical aspect ratio should be more difficult than searches where shapes differ in aspect ratio. We confirmed this prediction on a variety of shapes. We conclude that features in visual search co-activate linearly and demonstrate for the first time that aspect ratio is a novel feature that guides visual search. PMID:24715328
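
    The co-activation model's linear combination of reciprocal reaction times can be sketched on synthetic data (the weights and rates below are invented, not the study's measurements):

```python
import numpy as np

# Co-activation sketch: reciprocal reaction times (search rates) for
# single-feature searches combine linearly to give the rate of the
# corresponding multiple-feature search.  Data are synthetic.
rng = np.random.default_rng(42)
rate_len = rng.uniform(0.5, 2.0, 40)      # 1/RT, length-only searches
rate_ori = rng.uniform(0.5, 2.0, 40)      # 1/RT, orientation-only
w_len, w_ori = 0.6, 0.4                   # hypothetical feature weights
rate_both = (w_len * rate_len + w_ori * rate_ori
             + 0.01 * rng.standard_normal(40))

# Recover the weights by least squares on the reciprocal RTs.
A = np.column_stack([rate_len, rate_ori])
w_hat, *_ = np.linalg.lstsq(A, rate_both, rcond=None)
r = np.corrcoef(A @ w_hat, rate_both)[0, 1]
```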

  10. Computational Modelling and Optimal Control of Ebola Virus Disease with non-Linear Incidence Rate

    NASA Astrophysics Data System (ADS)

    Takaidza, I.; Makinde, O. D.; Okosun, O. K.

    2017-03-01

    The 2014 Ebola outbreak in West Africa has exposed the need to connect modellers with those holding relevant data, as pivotal to better understanding how the disease spreads and to quantifying the effects of possible interventions. In this paper, we model and analyse Ebola virus disease with a non-linear incidence rate. The epidemic model created is used to describe how the Ebola virus could potentially evolve in a population. We perform an uncertainty analysis of the basic reproductive number R0 to quantify its sensitivity to other disease-related parameters. We also analyse the sensitivity of the final epidemic size to the time-dependent control interventions (education, vaccination, quarantine and safe handling) and provide the most cost-effective combination of the interventions.
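
    The dependence of the final epidemic size on R0 can be illustrated with a minimal SIR system using a saturated non-linear incidence term (all parameter values are assumptions, not the paper's calibration):

```python
# Minimal SIR sketch with the saturated non-linear incidence
# beta*S*I/(1 + alpha*I); every constant below is an illustrative
# assumption, not the paper's Ebola model.
def final_size(beta, gamma=0.1, alpha=0.5, i0=1e-4, dt=0.1, steps=20000):
    s, i, r = 1.0 - i0, i0, 0.0
    for _ in range(steps):                       # forward Euler
        inc = beta * s * i / (1.0 + alpha * i)   # non-linear incidence
        s, i, r = s - dt * inc, i + dt * (inc - gamma * i), r + dt * gamma * i
    return r                                     # cumulative epidemic size

# Near the disease-free state the incidence is ~beta*S*I, so R0 = beta/gamma.
# Interventions that push R0 below 1 collapse the final size.
size_uncontrolled = final_size(0.25)   # R0 = 2.5
size_controlled = final_size(0.09)     # R0 = 0.9
```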

  11. Bayesian analysis of non-linear differential equation models with application to a gut microbial ecosystem.

    PubMed

    Lawson, Daniel J; Holtrop, Grietje; Flint, Harry

    2011-07-01

    Process models specified by non-linear dynamic differential equations contain many parameters, which often must be inferred from a limited amount of data. We discuss a hierarchical Bayesian approach that combines data from multiple related experiments in a meaningful way and permits more powerful inference than treating each experiment as independent. The approach is illustrated with a simulation study and example data from experiments replicating aspects of the human gut microbial ecosystem. A predictive model is obtained that quantifies the prediction uncertainty caused by uncertainty in the parameters, and we extend the model to capture situations of interest that cannot easily be studied experimentally. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Reduction of a linear complex model for respiratory system during Airflow Interruption.

    PubMed

    Jablonski, Ireneusz; Mroczka, Janusz

    2010-01-01

    The paper presents a methodology for reducing a complex model to a simpler, identifiable inverse model. Its main tool is a numerical procedure of sensitivity analysis (structural and parametric) applied to the forward linear equivalent designed for the conditions of the interrupter experiment. The final result, a reduced analog for the interrupter technique, is especially noteworthy as it fills a major gap in occlusional measurements, which typically use simple one- or two-element physical representations. The proposed reduced electrical circuit, a structural combination of resistive, inertial, and elastic properties, is a candidate for reliable reconstruction and quantification (in the time and frequency domains) of the dynamical behavior of the respiratory system in response to a quasi-step excitation by valve closure.

  13. Wavelength selection in injection-driven Hele-Shaw flows: A maximum amplitude criterion

    NASA Astrophysics Data System (ADS)

    Dias, Eduardo; Miranda, Jose

    2013-11-01

    As in most interfacial flow problems, the standard theoretical procedure to establish wavelength selection in the viscous fingering instability is to maximize the linear growth rate. However, there are important discrepancies between previous theoretical predictions and existing experimental data. In this work we perform a linear stability analysis of the radial Hele-Shaw flow system that takes into account the combined action of viscous normal stresses and wetting effects. Most importantly, we introduce an alternative selection criterion for which the selected wavelength is determined by the maximum of the interfacial perturbation amplitude. The effectiveness of such a criterion is substantiated by the significantly improved agreement between theory and experiments. We thank CNPq (Brazilian Sponsor) for financial support.
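
    The distinction between the two selection criteria can be illustrated with a toy time-dependent dispersion relation (its form and constants are invented, not the paper's Hele-Shaw expressions): the wavenumber with the largest instantaneous growth rate need not have the largest accumulated amplitude.

```python
import numpy as np

# Toy dispersion relation lam(k, t) for a perturbation on a growing
# interface of radius R(t); functional form and constants are assumed.
t = np.linspace(0.1, 1.0, 500)
R = 1.0 + 5.0 * t                                   # expanding interface
k = np.arange(2, 40, dtype=float)
lam = k[:, None] / R - 0.01 * k[:, None] ** 3 / R ** 3

# Criterion 1 (classical): maximize the instantaneous growth rate.
k_rate = k[np.argmax(lam[:, -1])]

# Criterion 2 (amplitude): since a(k, t) = a0 * exp(int lam dt),
# maximize the accumulated log-amplitude instead.
log_amp = ((lam[:, 1:] + lam[:, :-1]) / 2 * np.diff(t)).sum(axis=1)
k_amp = k[np.argmax(log_amp)]
```

Because the growth rate keeps shifting toward higher wavenumbers as the interface expands, the amplitude criterion selects a longer wavelength than the growth-rate criterion does.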

  14. Graph-based linear scaling electronic structure theory.

    PubMed

    Niklasson, Anders M N; Mniszewski, Susan M; Negre, Christian F A; Cawkwell, Marc J; Swart, Pieter J; Mohd-Yusof, Jamal; Germann, Timothy C; Wall, Michael E; Bock, Nicolas; Rubensson, Emanuel H; Djidjev, Hristo

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  15. Graph-based linear scaling electronic structure theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niklasson, Anders M. N., E-mail: amn@lanl.gov; Negre, Christian F. A.; Cawkwell, Marc J.

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  16. Accuracy of 1H magnetic resonance spectroscopy for quantification of 2-hydroxyglutarate using linear combination and J-difference editing at 9.4T.

    PubMed

    Neuberger, Ulf; Kickingereder, Philipp; Helluy, Xavier; Fischer, Manuel; Bendszus, Martin; Heiland, Sabine

    2017-12-01

    Non-invasive detection of 2-hydroxyglutarate (2HG) by magnetic resonance spectroscopy is attractive since 2HG is related to tumor metabolism. Here, we compare the accuracy of 2HG detection in a controlled phantom setting via widely used localized spectroscopy sequences quantified by linear combination of metabolite signals vs. a more complex approach applying a J-difference editing technique at 9.4T. Different phantoms, comprising a concentration series of 2HG and overlapping brain metabolites, were measured with an optimized point-resolved spectroscopy (PRESS) sequence and an in-house developed J-difference editing sequence. The acquired spectra were post-processed with LCModel and a simulated metabolite set (PRESS) or with a quantification formula for J-difference editing. Linear regression analysis demonstrated a high correlation of true 2HG values with those measured with the PRESS method (adjusted R-squared: 0.700, p<0.001) as well as with those measured with the J-difference editing method (adjusted R-squared: 0.908, p<0.001). The regression model for the J-difference editing method, however, had a significantly higher explanatory value than the regression model for the PRESS method (p<0.0001). Moreover, with J-difference editing 2HG was discernible down to 1 mM, whereas with the PRESS method 2HG values were not discernible below 2 mM and showed higher systematic errors, particularly in phantoms with high concentrations of N-acetyl-aspartate (NAA) and glutamate (Glu). In summary, quantification of 2HG by linear combination of metabolite signals shows high systematic errors, particularly at low 2HG concentrations and high concentrations of confounding metabolites such as NAA and Glu. In contrast, J-difference editing offers a more accurate quantification even at low 2HG concentrations, which outweighs the downsides of longer measurement time and more complex post-processing. Copyright © 2017. Published by Elsevier GmbH.
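
    The linear-combination quantification step can be sketched in miniature: fit a measured spectrum as a non-negative weighted sum of basis spectra (the Gaussian toy peaks below are assumptions, not LCModel's basis set or algorithm):

```python
import numpy as np
from scipy.optimize import nnls

# Toy linear-combination fit: three synthetic metabolite basis spectra
# with overlapping resonances; peak positions and shapes are invented.
ppm = np.linspace(0, 5, 500)

def peak(center, width=0.08):
    return np.exp(-0.5 * ((ppm - center) / width) ** 2)

basis = np.column_stack([
    peak(2.01),                 # NAA-like singlet (assumed shape)
    peak(2.35),                 # Glu-like resonance (assumed shape)
    peak(2.25) + peak(4.02),    # 2HG-like multiplet (assumed shape)
])
true_conc = np.array([10.0, 8.0, 1.0])   # low 2HG, heavily overlapped
rng = np.random.default_rng(3)
spectrum = basis @ true_conc + 0.05 * rng.standard_normal(ppm.size)

# Non-negative least squares: the linear combination of basis spectra
# that best reproduces the measured spectrum.
conc_hat, _ = nnls(basis, spectrum)
```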

  17. Combined non-parametric and parametric approach for identification of time-variant systems

    NASA Astrophysics Data System (ADS)

    Dziedziech, Kajetan; Czop, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz

    2018-03-01

    Identification of systems, structures and machines with variable physical parameters is a challenging task especially when time-varying vibration modes are involved. The paper proposes a new combined, two-step - i.e. non-parametric and parametric - modelling approach in order to determine time-varying vibration modes based on input-output measurements. Single-degree-of-freedom (SDOF) vibration modes from multi-degree-of-freedom (MDOF) non-parametric system representation are extracted in the first step with the use of time-frequency wavelet-based filters. The second step involves time-varying parametric representation of extracted modes with the use of recursive linear autoregressive-moving-average with exogenous inputs (ARMAX) models. The combined approach is demonstrated using system identification analysis based on the experimental mass-varying MDOF frame-like structure subjected to random excitation. The results show that the proposed combined method correctly captures the dynamics of the analysed structure, using minimum a priori information on the model.

  18. Analytical solution of Luedeking-Piret equation for a batch fermentation obeying Monod growth kinetics.

    PubMed

    Garnier, Alain; Gaillet, Bruno

    2015-12-01

    Few mathematical models of fermentation allow analytical solutions of batch process dynamics. The most widely used is the combination of logistic microbial growth kinetics with the Luedeking-Piret bioproduct synthesis relation. However, the logistic equation is principally based on formalistic similarities and only fits a limited range of fermentation types. In this article, we have developed an analytical solution for the combination of Monod growth kinetics with the Luedeking-Piret relation, which can be identified by linear regression and used to simulate batch fermentation evolution. Two classical examples are used to show the quality of fit and the simplicity of the proposed method. A solution for the Haldane substrate-inhibited growth model combined with the Luedeking-Piret relation is also provided. These models could prove useful for the analysis of fermentation data in industry as well as academia. © 2015 Wiley Periodicals, Inc.
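
    The linear-regression identification of the Luedeking-Piret parameters can be sketched as follows: integrating dP/dt = alpha*dX/dt + beta*X gives P(t) - P(0) = alpha*(X(t) - X(0)) + beta*Int(X dt), which is linear in alpha and beta (the biomass curve below is synthetic and noise-free for clarity):

```python
import numpy as np

# Synthetic logistic-like biomass curve X(t) and a product curve P(t)
# generated from assumed Luedeking-Piret parameters alpha, beta.
t = np.linspace(0.0, 10.0, 200)
X = 5.0 / (1.0 + 49.0 * np.exp(-1.2 * t))          # biomass (synthetic)
alpha, beta = 0.4, 0.05                            # assumed parameters

# Cumulative trapezoidal integral of X, needed as the second regressor.
cumX = np.concatenate([[0.0],
                       np.cumsum((X[1:] + X[:-1]) / 2 * np.diff(t))])
P = alpha * (X - X[0]) + beta * cumX               # product curve

# Identify alpha (growth-associated) and beta (non-growth-associated)
# by ordinary least squares.
A = np.column_stack([X - X[0], cumX])
coef, *_ = np.linalg.lstsq(A, P, rcond=None)
```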

  19. The Model Analyst’s Toolkit: Scientific Model Development, Analysis, and Validation

    DTIC Science & Technology

    2015-08-20

    way correlations. For instance, if crime waves are associated with increases in unemployment or drops in police presence, that would be hard to... time lag, a_i, b_j are parameters in a linear combination, ε_1, ε_2 are error terms, and... selecting a proper representation for the underlying data. A qualitative comparison of GC and DTW methods on World Bank data indicates that both methods

  20. Parametrically excited multidegree-of-freedom systems with repeated frequencies

    NASA Astrophysics Data System (ADS)

    Nayfeh, A. H.

    1983-05-01

    An analysis is presented of the linear response of multidegree-of-freedom systems with a repeated frequency of order three to a harmonic parametric excitation. The method of multiple scales is used to determine the modulation of the amplitudes and phases for two cases: fundamental resonance of the modes with the repeated frequency and combination resonance involving these modes and another mode. Conditions are then derived for determining the stability of the motion.

  1. Caribou distribution during the post-calving period in relation to infrastructure in the Prudhoe Bay oil field, Alaska

    USGS Publications Warehouse

    Cronin, Matthew A.; Amstrup, Steven C.; Durner, George M.; Noel, Lynn E.; McDonald, Trent L.; Ballard, Warren B.

    1998-01-01

    There is concern that caribou (Rangifer tarandus) may avoid roads and facilities (i.e., infrastructure) in the Prudhoe Bay oil field (PBOF) in northern Alaska, and that this avoidance can have negative effects on the animals. We quantified the relationship between caribou distribution and PBOF infrastructure during the post-calving period (mid-June to mid-August) with aerial surveys from 1990 to 1995. We conducted four to eight surveys per year with complete coverage of the PBOF. We identified active oil field infrastructure and used a geographic information system (GIS) to construct ten 1 km wide concentric intervals surrounding the infrastructure. We tested whether caribou distribution was related to distance from infrastructure with a chi-squared habitat utilization-availability analysis and log-linear regression. We considered bulls, calves, and total caribou of all sex/age classes separately. The habitat utilization-availability analysis indicated no consistent trend of attraction to or avoidance of infrastructure. Caribou frequently were more abundant than expected in the intervals close to infrastructure, and this trend was more pronounced for bulls and for total caribou of all sex/age classes than for calves. Log-linear regression (with Poisson error structure) of caribou numbers against distance from infrastructure was also performed, with and without combining the data into the 1 km distance intervals. The analysis without intervals revealed no relationship between caribou distribution and distance from oil field infrastructure, or between caribou distribution and Julian date, year, or distance from the Beaufort Sea coast. The log-linear regression with caribou combined into distance intervals showed that the density of bulls and of total caribou of all sex/age classes declined with distance from infrastructure.
Our results indicate that during the post-calving period: 1) caribou distribution is largely unrelated to distance from infrastructure; 2) caribou regularly use habitats in the PBOF; 3) caribou often occur close to infrastructure; and 4) caribou do not appear to avoid oil field infrastructure.
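
    The log-linear (Poisson) regression step described above can be sketched on synthetic counts via iteratively reweighted least squares (the coefficients and interval layout are illustrative assumptions):

```python
import numpy as np

# Log-linear regression with Poisson error structure: counts per
# distance interval, log(E[count]) = b0 + b1 * distance.  The counts
# below are synthetic, not the caribou survey data.
rng = np.random.default_rng(7)
dist = np.repeat(np.arange(1.0, 11.0), 6)           # ten 1-km intervals
counts = rng.poisson(np.exp(3.0 - 0.15 * dist))     # declining density

X = np.column_stack([np.ones_like(dist), dist])
b, *_ = np.linalg.lstsq(X, np.log(counts + 1.0), rcond=None)  # init
for _ in range(25):                                 # IRLS for Poisson GLM
    mu = np.exp(X @ b)
    z = X @ b + (counts - mu) / mu                  # working response
    W = mu                                          # working weights
    b = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
```

A negative fitted slope b[1] corresponds to density declining with distance from infrastructure, the pattern reported for bulls above.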

  2. Evaluation of aircraft microwave data for locating zones for well stimulation and enhanced gas recovery. [Arkansas Arkoma Basin

    NASA Technical Reports Server (NTRS)

    Macdonald, H.; Waite, W.; Elachi, C.; Babcock, R.; Konig, R.; Gattis, J.; Borengasser, M.; Tolman, D.

    1980-01-01

    Imaging radar was evaluated as an adjunct to conventional petroleum exploration techniques, especially linear-feature mapping. Linear features were mapped from several remote sensor data sources, including stereo photography, enhanced LANDSAT imagery, SLAR radar imagery, enhanced SAR radar imagery, and SAR radar/LANDSAT combinations. Linear feature maps were compared with surface joint data, subsurface and geophysical data, and gas production in the Arkansas part of the Arkoma basin. The best enhanced LANDSAT product for linear detection was found to be a winter scene, band 7, with a uniform distribution stretch. Of the individual SAR data products, the VH (cross-polarized) SAR radar mosaic allows detection of the most linears; however, none of the SAR enhancements is significantly better than the others. Radar/LANDSAT merges may provide better linear detection than a single-sensor mapping mode, but because of operator variability the results are inconclusive. Radar/LANDSAT combinations appear promising as an optimum linear mapping technique if the advantages and disadvantages of each remote sensor are considered.

  3. Genomic prediction based on data from three layer lines using non-linear regression models.

    PubMed

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between the assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best an accuracy similar to that of linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data.
This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
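
    The linear-vs-RBF contrast can be sketched with kernel ridge regression on synthetic 0/1/2 genotypes (an illustration of the idea, not the layer-line data or the exact GBLUP/kernel models used):

```python
import numpy as np

# Synthetic genotypes (0/1/2 allele counts) with a purely additive
# trait; sample sizes, effect sizes, and kernels are assumptions.
rng = np.random.default_rng(11)
G = rng.integers(0, 3, size=(300, 50)).astype(float)
effects = 0.3 * rng.standard_normal(50)
y = G @ effects + 0.5 * rng.standard_normal(300)
train, test = slice(0, 250), slice(250, 300)

def krr_predict(K_tt, K_st, y_tr, lam=1.0):
    """Kernel ridge prediction: alpha = (K + lam*I)^-1 y."""
    alpha = np.linalg.solve(K_tt + lam * np.eye(len(y_tr)), y_tr)
    return K_st @ alpha

K_lin = G @ G.T / G.shape[1]                         # linear (GBLUP-like)
d2 = ((G[:, None, :] - G[None, :, :]) ** 2).sum(-1)  # squared distances
K_rbf = np.exp(-d2 / G.shape[1])                     # RBF kernel

pred_lin = krr_predict(K_lin[train, train], K_lin[test, train], y[train])
pred_rbf = krr_predict(K_rbf[train, train], K_rbf[test, train], y[train])
r_lin = np.corrcoef(pred_lin, y[test])[0, 1]
r_rbf = np.corrcoef(pred_rbf, y[test])[0, 1]
```

On a purely additive trait like this, both kernels predict well, echoing the study's finding that linear and RBF models perform very similarly.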

  4. Use of phenyl/tetrazolyl-functionalized magnetic microspheres and stable isotope labeled internal standards for significant reduction of matrix effect in determination of nine fluoroquinolones by liquid chromatography-quadrupole linear ion trap mass spectrometry.

    PubMed

    Xu, Fei; Liu, Feng; Wang, Chaozhan; Wei, Yinmao

    2018-02-01

    In this study, the strategy of a unique adsorbent combined with stable isotope labeled internal standards was used to significantly reduce the matrix effect in the enrichment and analysis of nine fluoroquinolones in a complex sample by liquid chromatography coupled to quadrupole linear ion trap mass spectrometry (LC-QqQLIT-MS/MS). The adsorbent was prepared conveniently by functionalizing Fe3O4@SiO2 microspheres with phenyl and tetrazolyl groups, which could adsorb fluoroquinolones selectively via hydrophobic, electrostatic, and π-π interactions. The established magnetic solid-phase extraction (MSPE) method, together with the stable isotope labeled internal standards in the subsequent MS/MS detection, was able to reduce the matrix effect significantly. In the LC-QqQLIT-MS/MS analysis, the precursor and product ions of the analytes were monitored quantitatively and qualitatively on a QTrap system equipped simultaneously with multiple reaction monitoring (MRM) and enhanced product ion (EPI) scans. The enrichment method combined with LC-QqQLIT-MS/MS demonstrated good analytical features in terms of linearity (7.5-100.0 ng mL-1, r > 0.9960), satisfactory recoveries (88.6%-118.3%) with RSDs < 12.0%, and LODs of 0.5 μg kg-1 and LOQs of 1.5 μg kg-1 for all tested analytes. Finally, the developed MSPE-LC-QqQLIT-MS/MS method was successfully applied to real pork samples for food-safety risk monitoring in Ningxia Province, China. Graphical abstract: Mechanism of reducing the matrix effect through the as-prepared adsorbent.

  5. Thermal Rayleigh-Marangoni convection in a three-layer liquid-metal-battery model.

    PubMed

    Köllner, Thomas; Boeck, Thomas; Schumacher, Jörg

    2017-05-01

    The combined effects of buoyancy-driven Rayleigh-Bénard convection (RC) and surface tension-driven Marangoni convection (MC) are studied in a triple-layer configuration which serves as a simplified model for a liquid metal battery (LMB). The three-layer model consists of a liquid metal alloy cathode, a molten salt separation layer, and a liquid metal anode at the top. Convection is triggered by the temperature gradient between the hot electrolyte and the colder electrodes, which is a consequence of the release of resistive heat during operation. We present a linear stability analysis of the state of pure thermal conduction in combination with three-dimensional direct numerical simulations of the nonlinear turbulent evolution on the basis of a pseudospectral method. Five different modes of convection are identified in the configuration, which are partly coupled to each other: RC in the upper electrode, RC with internal heating in the molten salt layer, and MC at both interfaces between molten salt and electrode as well as anticonvection in the middle layer and lower electrode. The linear stability analysis confirms that the additional Marangoni effect in the present setup increases the growth rates of the linearly unstable modes, i.e., Marangoni and Rayleigh-Bénard instability act together in the molten salt layer. The critical Grashof and Marangoni numbers decrease with increasing middle layer thickness. The calculated thresholds for the onset of convection are found for realistic current densities of laboratory-sized LMBs. The global turbulent heat transfer follows scaling predictions for internally heated RC. The global turbulent momentum transfer is comparable with turbulent convection in the classical Rayleigh-Bénard case. In summary, our studies show that incorporating Marangoni effects generates smaller flow structures, alters the velocity magnitudes, and enhances the turbulent heat transfer across the triple-layer configuration.

  6. Thermal Rayleigh-Marangoni convection in a three-layer liquid-metal-battery model

    NASA Astrophysics Data System (ADS)

    Köllner, Thomas; Boeck, Thomas; Schumacher, Jörg

    2017-05-01

    The combined effects of buoyancy-driven Rayleigh-Bénard convection (RC) and surface tension-driven Marangoni convection (MC) are studied in a triple-layer configuration which serves as a simplified model for a liquid metal battery (LMB). The three-layer model consists of a liquid metal alloy cathode, a molten salt separation layer, and a liquid metal anode at the top. Convection is triggered by the temperature gradient between the hot electrolyte and the colder electrodes, which is a consequence of the release of resistive heat during operation. We present a linear stability analysis of the state of pure thermal conduction in combination with three-dimensional direct numerical simulations of the nonlinear turbulent evolution on the basis of a pseudospectral method. Five different modes of convection are identified in the configuration, which are partly coupled to each other: RC in the upper electrode, RC with internal heating in the molten salt layer, and MC at both interfaces between molten salt and electrode as well as anticonvection in the middle layer and lower electrode. The linear stability analysis confirms that the additional Marangoni effect in the present setup increases the growth rates of the linearly unstable modes, i.e., Marangoni and Rayleigh-Bénard instability act together in the molten salt layer. The critical Grashof and Marangoni numbers decrease with increasing middle layer thickness. The calculated thresholds for the onset of convection are found for realistic current densities of laboratory-sized LMBs. The global turbulent heat transfer follows scaling predictions for internally heated RC. The global turbulent momentum transfer is comparable with turbulent convection in the classical Rayleigh-Bénard case. In summary, our studies show that incorporating Marangoni effects generates smaller flow structures, alters the velocity magnitudes, and enhances the turbulent heat transfer across the triple-layer configuration.

  7. Reagent for Evaluating Liquid Chromatography-Tandem Mass Spectrometry (LC-MS/MS) Performance in Bottom-Up Proteomic Experiments.

    PubMed

    Beri, Joshua; Rosenblatt, Michael M; Strauss, Ethan; Urh, Marjeta; Bereman, Michael S

    2015-12-01

We present a novel proteomic standard for assessing liquid chromatography-tandem mass spectrometry (LC-MS/MS) instrument performance, in terms of chromatographic reproducibility and dynamic range, within a single LC-MS/MS injection. The peptide mixture standard consists of six peptides that were specifically synthesized to cover a wide range of hydrophobicities (grand average hydropathy (GRAVY) scores of -0.6 to 1.9). Combinations of stable-isotope-labeled amino acids ((13)C and (15)N) were incorporated to create five isotopologues of each peptide. Combined at different ratios, these isotopologues span four orders of magnitude within each distinct peptide sequence: each isotopologue, from lightest to heaviest, increases in abundance by a factor of 10. We evaluate several metrics on our quadrupole orbitrap instrument using the 6 × 5 LC-MS/MS reference mixture spiked into a complex lysate background as a function of dynamic range, including mass measurement accuracy (MMA) and the linear range of quantitation of MS1 and parallel reaction monitoring (PRM) experiments. Detection and linearity of the instrument routinely spanned three orders of magnitude across the gradient (500 fmol to 0.5 fmol on column), and no systematic trend was observed for MMA of targeted peptides as a function of abundance by analysis of variance (p = 0.17). Detection and linearity of the fifth isotopologue (i.e., 0.05 fmol on column) were dependent on the peptide and instrument scan type (MS1 vs PRM). We foresee that this standard will serve as a powerful tool for intra-instrument performance monitoring/evaluation, technology development, and inter-instrument comparisons.
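The linearity assessment described above (instrument response across several orders of magnitude) is commonly checked by regression on log-transformed intensities; a minimal sketch with entirely synthetic data (the response factor and 2% noise level are assumptions, not values from the study):

```python
import numpy as np

def loglog_linearity(spiked_fmol, measured_intensity):
    """Fit log10(intensity) vs log10(amount); a slope near 1 and high R^2
    indicate a linear instrument response across the tested range."""
    x = np.log10(spiked_fmol)
    y = np.log10(measured_intensity)
    slope, intercept = np.polyfit(x, y, 1)
    y_hat = slope * x + intercept
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return slope, r2

# Synthetic 10-fold dilution series (0.5 to 500 fmol on column)
amounts = np.array([0.5, 5.0, 50.0, 500.0])
intensities = 2.0e4 * amounts * (1 + 0.02 * np.array([1, -1, 1, -1]))  # ~2% noise
slope, r2 = loglog_linearity(amounts, intensities)
```

A slope close to 1 over the full series corresponds to the "three orders of magnitude" linear range reported in the abstract.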

  8. Findings regarding the relationships between sociodemographic, psychological, comorbidity factors, and functional status, in geriatric inpatients.

    PubMed

    Capisizu, Ana; Aurelian, Sorina; Zamfirescu, Andreea; Omer, Ioana; Haras, Monica; Ciobotaru, Camelia; Onose, Liliana; Spircu, Tiberiu; Onose, Gelu

    2015-01-01

To assess the impact of socio-demographic and comorbidity factors, and of quantified depressive symptoms, on disability in inpatients. Observational cross-sectional study including 80 elderly patients (16 men, 64 women; mean age 72.48 years; standard deviation 9.95 years) admitted to the Geriatrics Clinic of "St. Luca" Hospital, Bucharest, between May and July 2012. We used the Functional Independence Measure (FIM), the Geriatric Depression Scale (GDS) and an array of socio-demographic and poly-pathology parameters. Statistical analysis included Wilcoxon and Kruskal-Wallis tests for ordinal variables, linear bivariate correlations, general linear model analysis, and ANOVA. FIM scores were negatively correlated with age (R = -0.301; 95% CI: -0.439 to -0.163; p = 0.007); GDS scores had a statistically significant negative correlation with FIM scores (R = -0.322; 95% CI: -0.324 to -0.052; p = 0.004). A general linear model including other variables (gender, age, provenance, matrimonial state, living conditions, education, and number of chronic illnesses) as factors found living conditions (p = 0.027) and the combination of matrimonial state and gender (p = 0.004) to significantly influence FIM scores. ANOVA showed significant differences in FIM scores stratified by the number of chronic diseases (p = 0.035). Our study confirmed the negative impact of depression on functional status; interestingly, education had no influence on FIM scores; living conditions and a combination of matrimonial state and gender had an important impact: patients with living spouses showed better functional scores than the divorced or widowed; the number of chronic diseases also affected FIM scores, which were lower in patients with significant polypathology. These findings should be considered when designing geriatric rehabilitation programs, especially for home care, including skilled nursing care.

  9. Is strength-training frequency a key factor to develop performance adaptations in young elite soccer players?

    PubMed

    Otero-Esquina, Carlos; de Hoyo Lora, Moisés; Gonzalo-Skok, Óliver; Domínguez-Cobo, Sergio; Sánchez, Hugo

    2017-11-01

The aim of this study was to analyse the effects of a combined strength-training programme (full-back squat, YoYo™ leg curl, plyometrics and sled towing exercises) on performance in elite young soccer players, and to examine the effects when this training programme was performed one or two days per week. Thirty-six male soccer players (U-17 to U-19) were recruited and assigned to experimental groups (EXP1: one session per week; EXP2: two sessions per week) or a control group (CON). Performance was assessed through a countermovement jump (CMJ) test (relative peak power [CMJPP] and CMJ height [CMJH]), a 20-m linear sprint test with split times at 10 m, and a change-of-direction (COD) test (V-cut test) 1 week before starting the training programme and 1 week after completing it. Within-group analysis showed substantial improvements in CMJ variables (ES: 0.39-0.81) and COD (ES: 0.70 and 0.76) in EXP1 and EXP2, while EXP2 also showed substantial enhancements in all linear sprinting tests (ES: 0.43-0.52). Between-group analysis showed substantially greater improvements in CMJ variables (ES: 0.39-0.68) in the experimental groups in comparison to CON. Furthermore, EXP2 achieved substantially better performance in the 20-m sprint (ES: 0.48-0.64) than EXP1 and CON. Finally, EXP2 also showed greater enhancements in the 10-m sprint (ES: 0.50) and V-cut test (ES: 0.52) than EXP1. In conclusion, the combined strength-training programme improved jumping ability independently of training frequency, though performing two sessions per week also enhanced sprinting abilities (linear and COD) in young soccer players.

  10. Sperm kinematic, head morphometric and kinetic-morphometric subpopulations in the blue fox (Alopex lagopus).

    PubMed

    Soler, Carles; Contell, Jesús; Bori, Lorena; Sancho, María; García-Molina, Almudena; Valverde, Anthony; Segarvall, Jan

    2017-01-01

This work provides information on the blue fox ejaculated sperm quality needed for seminal dose calculations. Twenty semen samples, obtained by masturbation, were analyzed for kinematic and morphometric parameters by using CASA-Mot and CASA-Morph systems and principal component (PC) analysis. For motility, eight kinematic parameters were evaluated, which were reduced to PC1, related to linear variables, and PC2, related to oscillatory movement. The whole population was divided into three independent subpopulations: SP1, fast cells with linear movement; SP2, slow cells and nonoscillatory motility; and SP3, medium speed cells and oscillatory movement. In almost all cases, the subpopulation distribution by animal was significantly different. Head morphology analysis generated four size and four shape parameters, which were reduced to PC1, related to size, and PC2, related to shape of the cells. Three morphometric subpopulations existed: SP1: large oval cells; SP2: medium size elongated cells; and SP3: small and short cells. The subpopulation distribution differed between animals. Combining the kinematic and morphometric datasets produced PC1, related to morphometric parameters, and PC2, related to kinematics, which generated four sperm subpopulations - SP1: high oscillatory motility, large and short heads; SP2: medium velocity with small and short heads; SP3: slow motion small and elongated cells; and SP4: high linear speed and large elongated cells. Subpopulation distribution was different in all animals. The establishment of sperm subpopulations from kinematic, morphometric, and combined variables not only improves the well-defined fox semen characteristics and offers a good conceptual basis for fertility and sperm preservation techniques in this species, but also opens the door to use this approach in other species, including humans.
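The PC-reduction-then-subpopulation workflow used in this record can be illustrated with a numpy-only PCA on synthetic two-variable kinematic data (the VCL/LIN values below are invented for illustration, not CASA-Mot output):

```python
import numpy as np

def pca(X, n_components=2):
    """Principal component analysis via SVD on mean-centred data.
    Returns the projected scores and the fraction of variance explained."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T
    explained = (S ** 2) / np.sum(S ** 2)
    return scores, explained[:n_components]

rng = np.random.default_rng(0)
# Two synthetic subpopulations: fast/linear vs slow cells.
# Columns are hypothetical [VCL, LIN]-like variables on made-up scales.
fast = rng.normal([120.0, 0.9], [10.0, 0.05], size=(50, 2))
slow = rng.normal([40.0, 0.4], [10.0, 0.05], size=(50, 2))
X = np.vstack([fast, slow])
scores, explained = pca(X)
# PC1 separates the two synthetic subpopulations (sign is arbitrary)
labels = scores[:, 0] > 0
```

In practice the variables would be standardized first (CASA outputs have very different scales), and a clustering step on the PC scores would assign cells to subpopulations.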

  11. Sperm kinematic, head morphometric and kinetic-morphometric subpopulations in the blue fox (Alopex lagopus)

    PubMed Central

    Soler, Carles; Contell, Jesús; Bori, Lorena; Sancho, María; García-Molina, Almudena; Valverde, Anthony; Segarvall, Jan

    2017-01-01

This work provides information on the blue fox ejaculated sperm quality needed for seminal dose calculations. Twenty semen samples, obtained by masturbation, were analyzed for kinematic and morphometric parameters by using CASA-Mot and CASA-Morph systems and principal component (PC) analysis. For motility, eight kinematic parameters were evaluated, which were reduced to PC1, related to linear variables, and PC2, related to oscillatory movement. The whole population was divided into three independent subpopulations: SP1, fast cells with linear movement; SP2, slow cells and nonoscillatory motility; and SP3, medium speed cells and oscillatory movement. In almost all cases, the subpopulation distribution by animal was significantly different. Head morphology analysis generated four size and four shape parameters, which were reduced to PC1, related to size, and PC2, related to shape of the cells. Three morphometric subpopulations existed: SP1: large oval cells; SP2: medium size elongated cells; and SP3: small and short cells. The subpopulation distribution differed between animals. Combining the kinematic and morphometric datasets produced PC1, related to morphometric parameters, and PC2, related to kinematics, which generated four sperm subpopulations – SP1: high oscillatory motility, large and short heads; SP2: medium velocity with small and short heads; SP3: slow motion small and elongated cells; and SP4: high linear speed and large elongated cells. Subpopulation distribution was different in all animals. The establishment of sperm subpopulations from kinematic, morphometric, and combined variables not only improves the well-defined fox semen characteristics and offers a good conceptual basis for fertility and sperm preservation techniques in this species, but also opens the door to use this approach in other species, including humans. PMID:27751987

  12. Directional asymmetry of upper limbs in a medieval population from Poland: A combination of linear and geometric morphometrics.

    PubMed

    Kubicka, Anna Maria; Lubiatowski, Przemysław; Długosz, Jan Dawid; Romanowski, Leszek; Piontek, Janusz

    2016-11-01

Degrees of upper-limb bilateral asymmetry reflect habitual behavior and activity levels throughout life in human populations. The shoulder joint facilitates a wide range of combined motions due to the simultaneous motion of all three bones: clavicle, scapula, and humerus. Accordingly, we used three-dimensional geometric morphometrics to analyze shape differences in the glenoid cavity and linear morphometrics to obtain the degree of directional asymmetry in a medieval population. To calculate directional asymmetry, clavicles, humeri, and scapulae from 100 individuals (50 females, 50 males) were measured. Landmarks and semilandmarks were placed within a three-dimensional reconstruction of the glenoid cavity for analysis of shape differences between sides of the body within sexes. Linear morphometrics showed significant directional asymmetry in both sexes in all bones. Geometric morphometrics revealed significant shape differences of the glenoid cavity between sides of the body in females but not in males. Both indicators of directional asymmetry (%DA and %AA) did not show significant differences between sexes. PLS analysis revealed a significant correlation between glenoid shape and two humeral head diameters only in females on the left side of the body. The studied population, perhaps due to a high level of activity, exhibited slightly greater upper-limb bone bilateral asymmetry than other agricultural populations. Results suggest that the upper limbs were involved in similar activity patterns in both sexes but were characterized by different habitual behaviors. To obtain comprehensive results, studies should be based on sophisticated methods such as geometric morphometrics as well as standard measurements. Am. J. Hum. Biol. 28:817-824, 2016. © 2016 Wiley Periodicals, Inc.

  13. Fatigue Life Methodology for Tapered Composite Flexbeam Laminates

    NASA Technical Reports Server (NTRS)

    Murri, Gretchen B.; OBrien, T. Kevin; Rousseau, Carl Q.

    1997-01-01

The viability of a method for determining the fatigue life of composite rotor hub flexbeam laminates using delamination fatigue characterization data and a geometric non-linear finite element (FE) analysis was studied. Combined tension and bending loading was applied to non-linear tapered flexbeam laminates with internal ply drops. These laminates, consisting of coupon specimens cut from a full-size S2/E7T1 glass-epoxy flexbeam, were tested in a hydraulic load frame under combined axial-tension and transverse cyclic bending. The magnitude of the axial load remained constant and the direction of the load rotated with the specimen as the cyclic bending load was applied. The first delamination damage observed in the specimens occurred at the area around the tip of the outermost ply-drop group. Subsequently, unstable delamination occurred by complete delamination along the length of the specimen. Continued cycling resulted in multiple delaminations. A 2D finite element model of the flexbeam was developed and a geometrically non-linear analysis was performed. The global responses of the model and test specimens agreed very well in terms of the transverse displacement. The FE model was used to calculate strain energy release rates (G) for delaminations initiating at the tip of the outer ply-drop area and growing toward the thick or thin regions of the flexbeam, as was observed in the specimens. The delamination growth toward the thick region was primarily mode II, whereas delamination growth toward the thin region was almost completely mode I. Material characterization data from cyclic double-cantilevered beam tests were used with the peak calculated G values to generate a curve predicting fatigue failure by unstable delamination as a function of the number of loading cycles. The calculated fatigue lives compared well with the test data.

  14. Dual-energy X-ray analysis using synchrotron computed tomography at 35 and 60 keV for the estimation of photon interaction coefficients describing attenuation and energy absorption.

    PubMed

    Midgley, Stewart; Schleich, Nanette

    2015-05-01

    A novel method for dual-energy X-ray analysis (DEXA) is tested using measurements of the X-ray linear attenuation coefficient μ. The key is a mathematical model that describes elemental cross sections using a polynomial in atomic number. The model is combined with the mixture rule to describe μ for materials, using the same polynomial coefficients. Materials are characterized by their electron density Ne and statistical moments Rk describing their distribution of elements, analogous to the concept of effective atomic number. In an experiment with materials of known density and composition, measurements of μ are written as a system of linear simultaneous equations, which is solved for the polynomial coefficients. DEXA itself involves computed tomography (CT) scans at two energies to provide a system of non-linear simultaneous equations that are solved for Ne and the fourth statistical moment R4. Results are presented for phantoms containing dilute salt solutions and for a biological specimen. The experiment identifies 1% systematic errors in the CT measurements, arising from third-harmonic radiation, and 20-30% noise, which is reduced to 3-5% by pre-processing with the median filter and careful choice of reconstruction parameters. DEXA accuracy is quantified for the phantom as the mean absolute differences for Ne and R4: 0.8% and 1.0% for soft tissue and 1.2% and 0.8% for bone-like samples, respectively. The DEXA results for the biological specimen are combined with model coefficients obtained from the tabulations to predict μ and the mass energy absorption coefficient at energies of 10 keV to 20 MeV.
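The abstract's step of writing μ measurements for known materials as a system of linear simultaneous equations and solving for the polynomial coefficients can be sketched as a least-squares problem (all densities, moments and coefficients below are synthetic, not values from the tabulations):

```python
import numpy as np

def fit_cross_section_poly(Ne, Z_moments, mu):
    """Solve mu_i = Ne_i * sum_k c_k * M_ik for the polynomial
    coefficients c_k by linear least squares (one photon energy).
    Ne: electron densities; Z_moments[i, k]: k-th moment of the element
    distribution of material i; mu: measured attenuation coefficients."""
    A = Ne[:, None] * Z_moments
    c, *_ = np.linalg.lstsq(A, mu, rcond=None)
    return c

# Synthetic check: forward-generate mu from known coefficients, recover them
rng = np.random.default_rng(1)
true_c = np.array([2.0, 0.5, 0.03, 0.002])       # hypothetical coefficients
Ne = rng.uniform(3.0, 6.0, size=12)              # arbitrary units
Z = rng.uniform(6.0, 20.0, size=12)              # effective atomic numbers
moments = Z[:, None] ** np.arange(4)             # 1, Z, Z^2, Z^3 per material
mu = Ne * (moments @ true_c)
c_hat = fit_cross_section_poly(Ne, moments, mu)
```

With coefficients fitted at two energies, the DEXA step then inverts the two CT measurements per voxel for Ne and the moment R4, as described above.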

  15. Skeletal height estimation from regression analysis of sternal lengths in a Northwest Indian population of Chandigarh region: a postmortem study.

    PubMed

    Singh, Jagmahender; Pathak, R K; Chavali, Krishnadutt H

    2011-03-20

Skeletal height estimation from regression analysis of eight sternal lengths in subjects from the Chandigarh zone of Northwest India is the topic of this study. Analysis of eight sternal lengths (length of manubrium, length of mesosternum, combined length of manubrium and mesosternum, total sternal length and the first four intercostal lengths of the mesosternum) measured from 252 male and 91 female sternums obtained at postmortems revealed that mean cadaver stature and sternal lengths were greater in North Indians and in males than in South Indians and females. Except for the intercostal lengths, all the sternal lengths were positively correlated with stature of the deceased in both sexes (P < 0.001). Multiple regression analysis of sternal lengths was found more useful than simple linear regression for stature estimation. Using multivariate regression analysis, the combined length of manubrium and mesosternum in both sexes, and the length of manubrium along with the 2nd and 3rd intercostal lengths of the mesosternum in males, were selected as the best estimators of stature. Nonetheless, the stature of males can be predicted with an SEE of 6.66 (R(2) = 0.16, r = 0.318) from the combination MBL+BL_3+LM+BL_2, and in females, from MBL only, with an SEE of 6.65 (R(2) = 0.10, r = 0.318); from the multiple regression analysis of pooled data, stature can be estimated with an SEE of 6.97 (R(2) = 0.387, r = 0.575) from the combination MBL+LM+BL_2+TSL+BL_3. The R(2) and F-ratio were statistically significant for almost all the variables in both sexes, except the 4th intercostal length in males and the 2nd to 4th intercostal lengths in females. The 'major' sternal lengths were more useful than the 'minor' ones for stature estimation. The universal regression analysis used by Kanchan et al. [39], when applied to sternal lengths, gave satisfactory estimates of stature for males only; female stature was comparatively better estimated from simple linear regressions. However, these equations are not proposed for subjects of known sex, as they underestimate male and overestimate female stature. Intercostal lengths were found to be poor estimators of stature (P < 0.05), and sternal lengths in general exhibit weaker correlation coefficients and higher standard errors of estimate. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
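The simple-linear-regression-with-SEE approach used in stature studies like this one can be sketched as follows (the sternal lengths and statures below are invented illustrative values, not data from the study):

```python
import numpy as np

def linear_regression_see(x, y):
    """Ordinary least squares y = a + b*x, returning the intercept, slope,
    standard error of estimate (SEE) and correlation coefficient r —
    the figures of merit reported in stature-estimation studies."""
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    see = np.sqrt(np.sum(resid ** 2) / (len(x) - 2))
    r = np.corrcoef(x, y)[0, 1]
    return a, b, see, r

# Hypothetical data: combined manubrium-mesosternum length (cm) vs stature (cm)
sternum = np.array([14.0, 15.2, 16.1, 16.8, 17.5, 18.3, 19.0, 19.6])
stature = np.array([158.0, 161.0, 165.0, 164.0, 169.0, 171.0, 174.0, 175.0])
a, b, see, r = linear_regression_see(sternum, stature)
```

Multiple regression, as favoured in the abstract, extends this by stacking several sternal lengths as columns of a design matrix and solving the same least-squares problem.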

  16. Diffuse Optical Tomography for Brain Imaging: Continuous Wave Instrumentation and Linear Analysis Methods

    NASA Astrophysics Data System (ADS)

    Giacometti, Paolo; Diamond, Solomon G.

    Diffuse optical tomography (DOT) is a functional brain imaging technique that measures cerebral blood oxygenation and blood volume changes. This technique is particularly useful in human neuroimaging measurements because of the coupling between neural and hemodynamic activity in the brain. DOT is a multichannel imaging extension of near-infrared spectroscopy (NIRS). NIRS uses laser sources and light detectors on the scalp to obtain noninvasive hemodynamic measurements from spectroscopic analysis of the remitted light. This review explains how NIRS data analysis is performed using a combination of the modified Beer-Lambert law (MBLL) and the diffusion approximation to the radiative transport equation (RTE). Laser diodes, photodiode detectors, and optical terminals that contact the scalp are the main components in most NIRS systems. Placing multiple sources and detectors over the surface of the scalp allows for tomographic reconstructions that extend the individual measurements of NIRS into DOT. Mathematically arranging the DOT measurements into a linear system of equations that can be inverted provides a way to obtain tomographic reconstructions of hemodynamics in the brain.
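The MBLL analysis described above amounts to solving a small linear system per measurement: optical-density changes at two wavelengths are inverted for the two chromophore concentration changes. A minimal sketch with placeholder extinction coefficients (illustrative numbers, not tabulated values):

```python
import numpy as np

# Modified Beer-Lambert law: delta_OD = (eps * d * DPF) @ delta_c at each
# wavelength, where d is source-detector distance and DPF the differential
# pathlength factor. The values below are assumptions for illustration.
eps = np.array([[1.5, 3.8],    # "690 nm" row: [HbO2, HbR] coefficients
                [2.9, 1.8]])   # "830 nm" row
d, dpf = 3.0, 6.0              # distance (cm) and DPF, both assumed

def mbll_inverse(delta_od):
    """Recover concentration changes [dHbO2, dHbR] from optical-density
    changes at two wavelengths by solving the 2x2 MBLL system."""
    return np.linalg.solve(eps * d * dpf, delta_od)

# Forward-simulate a hemodynamic change, then invert to verify consistency
true_dc = np.array([0.8e-3, -0.3e-3])          # HbO2 up, HbR down
delta_od = (eps * d * dpf) @ true_dc
recovered = mbll_inverse(delta_od)
```

DOT extends this single-channel inversion to many source-detector pairs, stacking the per-channel equations into the larger linear system that is inverted for a tomographic reconstruction.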

  17. Electrochemical approach for acute myocardial infarction diagnosis based on direct antibodies-free analysis of human blood plasma.

    PubMed

    Suprun, Elena V; Saveliev, Anatoly A; Evtugyn, Gennady A; Lisitsa, Alexander V; Bulko, Tatiana V; Shumyantseva, Victoria V; Archakov, Alexander I

    2012-03-15

A novel direct antibodies-free electrochemical approach for acute myocardial infarction (AMI) diagnosis has been developed. For this purpose, a combination of the electrochemical assay of plasma samples with chemometrics was proposed. Screen printed carbon electrodes modified with didodecyldimethylammonium bromide were used for plasma characterization by cyclic voltammetry (CV) and square wave voltammetry (SWV). It was shown that the cathodic peak in voltammograms at about -250 mV vs. Ag/AgCl can be associated with AMI. In parallel tests, cardiac myoglobin and troponin I, the AMI biomarkers, were determined in each sample by RAMP immunoassay. The applicability of the electrochemical testing for AMI diagnostics was confirmed by statistical methods: generalized linear model (GLM), linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA), artificial neural network (multi-layer perceptron, MLP), and support vector machine (SVM), all of which were created to obtain the "True-False" distribution prediction, where "True" and "False" are, respectively, positive and negative decisions about an illness event. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Estimation of aboveground biomass in Mediterranean forests by statistical modelling of ASTER fraction images

    NASA Astrophysics Data System (ADS)

    Fernández-Manso, O.; Fernández-Manso, A.; Quintano, C.

    2014-09-01

Aboveground biomass (AGB) estimation from optical satellite data is usually based on regression models of original or synthetic bands. To overcome the poor relation between AGB and spectral bands due to mixed pixels when a medium spatial resolution sensor is considered, we propose to base the AGB estimation on fraction images from Linear Spectral Mixture Analysis (LSMA). Our study area is a managed Mediterranean pine woodland (Pinus pinaster Ait.) in central Spain. A total of 1033 circular field plots were used to estimate AGB from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) optical data. We applied Pearson correlation statistics and stepwise multiple regression to identify suitable predictors from the set of variables of original bands, fraction imagery, Normalized Difference Vegetation Index and Tasselled Cap components. Four linear models and one nonlinear model were tested. A linear combination of ASTER band 2 (red, 0.630-0.690 μm), band 8 (shortwave infrared 5, 2.295-2.365 μm) and the green vegetation fraction (from LSMA) was the best AGB predictor (adjusted R² = 0.632; cross-validated root-mean-squared error of 13.3 Mg ha⁻¹, or 37.7%), outperforming other combinations of the independent variables cited above. Results indicated that using ASTER fraction images in regression models improves the AGB estimation in Mediterranean pine forests. The spatial distribution of the estimated AGB, based on a multiple linear regression model, may be used as baseline information for forest managers in future studies, such as quantifying the regional carbon budget, fuel accumulation or monitoring of management practices.
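The LSMA step that produces fraction images can be sketched as least-squares unmixing per pixel with a sum-to-one constraint (the endmember spectra below are invented; the non-negativity constraint used in practice is omitted for brevity):

```python
import numpy as np

def unmix_sum_to_one(E, pixel, weight=1e3):
    """Linear spectral mixture analysis: solve pixel ≈ E @ f for the
    endmember fractions f with a (soft) sum-to-one constraint, via an
    augmented least-squares system. E has shape (bands, endmembers)."""
    A = np.vstack([E, weight * np.ones((1, E.shape[1]))])
    b = np.append(pixel, weight * 1.0)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

# Three hypothetical endmember spectra over four bands:
# columns = [green vegetation, soil, shade] (reflectance values invented)
E = np.array([[0.05, 0.25, 0.02],
              [0.45, 0.30, 0.03],
              [0.30, 0.35, 0.02],
              [0.60, 0.40, 0.04]])
true_f = np.array([0.6, 0.3, 0.1])   # fractions summing to one
pixel = E @ true_f                   # noiseless synthetic mixed pixel
f_hat = unmix_sum_to_one(E, pixel)
```

Applying this per pixel over a scene yields the fraction images (e.g., the green vegetation fraction) that the regression models above use as predictors.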

  19. Development of a computer technique for the prediction of transport aircraft flight profile sonic boom signatures. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Coen, Peter G.

    1991-01-01

    A new computer technique for the analysis of transport aircraft sonic boom signature characteristics was developed. This new technique, based on linear theory methods, combines the previously separate equivalent area and F function development with a signature propagation method using a single geometry description. The new technique was implemented in a stand-alone computer program and was incorporated into an aircraft performance analysis program. Through these implementations, both configuration designers and performance analysts are given new capabilities to rapidly analyze an aircraft's sonic boom characteristics throughout the flight envelope.

  20. Probabilistic finite elements for fatigue and fracture analysis

    NASA Astrophysics Data System (ADS)

    Belytschko, Ted; Liu, Wing Kam

Attention is focused on the development of the Probabilistic Finite Element Method (PFEM), which combines the finite element method with statistics and reliability methods, and on its application to linear and nonlinear structural mechanics problems and fracture mechanics problems. A computational tool based on the Stochastic Boundary Element Method is also given for the reliability analysis of curvilinear fatigue crack growth. The existing PFEMs have been applied to solve two types of problems: (1) determination of the response uncertainty in terms of the means, variances and correlation coefficients; and (2) determination of the probability of failure associated with prescribed limit states.
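The first problem type, propagating input uncertainty to response means and variances, is often illustrated with a first-order second-moment (FOSM) linearization; a toy sketch (the spring response and all numbers are invented for illustration):

```python
import numpy as np

def fosm_variance(grad, cov):
    """First-order second-moment estimate of response variance:
    Var[g(X)] ≈ ∇g^T C ∇g, the linearization used in probabilistic FEM
    to propagate input uncertainty to a response quantity."""
    grad = np.asarray(grad)
    return grad @ cov @ grad

# Toy response: deflection of a linear spring, u = F / k, linearized
# about mean load F = 100 and mean stiffness k = 50 (hypothetical units)
F, k = 100.0, 50.0
grad = np.array([1.0 / k, -F / k**2])   # [du/dF, du/dk] at the means
cov = np.diag([5.0**2, 2.0**2])         # independent F and k uncertainties
var_u = fosm_variance(grad, cov)
```

In PFEM the gradient comes from a sensitivity analysis of the finite element system rather than a closed-form derivative, but the propagation formula is the same.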

  1. Efficient Computation Of Behavior Of Aircraft Tires

    NASA Technical Reports Server (NTRS)

    Tanner, John A.; Noor, Ahmed K.; Andersen, Carl M.

    1989-01-01

NASA technical paper discusses challenging application of computational structural mechanics to numerical simulation of responses of aircraft tires during taxiing, takeoff, and landing. Presents details of three main elements of computational strategy: use of special three-field, mixed-finite-element models; use of operator splitting; and application of technique reducing substantially number of degrees of freedom. Proposed computational strategy applied to two quasi-symmetric problems: linear analysis of anisotropic tires through use of two-dimensional-shell finite elements and nonlinear analysis of orthotropic tires subjected to unsymmetric loading. Three basic types of symmetry, and their combinations, exhibited by response of tire identified.

  2. Probabilistic finite elements for fatigue and fracture analysis

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Liu, Wing Kam

    1992-01-01

Attention is focused on the development of the Probabilistic Finite Element Method (PFEM), which combines the finite element method with statistics and reliability methods, and on its application to linear and nonlinear structural mechanics problems and fracture mechanics problems. A computational tool based on the Stochastic Boundary Element Method is also given for the reliability analysis of curvilinear fatigue crack growth. The existing PFEMs have been applied to solve two types of problems: (1) determination of the response uncertainty in terms of the means, variances and correlation coefficients; and (2) determination of the probability of failure associated with prescribed limit states.

  3. Combining large number of weak biomarkers based on AUC.

    PubMed

    Yan, Li; Tian, Lili; Liu, Song

    2015-12-20

Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.
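The empirical AUC that these combination methods maximize is the rescaled Mann-Whitney statistic; a minimal sketch with two synthetic weak markers combined with equal weights (a naive baseline for illustration, not the paper's pairwise method):

```python
import numpy as np

def auc(cases, controls):
    """Empirical AUC = P(case score > control score) + 0.5 * P(tie),
    i.e. the Mann-Whitney U statistic rescaled to [0, 1]."""
    diff = cases[:, None] - controls[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

rng = np.random.default_rng(42)
n = 200
# Two weak markers: a small mean shift between cases and controls
controls = rng.normal(0.0, 1.0, size=(n, 2))
cases = rng.normal(0.4, 1.0, size=(n, 2))
auc_single = auc(cases[:, 0], controls[:, 0])          # one marker alone
combo = auc(cases.sum(axis=1), controls.sum(axis=1))   # equal-weight combination
```

Optimizing the weights of the linear combination, rather than fixing them equal, is exactly the problem the parametric, non-parametric and pairwise methods discussed above address.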

  4. Combining large number of weak biomarkers based on AUC

    PubMed Central

    Yan, Li; Tian, Lili; Liu, Song

    2018-01-01

Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. PMID:26227901

  5. Magnetic levitation configuration incorporating levitation, guidance and linear synchronous motor

    DOEpatents

    Coffey, H.T.

    1993-10-19

    A propulsion and suspension system for an inductive repulsion type magnetically levitated vehicle which is propelled and suspended by a system which includes propulsion windings forming a linear synchronous motor and conductive guideways, adjacent to the propulsion windings, which combine to partially encircle the vehicle-borne superconducting magnets. A three-phase power source is used with the linear synchronous motor to produce a traveling magnetic wave which, in conjunction with the magnets, propels the vehicle. The conductive guideway combines with the superconducting magnets to provide for vehicle levitation. 3 figures.

  6. Magnetic levitation configuration incorporating levitation, guidance and linear synchronous motor

    DOEpatents

    Coffey, Howard T.

    1993-01-01

    A propulsion and suspension system for an inductive repulsion type magnetically levitated vehicle which is propelled and suspended by a system which includes propulsion windings forming a linear synchronous motor and conductive guideways, adjacent to the propulsion windings, which combine to partially encircle the vehicle-borne superconducting magnets. A three-phase power source is used with the linear synchronous motor to produce a traveling magnetic wave which, in conjunction with the magnets, propels the vehicle. The conductive guideway combines with the superconducting magnets to provide for vehicle levitation.

  7. Shape component analysis: structure-preserving dimension reduction on biological shape spaces.

    PubMed

    Lee, Hao-Chih; Liao, Tao; Zhang, Yongjie Jessica; Yang, Ge

    2016-03-01

    Quantitative shape analysis is required by a wide range of biological studies across diverse scales, ranging from molecules to cells and organisms. In particular, high-throughput and systems-level studies of biological structures and functions have started to produce large volumes of complex high-dimensional shape data. Analysis and understanding of high-dimensional biological shape data require dimension-reduction techniques. We have developed a technique for non-linear dimension reduction of 2D and 3D biological shape representations on their Riemannian spaces. A key feature of this technique is that it preserves distances between different shapes in an embedded low-dimensional shape space. We demonstrate an application of this technique by combining it with non-linear mean-shift clustering on the Riemannian spaces for unsupervised clustering of shapes of cellular organelles and proteins. Source code and data for reproducing the results of this article are freely available at https://github.com/ccdlcmu/shape_component_analysis_Matlab. The implementation was made in MATLAB and is supported on MS Windows, Linux and Mac OS. Contact: geyang@andrew.cmu.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Rapid Analysis of Carbohydrates in Bioprocess Samples: An Evaluation of the CarboPac SA10 for HPAE-PAD Analysis by Interlaboratory Comparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sevcik, R. S.; Hyman, D. A.; Basumallich, L.

    2013-01-01

    A technique for carbohydrate analysis of bioprocess samples has been developed, providing reduced analysis time compared to current practice in the biofuels R&D community. The Thermofisher CarboPac SA10 anion-exchange column enables isocratic separation of monosaccharides, sucrose and cellobiose in approximately 7 minutes. Additionally, use of a low-volume (0.2 mL) injection valve in combination with a high-volume detection cell minimizes the extent of sample dilution required to bring sugar concentrations into the linear range of the pulsed amperometric detector (PAD). Three laboratories, representing academia, industry, and government, participated in an interlaboratory study which analyzed twenty-one opportunistic samples representing biomass pretreatment, enzymatic saccharification, and fermentation samples. The technique's robustness, linearity, and interlaboratory reproducibility were evaluated and showed excellent-to-acceptable characteristics. Additionally, quantitation by the CarboPac SA10/PAD was compared with the current practice method utilizing a HPX-87P/RID. While these two methods showed good agreement, a statistical comparison found significant quantitation differences between them, highlighting the difference between selective and universal detection modes.

  9. Somatotyping using 3D anthropometry: a cluster analysis.

    PubMed

    Olds, Tim; Daniell, Nathan; Petkov, John; David Stewart, Arthur

    2013-01-01

    Somatotyping is the quantification of human body shape, independent of body size. Hitherto, somatotyping (including the most popular method, the Heath-Carter system) has been based on subjective visual ratings, sometimes supported by surface anthropometry. This study used data derived from three-dimensional (3D) whole-body scans as inputs for cluster analysis to objectively derive clusters of similar body shapes. Twenty-nine dimensions normalised for body size were measured on a purposive sample of 301 adults aged 17-56 years who had been scanned using a Vitus Smart laser scanner. K-means Cluster Analysis with v-fold cross-validation was used to determine shape clusters. Three male and three female clusters emerged, and were visualised using those scans closest to the cluster centroid and a caricature defined by doubling the difference between the average scan and the cluster centroid. The male clusters were decidedly endomorphic (high fatness), ectomorphic (high linearity), and endo-mesomorphic (a mixture of fatness and muscularity). The female clusters were clearly endomorphic, ectomorphic, and ecto-mesomorphic (a mixture of linearity and muscularity). An objective shape quantification procedure combining 3D scanning and cluster analysis yielded shape clusters strikingly similar to traditional somatotyping.

  10. Exhaustive Search for Sparse Variable Selection in Linear Regression

    NASA Astrophysics Data System (ADS)

    Igarashi, Yasuhiko; Takenaka, Hikaru; Nakanishi-Ohno, Yoshinori; Uemura, Makoto; Ikeda, Shiro; Okada, Masato

    2018-04-01

    We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search method (AES-K) for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively, assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of exhaustively computing ES-K, various approximate methods for selecting sparse variables can be summarized as a density of states. With this density of states, we can compare different methods for selecting sparse variables, such as relaxation and sampling. For large problems, where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables the density of states to be effectively reconstructed by using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found it difficult to determine K from the data. Using virtual measurement and analysis, we argue that this is caused by data shortage.
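The core ES-K idea, exhaustively scoring every K-sparse subset of explanatory variables, can be sketched as a least-squares search (a simplified version without the density-of-states bookkeeping; names are illustrative):

```python
from itertools import combinations

import numpy as np

def exhaustive_k_sparse(X, y, K):
    """Fit least squares on every K-sparse subset of columns of X and
    return (best_rss, best_subset). A simplified sketch of the ES-K idea;
    the full method records the whole distribution of scores, not just
    the minimum."""
    best_rss, best_subset = float("inf"), None
    for subset in combinations(range(X.shape[1]), K):
        Xs = X[:, subset]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = float(np.sum((y - Xs @ beta) ** 2))
        if rss < best_rss:
            best_rss, best_subset = rss, subset
    return best_rss, best_subset
```

The cost grows as C(p, K), which is why the approximate AES-K variant with replica-exchange Monte Carlo is needed for large p.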

  11. Low-energy nuclear reaction of the 14N+169Tm system: Incomplete fusion

    NASA Astrophysics Data System (ADS)

    Kumar, R.; Sharma, Vijay R.; Yadav, Abhishek; Singh, Pushpendra P.; Agarwal, Avinash; Appannababu, S.; Mukherjee, S.; Singh, B. P.; Ali, R.; Bhowmik, R. K.

    2017-11-01

    Excitation functions of reaction residues produced in the 14N+169Tm system have been measured to high precision at energies above the fusion barrier, ranging from 1.04 VB to 1.30 VB, and analyzed in the framework of the statistical model code pace4. Analysis of α-emitting channels points toward the onset of incomplete fusion even at slightly above-barrier energies, where complete fusion is supposed to be one of the dominant processes. The onset and strength of incomplete fusion have been deduced and studied in terms of various entrance channel parameters. Present results, together with the reanalysis of existing data for various projectile-target combinations, conclusively suggest a strong influence of projectile structure on the onset of incomplete fusion. A strong dependence on the Coulomb effect (ZPZT) has also been observed for the present system and for the different projectile-target combinations available in the literature. It is concluded that the fraction of incomplete fusion increases linearly with ZPZT, being larger for larger ZPZT values, which indicates a significant linear systematics.

  12. Determination of the temperature distribution in a minichannel using ANSYS CFX and a procedure based on the Trefftz functions

    NASA Astrophysics Data System (ADS)

    Maciejewska, Beata; Błasiak, Sławomir; Piasecka, Magdalena

    This work discusses the mathematical model for laminar-flow heat transfer in a minichannel. The boundary conditions in the form of temperature distributions on the outer sides of the channel walls were determined from experimental data. The data were collected from the experimental stand the essential part of which is a vertical minichannel 1.7 mm deep, 16 mm wide and 180 mm long, asymmetrically heated by a Haynes-230 alloy plate. Infrared thermography allowed determining temperature changes on the outer side of the minichannel walls. The problem was analysed numerically through either ANSYS CFX software or special calculation procedures based on the Finite Element Method and Trefftz functions in the thermal boundary layer. The Trefftz functions were used to construct the basis functions. Solutions to the governing differential equations were approximated with a linear combination of Trefftz-type basis functions. Unknown coefficients of the linear combination were calculated by minimising the functional. The results of the comparative analysis were represented in a graphical form and discussed.
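The final step described above, determining the coefficients of a linear combination of Trefftz-type basis functions by minimising a boundary-mismatch functional, reduces to a least-squares problem. A minimal sketch, using harmonic polynomials as a stand-in Trefftz basis for the 2D Laplace equation (not the paper's actual heat-transfer basis):

```python
import numpy as np

# Trefftz-type basis for the 2D Laplace equation: each function is itself
# an exact solution of the governing equation, so only boundary data
# remain to be fitted.
basis = [
    lambda x, y: 1.0,
    lambda x, y: x,
    lambda x, y: y,
    lambda x, y: x * x - y * y,
    lambda x, y: x * y,
]

def fit_linear_combination(points, values):
    # least-squares coefficients minimising the boundary-mismatch functional
    # sum_k (u(p_k) - values_k)^2 over the given boundary points
    A = np.array([[f(x, y) for f in basis] for x, y in points])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(values), rcond=None)
    return coeffs
```

Because every basis function already satisfies the differential equation, the approximation satisfies it too, and only the boundary conditions are enforced approximately.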

  13. Validation of a computer code for analysis of subsonic aerodynamic performance of wings with flaps in combination with a canard or horizontal tail and an application to optimization

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.; Mann, Michael J.

    1990-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of a linearized theory, attached flow method for the estimation and optimization of the longitudinal aerodynamic performance of wing-canard and wing-horizontal tail configurations which may employ simple hinged flap systems. Use of an attached flow method is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. The results indicate that linearized theory, attached flow, computer code methods (modified to include estimated attainable leading-edge thrust and an approximate representation of vortex forces) provide a rational basis for the estimation and optimization of aerodynamic performance at subsonic speeds below the drag rise Mach number. Generally, good prediction of aerodynamic performance, as measured by the suction parameter, can be expected for near optimum combinations of canard or horizontal tail incidence and leading- and trailing-edge flap deflections at a given lift coefficient (conditions which tend to produce a predominantly attached flow).

  14. Multineuron spike train analysis with R-convolution linear combination kernel.

    PubMed

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
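The simplest member of the linear combination family discussed above is a non-negative weighted sum of per-neuron kernels, which remains positive semidefinite because each summand is. A minimal sketch, with a generic Gaussian spike-interaction kernel as a stand-in for a real single-neuron spike train kernel (names and the kernel choice are illustrative, not the paper's):

```python
import math

def single_train_kernel(s, t, tau=1.0):
    # stand-in single-neuron spike train kernel: sum of Gaussian
    # interactions between all pairs of spike times in the two trains
    return sum(math.exp(-((a - b) ** 2) / (2 * tau ** 2)) for a in s for b in t)

def linear_combination_kernel(X, Y, weights):
    # multineuron kernel: non-negative weighted sum of per-neuron kernels,
    # where X[i] and Y[i] are the spike trains of neuron i in each trial
    return sum(w * single_train_kernel(x, y) for w, x, y in zip(weights, X, Y))
```

The weights are the parameters that the paper's subclasses constrain to keep optimization tractable when data are limited.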

  15. Cohesive zone finite element analysis of crack initiation from a butt joint’s interface corner

    DOE PAGES

    Reedy, E. D.

    2014-09-06

    Cohesive zone (CZ) fracture analysis techniques are used to predict the initiation of crack growth from the interface corner of an adhesively bonded butt joint. In this plane strain analysis, a thin linear elastic adhesive layer is sandwiched between rigid adherends. There is no preexisting crack in the problem analyzed, and the focus is on how the shape of the traction–separation (T–U) relationship affects the predicted joint strength. Unlike the case of a preexisting interfacial crack, the calculated results clearly indicate that the predicted joint strength depends on the shape of the T–U relationship. Most of the calculations used a rectangular T–U relationship whose shape (aspect ratio) is defined by two parameters: the interfacial strength σ* and the work of separation/unit area Γ. The principal finding of this study is that for a specified adhesive layer thickness, there are any number of σ*, Γ combinations that generate the same predicted joint strength. For each combination there is a corresponding CZ length. We developed an approximate CZ-like elasticity solution to show how such combinations arise and their connection with the CZ length.

  16. Combined tension and bending testing of tapered composite laminates

    NASA Astrophysics Data System (ADS)

    O'Brien, T. Kevin; Murri, Gretchen B.; Hagemeier, Rick; Rogers, Charles

    1994-11-01

    A simple beam element used at Bell Helicopter was incorporated in the Computational Mechanics Testbed (COMET) finite element code at the Langley Research Center (LaRC) to analyze the response of tapered laminates typical of flexbeams in composite rotor hubs. This beam element incorporated the influence of membrane loads on the flexural response of the tapered laminate configurations modeled and tested in a combined axial tension and bending (ATB) hydraulic load frame designed and built at LaRC. The moments generated from the finite element model were used in a tapered laminated plate theory analysis to estimate axial stresses on the surface of the tapered laminates due to combined bending and tension loads. Surface strains were calculated and compared to surface strains measured using strain gages mounted along the laminate length. The strain distributions correlated reasonably well with the analysis. The analysis was then used to examine the surface strain distribution in a non-linear tapered laminate, where a similarly good correlation was obtained. Results indicate that simple finite element beam models may be used to identify tapered laminate configurations best suited for simulating the response of a composite flexbeam in a full scale rotor hub.

  17. An efficient reliability algorithm for locating design point using the combination of importance sampling concepts and response surface method

    NASA Astrophysics Data System (ADS)

    Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin

    2017-06-01

    Monte Carlo simulation (MCS) is a useful tool for computation of the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm, employing the combination of importance sampling, as a class of MCS, and RSM, is proposed. In the proposed algorithm, the analysis starts with importance sampling concepts, using a presented two-step updating rule for the design point. This part finishes after a small number of samples have been generated. Then RSM starts to work using Bucher's experimental design, with the last design point and a presented effective length as the center point and radius of Bucher's approach, respectively. Through illustrative numerical examples, the simplicity and efficiency of the proposed algorithm and the effectiveness of the presented rules are shown.

  18. Forecast and analysis of the ratio of electric energy to terminal energy consumption for global energy internet

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Zhong, Ming; Cheng, Ling; Jin, Lu; Shen, Si

    2018-02-01

    Against the background of building the global energy internet, forecasting and analysing the ratio of electric energy to terminal energy consumption has both theoretical and practical significance. This paper first analysed the factors influencing the ratio of electric energy to terminal energy and then used a combination method to forecast and analyse the global proportion of electric energy. A cointegration model for the proportion of electric energy was then constructed using influencing factors such as the electricity price index, GDP, economic structure, energy use efficiency and total population. Finally, a prediction map of the proportion of electric energy was obtained using a combination-forecasting model based on the multiple linear regression method, the trend analysis method, and the variance-covariance method. This map describes the development trend of the proportion of electric energy over 2017-2050, and the proportion of electric energy in 2050 is analysed in detail using scenario analysis.
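The variance-covariance method mentioned above assigns each individual forecast a weight that minimises the variance of the combined forecast error. A minimal sketch of the standard minimum-variance formula (a generic illustration, not the paper's specific calibration):

```python
import numpy as np

def min_variance_weights(cov):
    # variance-covariance forecast combination: given the covariance
    # matrix C of the individual forecast errors, the weights
    # w = C^{-1} 1 / (1' C^{-1} 1) minimise the combined error variance
    # subject to the weights summing to one
    cov = np.asarray(cov, dtype=float)
    w = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return w / w.sum()
```

With uncorrelated errors this reduces to inverse-variance weighting: the less reliable forecast simply receives proportionally less weight.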

  19. Non-invasive optical detection of HBV based on serum surface-enhanced Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Zheng, Zuci; Wang, Qiwen; Weng, Cuncheng; Lin, Xueliang; Lin, Yao; Feng, Shangyuan

    2016-10-01

    An optical method based on surface-enhanced Raman spectroscopy (SERS) was developed for non-invasive detection of hepatitis B virus (HBV). Hepatitis B virus surface antigen (HBsAg) is an established serological marker that is routinely used for the diagnosis of acute or chronic hepatitis B virus (HBV) infection. Utilizing SERS to analyze blood serum for detecting HBV has not been reported in previous literature. SERS measurements were performed on two groups of serum samples: one group of 50 HBV patients and the other of 50 healthy volunteers. Blood serum samples were collected from healthy control subjects and patients diagnosed with HBV. Furthermore, principal component analysis (PCA) combined with linear discriminant analysis (LDA) was employed to differentiate HBV patients from healthy volunteers, achieving a sensitivity of 80.0% and a specificity of 74.0%. This exploratory work demonstrates that SERS serum analysis combined with PCA-LDA has tremendous potential for the non-invasive detection of HBV.
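The PCA-LDA pipeline used above can be sketched in a few lines: reduce the spectra with PCA, then find the Fisher discriminant direction in the reduced space. This is a generic two-class implementation under my own naming, not the authors' code:

```python
import numpy as np

def pca_project(X, n_components):
    # PCA via SVD of the centered data matrix; returns the scores,
    # the data mean, and the principal components used for projection
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    comps = Vt[:n_components]
    return (X - mean) @ comps.T, mean, comps

def fisher_direction(Z, labels):
    # two-class LDA in the reduced space: w = Sw^{-1} (mean_1 - mean_0),
    # with Sw the pooled within-class scatter (here, sum of class covariances)
    Z0, Z1 = Z[labels == 0], Z[labels == 1]
    Sw = np.cov(Z0, rowvar=False) + np.cov(Z1, rowvar=False)
    return np.linalg.solve(np.atleast_2d(Sw), Z1.mean(axis=0) - Z0.mean(axis=0))
```

Projecting onto `w` and thresholding the resulting scores gives the classifier whose sensitivity and specificity are reported in the abstract.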

  20. Advanced composites structural concepts and materials technologies for primary aircraft structures: Structural response and failure analysis

    NASA Technical Reports Server (NTRS)

    Dorris, William J.; Hairr, John W.; Huang, Jui-Tien; Ingram, J. Edward; Shah, Bharat M.

    1992-01-01

    Non-linear analysis methods were adapted and incorporated in the finite element based DIAL code. These methods are necessary to evaluate the global response of a stiffened structure under combined in-plane and out-of-plane loading. They include the arc length method and a target point analysis procedure. A new interface material model was implemented that can model elastic-plastic behavior of the bond adhesive. A direct application of this model is in skin/stiffener interface failure assessment. Addition of the AML (angle minus longitudinal or load) failure procedure and Hashin's failure criteria provides added capability in the failure predictions. Interactive Stiffened Panel Analysis modules were developed as interactive pre- and post-processors. Each module provides the means of performing self-initiated finite element based analysis of primary structures such as a flat or curved stiffened panel; a corrugated flat sandwich panel; and a curved geodesic fuselage panel. This module brings finite element analysis into the design of composite structures without requiring the user to know much about the techniques and procedures needed to actually perform a finite element analysis from scratch. An interactive finite element code was developed to predict bolted joint strength considering material and geometrical non-linearity. The developed method conducts an ultimate strength failure analysis using a set of material degradation models.

  1. A Technique of Treating Negative Weights in WENO Schemes

    NASA Technical Reports Server (NTRS)

    Shi, Jing; Hu, Changqing; Shu, Chi-Wang

    2000-01-01

    High order accurate weighted essentially non-oscillatory (WENO) schemes have recently been developed for finite difference and finite volume methods on both structured and unstructured meshes. A key idea in WENO schemes is a linear combination of lower order fluxes or reconstructions to obtain a high order approximation. The combination coefficients, also called linear weights, are determined by the local geometry of the mesh and the order of accuracy, and may become negative. WENO procedures cannot be applied directly to obtain a stable scheme if negative linear weights are present. Previous strategies for handling this difficulty either regroup stencils or reduce the order of accuracy to get rid of the negative linear weights. In this paper we present a simple and effective technique for handling negative linear weights without a need to get rid of them.
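The role of the linear weights can be seen in the textbook fifth-order WENO reconstruction on a uniform mesh, where three third-order stencil reconstructions are combined with the optimal weights 1/10, 6/10, 3/10 (all positive in this classic case; the difficulty addressed in the paper arises when analogous coefficients for other schemes or meshes turn out negative). A sketch:

```python
def weno5_linear_reconstruction(v):
    """Reconstruct the interface value u_{i+1/2} from five cell averages
    v = (v_{i-2}, ..., v_{i+2}) as a linear combination of three
    third-order stencil reconstructions (textbook uniform-mesh WENO5
    coefficients; in the full scheme the linear weights are replaced by
    smoothness-adapted nonlinear weights near discontinuities)."""
    vm2, vm1, v0, vp1, vp2 = v
    p0 = (2 * vm2 - 7 * vm1 + 11 * v0) / 6.0   # stencil {i-2, i-1, i}
    p1 = (-vm1 + 5 * v0 + 2 * vp1) / 6.0       # stencil {i-1, i, i+1}
    p2 = (2 * v0 + 5 * vp1 - vp2) / 6.0        # stencil {i, i+1, i+2}
    d = (0.1, 0.6, 0.3)                        # linear (optimal) weights
    return d[0] * p0 + d[1] * p1 + d[2] * p2
```

For smooth data this combination is fifth-order accurate even though each stencil reconstruction is only third-order; the linear weights are exactly the quantities that can become negative in other settings.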

  2. Every Mass or Mass Group When Created Will have No Motion, Linear, Rotational or Vibratory Motion, Singly or in Some Combination, Which May Be Later Modified by External Forces--A Natural Law

    NASA Astrophysics Data System (ADS)

    Brekke, Stewart

    2010-03-01

    Every mass or mass group, from atoms and molecules to stars and galaxies, has no motion or is vibrating, rotating, or moving linearly, singly or in some combination. When created, the excess energy of creation will generate a vibration, rotation and/or linear motion in addition to the mass or mass group itself. Curvilinear or orbital motion is linear motion in an external force field. External forces, such as photon, molecular or stellar collisions, may over time modify the initial rotational, vibratory or linear motions of the mass or mass group. The energy equation for each mass or mass group is E = mc^2 + (1/2)mv^2 + (1/2)Iω^2 + (1/2)kx_0^2 + W_G + W_E + W_M.

  3. Obtaining Global Picture From Single Point Observations by Combining Data Assimilation and Machine Learning Tools

    NASA Astrophysics Data System (ADS)

    Shprits, Y.; Zhelavskaya, I. S.; Kellerman, A. C.; Spasojevic, M.; Kondrashov, D. A.; Ghil, M.; Aseev, N.; Castillo Tibocha, A. M.; Cervantes Villa, J. S.; Kletzing, C.; Kurth, W. S.

    2017-12-01

    Increasing volume of satellite measurements requires deployment of new tools that can utilize such vast amounts of data. Satellite measurements are usually limited to a single location in space, which complicates data analysis geared towards reproducing the global state of the space environment. In this study we show how measurements can be combined by means of data assimilation and how machine learning can help analyze large amounts of data and develop global models that are trained on single-point measurements. Data Assimilation: Manual analysis of the satellite measurements is a challenging task, while automated analysis is complicated by the fact that measurements are given at various locations in space, have different instrumental errors, and often vary by orders of magnitude. We show results of the long-term reanalysis of radiation belt measurements along with fully operational real-time predictions using the data assimilative VERB code. Machine Learning: We present application of machine learning tools for the analysis of NASA Van Allen Probes upper-hybrid frequency measurements. Using the obtained data set we train a new global predictive neural network. The results for the Van Allen Probes based neural network are compared with historical IMAGE satellite observations. We also show examples of predictions of geomagnetic indices using neural networks. Combination of machine learning and data assimilation: We discuss how data assimilation tools and machine learning tools can be combined so that physics-based insight into the dynamics of the particular system can be combined with empirical knowledge of its non-linear behavior.

  4. Elastic buckling analysis for composite stiffened panels and other structures subjected to biaxial inplane loads

    NASA Technical Reports Server (NTRS)

    Viswanathan, A. V.; Tamekuni, M.

    1973-01-01

    An exact linear analysis method is presented for predicting buckling of structures with arbitrary uniform cross section. The structure is idealized as an assemblage of laminated plate-strip elements, curved and planar, and beam elements. Element edges normal to the longitudinal axes are assumed to be simply supported. Arbitrary boundary conditions may be specified on any external longitudinal edge of plate-strip elements. The structure or selected elements may be loaded in any desired combination of inplane transverse compression or tension side load and axial compression load. The analysis simultaneously considers all possible modes of instability and is applicable for the buckling of laminated composite structures. Numerical results correlate well with the results of previous analysis methods.

  5. Symbolic-numeric interface: A review

    NASA Technical Reports Server (NTRS)

    Ng, E. W.

    1980-01-01

    A survey of the use of a combination of symbolic and numerical calculations is presented. Symbolic calculations primarily refer to the computer processing of procedures from classical algebra, analysis, and calculus. Numerical calculations refer to both numerical mathematics research and scientific computation. This survey is intended to point out a large number of problem areas where a cooperation of symbolic and numerical methods is likely to bear many fruits. These areas include such classical operations as differentiation and integration, such diverse activities as function approximations and qualitative analysis, and such contemporary topics as finite element calculations and computation complexity. It is contended that other less obvious topics such as the fast Fourier transform, linear algebra, nonlinear analysis and error analysis would also benefit from a synergistic approach.

  6. Multi-disease analysis of maternal antibody decay using non-linear mixed models accounting for censoring.

    PubMed

    Goeyvaerts, Nele; Leuridan, Elke; Faes, Christel; Van Damme, Pierre; Hens, Niel

    2015-09-10

    Biomedical studies often generate repeated measures of multiple outcomes on a set of subjects. It may be of interest to develop a biologically intuitive model for the joint evolution of these outcomes while assessing inter-subject heterogeneity. Even though it is common for biological processes to entail non-linear relationships, examples of multivariate non-linear mixed models (MNMMs) are still fairly rare. We contribute to this area by jointly analyzing the maternal antibody decay for measles, mumps, rubella, and varicella, allowing for a different non-linear decay model for each infectious disease. We present a general modeling framework to analyze multivariate non-linear longitudinal profiles subject to censoring, by combining multivariate random effects, non-linear growth and Tobit regression. We explore the hypothesis of a common infant-specific mechanism underlying maternal immunity using a pairwise correlated random-effects approach and evaluating different correlation matrix structures. The implied marginal correlation between maternal antibody levels is estimated using simulations. The mean duration of passive immunity was less than 4 months for all diseases with substantial heterogeneity between infants. The maternal antibody levels against rubella and varicella were found to be positively correlated, while little to no correlation could be inferred for the other disease pairs. For some pairs, computational issues occurred with increasing correlation matrix complexity, which underlines the importance of further developing estimation methods for MNMMs. Copyright © 2015 John Wiley & Sons, Ltd.
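The Tobit ingredient mentioned above handles left-censored antibody titres (values below the assay's detection limit) by replacing the density with the normal CDF for censored observations. A minimal sketch of the censored log-likelihood for a fixed mean and scale (illustrative only; the paper embeds this inside a non-linear mixed model):

```python
import math

def tobit_loglik(ys, mus, sigma, lod):
    """Log-likelihood of observations ys with model means mus under
    left-censoring at the limit of detection lod: observed values
    contribute the normal log-density, censored values contribute
    log Phi((lod - mu) / sigma)."""
    ll = 0.0
    for y, mu in zip(ys, mus):
        if y > lod:
            z = (y - mu) / sigma
            ll += -0.5 * z * z - math.log(sigma * math.sqrt(2 * math.pi))
        else:
            # normal CDF via the error function
            ll += math.log(0.5 * (1 + math.erf((lod - mu) / (sigma * math.sqrt(2)))))
    return ll
```

Maximising this over the parameters of the non-linear decay curve (rather than a fixed mean) is what makes the approach robust to titres recorded only as "below detection".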

  7. The design and analysis of simple low speed flap systems with the aid of linearized theory computer programs

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.

    1985-01-01

    The purpose here is to show how two linearized theory computer programs in combination may be used for the design of low speed wing flap systems capable of high levels of aerodynamic efficiency. A fundamental premise of the study is that high levels of aerodynamic performance for flap systems can be achieved only if the flow about the wing remains predominantly attached. Based on this premise, a wing design program is used to provide idealized attached flow camber surfaces from which candidate flap systems may be derived, and, in a following step, a wing evaluation program is used to provide estimates of the aerodynamic performance of the candidate systems. Design strategies and techniques that may be employed are illustrated through a series of examples. Applicability of the numerical methods to the analysis of a representative flap system (although not a system designed by the process described here) is demonstrated in a comparison with experimental data.

  8. Tracking performance under time sharing conditions with a digit processing task: A feedback control theory analysis. [attention sharing effect on operator performance

    NASA Technical Reports Server (NTRS)

    Gopher, D.; Wickens, C. D.

    1975-01-01

    A one dimensional compensatory tracking task and a digit processing reaction time task were combined in a three phase experiment designed to investigate tracking performance in time sharing. Adaptive techniques, elaborate feedback devices, and on line standardization procedures were used to adjust task difficulty to the ability of each individual subject and manipulate time sharing demands. Feedback control analysis techniques were employed in the description of tracking performance. The experimental results show that when the dynamics of a system are constrained, in such a manner that man machine system stability is no longer a major concern of the operator, he tends to adopt a first order control describing function, even with tracking systems of higher order. Attention diversion to a concurrent task leads to an increase in remnant level, or nonlinear power. This decrease in linearity is reflected both in the output magnitude spectra of the subjects, and in the linear fit of the amplitude ratio functions.

  9. Improved application of independent component analysis to functional magnetic resonance imaging study via linear projection techniques.

    PubMed

    Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li

    2009-02-01

    Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. Its well-accepted implicit assumption is that the intrinsic sources identified by sICA are spatially statistically independent, which makes sICA difficult to apply to data containing interdependent sources and confounding factors. Such interdependency can arise, for instance, in fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its utilization as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulations on computer-generated data and real resting-state fMRI data. Both simulated and real two-task fMRI experiments demonstrated that sICA in combination with the projection method succeeded in separating spatially dependent components and had better detection power than a purely model-based method when estimating activation induced by each task as well as by both tasks.
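    The projection step itself is compact: removing the subspace spanned by one task's reference time course leaves residual data in which a subsequent ICA no longer has to separate that source. The sketch below is a toy illustration; the block designs, dimensions, and noise level are all invented, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fMRI-like data: T time points x V voxels, mixing two overlapping
# "task" sources plus noise (all values here are illustrative).
T, V = 200, 50
t = np.arange(T)
task1 = (np.sin(2 * np.pi * t / 40) > 0).astype(float)   # block design A
task2 = (np.sin(2 * np.pi * t / 25) > 0).astype(float)   # block design B
X = (np.outer(task1, rng.normal(size=V))
     + np.outer(task2, rng.normal(size=V))
     + 0.1 * rng.normal(size=(T, V)))

# Linear projection: remove the subspace spanned by task1's centered
# reference time course, so a later ICA sees data dominated by task2.
r = task1 - task1.mean()
P = np.eye(T) - np.outer(r, r) / (r @ r)   # orthogonal projector
X_proj = P @ X

# After projection the data is (numerically) uncorrelated with task1.
corr_before = abs(np.corrcoef(r, X[:, 0])[0, 1])
corr_after = abs(np.corrcoef(r, X_proj[:, 0])[0, 1])
print(corr_before, corr_after)
```

The projected data `X_proj` would then be passed to a spatial ICA in place of `X`.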

  10. Response of Non-Linear Shock Absorbers-Boundary Value Problem Analysis

    NASA Astrophysics Data System (ADS)

    Rahman, M. A.; Ahmed, U.; Uddin, M. S.

    2013-08-01

    A nonlinear boundary value problem of two-degrees-of-freedom (DOF) untuned vibration damper systems using nonlinear springs and dampers has been numerically studied. For the untuned damper, sixteen different combinations of linear and nonlinear springs and dampers have been comprehensively analyzed, taking transient terms into account. For the different cases, a comparative study is made of response versus time for different spring and damper types at three important frequency ratios: one at r = 1, one at r > 1 and one at r < 1. The response of the system is changed by the spring and damper nonlinearities, and the change differs from case to case. Accordingly, an initially stable absorber may become unstable with time and vice versa. The analysis also shows that higher nonlinearity terms make the system more unstable. The numerical simulation includes transient vibrations. Although the problems are much more complicated than those for a tuned absorber, a comparison of the results generated by the present numerical scheme with the exact solution shows quite reasonable agreement.
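    A transient simulation of one such combination (a cubic hardening main spring with a linear absorber spring and damper) can be sketched with a fixed-step RK4 integrator. All parameter values below are illustrative assumptions, not the paper's cases.

```python
import numpy as np

# Hypothetical 2-DOF untuned absorber: main mass m1 on a cubic (hardening)
# spring, absorber mass m2 attached through a linear spring and damper,
# harmonic forcing on m1. Parameters are illustrative only.
m1, m2 = 1.0, 0.2
k1, k3 = 1.0, 0.5          # linear and cubic stiffness of main spring
k2, c2 = 0.2, 0.05         # absorber spring and damper
F0, r = 0.1, 1.0           # forcing amplitude, frequency ratio r = w/wn
w = r * np.sqrt(k1 / m1)

def deriv(t, y):
    x1, v1, x2, v2 = y
    f12 = k2 * (x2 - x1) + c2 * (v2 - v1)    # absorber coupling force
    a1 = (F0 * np.cos(w * t) - k1 * x1 - k3 * x1**3 + f12) / m1
    a2 = -f12 / m2
    return np.array([v1, a1, v2, a2])

# Classical fixed-step RK4, keeping the whole history so the transient
# portion of the response can be inspected.
dt, n = 0.01, 20000
y = np.zeros(4)
hist = np.empty((n, 4))
for i in range(n):
    t = i * dt
    s1 = deriv(t, y)
    s2 = deriv(t + dt / 2, y + dt / 2 * s1)
    s3 = deriv(t + dt / 2, y + dt / 2 * s2)
    s4 = deriv(t + dt, y + dt * s3)
    y = y + dt / 6 * (s1 + 2 * s2 + 2 * s3 + s4)
    hist[i] = y

print(abs(hist[:, 0]).max())   # peak main-mass displacement
```

Swapping the spring and damper force laws in `deriv` reproduces the other combinations (e.g. a nonlinear absorber damper) studied in the paper.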

  11. Linear stability analysis of particle-laden hypopycnal plumes

    NASA Astrophysics Data System (ADS)

    Farenzena, Bruno Avila; Silvestrini, Jorge Hugo

    2017-12-01

    Gravity-driven riverine outflows are responsible for carrying sediments to coastal waters. The turbulent mixing in these flows is associated with shear and gravitational instabilities such as Kelvin-Helmholtz, Holmboe, and Rayleigh-Taylor. Results from a temporal linear stability analysis of a two-layer stratified flow are presented, investigating the influence of particle settling and mixing-region thickness on flow stability in the presence of ambient shear. The particles are considered suspended in the transport fluid, and their sedimentation is modeled with a constant settling velocity. Three scenarios regarding the mixing-region thickness were identified: a poorly mixed environment, a strongly mixed environment, and an intermediate scenario. In the first, the Kelvin-Helmholtz and settling convection modes are the two fastest-growing modes, depending on the particle settling velocity and the total Richardson number. The second scenario presents a modified Rayleigh-Taylor instability, which is the dominant mode. The third case can have the Kelvin-Helmholtz, settling convection, or modified Rayleigh-Taylor mode as the fastest-growing mode, depending on the combination of parameters.

  12. Classification of the Correct Quranic Letters Pronunciation of Male and Female Reciters

    NASA Astrophysics Data System (ADS)

    Khairuddin, Safiah; Ahmad, Salmiah; Embong, Abdul Halim; Nur Wahidah Nik Hashim, Nik; Altamas, Tareq M. K.; Nuratikah Syd Badaruddin, Syarifah; Shahbudin Hassan, Surul

    2017-11-01

    Recitation of the Holy Quran with correct Tajweed is essential for every Muslim. Islam has encouraged Quranic education from an early age, as reciting the Quran correctly conveys the correct meaning of the words of Allah. It is important to recite the Quranic verses according to their characteristics (sifaat) and from their points of articulation (makhraj). This paper presents an identification and classification analysis of Quranic letter pronunciation for both male and female reciters, to obtain the unique representation of each letter by male as compared to female expert reciters. Linear Discriminant Analysis (LDA) was used as the classifier, with formants and power spectral density (PSD) as the acoustic features. The results show that a linear classifier on combinations of band 1 and band 2 power spectral features gives a high classification accuracy for most of the Quranic letters. It is also shown that pronunciations by male reciters give better results in the classification of the Quranic letters.
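    As a sketch of the classification step, a two-class Fisher linear discriminant can be written directly in NumPy. The synthetic "formant and PSD band" features below are invented placeholders for the acoustic features the study extracts, so only the mechanics of LDA are illustrated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical acoustic features (two formants + two PSD band powers)
# for one letter pronounced by two groups; the data is synthetic.
n = 100
male = rng.normal(loc=[700, 1200, 0.8, 0.3], scale=0.5, size=(n, 4))
female = rng.normal(loc=[850, 1400, 0.6, 0.5], scale=0.5, size=(n, 4))
X = np.vstack([male, female])
y = np.array([0] * n + [1] * n)

# Two-class Fisher LDA: w = Sw^-1 (mu1 - mu0); classify by which side of
# the midpoint projection a sample falls on.
mu0, mu1 = male.mean(0), female.mean(0)
Sw = np.cov(male.T) + np.cov(female.T)       # pooled within-class scatter
w = np.linalg.solve(Sw, mu1 - mu0)
threshold = w @ (mu0 + mu1) / 2
pred = (X @ w > threshold).astype(int)
accuracy = (pred == y).mean()
print(accuracy)
```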

  13. Analysis of Instabilities in Non-Axisymmetric Hypersonic Boundary Layers Over Cones

    NASA Technical Reports Server (NTRS)

    Li, Fei; Choudhari, Meelan M.; Chang, Chau-Lyan; White, Jeffery A.

    2010-01-01

    Hypersonic flows over circular cones constitute one of the most important generic configurations for fundamental aerodynamic and aerothermodynamic studies. In this paper, numerical computations are carried out for Mach 6 flows over a 7-degree half-angle cone at two different flow incidence angles and a compression cone with large concave curvature. Instability waves and transition-related flow physics are investigated using a series of advanced stability methods, ranging from conventional linear stability theory (LST) and higher-fidelity linear and nonlinear parabolized stability equations (PSE) to 2D eigenvalue analysis based on partial differential equations. The computed N-factor distributions pertinent to various instability mechanisms over the cone surface provide an initial assessment of possible transition fronts and a guide to the corresponding disturbance characteristics such as frequency and azimuthal wave number. It is also shown that strong secondary instability that eventually leads to transition to turbulence can be simulated very efficiently using a combination of the advanced stability methods described above.

  14. Prediction Analysis for Measles Epidemics

    NASA Astrophysics Data System (ADS)

    Sumi, Ayako; Ohtomo, Norio; Tanaka, Yukio; Sawamura, Sadashi; Olsen, Lars Folke; Kobayashi, Nobumichi

    2003-12-01

    A newly devised procedure of prediction analysis, a linearized version of the nonlinear least squares method combined with the maximum entropy spectral analysis method, was proposed. This method was applied to time series data of measles case notifications in several communities in the UK, USA and Denmark. The dominant spectral lines observed in each power spectral density (PSD) can be safely assigned as fundamental periods. The optimum least squares fitting (LSF) curve calculated using these fundamental periods can essentially reproduce the underlying variation of the measles data. An extension of the LSF curve can be used to predict measles case notifications quantitatively. Some discussion, including the predictability of chaotic time series, is presented.
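    The key point, that the fitting problem becomes linear once the fundamental periods are fixed, can be illustrated with ordinary least squares. The periods and synthetic series below are assumptions for demonstration; in the study the periods would come from the MEM power spectrum.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "case notification" series with known periods (illustrative).
t = np.arange(300, dtype=float)
periods = [104.0, 52.0, 26.0]   # e.g. biennial, annual, semi-annual (weeks)
signal = 10 + 3 * np.sin(2 * np.pi * t / 104) + 2 * np.cos(2 * np.pi * t / 52)
data = signal + 0.3 * rng.normal(size=t.size)

# Once the periods T_k are fixed, the model
#   x(t) = a0 + sum_k [a_k cos(2 pi t / T_k) + b_k sin(2 pi t / T_k)]
# is linear in its coefficients and solvable by ordinary least squares.
train = slice(0, 250)
cols = [np.ones_like(t)]
for T in periods:
    cols += [np.cos(2 * np.pi * t / T), np.sin(2 * np.pi * t / T)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A[train], data[train], rcond=None)

# Extending the fitted LSF curve beyond the training window gives the
# quantitative prediction.
forecast = A[250:] @ coef
rmse = np.sqrt(np.mean((forecast - signal[250:]) ** 2))
print(rmse)
```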

  15. The STAGS computer code

    NASA Technical Reports Server (NTRS)

    Almroth, B. O.; Brogan, F. A.

    1978-01-01

    Basic information about the computer code STAGS (Structural Analysis of General Shells) is presented to describe to potential users the scope of the code and the solution procedures that are incorporated. Primarily, STAGS is intended for analysis of shell structures, although it has been extended to more complex shell configurations through the inclusion of springs and beam elements. The formulation is based on a variational approach in combination with local two dimensional power series representations of the displacement components. The computer code includes options for analysis of linear or nonlinear static stress, stability, vibrations, and transient response. Material as well as geometric nonlinearities are included. A few examples of applications of the code are presented for further illustration of its scope.

  16. STAR FORMATION ON SUBKILOPARSEC SCALE TRIGGERED BY NON-LINEAR PROCESSES IN NEARBY SPIRAL GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Momose, Rieko; Koda, Jin; Donovan Meyer, Jennifer

    We report a super-linear correlation for the star formation law based on new CO(J = 1-0) data from the CARMA and NOBEYAMA Nearby-galaxies (CANON) CO survey. The sample includes 10 nearby spiral galaxies, in which structures at sub-kpc scales are spatially resolved. Combined with the star formation rate surface density traced by Hα and 24 μm images, the CO(J = 1-0) data yield a super-linear slope of N = 1.3. The slope becomes even steeper (N = 1.8) when the diffuse stellar and dust background emission is subtracted from the Hα and 24 μm images. In contrast to recent results with CO(J = 2-1) that found a constant star formation efficiency (SFE) in many spiral galaxies, these results suggest that the SFE is not independent of environment, but increases with molecular gas surface density. We suggest that the excitation of CO(J = 2-1) is likely enhanced in regions with higher star formation and does not linearly trace the molecular gas mass. In addition, the diffuse emission contaminates the SFE measurement most in regions where the star formation rate is low. These two effects can flatten the power-law correlation and produce the apparent linear slope. The super-linear slope from the CO(J = 1-0) analysis indicates that star formation is enhanced by non-linear processes in regions of high gas density, e.g., gravitational collapse and cloud-cloud collisions.
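    The slope N of such a star formation law is simply the linear slope in log-log space. A sketch with fabricated data (the input slope of 1.3 merely mirrors the value quoted above; none of these numbers are from the survey):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic Kennicutt-Schmidt-style data: star formation rate surface
# density scaling super-linearly with gas surface density, plus 0.05 dex
# of log-normal scatter. All values are made up for illustration.
sigma_gas = 10 ** rng.uniform(0.5, 2.5, size=200)                    # Msun/pc^2
sigma_sfr = 1e-3 * sigma_gas ** 1.3 * 10 ** (0.05 * rng.normal(size=200))

# The power-law index N is recovered as the slope of a linear fit in
# log-log space: log(SFR) = N * log(gas) + const.
N, logA = np.polyfit(np.log10(sigma_gas), np.log10(sigma_sfr), 1)
print(N)
```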

  17. Information analysis of posterior canal afferents in the turtle, Trachemys scripta elegans.

    PubMed

    Rowe, Michael H; Neiman, Alexander B

    2012-01-24

    We have used sinusoidal and band-limited Gaussian noise stimuli along with information measures to characterize the linear and non-linear responses of morpho-physiologically identified posterior canal (PC) afferents and to examine the relationship between mutual information rate and other physiological parameters. Our major findings are: 1) spike generation in most PC afferents is effectively a stochastic renewal process, and spontaneous discharges are fully characterized by their first order statistics; 2) a regular discharge, as measured by normalized coefficient of variation (cv*), reduces intrinsic noise in afferent discharges at frequencies below the mean firing rate; 3) coherence and mutual information rates, calculated from responses to band-limited Gaussian noise, are jointly determined by gain and intrinsic noise (discharge regularity), the two major determinants of signal to noise ratio in the afferent response; 4) measures of optimal non-linear encoding were only moderately greater than optimal linear encoding, indicating that linear stimulus encoding is limited primarily by internal noise rather than by non-linearities; and 5) a leaky integrate and fire model reproduces these results and supports the suggestion that the combination of high discharge regularity and high discharge rates serves to extend the linear encoding range of afferents to higher frequencies. These results provide a framework for future assessments of afferent encoding of signals generated during natural head movements and for comparison with coding strategies used by other sensory systems. This article is part of a Special Issue entitled: Neural Coding. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Dynamical characteristics of surface EMG signals of hand grasps via recurrence plot.

    PubMed

    Ouyang, Gaoxiang; Zhu, Xiangyang; Ju, Zhaojie; Liu, Honghai

    2014-01-01

    Recognizing human hand grasp movements through the surface electromyogram (sEMG) is a challenging task. In this paper, we investigated nonlinear measures based on the recurrence plot as a tool to evaluate the hidden dynamical characteristics of sEMG during four different hand movements. A series of experimental tests in this study shows that the dynamical characteristics of sEMG data captured by recurrence quantification analysis (RQA) can distinguish different hand grasp movements. Meanwhile, an adaptive neuro-fuzzy inference system (ANFIS) is applied to evaluate the performance of the aforementioned measures in identifying the grasp movements. The experimental results show that the recognition rate based on the combination of linear and nonlinear measures (99.1%) is much higher than those with only linear measures (93.4%) or only nonlinear measures (88.1%). These results suggest that the RQA measures might be a potential tool to reveal the hidden sEMG characteristics of hand grasp movements and an effective supplement to traditional linear grasp recognition methods.
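    A minimal recurrence-plot computation can be sketched as follows, assuming a simple time-delay embedding and a fixed distance threshold. The recurrence rate used here is only the simplest RQA measure (a real study would add determinism, laminarity, etc.), and the signals are synthetic stand-ins for sEMG.

```python
import numpy as np

rng = np.random.default_rng(4)

# Recurrence plot: R[i, j] = 1 when the delay-embedded states at times
# i and j are closer than a threshold eps.
def recurrence_matrix(x, dim=3, delay=2, eps=0.5):
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d < eps).astype(int)

# A strongly periodic signal recurs far more often than white noise,
# which is what RQA measures quantify.
periodic = np.sin(np.linspace(0, 20 * np.pi, 400))
noise = rng.normal(size=400)

rr_periodic = recurrence_matrix(periodic).mean()   # recurrence rate
rr_noise = recurrence_matrix(noise).mean()
print(rr_periodic, rr_noise)
```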

  19. Clinical Relevance of Autoantibodies in Patients with Autoimmune Bullous Dermatosis

    PubMed Central

    Mihályi, Lilla; Kiss, Mária; Dobozy, Attila; Kemény, Lajos; Husz, Sándor

    2012-01-01

    The authors present their experience related to the diagnosis, treatment, and followup of 431 patients with bullous pemphigoid, 14 patients with juvenile bullous pemphigoid, and 273 patients with pemphigus. The detection of autoantibodies plays an outstanding role in the diagnosis and differential diagnosis. Paraneoplastic pemphigoid is suggested to be a distinct entity from the group of bullous pemphigoid in view of the linear C3 deposits along the basement membrane of the perilesional skin and the “ladder” configuration of autoantibodies demonstrated by western blot analysis. It is proposed that IgA pemphigoid should be differentiated from the linear IgA dermatoses. Immunosuppressive therapy is recommended in which the maintenance dose of corticosteroid is administered every second day, thereby reducing the side effects of the corticosteroids. Following the detection of IgA antibodies (IgA pemphigoid, linear IgA bullous dermatosis, and IgA pemphigus), diamino diphenyl sulfone (dapsone) therapy is preferred alone or in combination. The clinical relevance of autoantibodies in patients with autoimmune bullous dermatosis is stressed. PMID:23320017

  20. Saliency detection algorithm based on LSC-RC

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Tian, Weiye; Wang, Ding; Luo, Xin; Wu, Yingfei; Zhang, Yu

    2018-02-01

    Image saliency identifies the most important region of an image, the one that attracts human visual attention and response. Preferentially allocating computational resources for image analysis and synthesis to the salient region is of great significance for improving image region detection. As a preprocessing step for other disciplines in the image processing field, saliency detection has wide applications in image retrieval and image segmentation. Among these applications, the super-pixel saliency detection algorithm based on linear spectral clustering (LSC) has achieved good results. The saliency detection algorithm proposed in this paper improves on the region contrast (RC) method by replacing its region formation step with super-pixel blocks produced by linear spectral clustering. Combined with recent deep learning methods, the accuracy of salient region detection is greatly improved. Finally, the superiority and feasibility of the super-pixel segmentation detection algorithm based on linear spectral clustering are demonstrated by comparative tests.

  1. A single-phase axially-magnetized permanent-magnet oscillating machine for miniature aerospace power sources

    NASA Astrophysics Data System (ADS)

    Sui, Yi; Zheng, Ping; Cheng, Luming; Wang, Weinan; Liu, Jiaqi

    2017-05-01

    A single-phase axially-magnetized permanent-magnet (PM) oscillating machine, which can be integrated with a free-piston Stirling engine to generate electric power, is investigated for miniature aerospace power sources. The machine structure, operating principle and detent force characteristic are studied in detail. With the sinusoidal speed characteristic of the mover taken into account, the proposed machine is designed by 2D finite-element analysis (FEA), and the main structural parameters, such as the air gap diameter, the dimensions of the PMs, the pole pitches of both stator and mover, and the pole-pitch combinations, are optimized to improve both the power density and the force capability. Compared with three-phase PM linear machines, the proposed single-phase machine features less PM material, simpler control and lower controller cost. The power density of the proposed machine is higher than that of the three-phase radially-magnetized PM linear machine, but lower than that of the three-phase axially-magnetized PM linear machine.

  2. Runoff load estimation of particulate and dissolved nitrogen in Lake Inba watershed using continuous monitoring data on turbidity and electric conductivity.

    PubMed

    Kim, J; Nagano, Y; Furumai, H

    2012-01-01

    Easy-to-measure surrogate parameters for water quality indicators are needed for real-time monitoring as well as for generating data for model calibration and validation. In this study, a novel linear regression model for estimating total nitrogen (TN) from two surrogate parameters is proposed and used to evaluate pollutant loads flowing into a eutrophic lake. Based on their runoff characteristics during wet weather, turbidity and electric conductivity (EC) were selected as surrogates for particulate nitrogen (PN) and dissolved nitrogen (DN), respectively. Strong linear relationships were established between PN and turbidity and between DN and EC, and the two models were then combined for estimation of TN. The combined model was evaluated by comparing estimated and observed TN runoff loads during rainfall events. This analysis showed that turbidity and EC are viable surrogates for PN and DN, respectively, and that the linear regression model for TN concentration successfully estimated TN runoff loads during rainfall events and also under dry weather conditions.
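    The combined model amounts to two single-surrogate linear fits whose predictions are summed. A sketch with invented calibration data (none of the coefficients or units below are the study's):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic calibration data (illustrative): particulate nitrogen tracks
# turbidity, dissolved nitrogen tracks electric conductivity.
n = 60
turbidity = rng.uniform(5, 200, n)                          # NTU
ec = rng.uniform(10, 40, n)                                 # mS/m
pn = 0.012 * turbidity + 0.1 + 0.05 * rng.normal(size=n)    # mg/L
dn = 0.08 * ec + 0.3 + 0.05 * rng.normal(size=n)            # mg/L
tn = pn + dn

# Fit the two single-surrogate linear models, then combine:
#   TN_hat = (a1 * turbidity + b1) + (a2 * EC + b2)
a1, b1 = np.polyfit(turbidity, pn, 1)
a2, b2 = np.polyfit(ec, dn, 1)
tn_hat = (a1 * turbidity + b1) + (a2 * ec + b2)

rmse = np.sqrt(np.mean((tn_hat - tn) ** 2))
print(rmse)
```

In practice the fitted model would then be driven by continuous turbidity/EC sensor data to estimate TN loads between grab samples.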

  3. Optical Measurement of Radiocarbon below Unity Fraction Modern by Linear Absorption Spectroscopy.

    PubMed

    Fleisher, Adam J; Long, David A; Liu, Qingnan; Gameson, Lyn; Hodges, Joseph T

    2017-09-21

    High-precision measurements of radiocarbon (¹⁴C) near or below a fraction modern ¹⁴C of 1 (F¹⁴C ≤ 1) are challenging and costly. An accurate, ultrasensitive linear absorption approach to detecting ¹⁴C would provide a simple and robust benchtop alternative to off-site accelerator mass spectrometry facilities. Here we report the quantitative measurement of ¹⁴C in gas-phase samples of CO₂ with F¹⁴C < 1 using cavity ring-down spectroscopy in the linear absorption regime. Repeated analysis of CO₂ derived from the combustion of either biogenic or petrogenic sources revealed a robust ability to differentiate samples with F¹⁴C < 1. With a combined uncertainty of ¹⁴C/¹²C = 130 fmol/mol (F¹⁴C = 0.11), the initial performance of the calibration-free instrument is sufficient to investigate a variety of applications in radiocarbon measurement science, including the study of biofuels and bioplastics, illicitly traded specimens, bomb dating, and atmospheric transport.

  4. HgCdTe APD-based linear-mode photon counting components and ladar receivers

    NASA Astrophysics Data System (ADS)

    Jack, Michael; Wehner, Justin; Edwards, John; Chapman, George; Hall, Donald N. B.; Jacobson, Shane M.

    2011-05-01

    Linear-mode photon counting (LMPC) provides significant advantages over Geiger-mode (GM) photon counting, including the absence of after-pulsing, nanosecond pulse-to-pulse temporal resolution, and robust operation in the presence of high-density obscurants or variable-reflectivity objects. For this reason Raytheon has developed, and previously reported on, unique linear-mode photon counting components and modules based on combining advanced APDs with advanced high-gain circuits. Using HgCdTe APDs enables Poisson-number-preserving photon counting. Key metrics of photon counting technology are the dark count rate and the detection probability. In this paper we report on a performance breakthrough resulting from improvements in design, process and readout operation, enabling a >10x reduction in dark count rate to ~10,000 cps and a >10⁴x reduction in surface dark current, which permits long 10 ms integration times. Our analysis of the key dark current contributors suggests that a substantial further reduction in DCR, to ~1/sec or less, can be achieved by optimizing wavelength, operating voltage and temperature.

  5. Recent Developments in the Analysis of Coupled Oscillator Arrays

    NASA Technical Reports Server (NTRS)

    Pogorzelski, Ronald J.

    2000-01-01

    This presentation considers linear arrays of coupled oscillators. The purpose of coupling oscillators together is to achieve high radiated power through the coherent spatial power combining that results when the oscillators are injection locked to each other. York et al. have shown that, left to themselves, an ensemble of injection-locked oscillators oscillates at the average of the tuning frequencies of all the oscillators. The coupled oscillators are usually designed to produce constant aperture phase, and they are injection locked to each other or to a master oscillator to produce coherent radiation; the individual oscillators do not necessarily oscillate at their own tuning frequencies.

  6. Investigation of the effects of external current systems on the MAGSAT data utilizing grid cell modeling techniques

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M. (Principal Investigator)

    1982-01-01

    The feasibility of modeling magnetic fields due to certain electrical currents flowing in the Earth's ionosphere and magnetosphere was investigated. A method was devised to carry out forward modeling of the magnetic perturbations that arise from space currents. The procedure utilizes a linear current element representation of the distributed electrical currents. The finite thickness elements are combined into loops which are in turn combined into cells having their base in the ionosphere. In addition to the extensive field modeling, additional software was developed for the reduction and analysis of the MAGSAT data in terms of the external current effects. Direct comparisons between the models and the MAGSAT data are possible.

  7. Exploring multicriteria decision strategies in GIS with linguistic quantifiers: A case study of residential quality evaluation

    NASA Astrophysics Data System (ADS)

    Malczewski, Jacek; Rinner, Claus

    2005-06-01

    Commonly used GIS combination operators such as Boolean conjunction/disjunction and weighted linear combination can be generalized to the ordered weighted averaging (OWA) family of operators. This multicriteria evaluation method allows decision-makers to define a decision strategy on a continuum between pessimistic and optimistic strategies. Recently, OWA has been introduced to GIS-based decision support systems. We propose to extend a previous implementation of OWA with linguistic quantifiers to simplify the definition of decision strategies and to facilitate an exploratory analysis of multiple criteria. The linguistic quantifier-guided OWA procedure is illustrated using a dataset for evaluating residential quality of neighborhoods in London, Ontario.
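    A quantifier-guided OWA operator is compact to implement. The sketch below assumes the common regular increasing monotone (RIM) quantifier Q(r) = r^alpha, one of several possible linguistic quantifiers; the criterion scores are invented for illustration.

```python
import numpy as np

# OWA: sort the criterion values, then take a weighted sum of the ordered
# values. With a RIM quantifier Q(r) = r^alpha the order weights are
#   w_i = Q(i/n) - Q((i-1)/n),
# and alpha sweeps the decision strategy from optimistic (alpha -> 0,
# close to "at least one good criterion") to pessimistic (alpha large,
# close to "all criteria must be good").
def owa(values, alpha):
    v = np.sort(values)[::-1]                 # best criterion first
    n = len(v)
    i = np.arange(1, n + 1)
    w = (i / n) ** alpha - ((i - 1) / n) ** alpha
    return w @ v

scores = np.array([0.9, 0.7, 0.2])   # standardized criteria for one site

print(owa(scores, 0.1))    # optimistic: close to the max, 0.9
print(owa(scores, 1.0))    # neutral: the plain arithmetic mean, 0.6
print(owa(scores, 10.0))   # pessimistic: close to the min, 0.2
```

Weighted linear combination and Boolean AND/OR are recovered as special cases of the weight vector, which is what makes OWA a generalization of the usual GIS overlay operators.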

  8. Prediction of B-cell linear epitopes with a combination of support vector machine classification and amino acid propensity identification.

    PubMed

    Wang, Hsin-Wei; Lin, Ya-Chi; Pai, Tun-Wen; Chang, Hao-Teng

    2011-01-01

    Epitopes are antigenic determinants that are useful because they induce B-cell antibody production and stimulate T-cell activation. Bioinformatics can enable rapid, efficient prediction of potential epitopes. Here, we designed a novel B-cell linear epitope prediction system called LEPS (Linear Epitope Prediction by Propensities and Support Vector Machine) that combines physicochemical propensity identification and support vector machine (SVM) classification. We tested LEPS on four datasets: AntiJen, HIV, a newly generated PC dataset, and AHP, a combination of these three. Peptides with globally or locally high physicochemical propensities were first identified as primitive linear epitope (LE) candidates. The candidates were then classified with the SVM based on the unique features of amino acid segments. This reduced the number of predicted epitopes and enhanced the positive predictive value (PPV). Compared to four other well-known LE prediction systems, LEPS achieved the highest accuracy (72.52%), specificity (84.22%), PPV (32.07%), and Matthews correlation coefficient (10.36%).

  9. Recent applications of multivariate data analysis methods in the authentication of rice and the most analyzed parameters: A review.

    PubMed

    Maione, Camila; Barbosa, Rommel Melgaço

    2018-01-24

    Rice is one of the most important staple foods around the world. Authentication of rice is one of the most frequently addressed concerns in the recent literature, encompassing recognition of its geographical origin and variety, certification of organic rice, and many other issues. Good results have been achieved by multivariate data analysis and data mining techniques when combined with specific parameters for ascertaining authenticity and many other useful characteristics of rice, such as quality and yield. This paper reviews recent research on the discrimination and authentication of rice using multivariate data analysis and data mining techniques. We found that data obtained from image processing, molecular and atomic spectroscopy, elemental fingerprinting, genetic markers, molecular content and other sources are promising indicators of geographical origin, variety and other aspects of rice, and are widely used in combination with multivariate data analysis techniques. Principal component analysis and linear discriminant analysis are the preferred methods, but several other classification techniques, such as support vector machines and artificial neural networks, also appear frequently and show high performance for the discrimination of rice.

  10. Simultaneous Determination of Eight Hypotensive Drugs of Various Chemical Groups in Pharmaceutical Preparations by HPLC-DAD.

    PubMed

    Stolarczyk, Mariusz; Hubicka, Urszula; Żuromska-Witek, Barbara; Krzek, Jan

    2015-01-01

    A new sensitive, simple, rapid, and precise HPLC method with diode array detection has been developed for the separation and simultaneous determination of hydrochlorothiazide, furosemide, torasemide, losartan, quinapril, valsartan, spironolactone, and canrenone in combined pharmaceutical dosage forms. The chromatographic analysis of the tested drugs was performed on an ACE C18 column (100 Å, 250×4.6 mm, 5 μm particle size) with a 0.05 M phosphate buffer (pH 3.00)-acetonitrile-methanol (30:20:50, v/v/v) mobile phase at a flow rate of 1.0 mL/min. The column was thermostatted at 25°C. UV detection was performed at 230 nm. Analysis time was 10 min. The elaborated method meets the acceptance criteria for specificity, linearity, sensitivity, accuracy, and precision. The proposed method was successfully applied to the determination of the studied drugs in the selected combined dosage forms.

  11. Classification of smoke tainted wines using mid-infrared spectroscopy and chemometrics.

    PubMed

    Fudge, Anthea L; Wilkinson, Kerry L; Ristic, Renata; Cozzolino, Daniel

    2012-01-11

    In this study, the suitability of mid-infrared (MIR) spectroscopy, combined with principal component analysis (PCA) and linear discriminant analysis (LDA), was evaluated as a rapid analytical technique to identify smoke-tainted wines. Control (i.e., unsmoked) and smoke-affected wines (260 in total) from experimental and commercial sources were analyzed by MIR spectroscopy and chemometrics. The concentrations of guaiacol and 4-methylguaiacol were also determined using gas chromatography-mass spectrometry (GC-MS), as markers of smoke taint. LDA models correctly classified 61% of control wines and 70% of smoke-affected wines. Classification rates were found to be influenced by the extent of smoke taint (based on GC-MS and informal sensory assessment), as well as by qualitative differences in wine composition due to grape variety and oak maturation. Overall, the potential of MIR spectroscopy combined with chemometrics as a rapid screening technique for smoke-affected wines was demonstrated.
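    As a sketch of the unsupervised step, PCA scores can be computed from the SVD of the mean-centered spectra. The "MIR spectra" below are entirely synthetic shapes (a baseline band plus an extra taint band), not real wine data; they only illustrate how a dominant compositional difference surfaces in the leading principal component.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy "MIR spectra": 40 control and 40 tainted samples differing by one
# extra absorption band. All spectral shapes here are invented.
wn = np.linspace(800, 1800, 300)                 # wavenumber axis, cm^-1
base = np.exp(-((wn - 1100) / 150) ** 2)
taint_band = 0.4 * np.exp(-((wn - 1500) / 30) ** 2)
control = base + 0.02 * rng.normal(size=(40, 300))
tainted = base + taint_band + 0.02 * rng.normal(size=(40, 300))
X = np.vstack([control, tainted])

# PCA by SVD of the mean-centered matrix; scores = projections onto PCs.
Xc = X - X.mean(0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                           # first two PC scores

# The taint band dominates the variance, so PC1 separates the classes;
# the PC scores would then feed an LDA classifier.
gap = abs(scores[:40, 0].mean() - scores[40:, 0].mean())
print(gap)
```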

  12. A new approach in space-time analysis of multivariate hydrological data: Application to Brazil's Nordeste region rainfall

    NASA Astrophysics Data System (ADS)

    Sicard, Emeline; Sabatier, Robert; Niel, Hélène; Cadier, Eric

    2002-12-01

    The objective of this paper is to implement an original method for spatial and multivariate data, combining a method of three-way array analysis (STATIS) with geostatistical tools. The variables of interest are the monthly amounts of rainfall in the Nordeste region of Brazil, recorded from 1937 to 1975. The principle of the technique is the calculation of a linear combination of the initial variables, containing a large part of the initial variability and taking into account the spatial dependencies. It is a promising method that is able to analyze triple variability: spatial, seasonal, and interannual. In our case, the first component obtained discriminates a group of rain gauges, corresponding approximately to the Agreste, from all the others. The monthly variables of July and August strongly influence this separation. Furthermore, an annual study brings out the stability of the spatial structure of components calculated for each year.

  13. Comprehensive Chemical Fingerprinting of High-Quality Cocoa at Early Stages of Processing: Effectiveness of Combined Untargeted and Targeted Approaches for Classification and Discrimination.

    PubMed

    Magagna, Federico; Guglielmetti, Alessandro; Liberto, Erica; Reichenbach, Stephen E; Allegrucci, Elena; Gobino, Guido; Bicchi, Carlo; Cordero, Chiara

    2017-08-02

    This study investigates the chemical information in the volatile fractions of high-quality cocoa (Theobroma cacao L., Malvaceae) from different origins (Mexico, Ecuador, Venezuela, Colombia, Java, Trinidad, and São Tomé) produced for fine chocolate. It explores the evolution of the entire pattern of volatiles in relation to cocoa processing (raw, roasted, steamed, and ground beans). Advanced chemical fingerprinting (i.e., combined untargeted and targeted fingerprinting) with comprehensive two-dimensional gas chromatography coupled with mass spectrometry allows advanced pattern recognition for classification, discrimination, and sensory-quality characterization. The entire data set is analyzed for 595 reliable two-dimensional peak regions, including 130 known analytes and 13 potent odorants. Multivariate analysis with unsupervised exploration (principal component analysis) and simple supervised discrimination methods (Fisher ratios and linear regression trees) reveals informative patterns of similarities and differences and identifies characteristic compounds related to sample origin and manufacturing step.

  14. Reproducibility of EEG-fMRI results in a patient with fixation-off sensitivity.

    PubMed

    Formaggio, Emanuela; Storti, Silvia Francesca; Galazzo, Ilaria Boscolo; Bongiovanni, Luigi Giuseppe; Cerini, Roberto; Fiaschi, Antonio; Manganotti, Paolo

    2014-07-01

    Blood oxygenation level-dependent (BOLD) activation associated with interictal epileptiform discharges in a patient with fixation-off sensitivity (FOS) was studied using a combined electroencephalography-functional magnetic resonance imaging (EEG-fMRI) technique. An automatic approach for combined EEG-fMRI analysis and a subject-specific hemodynamic response function was used to improve general linear model analysis of the fMRI data. The EEG showed the typical features of FOS, with continuous epileptiform discharges during elimination of central vision by eye opening and closing and fixation; modification of this pattern was clearly visible and recognizable. During all 3 recording sessions EEG-fMRI activations indicated a BOLD signal decrease related to epileptiform activity in the parietal areas. This study can further our understanding of this EEG phenomenon and can provide some insight into the reliability of the EEG-fMRI technique in localizing the irritative zone.

  15. Distinguishing fiction from non-fiction with complex networks

    NASA Astrophysics Data System (ADS)

    Larue, David M.; Carr, Lincoln D.; Jones, Linnea K.; Stevanak, Joe T.

    2014-03-01

    Complex network measures are applied to networks constructed from English texts to demonstrate initial viability for textual analysis. Texts from novels and short stories obtained from Project Gutenberg and news stories obtained from NPR were selected. Unique word stems in a text are used as nodes in an associated unweighted, undirected network, with edges connecting words occurring within a certain number of words of each other somewhere in the text. Various combinations of complex network measures are computed for each text's network. Fisher's linear discriminant analysis is used to build a discriminant that optimizes the ability to separate the texts according to their genre. Success rates in the 70% range for correctly distinguishing fiction from non-fiction were obtained using edges defined within a four-word window and 400-word samples from 400 texts of each of the two genres, with combinations of measures such as the power-law exponents of degree distributions and clustering coefficients.
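
    A two-class Fisher linear discriminant of the kind used above can be sketched in a few lines. The feature values below are hypothetical stand-ins for per-text network measures (e.g., a power-law exponent and a clustering coefficient); this is a minimal sketch under those assumptions, not the authors' code.

```python
import numpy as np

def fisher_direction(X0, X1):
    """Two-class Fisher discriminant: direction maximizing between-class
    separation relative to pooled within-class scatter."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w)

# Hypothetical per-text measures: [power-law exponent, clustering coeff.]
rng = np.random.default_rng(0)
fiction = rng.normal([2.1, 0.30], 0.05, size=(50, 2))
nonfiction = rng.normal([2.4, 0.20], 0.05, size=(50, 2))

w = fisher_direction(fiction, nonfiction)
# Classify by projecting onto w and thresholding at the midpoint
threshold = ((fiction @ w).mean() + (nonfiction @ w).mean()) / 2
correct = (fiction @ w < threshold).sum() + (nonfiction @ w >= threshold).sum()
print(correct / 100)
```

    With well-separated toy clusters the projected classes barely overlap; real network measures are far noisier, which is consistent with the 70%-range success rates reported above.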

  16. The design and experiment of a novel ultrasonic motor based on the combination of bending modes.

    PubMed

    Yan, Jipeng; Liu, Yingxiang; Liu, Junkao; Xu, Dongmei; Chen, Weishan

    2016-09-01

    This paper presents a new type of linear ultrasonic motor that exploits the combination of two orthogonal bending vibration modes. The proposed ultrasonic motor consists of eight PZT ceramic plates and a metal beam comprising two cone-shaped horns and a cylindrical driving foot. Finite element analyses were performed to verify the working principle of the proposed motor: the mode shapes of the motor were obtained by modal analysis, and the elliptical trajectories of nodes on the driving foot were obtained by time-domain analysis. Based on these analyses, a prototype of the proposed motor was fabricated and measured, and its mechanical output characteristics were obtained experimentally. The maximum velocity of the proposed motor is 735 mm/s and the maximum thrust is 1.1 N. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. A note on the effects of viscosity on the stability of a trailing-line vortex

    NASA Technical Reports Server (NTRS)

    Duck, Peter W.; Khorrami, Mehdi R.

    1992-01-01

    The linear stability of the Batchelor (1964) vortex is examined with emphasis on new viscous modes recently found numerically by Khorrami (1991). Unlike the previously reported inviscid modes of instability, these modes are destabilized by viscosity and exhibit small growth rates at large Reynolds numbers. The analysis presented here uses a combination of asymptotic and numerical techniques. The results confirm the existence of the additional modes of instability due to viscosity.

  18. Lenslet array processors.

    PubMed

    Glaser, I

    1982-04-01

    By combining a lenslet array with masks, it is possible to obtain a noncoherent optical processor capable of computing generalized 2-D discrete linear transformations in parallel. We present here an analysis of such lenslet array processors (LAP). The effects of several errors, including optical aberrations, diffraction, vignetting, and geometrical and mask errors, are calculated, and guidelines for the optical design of LAP are derived. Using these results, both the ultimate and practical performance of LAP is compared with that of competing techniques.

  19. The cost of colorectal cancer according to the TNM stage.

    PubMed

    Mar, Javier; Errasti, Jose; Soto-Gordoa, Myriam; Mar-Barrutia, Gilen; Martinez-Llorente, José Miguel; Domínguez, Severina; García-Albás, Juan José; Arrospide, Arantzazu

    2017-02-01

    The aim of this study was to measure the cost of treatment of colorectal cancer in the Basque public health system according to clinical stage. We retrospectively collected demographic data, clinical data and resource use for a sample of 529 patients. For stages I to III the initial and follow-up costs were measured. The calculation of cost for stage IV combined generalized linear models to relate the cost to the duration of follow-up, based on parametric survival analysis. Unit costs were obtained from the analytical accounting system of the Basque Health Service. The sample included 110 patients with stage I, 171 with stage II, 158 with stage III and 90 with stage IV colorectal cancer. The initial total cost per patient was €8,644 for stage I, €12,675 for stage II and €13,034 for stage III. The main component was hospitalization cost. Calculated by extrapolation, mean survival for stage IV was 1.27 years; its average annual cost was €22,403, and €24,509 to death. The total annual cost of colorectal cancer extrapolated to the whole Spanish health system was €623.9 million. The economic burden of colorectal cancer is important and should be taken into account in decision-making. The combination of generalized linear models and survival analysis allows estimation of the cost of the metastatic stage. Copyright © 2017 AEC. Publicado por Elsevier España, S.L.U. All rights reserved.

  20. The response of multidegree-of-freedom systems with quadratic non-linearities to a harmonic parametric resonance

    NASA Astrophysics Data System (ADS)

    Nayfeh, A. H.

    1983-09-01

    An analysis is presented of the response of multidegree-of-freedom systems with quadratic non-linearities to a harmonic parametric excitation in the presence of an internal resonance of the combination type ω₃ ≈ ω₂ + ω₁, where the ωₙ are the linear natural frequencies of the system. In the case of a fundamental resonance of the third mode (i.e., Ω ≈ ω₃, where Ω is the frequency of the excitation), one can identify two critical values ζ₁ and ζ₂, with ζ₂ ⩾ ζ₁, of the amplitude F of the excitation. The value F = ζ₂ corresponds to the transition from stable to unstable solutions. When F < ζ₁, the motion decays to zero according to both the linear and non-linear theories. When F > ζ₂, the motion grows exponentially with time according to the linear theory, but the non-linearity limits the motion to a finite-amplitude steady state. The amplitude of the third mode, which is directly excited, is independent of F, whereas the amplitudes of the first and second modes, which are indirectly excited through the internal resonance, are functions of F. When ζ₁ ⩽ F ⩽ ζ₂, the motion decays or achieves a finite-amplitude steady state depending on the initial conditions according to the non-linear theory, whereas it decays to zero according to the linear theory. This is an example of subcritical instability. In the case of a fundamental resonance of either the first or second mode, the trivial response is the only possible steady state. When F ⩽ ζ₂, the motion decays to zero according to both theories. When F > ζ₂, the motion grows exponentially with time according to the linear theory but is aperiodic according to the non-linear theory. Experiments are being planned to check these theoretical results.

  1. Inference for binomial probability based on dependent Bernoulli random variables with applications to meta‐analysis and group level studies

    PubMed Central

    Bakbergenuly, Ilyas; Morgenthaler, Stephan

    2016-01-01

    We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose a bias correction for the arcsine transformation. Our simulations demonstrate that this bias correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. PMID:27192062
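
    The arcsine (angular) transformation discussed above is easy to illustrate. Below is a minimal sketch of fixed-effect pooling of proportions on the transformed scale; the study counts are invented, and the 1/(4n) variance formula assumes independent Bernoulli trials, which is exactly the assumption that clustering/overdispersion violates.

```python
import math

def arcsine_transform(p):
    """Variance-stabilizing arcsine (angular) transform of a proportion."""
    return math.asin(math.sqrt(p))

def inverse_arcsine(t):
    return math.sin(t) ** 2

def combine_proportions(counts):
    """Inverse-variance pooling on the arcsine scale.
    var(asin(sqrt(p_hat))) ~= 1/(4n) for independent trials, so each
    study gets weight 4n; intracluster correlation inflates the true
    variance, which is the source of the bias discussed above."""
    num = sum(4 * n * arcsine_transform(x / n) for x, n in counts)
    den = sum(4 * n for _, n in counts)
    return inverse_arcsine(num / den)

# Hypothetical studies: (events, sample size)
studies = [(12, 100), (30, 250), (9, 80)]
pooled = combine_proportions(studies)
print(round(pooled, 4))
```
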

  2. Combined solvent- and non-uniform temperature-programmed gradient liquid chromatography. I - A theoretical investigation.

    PubMed

    Gritti, Fabrice

    2016-11-18

    A new class of gradient liquid chromatography (GLC) is proposed and its performance is analyzed from a theoretical viewpoint. During the course of such gradients, both the solvent strength and the column temperature are changed simultaneously in time and space. The solvent and temperature gradients propagate along the chromatographic column at their own, independent linear velocities. This class of gradient is called combined solvent- and temperature-programmed gradient liquid chromatography (CST-GLC). General expressions for the retention time, the retention factor, and the temporal peak width of the analytes at elution in CST-GLC are derived for linear solvent strength (LSS) retention models, modified van't Hoff retention behavior, linear and non-distorted solvent gradients, and linear temperature gradients. In these conditions, the theory predicts that CST-GLC is equivalent to a unique, apparent dynamic solvent gradient: the apparent solvent gradient steepness is the sum of the solvent and temperature steepnesses, and the apparent solvent linear velocity is the reciprocal of the steepness-averaged sum of the reciprocals of the actual solvent and temperature linear velocities. The advantage of CST-GLC over conventional GLC is demonstrated for the resolution of protein digests (peptide mapping) when applying smooth, retained, and linear acetonitrile gradients in combination with a linear temperature gradient (from 20°C to 90°C) using 300 μm × 150 mm capillary columns packed with sub-2 μm particles. The benefit of CST-GLC is demonstrated when the temperature gradient propagates at the same velocity as the chromatographic speed. The experimental proof-of-concept for the realization of temperature ramps propagating at a finite and constant linear velocity is also briefly described. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Cesium and strontium loads into a combined sewer system from rainwater runoff.

    PubMed

    Kamei-Ishikawa, Nao; Yoshida, Daiki; Ito, Ayumi; Umita, Teruyuki

    2016-12-01

    In this study, combined sewage samples were taken over time in several rain events, and sanitary sewage samples were taken over time in dry weather, to calculate the Cs and Sr loads entering sewers from rainwater runoff. Cs and Sr in rainwater were present in particulate form at first flush, and the particulate Cs and Sr were mainly bound to inorganic suspended solids such as clay minerals in the combined sewage samples. In addition, multiple linear regression analysis showed that Cs and Sr loads from rainwater runoff could be estimated from the total amount of rainfall and the number of antecedent dry-weather days. The variation of the Sr load from rainwater to sewers was more sensitive to the total amount of rainfall and antecedent dry-weather days than that of the Cs load. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. A pooled analysis of sequential therapies with sorafenib and sunitinib in metastatic renal cell carcinoma.

    PubMed

    Stenner, Frank; Chastonay, Rahel; Liewen, Heike; Haile, Sarah R; Cathomas, Richard; Rothermundt, Christian; Siciliano, Raffaele D; Stoll, Susanna; Knuth, Alexander; Buchler, Tomas; Porta, Camillo; Renner, Christoph; Samaras, Panagiotis

    2012-01-01

    To evaluate the optimal sequence for the receptor tyrosine kinase inhibitors (rTKIs) sorafenib and sunitinib in metastatic renal cell cancer. We performed a retrospective analysis of patients who had received sequential therapy with both rTKIs and integrated these results into a pooled analysis of available data from other publications. Differences in median progression-free survival (PFS) for first- (PFS1) and second-line treatment (PFS2), and for the combined PFS (PFS1 plus PFS2) were examined using weighted linear regression. In the pooled analysis encompassing 853 patients, the median combined PFS for first-line sunitinib and 2nd-line sorafenib (SuSo) was 12.1 months compared with 15.4 months for the reverse sequence (SoSu; 95% CI for difference 1.45-5.12, p = 0.0013). Regarding first-line treatment, no significant difference in PFS1 was noted regardless of which drug was initially used (0.62 months average increase on sorafenib, 95% CI for difference -1.01 to 2.26, p = 0.43). In second-line treatment, sunitinib showed a significantly longer PFS2 than sorafenib (average increase 2.66 months, 95% CI 1.02-4.3, p = 0.003). The SoSu sequence translates into a longer combined PFS compared to the SuSo sequence. Predominantly the superiority of sunitinib regarding PFS2 contributed to the longer combined PFS in sequential use. Copyright © 2012 S. Karger AG, Basel.
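
    Weighted linear regression of study-level effect estimates, as used above for the PFS differences, reduces to ordinary least squares after scaling both sides by the square root of the weights. The numbers below are illustrative placeholders, not the pooled analysis data.

```python
import numpy as np

# Hypothetical study-level data: difference in median PFS (months)
# between sequences, weighted by study size.
diffs = np.array([2.8, 3.5, 4.1, 2.2, 3.9])   # illustrative values only
weights = np.array([120, 80, 200, 60, 150])    # e.g. patients per study

# Intercept-only weighted least squares estimates the weighted mean
# difference; scaling rows by sqrt(w) reduces WLS to ordinary lstsq.
X = np.ones((len(diffs), 1))
w_sqrt = np.sqrt(weights)
beta, *_ = np.linalg.lstsq(X * w_sqrt[:, None], diffs * w_sqrt, rcond=None)
pooled_diff = beta[0]
print(round(pooled_diff, 2))
```

    Adding covariate columns to `X` (e.g. treatment sequence) turns the same machinery into the between-sequence comparison described above.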

  5. Microhartree precision in density functional theory calculations

    NASA Astrophysics Data System (ADS)

    Gulans, Andris; Kozhevnikov, Anton; Draxl, Claudia

    2018-04-01

    To address ultimate precision in density functional theory calculations we employ the full-potential linearized augmented plane-wave + local-orbital (LAPW + lo) method and justify its usage as a benchmark method. LAPW + lo and two completely unrelated numerical approaches, the multiresolution analysis (MRA) and the linear combination of atomic orbitals, yield total energies of atoms with mean deviations of 0.9 and 0.2 μHa, respectively. Spectacular agreement with the MRA is reached also for total and atomization energies of the G2-1 set consisting of 55 molecules. With the example of α-iron we demonstrate the capability of LAPW + lo to reach μHa/atom precision also for periodic systems, which also allows for the distinction between the numerical precision and the accuracy of a given functional.

  6. VizieR Online Data Catalog: HARPS timeseries data for HD41248 (Jenkins+, 2014)

    NASA Astrophysics Data System (ADS)

    Jenkins, J. S.; Tuomi, M.

    2017-05-01

    We modeled the HARPS radial velocities of HD 41248 by adopting the analysis techniques and the statistical model applied in Tuomi et al. (2014, arXiv:1405.2016). This model contains Keplerian signals, a linear trend, a moving average component with exponential smoothing, and linear correlations with activity indices, namely BIS, FWHM, and the chromospheric activity S index. We applied our statistical model outlined above to the full data set of radial velocities for HD 41248, combining the previously published data in Jenkins et al. (2013ApJ...771...41J) with the newly published data in Santos et al. (2014, J/A+A/566/A35), giving rise to a total time series of 223 HARPS (Mayor et al. 2003Msngr.114...20M) velocities. (1 data file).

  7. Errors in Tsunami Source Estimation from Tide Gauges

    NASA Astrophysics Data System (ADS)

    Arcas, D.

    2012-12-01

    Linearity of tsunami waves in deep water can be assessed by comparing the flow speed u to the wave propagation speed √(gh). In real tsunami scenarios this evaluation becomes impractical owing to the absence of observational data on tsunami flow velocities in shallow water; consequently, the extent of validity of the linear regime in the ocean is unclear. Linearity is the fundamental assumption behind tsunami source inversion processes based on linear combinations of unit propagation runs from a deep-water propagation database (Gica et al., 2008). The primary tsunami elevation data for such inversions are usually provided by the National Oceanic and Atmospheric Administration (NOAA) deep-water tsunami detection systems known as DART. The use of tide gauge data for such inversions is more controversial because of the uncertainty of wave linearity at the depth of the tide gauge site. This study demonstrates the inaccuracies incurred in source estimation when tide gauge data are used in conjunction with a linear-combination procedure for tsunami source estimation.
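
    The linear-combination inversion described above can be sketched as an ordinary least-squares problem: observed elevations are modeled as a weighted sum of precomputed unit-source propagation runs. The Green's-function matrix and source weights below are synthetic placeholders, not actual DART or propagation-database values.

```python
import numpy as np

# Each column of G is a (hypothetical) elevation time series produced by
# one unit source; the inversion solves observed = G @ alpha for alpha.
rng = np.random.default_rng(1)
n_samples, n_units = 200, 3
G = rng.normal(size=(n_samples, n_units))         # unit-run responses
alpha_true = np.array([1.5, 0.0, 0.7])            # true source weights
d = G @ alpha_true + 0.01 * rng.normal(size=n_samples)  # noisy observations

# Linear least-squares source estimate
alpha_hat, *_ = np.linalg.lstsq(G, d, rcond=None)
print(np.round(alpha_hat, 2))
```

    When the recorded waves are no longer linear (as at a shallow tide gauge), the superposition d = G @ alpha itself breaks down, which is the error source the study quantifies.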

  8. Robust Combining of Disparate Classifiers Through Order Statistics

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2001-01-01

    Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in the performance of individual classifiers. Based on a mathematical modeling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum and, in general, the ith order statistic are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real-world data and standard public-domain data sets corroborate these findings.

  9. [Relation between Body Height and Combined Length of Manubrium and Mesosternum of Sternum Measured by CT-VRT in Southwest Han Population].

    PubMed

    Luo, Ying-zhen; Tu, Meng; Fan, Fei; Zheng, Jie-qian; Yang, Ming; Li, Tao; Zhang, Kui; Deng, Zhen-hua

    2015-06-01

    To establish the linear regression equation between body height and combined length of manubrium and mesosternum of the sternum measured by CT volume rendering technique (CT-VRT) in a southwest Han population. One hundred and sixty subjects, including 80 males and 80 females, were selected from the southwest Han population for routine CT-VRT (reconstruction thickness 1 mm) examination. The lengths of both manubrium and mesosternum were recorded, and the combined length of manubrium and mesosternum was taken as their algebraic sum. The sex-specific linear regression equations between the combined length of manubrium and mesosternum and the real body height of each subject were deduced. The sex-specific simple linear regression equations between the combined length of manubrium and mesosternum (x3) and body height (y) were established (male: y = 135.000 + 2.118 x3; female: y = 120.790 + 2.808 x3). Both equations showed statistical significance (P < 0.05) with a 100% predictive accuracy. CT-VRT is an effective method for measurement of the index of the sternum. The combined length of manubrium and mesosternum from CT-VRT can be used for body height estimation in the southwest Han population.
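
    The published equations can be applied directly. A minimal sketch, assuming both the combined sternum length and the body height are in centimetres (the record does not state the units):

```python
def estimated_height_cm(combined_length_cm, sex):
    """Body height from the combined manubrium + mesosternum length,
    using the sex-specific equations reported above
    (male: y = 135.000 + 2.118 x3; female: y = 120.790 + 2.808 x3)."""
    if sex == "male":
        return 135.000 + 2.118 * combined_length_cm
    if sex == "female":
        return 120.790 + 2.808 * combined_length_cm
    raise ValueError("sex must be 'male' or 'female'")

# Hypothetical combined length of 15 cm for a male subject
print(round(estimated_height_cm(15.0, "male"), 2))
```
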

  10. Pattern Recognition Approaches for Breast Cancer DCE-MRI Classification: A Systematic Review.

    PubMed

    Fusco, Roberta; Sansone, Mario; Filice, Salvatore; Carone, Guglielmo; Amato, Daniela Maria; Sansone, Carlo; Petrillo, Antonella

    2016-01-01

    We performed a systematic review of several pattern analysis approaches for classifying breast lesions using dynamic, morphological, and textural features in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Several machine learning approaches, namely artificial neural networks (ANN), support vector machines (SVM), linear discriminant analysis (LDA), tree-based classifiers (TC), and Bayesian classifiers (BC), and features used for classification are described. The findings of a systematic review of 26 studies are presented. The sensitivity and specificity are respectively 91 and 83 % for ANN, 85 and 82 % for SVM, 96 and 85 % for LDA, 92 and 87 % for TC, and 82 and 85 % for BC. The sensitivity and specificity are respectively 82 and 74 % for dynamic features, 93 and 60 % for morphological features, 88 and 81 % for textural features, 95 and 86 % for a combination of dynamic and morphological features, and 88 and 84 % for a combination of dynamic, morphological, and other features. LDA and TC have the best performance. A combination of dynamic and morphological features gives the best performance.

  11. Differential Entropy Preserves Variational Information of Near-Infrared Spectroscopy Time Series Associated With Working Memory

    PubMed Central

    Keshmiri, Soheil; Sumioka, Hidenubo; Yamazaki, Ryuji; Ishiguro, Hiroshi

    2018-01-01

    Neuroscience research shows a growing interest in the application of Near-Infrared Spectroscopy (NIRS) to the analysis and decoding of the brain activity of human subjects. Given the correlation observed between the Blood Oxygenation Level-Dependent (BOLD) responses exhibited by functional Magnetic Resonance Imaging (fMRI) time series and the hemoglobin oxy/deoxygenation captured by NIRS, linear models play a central role in these applications. This, in turn, results in the adoption of feature extraction strategies well suited to data that exhibit a high degree of linearity, namely the slope and the mean, as well as their combination, to summarize the informational content of NIRS time series. In this article, we demonstrate that these features are inefficient in capturing the variational information of NIRS data, limiting the reliability and adequacy of conclusions drawn from them. Alternatively, we propose the linear estimate of the differential entropy of these time series as a natural representation of such information. We provide evidence for our claim through comparative analysis of the application of these features to NIRS data from several working memory tasks as well as naturalistic conversational stimuli. PMID:29922144
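
    Under a Gaussian assumption, the differential entropy of a time series has the closed form h = ½ ln(2πeσ²), which depends only on the variance. The sketch below uses that closed form as one plausible reading of a "linear estimate" of differential entropy; it is an illustration, not the authors' exact estimator, and the two toy series are invented.

```python
import math

def gaussian_differential_entropy(samples):
    """h = 0.5 * ln(2*pi*e*var): differential entropy of the series under
    a Gaussian model. It grows with the signal's variability, which is
    the information that slope/mean features discard."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return 0.5 * math.log(2 * math.pi * math.e * var)

flat = [0.50, 0.51, 0.49, 0.50, 0.50, 0.51]      # low-variability series
variable = [0.2, 0.9, 0.1, 0.8, 0.3, 0.7]        # high-variability series
print(gaussian_differential_entropy(flat) <
      gaussian_differential_entropy(variable))
```
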

  12. Slope stability analysis using limit equilibrium method in nonlinear criterion.

    PubMed

    Lin, Hang; Zhong, Wenwen; Xiong, Wei; Tang, Wenyu

    2014-01-01

    In slope stability analysis, the limit equilibrium method is usually used to calculate the safety factor of a slope based on the Mohr-Coulomb criterion. However, the Mohr-Coulomb criterion is restricted in its description of rock mass. To overcome its shortcomings, this paper combines the Hoek-Brown criterion and the limit equilibrium method and proposes an equation for calculating the safety factor of a slope with the limit equilibrium method in the Hoek-Brown criterion through equivalent cohesive strength and friction angle. Moreover, this paper investigates the impact of the Hoek-Brown parameters on the safety factor of the slope, which reveals that there is a linear relation between equivalent cohesive strength and the weakening factor D. However, there are nonlinear relations between equivalent cohesive strength and the Geological Strength Index (GSI), the uniaxial compressive strength of intact rock σ_ci, and the intact rock parameter m_i. There is a nonlinear relation between the friction angle and all Hoek-Brown parameters. With the increase of D, the safety factor of the slope F decreases linearly; with the increase of GSI, F increases nonlinearly; when σ_ci is relatively small, the relation between F and σ_ci is nonlinear, but when σ_ci is relatively large, the relation is linear; with the increase of m_i, F decreases first and then increases.

  13. Slope Stability Analysis Using Limit Equilibrium Method in Nonlinear Criterion

    PubMed Central

    Lin, Hang; Zhong, Wenwen; Xiong, Wei; Tang, Wenyu

    2014-01-01

    In slope stability analysis, the limit equilibrium method is usually used to calculate the safety factor of a slope based on the Mohr-Coulomb criterion. However, the Mohr-Coulomb criterion is restricted in its description of rock mass. To overcome its shortcomings, this paper combines the Hoek-Brown criterion and the limit equilibrium method and proposes an equation for calculating the safety factor of a slope with the limit equilibrium method in the Hoek-Brown criterion through equivalent cohesive strength and friction angle. Moreover, this paper investigates the impact of the Hoek-Brown parameters on the safety factor of the slope, which reveals that there is a linear relation between equivalent cohesive strength and the weakening factor D. However, there are nonlinear relations between equivalent cohesive strength and the Geological Strength Index (GSI), the uniaxial compressive strength of intact rock σ_ci, and the intact rock parameter m_i. There is a nonlinear relation between the friction angle and all Hoek-Brown parameters. With the increase of D, the safety factor of the slope F decreases linearly; with the increase of GSI, F increases nonlinearly; when σ_ci is relatively small, the relation between F and σ_ci is nonlinear, but when σ_ci is relatively large, the relation is linear; with the increase of m_i, F decreases first and then increases. PMID:25147838

  14. A simplified calculation procedure for mass isotopomer distribution analysis (MIDA) based on multiple linear regression.

    PubMed

    Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio

    2016-10-01

    We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression, which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three subunits of glycine. We selected glycine labelled with two ¹³C atoms (¹³C₂-glycine) as the labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% of ¹³C₂-glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural-abundance RGGGLK peptide and 10 or 20% ¹³C₂-glycine at 1:1, 1:3 and 3:1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd.
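
    The regression idea can be sketched as follows: the observed isotopologue pattern is modeled as a linear combination of a natural-abundance basis pattern and a labelled basis pattern, and the mixing coefficients are recovered by least squares. The basis columns below are invented for illustration; real MIDA derives them from the isotopic composition of the precursor pool.

```python
import numpy as np

# Hypothetical isotopologue abundance patterns (rows = M+0..M+3) for a
# peptide with three glycines: column 0 = natural-abundance peptide,
# column 1 = newly synthesized peptide at a given 13C2-glycine enrichment.
basis = np.array([
    [0.90, 0.40],
    [0.08, 0.30],
    [0.02, 0.20],
    [0.00, 0.10],
])
fractional_synthesis = 0.25
# Synthetic "measured" spectrum: 75% old + 25% newly synthesized peptide
observed = basis @ np.array([1 - fractional_synthesis, fractional_synthesis])

# Multiple linear regression recovers the mixing fractions
coeffs, *_ = np.linalg.lstsq(basis, observed, rcond=None)
print(np.round(coeffs, 3))
```

    Fitting against basis columns computed for several candidate enrichments extends the same least-squares step to estimate the precursor pool enrichment as well.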

  15. Diagnostics for generalized linear hierarchical models in network meta-analysis.

    PubMed

    Zhao, Hong; Hodges, James S; Carlin, Bradley P

    2017-09-01

    Network meta-analysis (NMA) combines direct and indirect evidence comparing more than 2 treatments. Inconsistency arises when these 2 information sources differ. Previous work focuses on inconsistency detection, but little has been done on how to proceed after identifying inconsistency. The key issue is whether inconsistency changes an NMA's substantive conclusions. In this paper, we examine such discrepancies from a diagnostic point of view. Our methods seek to detect influential and outlying observations in NMA at a trial-by-arm level. These observations may have a large effect on the parameter estimates in NMA, or they may deviate markedly from other observations. We develop formal diagnostics for a Bayesian hierarchical model to check the effect of deleting any observation. Diagnostics are specified for generalized linear hierarchical NMA models and investigated for both published and simulated datasets. Results from our example dataset using either contrast- or arm-based models and from the simulated datasets indicate that the sources of inconsistency in NMA tend not to be influential, though results from the example dataset suggest that they are likely to be outliers. This mimics a familiar result from linear model theory, in which outliers with low leverage are not influential. Future extensions include incorporating baseline covariates and individual-level patient data. Copyright © 2017 John Wiley & Sons, Ltd.

  16. Some comparisons of complexity in dictionary-based and linear computational models.

    PubMed

    Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello

    2011-03-01

    Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks one may also adjust the parameters of the functions being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks is more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accurate approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators: the traditional linear ones and the so-called variable-basis types, which include neural networks and radial and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.
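
    The distinction drawn above is between adjusting only the coefficients of a fixed basis (a linear approximator) and also adjusting the basis functions' internal parameters (neural networks). A minimal sketch of the former, using a monomial basis as a stand-in for the orthogonal polynomials mentioned:

```python
import numpy as np

# Linear (fixed-basis) approximation: only the coefficients c of the
# fixed basis {1, x, x^2, ..., x^5} are adjusted, via least squares.
x = np.linspace(-1, 1, 50)
y = np.sin(2 * x)                        # target function

B = np.vander(x, 6, increasing=True)     # fixed basis evaluated at x
c, *_ = np.linalg.lstsq(B, y, rcond=None)
max_err = float(np.max(np.abs(B @ c - y)))
print(round(max_err, 4))
```

    A neural network would instead also tune parameters inside the basis functions themselves (e.g. the weights in tanh(w·x + b)), which is what makes its best-approximation operator lose the uniqueness and continuity properties noted above.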

  17. Efficiency of circulant diallels via mixed models in the selection of papaya genotypes resistant to foliar fungal diseases.

    PubMed

    Vivas, M; Silveira, S F; Viana, A P; Amaral, A T; Cardoso, D L; Pereira, M G

    2014-07-02

    Diallel crossing methods provide information regarding the performance of genitors between themselves and in their hybrid combinations. However, with a large number of parents, the number of hybrid combinations that can be obtained and evaluated becomes limiting. One option to manage the number of parents involved is the adoption of circulant diallels. However, information is lacking regarding diallel analysis using mixed models. This study aimed to evaluate the efficacy of the linear mixed model method to estimate, for resistance to foliar fungal diseases, components of general and specific combining ability in a circulant diallel with different s values. Subsequently, 50 diallels were simulated for each s value, and the correlations and estimates of the combining abilities of the different diallel combinations were analyzed. The circulant diallel method using mixed modeling was effective in the classification of genitors regarding their combining abilities relative to complete diallels. The number of crosses in which each genitor is involved in the circulant diallel and the estimated heritability affect the combining ability estimates. With three crosses per parent, it is possible to obtain good concordance (correlation above 0.8) between the combining ability estimates.

  18. Power and Sample Size Calculations for Testing Linear Combinations of Group Means under Variance Heterogeneity with Applications to Meta and Moderation Analyses

    ERIC Educational Resources Information Center

    Shieh, Gwowen; Jan, Show-Li

    2015-01-01

    The general formulation of a linear combination of population means permits a wide range of research questions to be tested within the context of ANOVA. However, it has been stressed in many research areas that the homogeneous variances assumption is frequently violated. To accommodate the heterogeneity of variance structure, the…
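
The core quantities behind such heteroscedastic tests of a linear combination of means can be sketched directly: the point estimate, its standard error under unequal variances, and the Welch-Satterthwaite approximate degrees of freedom. The group summaries below are invented for illustration; the paper's exact power formulas are not reproduced here.

```python
import math

def welch_combination(means, sds, ns, coefs):
    """Estimate, standard error, and Satterthwaite df for sum(c_i * mean_i)."""
    est = sum(c * m for c, m in zip(coefs, means))
    # Per-group variance contributions c_i^2 * s_i^2 / n_i (no pooling).
    comps = [(c ** 2) * (s ** 2) / n for c, s, n in zip(coefs, sds, ns)]
    se = math.sqrt(sum(comps))
    # Welch-Satterthwaite approximation to the degrees of freedom.
    df = sum(comps) ** 2 / sum(v ** 2 / (n - 1) for v, n in zip(comps, ns))
    return est, se, df

# Example contrast: group 1 versus the average of groups 2 and 3.
est, se, df = welch_combination(
    means=[10.0, 8.0, 7.0], sds=[2.0, 3.0, 1.5], ns=[15, 12, 20],
    coefs=[1.0, -0.5, -0.5],
)
```

The resulting t-statistic est/se would then be referred to a t distribution with df degrees of freedom rather than the pooled-variance ANOVA degrees of freedom.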

  19. Toward Control of Universal Scaling in Critical Dynamics

    DTIC Science & Technology

    2016-01-27

    A program that aims to synergistically combine two powerful and very successful theories for the non-linear stochastic dynamics of cooperative multi-component systems. Principal investigators: Uwe C. Täuber, Michel Pleimling, Daniel J. Stilwell.

  20. Developing a Measure of General Academic Ability: An Application of Maximal Reliability and Optimal Linear Combination to High School Students' Scores

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.; Raykov, Tenko; AL-Qataee, Abdullah Ali

    2015-01-01

    This article is concerned with developing a measure of general academic ability (GAA) for high school graduates who apply to colleges, as well as with the identification of optimal weights of the GAA indicators in a linear combination that yields a composite score with maximal reliability and maximal predictive validity, employing the framework of…

  1. Multilayer neural networks for reduced-rank approximation.

    PubMed

    Diamantaras, K I; Kung, S Y

    1994-01-01

    This paper is developed in two parts. First, the authors formulate the solution to the general reduced-rank linear approximation problem, relaxing the invertibility assumption of the input autocorrelation matrix used by previous authors. The authors' treatment unifies linear regression, Wiener filtering, full-rank approximation, auto-association networks, SVD, and principal component analysis (PCA) as special cases. The authors' analysis also shows that two-layer linear neural networks with a reduced number of hidden units, trained with the least-squares error criterion, produce weights that correspond to the generalized singular value decomposition of the input-teacher cross-correlation matrix and the input data matrix. As a corollary, the linear two-layer backpropagation model with a reduced hidden layer extracts an arbitrary linear combination of the generalized singular vector components. Second, the authors investigate artificial neural network models for the solution of the related generalized eigenvalue problem. By introducing and utilizing the extended concept of deflation (originally proposed for the standard eigenvalue problem), the authors find that a sequential version of linear BP can extract the exact generalized eigenvector components. The advantage of this approach is that it is easier to update the model structure by adding one more unit, or pruning one or more units, when the application requires it. An alternative approach for extracting the exact components is to use a set of lateral connections among the hidden units, trained in such a way as to enforce orthogonality among the upper- and lower-layer weights. The authors call this the lateral orthogonalization network (LON) and show via theoretical analysis, and verify via simulation, that the network extracts the desired components. The advantage of the LON-based model is that it can be applied in a parallel fashion, so that the components are extracted concurrently. 
Finally, the authors show the application of their results to the solution of the identification problem for systems whose excitation has a non-invertible autocorrelation matrix. Previous identification methods usually rely on the invertibility assumption of the input autocorrelation and therefore cannot be applied to this case.
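
The reduced-rank idea can be sketched with the classical Eckart-Young result: the best rank-k approximation of a matrix in Frobenius norm keeps only its top-k singular triplets, which is the quantity a two-layer linear network with k hidden units converges to in the full-rank-input case. The matrix below is random and purely illustrative.

```python
import numpy as np

def best_rank_k(M, k):
    """Truncated SVD: the Frobenius-optimal rank-k approximation of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 6))      # stand-in for a cross-correlation matrix
M2 = best_rank_k(M, 2)
# The approximation error equals the energy in the discarded singular values.
err = np.linalg.norm(M - M2)
```

A linear "bottleneck" network trained by least squares reaches the same subspace; the SVD simply computes it directly.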

  2. A geometric approach to non-linear correlations with intrinsic scatter

    NASA Astrophysics Data System (ADS)

    Pihajoki, Pauli

    2017-12-01

    We propose a new mathematical model for (n - k)-dimensional non-linear correlations with intrinsic scatter in n-dimensional data. The model is based on Riemannian geometry and is naturally symmetric with respect to the measured variables and invariant under coordinate transformations. We combine the model with a Bayesian approach for estimating the parameters of the correlation relation and the intrinsic scatter. A side benefit of the approach is that censored and truncated data sets and independent, arbitrary measurement errors can be incorporated. We also derive analytic likelihoods for the typical astrophysical use case of linear relations in n-dimensional Euclidean space. We pay particular attention to the case of linear regression in two dimensions and compare our results to existing methods. Finally, we apply our methodology to the well-known MBH-σ correlation between the mass of a supermassive black hole in the centre of a galactic bulge and the corresponding bulge velocity dispersion. The main result of our analysis is that the most likely slope of this correlation is ∼6 for the data sets used, rather than the values in the range of ∼4-5 typically quoted in the literature for these data.
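
The symmetry issue the abstract raises is easy to demonstrate in two dimensions: ordinary least squares minimizes vertical offsets and is not symmetric in x and y, whereas an orthogonal-distance (total least squares) fit treats both variables alike. The sketch below is a standard SVD-based TLS line fit on synthetic data, not the paper's Riemannian model.

```python
import numpy as np

def tls_line(x, y):
    """(slope, intercept) of the orthogonal-distance best-fit line."""
    # Center the data; the line direction is the leading right singular vector.
    X = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    dx, dy = Vt[0]                    # principal direction of the point cloud
    slope = dy / dx
    return slope, y.mean() - slope * x.mean()

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 200)
y = 2.0 * x + 1.0 + 0.3 * rng.standard_normal(x.size)   # true line y = 2x + 1
slope, intercept = tls_line(x, y)
```

Swapping x and y in tls_line gives the reciprocal slope of the same line, a symmetry plain OLS does not have.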

  3. Koopman Operator Framework for Time Series Modeling and Analysis

    NASA Astrophysics Data System (ADS)

    Surana, Amit

    2018-01-01

    We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties without requiring the explicit knowledge of the generative model. We also introduce different notions of distances on the space of such model forms which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance in conjunction with classical machine learning techniques to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in power grid application.
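
The step of identifying Koopman spectral properties "directly from data" is commonly done with dynamic mode decomposition: fit the best linear map between successive snapshots and read off its spectrum. The 2-D rotation system below is a toy stand-in for data from an unknown generative model; it is not the paper's application.

```python
import numpy as np

def dmd_eigs(X):
    """Eigenvalues of the best-fit linear map A with x_{t+1} ~ A x_t."""
    X0, X1 = X[:, :-1], X[:, 1:]     # paired snapshot matrices
    A = X1 @ np.linalg.pinv(X0)      # least-squares linear propagator
    return np.linalg.eigvals(A)

# Generate snapshots from a planar rotation by angle theta per step.
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x = np.array([1.0, 0.0])
snapshots = [x]
for _ in range(50):
    x = R @ x
    snapshots.append(x)
X = np.array(snapshots).T            # 2 x 51 snapshot matrix
eigs = dmd_eigs(X)
```

For this system the recovered eigenvalues lie on the unit circle at angles ±theta, i.e. a pure oscillation: exactly the kind of spectral invariant that can serve as a feature for classification or as the basis of a linear forecaster.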

  4. Combining ultrasonography and noncontrast helical computerized tomography to evaluate Holmium laser lithotripsy

    PubMed Central

    Mi, Jia; Li, Jie; Zhang, Qinglu; Wang, Xing; Liu, Hongyu; Cao, Yanlu; Liu, Xiaoyan; Sun, Xiao; Shang, Mengmeng; Liu, Qing

    2016-01-01

    Abstract The purpose of the study was to establish a mathematical model for correlating the combination of ultrasonography and noncontrast helical computerized tomography (NCHCT) with the total energy of Holmium laser lithotripsy. In this study, from March 2013 to February 2014, 180 patients with single urinary calculus were examined using ultrasonography and NCHCT before Holmium laser lithotripsy. The calculus location and size, acoustic shadowing (AS) level, twinkling artifact intensity (TAI), and CT value were all documented. The total energy of lithotripsy (TEL) and the calculus composition were also recorded postoperatively. Data were analyzed using Spearman's rank correlation coefficient, with the SPSS 17.0 software package. Multiple linear regression was also used for further statistical analysis. A significant difference in the TEL was observed between renal calculi and ureteral calculi (r = –0.565, P < 0.001), and there was a strong correlation between the calculus size and the TEL (r = 0.675, P < 0.001). The difference in the TEL between the calculi with and without AS was highly significant (r = 0.325, P < 0.001). The CT value of the calculi was significantly correlated with the TEL (r = 0.386, P < 0.001). A correlation between the TAI and TEL was also observed (r = 0.391, P < 0.001). Multiple linear regression analysis revealed that the location, size, and TAI of the calculi were related to the TEL, and the location and size were statistically significant predictors (adjusted r2 = 0.498, P < 0.001). A mathematical model correlating the combination of ultrasonography and NCHCT with TEL was established; this model may provide a foundation to guide the use of energy in Holmium laser lithotripsy. The TEL can be estimated by the location, size, and TAI of the calculus. PMID:27930563

  5. The development of a combined mathematical model to forecast the incidence of hepatitis E in Shanghai, China.

    PubMed

    Ren, Hong; Li, Jian; Yuan, Zheng-An; Hu, Jia-Yu; Yu, Yan; Lu, Yi-Han

    2013-09-08

    Sporadic hepatitis E has become an important public health concern in China. Accurate forecasting of the incidence of hepatitis E is needed to better plan future medical needs. Few mathematical models can be used because hepatitis E morbidity data have both linear and nonlinear patterns. We developed a combined mathematical model using an autoregressive integrated moving average model (ARIMA) and a back propagation neural network (BPNN) to forecast the incidence of hepatitis E. The morbidity data of hepatitis E in Shanghai from 2000 to 2012 were retrieved from the China Information System for Disease Control and Prevention. The ARIMA-BPNN combined model was trained with 144 months of morbidity data from January 2000 to December 2011, validated with 12 months of data from January 2012 to December 2012, and then employed to forecast hepatitis E incidence from January 2013 to December 2013 in Shanghai. Residual analysis, Root Mean Square Error (RMSE), normalized Bayesian Information Criterion (BIC), and stationary R square methods were used to compare the goodness-of-fit among ARIMA models. The Bayesian regularization back-propagation algorithm was used to train the network. The mean error rate (MER) was used to assess the validity of the combined model. A total of 7,489 hepatitis E cases were reported in Shanghai from 2000 to 2012. Goodness-of-fit (stationary R2=0.531, BIC=-4.768, Ljung-Box Q statistics=15.59, P=0.482) and parameter estimates were used to determine the best-fitting model as ARIMA (0,1,1)×(0,1,1)12. Predicted morbidity values in 2012 from the best-fitting ARIMA model and actual morbidity data from 2000 to 2011 were used to further construct the combined model. The MER of the ARIMA model and the ARIMA-BPNN combined model were 0.250 and 0.176, respectively. The forecasted incidence of hepatitis E in 2013 was 0.095 to 0.372 per 100,000 population. There was a seasonal variation with a peak during January-March and a nadir during August-October. 
Time series analysis suggested a seasonal pattern of hepatitis E morbidity in Shanghai, China. An ARIMA-BPNN combined model was used to fit the linear and nonlinear patterns of time series data, and accurately forecast hepatitis E infections.
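
The mean error rate (MER) used above to compare the models can be sketched as mean absolute error divided by the mean of the observed series; this is one common definition, and the paper's exact formula may differ. The monthly incidence values below are invented for illustration.

```python
def mean_error_rate(actual, predicted):
    """MER: mean absolute error normalized by the mean observed value."""
    n = len(actual)
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    return mae / (sum(actual) / n)

# Hypothetical observed vs forecast monthly incidence (per 100,000).
observed = [0.30, 0.25, 0.20, 0.15, 0.12, 0.10]
forecast = [0.28, 0.27, 0.18, 0.16, 0.13, 0.09]
mer = mean_error_rate(observed, forecast)
```

A lower MER on held-out months (0.176 for the combined model versus 0.250 for ARIMA alone in the study) indicates the better forecaster.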

  6. Stress Induced in Periodontal Ligament under Orthodontic Loading (Part II): A Comparison of Linear Versus Non-Linear Fem Study.

    PubMed

    Hemanth, M; Deoli, Shilpi; Raghuveer, H P; Rani, M S; Hegde, Chatura; Vedavathi, B

    2015-09-01

    Simulation of the periodontal ligament (PDL) using non-linear finite element method (FEM) analysis gives better insight into the biology of tooth movement. The stresses in the PDL were evaluated for intrusion and lingual root torque using non-linear properties. A three-dimensional (3D) FEM model of the maxillary incisors was generated using SolidWorks modeling software. Stresses in the PDL were evaluated for intrusive and lingual root torque movements by 3D FEM using ANSYS software, and the results of linear and non-linear analyses were compared. For intrusive and lingual root torque movements with linear properties, the distribution of stress over the PDL was within the range of optimal stress values as proposed by Lee, but exceeded the force system given by Proffit as the optimum forces for orthodontic tooth movement. When the same force load was applied in the non-linear analysis, stresses were higher than in the linear analysis and were beyond the optimal stress range proposed by Lee for both intrusive and lingual root torque. To obtain the same stress as in the linear analysis, iterations were performed using non-linear properties and the force level was reduced. This shows that the force level required for non-linear analysis is lower than that for linear analysis.
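
The linear-versus-nonlinear contrast can be illustrated with a 1-D toy: a stiffening spring f(u) = k*u + b*u**3 solved for displacement by Newton iteration, against the linear model f = k*u. All numbers are illustrative and unrelated to the PDL material data; the sketch only shows why a nonlinear analysis requires iteration and yields a different response for the same load.

```python
def solve_newton(force, k=10.0, b=40.0, tol=1e-10):
    """Displacement of a stiffening spring k*u + b*u**3 = force (Newton's method)."""
    u = force / k                    # linear solution as the starting guess
    for _ in range(50):
        residual = k * u + b * u ** 3 - force
        if abs(residual) < tol:
            break
        u -= residual / (k + 3 * b * u ** 2)   # tangent-stiffness update
    return u

force = 5.0
u_linear = force / 10.0              # one linear solve
u_nonlinear = solve_newton(force)    # smaller displacement: the spring stiffens
```

Conversely, reaching the linear displacement with the nonlinear model would require a larger force, which mirrors the iteration-and-reduce-force procedure described in the abstract (there, matching stresses rather than displacements).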

  7. Interpretability of Multivariate Brain Maps in Linear Brain Decoding: Definition, and Heuristic Quantification in Multivariate Analysis of MEG Time-Locked Effects.

    PubMed

    Kia, Seyed Mostafa; Vega Pons, Sandro; Weisz, Nathan; Passerini, Andrea

    2016-01-01

    Brain decoding is a popular multivariate approach for hypothesis testing in neuroimaging. Linear classifiers are widely employed in the brain decoding paradigm to discriminate among experimental conditions. Then, the derived linear weights are visualized in the form of multivariate brain maps to further study spatio-temporal patterns of underlying neural activities. It is well known that the brain maps derived from weights of linear classifiers are hard to interpret because of high correlations between predictors, low signal to noise ratios, and the high dimensionality of neuroimaging data. Therefore, improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present, there is no formal definition for interpretability of multivariate brain maps. As a consequence, there is no quantitative measure for evaluating the interpretability of different brain decoding methods. In this paper, first, we present a theoretical definition of interpretability in brain decoding; we show that the interpretability of multivariate brain maps can be decomposed into their reproducibility and representativeness. Second, as an application of the proposed definition, we exemplify a heuristic for approximating the interpretability in multivariate analysis of evoked magnetoencephalography (MEG) responses. Third, we propose to combine the approximated interpretability and the generalization performance of the brain decoding into a new multi-objective criterion for model selection. Our results, for the simulated and real MEG data, show that optimizing the hyper-parameters of the regularized linear classifier based on the proposed criterion results in more informative multivariate brain maps. 
More importantly, the presented definition provides the theoretical background for quantitative evaluation of interpretability, and hence, facilitates the development of more effective brain decoding algorithms in the future.

  8. Interpretability of Multivariate Brain Maps in Linear Brain Decoding: Definition, and Heuristic Quantification in Multivariate Analysis of MEG Time-Locked Effects

    PubMed Central

    Kia, Seyed Mostafa; Vega Pons, Sandro; Weisz, Nathan; Passerini, Andrea

    2017-01-01

    Brain decoding is a popular multivariate approach for hypothesis testing in neuroimaging. Linear classifiers are widely employed in the brain decoding paradigm to discriminate among experimental conditions. Then, the derived linear weights are visualized in the form of multivariate brain maps to further study spatio-temporal patterns of underlying neural activities. It is well known that the brain maps derived from weights of linear classifiers are hard to interpret because of high correlations between predictors, low signal to noise ratios, and the high dimensionality of neuroimaging data. Therefore, improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present, there is no formal definition for interpretability of multivariate brain maps. As a consequence, there is no quantitative measure for evaluating the interpretability of different brain decoding methods. In this paper, first, we present a theoretical definition of interpretability in brain decoding; we show that the interpretability of multivariate brain maps can be decomposed into their reproducibility and representativeness. Second, as an application of the proposed definition, we exemplify a heuristic for approximating the interpretability in multivariate analysis of evoked magnetoencephalography (MEG) responses. Third, we propose to combine the approximated interpretability and the generalization performance of the brain decoding into a new multi-objective criterion for model selection. Our results, for the simulated and real MEG data, show that optimizing the hyper-parameters of the regularized linear classifier based on the proposed criterion results in more informative multivariate brain maps. 
More importantly, the presented definition provides the theoretical background for quantitative evaluation of interpretability, and hence, facilitates the development of more effective brain decoding algorithms in the future. PMID:28167896

  9. Problems in the Study of lineaments

    NASA Astrophysics Data System (ADS)

    Anokhin, Vladimir; Kholmyanskii, Michael

    2015-04-01

    The study of linear objects in the upper crust, called lineaments, led at one time to major scientific results: the discovery of the planetary regmatic network, the birth of new tectonic concepts, and the establishment of new prospecting criteria for mineral deposits. At present, however, lineaments are studied too little for such a promising research direction. Lineament geomorphology faces a number of problems. 1. Terminological problems. The lineament theme still has no generally accepted terminological base; different scientists interpret even the definition of a lineament differently. We offer an expanded definition: lineaments are linear features of the Earth's crust, expressed by linear landforms, linear geological forms, or linear anomalies of physical fields, which may follow one another and are associated with faults. The term "lineament" is not identical to the term "fault", but a lineament is always a reasonable suspicion of a fault, and this suspicion is justified in most cases. A lineament may include only objects that can, at least presumably, be attributed to deep processes. Specialists in the lineament theme can overcome the terminological problems if they jointly create a common terminological database. 2. Methodological problems. The procedure of manual selection of lineaments mainly consists in drawing straight line segments along the axes of linear morphostructures on a cartographic base. 
The subjectivity of manual selection can be reduced by following a few simple rules: - choice of an optimal projection, scale, and quality of the cartographic base; - selection of the optimal type of linear objects under study; - establishment of boundary conditions for the allocation of lineaments (minimum length, maximum bending, minimum length-to-width ratio, etc.); - allocation of a large number of lineaments, for representative sampling and to reduce the influence of random errors; - ranking of lineaments: fine lines (rank 3) are combined into larger lineaments of rank 2, which in turn combine into large lineaments of rank 1; - correlation of the resulting pattern of lineaments with the pattern of already known faults in the study area; - separate allocation of lineaments by several experts, with correlation of the resulting schemes and creation of a common scheme. The problem of computer-based lineament allocation is not yet solved. Existing programs for lineament analysis are not so perfect that one can rely on them completely: in any of them, by changing the initial parameters, one can obtain lineament patterns of any desired configuration, and the probability of heavy, hardly recognizable systematic errors is high. In any case, computer-generated lineament patterns should be subjected to expert examination after their creation. 3. Interpretive problems. To minimize distortion of the results of lineament analysis, it is advisable to follow a few techniques and rules: - use of visualization techniques, in particular rose diagrams presenting the azimuths and lengths of the selected lineaments; - consistent downscaling of the analysis, with a preliminary analysis of a larger area that includes the area of interest and its surroundings; - use of the available information on the location of already known faults and other linear tectonic objects of the study area; - comparison of the lineament scheme with the schemes of other authors, which can reduce the element of subjectivity in the schemes. 
The study of lineaments is a very promising direction of geomorphology and tectonics. The challenges facing the lineament theme are solvable; to solve them, professionals should meet and talk to each other. The results of further work in this direction may exceed expectations.

  10. Optimizing methods for linking cinematic features to fMRI data.

    PubMed

    Kauttonen, Janne; Hlushchuk, Yevhen; Tikka, Pia

    2015-04-15

    One of the challenges of naturalistic neuroscience using movie-viewing experiments is how to interpret observed brain activations in relation to the multiplicity of time-locked stimulus features. As previous studies have shown less inter-subject synchronization across viewers of random video footage than of story-driven films, new methods need to be developed for the analysis of less story-driven contents. To optimize the linkage between our fMRI data, collected during viewing of the deliberately non-narrative silent film 'At Land' by Maya Deren (1944), and its annotated content, we combined the method of elastic-net regularization with model-driven linear regression and the well-established data-driven independent component analysis (ICA) and inter-subject correlation (ISC) methods. In the linear regression analysis, both IC and region-of-interest (ROI) time-series were fitted with the time-series of a total of 36 binary-valued and one real-valued tactile annotation of film features. Elastic-net regularization and cross-validation were applied in the ordinary least-squares linear regression in order to avoid over-fitting due to the multicollinearity of regressors; the results were compared against both partial least-squares (PLS) regression and un-regularized full-model regression. A non-parametric permutation testing scheme was applied to evaluate the statistical significance of the regression. We found statistically significant correlation between the annotation model and 9 out of 40 ICs. The regression analysis was also repeated for a large set of cubic ROIs covering the grey matter. Both IC- and ROI-based regression analyses revealed activations in parietal and occipital regions, with additional smaller clusters in the frontal lobe. Furthermore, we found elastic-net-based regression more sensitive than PLS and un-regularized regression, since it detected a larger number of significant ICs and ROIs. 
Along with the ISC ranking methods, our regression analysis proved a feasible method for ordering the ICs based on their functional relevance to the annotated cinematic features. The novelty of our method, in comparison to hypothesis-driven manual pre-selection and observation of some individual regressors biased by choice, lies in applying a data-driven approach to all content features simultaneously. We found the combination of regularized regression and ICA especially useful when analyzing fMRI data obtained using a non-narrative movie stimulus with a large set of complex and correlated features. Copyright © 2015. Published by Elsevier Inc.
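
Elastic-net regularization, the tool used above to tame multicollinear regressors, can be sketched with naive coordinate descent on the objective (1/2n)||y - Xw||^2 + alpha*(l1_ratio*||w||_1 + (1 - l1_ratio)/2*||w||_2^2). The data, hyper-parameters, and the collinear feature pair below are illustrative, not from the study.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * max(abs(z) - t, 0.0)

def elastic_net(X, y, alpha=0.1, l1_ratio=0.5, n_iter=200):
    """Coordinate-descent solver for the elastic-net linear regression."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding feature j's current contribution.
            r = y - X @ w + X[:, j] * w[j]
            z = X[:, j] @ r / n
            denom = X[:, j] @ X[:, j] / n + alpha * (1 - l1_ratio)
            w[j] = soft_threshold(z, alpha * l1_ratio) / denom
    return w

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 5))
X[:, 1] = X[:, 0] + 0.01 * rng.standard_normal(100)     # nearly collinear pair
y = 3.0 * X[:, 0] - 2.0 * X[:, 4] + 0.1 * rng.standard_normal(100)
w = elastic_net(X, y)
```

Unlike pure L1, the ridge part of the penalty lets the two collinear columns share the weight instead of arbitrarily dropping one, which is precisely the stability property that motivates elastic nets for correlated stimulus annotations.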

  11. Quantitative evaluation of phonetograms in the case of functional dysphonia.

    PubMed

    Airainer, R; Klingholz, F

    1993-06-01

    Based on the clinical laryngeal findings, scale values were assigned to vocally trained and vocally untrained persons suffering from different types of functional dysphonia. The different types of dysphonia, from manifest hypofunctional to extreme hyperfunctional dysphonia, were classified by means of this scale. In addition, the subjects' phonetograms were measured and approximated by three ellipses, which made it possible to define phonetogram parameters. Selected phonetogram parameters were combined into linear combinations for the purpose of phonetographic evaluation. The linear combinations were chosen to bring the phonetographic and clinical evaluations into correspondence as accurately as possible. Different linear combinations were necessary for male and female singers and nonsingers. Based on the reclassification of 71 patients and the new classification of 89 patients, it was possible to grade the types of functional dysphonia by means of computer-aided phonetogram evaluation with a clinically acceptable error rate. The method proved to be an important supplement to the conventional diagnostics of functional dysphonia.

  12. An experimentally validated model for geometrically nonlinear plucking-based frequency up-conversion in energy harvesting

    NASA Astrophysics Data System (ADS)

    Kathpalia, B.; Tan, D.; Stern, I.; Erturk, A.

    2018-01-01

    It is well known that plucking-based frequency up-conversion can enhance the power output in piezoelectric energy harvesting by enabling cyclic free vibration at the fundamental bending mode of the harvester even for very low excitation frequencies. In this work, we present a geometrically nonlinear plucking-based framework for frequency up-conversion in piezoelectric energy harvesting under quasistatic excitations associated with low-frequency stimuli such as walking and similar rigid body motions. Axial shortening of the plectrum is essential to enable plucking excitation, which requires a nonlinear framework relating the plectrum parameters (e.g. overlap length between the plectrum and harvester) to the overall electrical power output. Von Kármán-type geometrically nonlinear deformation of the flexible plectrum cantilever is employed to relate the overlap length between the flexible (nonlinear) plectrum and the stiff (linear) harvester to the transverse quasistatic tip displacement of the plectrum, and thereby the tip load on the linear harvester in each plucking cycle. By combining the nonlinear plectrum mechanics and linear harvester dynamics with two-way electromechanical coupling, the electrical power output is obtained directly in terms of the overlap length. Experimental case studies and validations are presented for various overlap lengths and a set of electrical load resistance values. Further analysis results are reported regarding the combined effects of plectrum thickness and overlap length on the plucking force and harvested power output. The experimentally validated nonlinear plectrum-linear harvester framework proposed herein can be employed to design and optimize frequency up-conversion by properly choosing the plectrum parameters (geometry, material, overlap length, etc) as well as the harvester parameters.

  13. Utilizing population controls in rare-variant case-parent association tests.

    PubMed

    Jiang, Yu; Satten, Glen A; Han, Yujun; Epstein, Michael P; Heinzen, Erin L; Goldstein, David B; Allen, Andrew S

    2014-06-05

    There is great interest in detecting associations between human traits and rare genetic variation. To address the low power implicit in single-locus tests of rare genetic variants, many rare-variant association approaches attempt to accumulate information across a gene, often by taking linear combinations of single-locus contributions to a statistic. Using the right linear combination is key: an optimal test will up-weight true causal variants, down-weight neutral variants, and correctly assign the direction of effect for causal variants. Here, we propose a procedure that exploits data from population controls to estimate the linear combination to be used in a case-parent trio rare-variant association test. Specifically, we estimate the linear combination by comparing population control allele frequencies with allele frequencies in the parents of affected offspring. These estimates are then used to construct a rare-variant transmission disequilibrium test (rvTDT) in the case-parent data. Because the rvTDT is conditional on the parents' data, using parental data in estimating the linear combination does not affect the validity or asymptotic distribution of the rvTDT. Using simulation, we show that our new population-control-based rvTDT can dramatically improve power over rvTDTs that do not use population control information across a wide variety of genetic architectures. It also remains valid under population stratification. We apply the approach to a cohort of epileptic encephalopathy (EE) trios and find that dominant (or additive) inherited rare variants are unlikely to play a substantial role within EE genes previously identified through de novo mutation studies. Copyright © 2014 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
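
The weighting idea can be sketched as follows: for each rare variant, compare its allele frequency among the parents of affected offspring with that in population controls; variants enriched in parents receive positive weight, depleted ones negative, so the burden statistic sums signed evidence. The frequencies and the log-ratio weighting below are illustrative, not the paper's exact estimator.

```python
import math

def estimate_weights(parent_freqs, control_freqs, eps=1e-4):
    """Signed per-variant weights from parent vs control allele frequencies."""
    # Log frequency ratio (with a small stabilizer eps for rare counts):
    # positive for enriched variants, negative for depleted ones.
    return [math.log((p + eps) / (q + eps))
            for p, q in zip(parent_freqs, control_freqs)]

# Hypothetical frequencies: an enriched, a neutral, and a depleted variant.
parents  = [0.020, 0.001, 0.0005]
controls = [0.005, 0.001, 0.0020]
weights = estimate_weights(parents, controls)

def burden(genotypes):
    """Weighted linear combination of per-variant genotype contributions."""
    return sum(w * g for w, g in zip(weights, genotypes))
```

Because the weights are estimated from data external to the transmissions themselves, conditioning the subsequent TDT on the parents' genotypes keeps the test valid, as the abstract notes.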

  14. Fast radiative transfer models for retrieval of cloud properties in the back-scattering region: application to DSCOVR-EPIC sensor

    NASA Astrophysics Data System (ADS)

    Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego

    2017-04-01

    In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. 
Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.
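
    The PCA-based acceleration above can be illustrated with a hedged, self-contained sketch: synthetic "spectra" built from a few smooth modes (not EPIC radiances, and none of the DOME/MOME machinery) are compressed to three principal components and reconstructed, showing why a handful of components can replace hundreds of per-wavelength computations.

```python
# Hedged sketch: PCA compression of synthetic "spectra". All data are
# synthetic; no EPIC/DOME specifics are modelled.
import numpy as np

rng = np.random.default_rng(0)
# 500 synthetic spectra over 200 wavelengths, built from 3 smooth modes + noise
wl = np.linspace(0.0, 1.0, 200)
modes = np.stack([np.sin(2 * np.pi * k * wl) for k in (1, 2, 3)])
spectra = rng.normal(size=(500, 3)) @ modes + 1e-3 * rng.normal(size=(500, 200))

mean = spectra.mean(axis=0)
X = spectra - mean
# principal components via SVD of the centred data
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 3                                   # keep 3 components
approx = U[:, :k] * s[:k] @ Vt[:k] + mean

rel_err = np.linalg.norm(approx - spectra) / np.linalg.norm(spectra)
print(f"relative reconstruction error with {k} PCs: {rel_err:.2e}")
```

    Here three components capture essentially all of the variance, so expensive downstream evaluations would only need to be run in the reduced space.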

  15. 3D inelastic analysis methods for hot section components

    NASA Technical Reports Server (NTRS)

    Dame, L. T.; Chen, P. C.; Hartle, M. S.; Huang, H. T.

    1985-01-01

    The objective is to develop analytical tools capable of economically evaluating the cyclic time-dependent plasticity which occurs in hot section engine components in areas of strain concentration resulting from the combination of both mechanical and thermal stresses. Three models were developed. A simple model performs time-dependent inelastic analysis using the power-law creep equation. The second model is the classical model of Professors Walter Haisler and David Allen of Texas A&M University. The third model is the unified model of Bodner, Partom, et al. All models were customized for linear variation of loads and temperatures, with all material properties and constitutive models being temperature dependent.
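
    A minimal sketch of the first model's ingredient, a Norton power-law creep equation integrated under a linearly varying load, is given below; the constants A and n and the load ramp are illustrative assumptions, not values from the report.

```python
# Hedged sketch: time-dependent inelastic strain from a Norton power-law
# creep equation, d(eps)/dt = A * sigma**n, with stress varying linearly in
# time. Constants A and n are assumed, not taken from the report.
import numpy as np

A, n = 1e-12, 4.0                   # assumed material constants
t = np.linspace(0.0, 100.0, 1001)   # hours
sigma = 50.0 + 0.5 * t              # MPa, linear load ramp

# forward-Euler time integration of the creep strain
eps = np.zeros_like(t)
dt = t[1] - t[0]
for i in range(1, len(t)):
    eps[i] = eps[i - 1] + A * sigma[i - 1] ** n * dt

print(f"creep strain after {t[-1]:.0f} h: {eps[-1]:.3e}")
```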

  16. A new state space model for the NASA/JPL 70-meter antenna servo controls

    NASA Technical Reports Server (NTRS)

    Hill, R. E.

    1987-01-01

    A control axis referenced model of the NASA/JPL 70-m antenna structure is combined with the dynamic equations of servo components to produce a comprehensive state variable (matrix) model of the coupled system. An interactive Fortran program for generating the linear system model and computing its salient parameters is described. Results are produced in state variable, block diagram, and factored transfer function forms to facilitate design and analysis by classical as well as modern control methods.
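
    The state variable and factored transfer function forms mentioned above can be sketched for a toy second-order servo loop (illustrative matrices only, not the 70-m antenna model): the poles come from the eigenvalues of A, and the transfer function is evaluated as H(s) = C (sI - A)^{-1} B + D.

```python
# Hedged sketch: a toy second-order servo state-space model; matrices are
# illustrative, not the NASA/JPL antenna model.
import numpy as np

wn, zeta = 2.0, 0.5                     # assumed natural frequency, damping
A = np.array([[0.0, 1.0],
              [-wn**2, -2.0 * zeta * wn]])
B = np.array([[0.0], [wn**2]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def H(s):
    # transfer function H(s) = C (sI - A)^{-1} B + D, evaluated numerically
    return (C @ np.linalg.inv(s * np.eye(2) - A) @ B + D)[0, 0]

poles = np.linalg.eigvals(A)            # poles of the factored form
dc_gain = H(1e-9)                       # H(0) for this normalization is 1
print("poles:", poles)
print("DC gain:", dc_gain)
```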

  17. Passenger comfort during terminal-area flight maneuvers. M.S. Thesis.

    NASA Technical Reports Server (NTRS)

    Schoonover, W. E., Jr.

    1976-01-01

    A series of flight experiments was conducted to obtain passenger subjective responses to closely controlled and repeatable flight maneuvers. In 8 test flights, reactions were obtained from 30 passenger subjects to a wide range of terminal-area maneuvers, including descents, turns, decelerations, and combinations thereof. Analysis of the passenger rating variance indicated that the objective of a repeatable flight passenger environment was achieved. Multiple linear regression models developed from the test data were used to define maneuver motion boundaries for specified degrees of passenger acceptance.
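
    A multiple linear regression of the kind described can be sketched on synthetic data; the predictors (bank angle, deceleration) and the coefficients here are made up for illustration, not taken from the thesis.

```python
# Hedged sketch: ordinary least-squares multiple regression on synthetic
# "passenger ratings". Predictors and coefficients are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 240
bank = rng.uniform(0.0, 30.0, n)          # bank angle, deg (made-up)
decel = rng.uniform(0.0, 0.2, n)          # deceleration, g (made-up)
rating = 1.0 + 0.08 * bank + 9.0 * decel + rng.normal(0.0, 0.3, n)

# least-squares fit of rating = b0 + b1*bank + b2*decel
X = np.column_stack([np.ones(n), bank, decel])
coef, *_ = np.linalg.lstsq(X, rating, rcond=None)
print("fitted coefficients [intercept, bank, decel]:", np.round(coef, 3))
```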

  18. High Frequency Excitation for Cavity Flow Control: Combined Experiments and Linear Stability Analysis

    DTIC Science & Technology

    2009-06-30

    Astronautics Journal 18, 1959 (1970). 3 D. Sahoo, A. M. Annaswamy, and F. Alvi, "Active store trajectory control in supersonic cavities using microjets and...and A. F. Ghoneim, "Numerical simulation of the convective instability in a dump combustor", American Institute of Aeronautics and Astronautics Journal...29, 911 (1991). 6 P. C. Kriesels, M. C. A. M. Peters, A. Hirschberg, A. P. J. Wijnands, A. Iafrati, G. Riccardi, R. Piva, and J. C. Bruggeman

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohanty, Subhasish; Majumdar, Saurindranath

    Irradiation creep plays a major role in the structural integrity of the graphite components in high temperature gas cooled reactors. Finite element procedures combined with a suitable irradiation creep model can be used to simulate the time-integrated structural integrity of complex shapes, such as the reactor core graphite reflector and fuel bricks. In the present work a comparative study was undertaken to understand the effect of linear and nonlinear irradiation creep on results of finite element based stress analysis. Numerical results were generated through finite element simulations of a typical graphite reflector.

  20. Development of the triplet singularity for the analysis of wings and bodies in supersonic flow

    NASA Technical Reports Server (NTRS)

    Woodward, F. A.

    1981-01-01

    A supersonic triplet singularity was developed which eliminates internal waves generated by panels having supersonic edges. The triplet is a linear combination of source and vortex distributions which gives directional properties to the perturbation flow field surrounding the panel. The theoretical development of the triplet singularity is described together with its application to the calculation of surface pressures on wings and bodies. Examples are presented comparing the results of the new method with other supersonic methods and with experimental data.

  1. A Combined Finite-Element/Discrete-Particle Analysis of a Side-Vent-Channel-Based Concept for Improved Blast-Survivability of Light Tactical Vehicles

    DTIC Science & Technology

    2013-01-01

    design of side-vent-channels. The results obtained confirmed the beneficial effects of the side-vent-channels in reducing the blast momentum, although the extent of these effects is relatively small (3...products against the surrounding medium is associated with exchange of linear momentum and various energy components (e.g. potential, thermal

  2. Application of Mathematical Signal Processing Techniques to Mission Systems. (l’Application des techniques mathematiques du traitement du signal aux systemes de conduite des missions)

    DTIC Science & Technology

    1999-11-01

    represents the linear time invariant (LTI) response of the combined analysis/synthesis system while the second represents the aliasing introduced into...effectively to implement voice scrambling systems based on time-frequency permutation. The most general form of such a system is shown in Fig. 22 where...

  3. Extracting falsifiable predictions from sloppy models.

    PubMed

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.
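
    The many-decade sensitivity spread that defines sloppiness can be shown on a toy sum-of-exponentials model (an assumed example, not one of the paper's systems-biology models): the eigenvalues of J^T J for closely spaced rate constants span several decades.

```python
# Hedged illustration of "sloppiness": eigenvalues of J^T J for a toy
# sum-of-exponentials model span many decades, so some parameter
# combinations are far better constrained than others.
import numpy as np

t = np.linspace(0.1, 5.0, 50)
k = np.array([1.0, 1.3, 1.7, 2.2])   # assumed, closely spaced rate constants

# Jacobian of y(t) = sum_i exp(-k_i t) with respect to each k_i
J = np.stack([-t * np.exp(-ki * t) for ki in k], axis=1)
evals = np.sort(np.linalg.eigvalsh(J.T @ J))[::-1]

decades = np.log10(evals[0] / evals[-1])
print("eigenvalues of J^T J:", evals)
print(f"spectrum spans about {decades:.1f} decades")
```

    The soft directions are the ones along which linear uncertainty estimates become unreliable, as the abstract warns.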

  4. Investigation of empirical damping laws for the space shuttle

    NASA Technical Reports Server (NTRS)

    Bernstein, E. L.

    1973-01-01

    An analysis of dynamic test data from vibration testing of a number of aerospace vehicles was made to develop an empirical structural damping law. A systematic attempt was made to fit dissipated energy/cycle to combinations of all dynamic variables. The best-fit laws for bending, torsion, and longitudinal motion are given, with error bounds. A discussion and estimate are made of error sources. Programs are developed for predicting equivalent linear structural damping coefficients and finding the response of nonlinearly damped structures.

  5. Diagnosis of Enzyme Inhibition Using Excel Solver: A Combined Dry and Wet Laboratory Exercise

    ERIC Educational Resources Information Center

    Dias, Albino A.; Pinto, Paula A.; Fraga, Irene; Bezerra, Rui M. F.

    2014-01-01

    In enzyme kinetic studies, linear transformations of the Michaelis-Menten equation, such as the Lineweaver-Burk double-reciprocal transformation, present some constraints. The linear transformation distorts the experimental error and the relationship between "x" and "y" axes; consequently, linear regression of transformed data…
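
    The distortion introduced by the double-reciprocal transformation can be sketched numerically: synthetic Michaelis-Menten data with additive error on v are fitted both via Lineweaver-Burk and by direct nonlinear least squares (here a simple grid search standing in for Excel Solver); the kinetic constants are illustrative.

```python
# Hedged sketch: Lineweaver-Burk (double-reciprocal) fit versus a direct
# nonlinear fit of v = Vmax*S/(Km + S). Data are synthetic, with additive
# error on v, which the reciprocal transformation distorts.
import numpy as np

rng = np.random.default_rng(2)
Vmax, Km = 10.0, 2.0                          # "true" values for the data
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
v = Vmax * S / (Km + S) + rng.normal(0.0, 0.2, S.size)

# Lineweaver-Burk: 1/v = (Km/Vmax)(1/S) + 1/Vmax
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax_lb, Km_lb = 1.0 / intercept, slope / intercept

# direct nonlinear least squares via a simple grid search on (Vmax, Km)
def sse(p):
    Vm, K = p
    return np.sum((v - Vm * S / (K + S)) ** 2)

grid = [(Vm, K) for Vm in np.linspace(5.0, 15.0, 201)
                for K in np.linspace(0.5, 5.0, 181)]
Vmax_nl, Km_nl = min(grid, key=sse)

print(f"Lineweaver-Burk fit: Vmax={Vmax_lb:.2f}, Km={Km_lb:.2f}")
print(f"direct fit:          Vmax={Vmax_nl:.2f}, Km={Km_nl:.2f}")
```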

  6. Differentially pumped dual linear quadrupole ion trap mass spectrometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owen, Benjamin C.; Kenttamaa, Hilkka I.

    The present disclosure provides a new tandem mass spectrometer and methods of using the same for analyzing charged particles. The differentially pumped dual linear quadrupole ion trap mass spectrometer of the present disclosure includes a combination of two linear quadrupole ion trap (LQIT) mass spectrometers with differentially pumped vacuum chambers.

  7. A penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography.

    PubMed

    Shang, Shang; Bai, Jing; Song, Xiaolei; Wang, Hongkai; Lau, Jaclyn

    2007-01-01

    The conjugate gradient method has proven efficient for nonlinear optimization problems on large-dimensional data. In this paper, a penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography (FMT) is presented. The algorithm combines the linear conjugate gradient method and the nonlinear conjugate gradient method based on a restart strategy, in order to exploit the advantages of the two kinds of conjugate gradient methods and compensate for their disadvantages. A quadratic penalty method is adopted to enforce a nonnegativity constraint and reduce the ill-posedness of the problem. Simulation studies show that the presented algorithm is accurate, stable, and fast. It has a better performance than the conventional conjugate gradient-based reconstruction algorithms. It offers an effective approach to reconstruct fluorochrome information for FMT.
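
    The linear conjugate-gradient building block, with a quadratic (Tikhonov-style) penalty added to reduce ill-posedness, can be sketched on a small synthetic system; this illustrates only that component, not the authors' full FMT reconstruction algorithm or their specific nonnegativity penalty.

```python
# Hedged sketch: linear CG on the penalized normal equations
# (A^T A + mu*I) x = A^T b of a small synthetic system. This is the linear
# building block only, not the paper's combined FMT algorithm.
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(40, 20))
x_true = np.abs(rng.normal(size=20))
b = A @ x_true + 0.01 * rng.normal(size=40)

mu = 1e-3                                # assumed penalty weight
M = A.T @ A + mu * np.eye(20)            # SPD system matrix
rhs = A.T @ b

# standard linear conjugate gradient iteration
x = np.zeros(20)
r = rhs - M @ x
p = r.copy()
for _ in range(200):
    Mp = M @ p
    alpha = (r @ r) / (p @ Mp)
    x += alpha * p
    r_new = r - alpha * Mp
    if np.linalg.norm(r_new) < 1e-10:
        break
    p = r_new + (r_new @ r_new) / (r @ r) * p
    r = r_new

rel = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("relative reconstruction error:", rel)
```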

  8. [Analysis of stress in periodontal ligament of the maxillary first molar on distal movement by nonlinear finite element method].

    PubMed

    Dong, Jing; Zhang, Zhe-chen; Zhou, Guo-liang

    2015-06-01

    To analyze the stress distribution in the periodontal ligament of the maxillary first molar during distal movement with nonlinear finite element analysis, to compare it with the result of linear finite element analysis, and thereby to provide biomechanical evidence for clinical application. The 3-D finite element model, including a maxillary first molar, periodontal ligament, alveolar bone, cancellous bone, cortical bone and a buccal tube, was built up using Mimics, Geomagic, ProE and Ansys Workbench. The material of the periodontal ligament was set as a nonlinear material and a linear elastic material, respectively. Loads of different combinations were applied to simulate the clinical situation of distalizing the maxillary first molar. In the nonlinear finite element model there were channels of low stress in the peak distribution of the von Mises equivalent stress and the compressive stress of the periodontal ligament. The peak of the von Mises equivalent stress was lower when Mt/F − Mr/F ≈ 2; the peak of the compressive stress was lower when Mt/F ≈ Mr/F. In the linear finite element model the stress in the periodontal ligament was higher and more abruptly distributed, and there were no channels of low stress in the peak distribution. There are channels in which the stress of the periodontal ligament is lower, and this low-stress condition should be satisfied by the applied M/F during the course of distalizing the maxillary first molar.

  9. Differential Dynamic Engagement within 24 SH3 Domain: Peptide Complexes Revealed by Co-Linear Chemical Shift Perturbation Analysis

    PubMed Central

    Stollar, Elliott J.; Lin, Hong; Davidson, Alan R.; Forman-Kay, Julie D.

    2012-01-01

    There is increasing evidence for the functional importance of multiple dynamically populated states within single proteins. However, peptide binding by protein-protein interaction domains, such as the SH3 domain, has generally been considered to involve the full engagement of the peptide to the binding surface with minimal dynamics, and simple methods to determine dynamics at the binding surface for multiple related complexes have not been described. We have used NMR spectroscopy combined with isothermal titration calorimetry to comprehensively examine the extent of engagement to the yeast Abp1p SH3 domain for 24 different peptides. Over one quarter of the domain residues display co-linear chemical shift perturbation (CCSP) behavior, in which the position of a given chemical shift in a complex is co-linear with the same chemical shift in the other complexes, providing evidence that each complex exists as a unique, rapidly inter-converting dynamic ensemble. The extent to which the specificity-determining sub-surface of AbpSH3 is engaged, as judged by CCSP analysis, correlates with structural and thermodynamic measurements as well as with functional data, revealing the basis for significant structural and functional diversity among the related complexes. Thus, CCSP analysis can distinguish peptide complexes that may appear identical in terms of general structure and percent peptide occupancy but have significant local binding differences across the interface, affecting their ability to transmit conformational change across the domain and resulting in functional differences. PMID:23251481

  10. Mean-square state and parameter estimation for stochastic linear systems with Gaussian and Poisson noises

    NASA Astrophysics Data System (ADS)

    Basin, M.; Maldonado, J. J.; Zendejo, O.

    2016-07-01

    This paper proposes new mean-square filter and parameter estimator design for linear stochastic systems with unknown parameters over linear observations, where unknown parameters are considered as combinations of Gaussian and Poisson white noises. The problem is treated by reducing the original problem to a filtering problem for an extended state vector that includes parameters as additional states, modelled as combinations of independent Gaussian and Poisson processes. The solution to this filtering problem is based on the mean-square filtering equations for incompletely polynomial states confused with Gaussian and Poisson noises over linear observations. The resulting mean-square filter serves as an identifier for the unknown parameters. Finally, a simulation example shows effectiveness of the proposed mean-square filter and parameter estimator.
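
    The state-augmentation idea, treating the unknown parameter as an additional state, can be sketched with a scalar system and a standard discrete Kalman filter; only the Gaussian part is modelled (the Poisson noise and the paper's specific mean-square filter equations are not reproduced), and all constants are illustrative.

```python
# Hedged sketch: an unknown constant parameter theta in
# x_{k+1} = a*x_k + theta + w_k is appended to the state and estimated with
# a standard discrete Kalman filter over linear observations of x.
import numpy as np

rng = np.random.default_rng(4)
a, theta_true = 0.9, 0.5
F = np.array([[a, 1.0],
              [0.0, 1.0]])        # augmented state z = [x, theta]
Hm = np.array([1.0, 0.0])         # observe x only
q, r = 1e-3, 1e-2                 # process / measurement noise variances

# simulate the true system
z = np.array([0.0, theta_true])
ys = []
for _ in range(400):
    z = F @ z
    z[0] += rng.normal(0.0, np.sqrt(q))
    ys.append(Hm @ z + rng.normal(0.0, np.sqrt(r)))

# Kalman filter on the augmented state: theta is estimated as a state
zh, P = np.zeros(2), np.eye(2)
Q = np.diag([q, 0.0])             # theta is constant: no process noise
for y in ys:
    zh, P = F @ zh, F @ P @ F.T + Q          # predict
    S = Hm @ P @ Hm + r                      # innovation variance
    K = P @ Hm / S                           # Kalman gain
    zh = zh + K * (y - Hm @ zh)              # update
    P = (np.eye(2) - np.outer(K, Hm)) @ P

print(f"estimated theta: {zh[1]:.3f} (true {theta_true})")
```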

  11. [Comparison of film-screen combinations with contrast detail diagram and interactive image analysis. 2: Linear assessment of grey scale ranges with interactive image analysis].

    PubMed

    Stamm, G; Eichbaum, G; Hagemann, G

    1997-09-01

    The following three screen-film combinations were compared: a) a combination of anticrossover film and UV-light emitting screens, b) a combination of blue-light emitting screens and film, and c) a conventional green fluorescing screen-film combination. Radiographs of a specially designed plexiglass phantom (0.2 x 0.2 x 0.12 m3) with bar patterns of lead and plaster and of air, respectively were obtained using the following parameters: 12 pulse generator, 0.6 mm focus size, 4.7 mm aluminum pre-filter, a grid with 40 lines/cm (12:1) and a focus-detector distance of 1.15 m. Image analysis was performed using an IBAS system and a Zeiss Kontron computer. Display conditions were the following: display distance 0.12 m, a vario film objective 35/70 (Zeiss), a video camera tube with a PbO photocathode, 625 lines (Siemens Heimann), an IBAS image matrix of 512 x 512 pixels with a resolution of 7 lines/mm, the projected matrix area was 5000 microns2. Grey scale ranges were measured on a line perpendicular to the grouped bar patterns. The difference between the maximum and minimum density value served as signal. The spatial resolution of the detector system was measured when the signal value was three times higher than the standard deviation of the means of multiple density measurements. The results showed considerable advantages of the two new screen-film combinations as compared to the conventional screen-film combination. The result was contradictory to the findings with pure visual assessment of thresholds (part I) that had found no differences. The authors concluded that (automatic) interactive image analysis algorithms serve as an objective measure and are specifically advantageous when small differences in image quality are to be evaluated.

  12. Exploration of computational methods for classification of movement intention during human voluntary movement from single trial EEG.

    PubMed

    Bai, Ou; Lin, Peter; Vorbach, Sherry; Li, Jiang; Furlani, Steve; Hallett, Mark

    2007-12-01

    To explore effective combinations of computational methods for the prediction of movement intention preceding the production of self-paced right and left hand movements from single trial scalp electroencephalogram (EEG). Twelve naïve subjects performed self-paced movements consisting of three key strokes with either hand. EEG was recorded from 128 channels. The exploration was performed offline on single trial EEG data. We proposed that a successful computational procedure for classification would consist of spatial filtering, temporal filtering, feature selection, and pattern classification. A systematic investigation was performed with combinations of spatial filtering using principal component analysis (PCA), independent component analysis (ICA), common spatial patterns analysis (CSP), and surface Laplacian derivation (SLD); temporal filtering using power spectral density estimation (PSD) and discrete wavelet transform (DWT); pattern classification using linear Mahalanobis distance classifier (LMD), quadratic Mahalanobis distance classifier (QMD), Bayesian classifier (BSC), multi-layer perceptron neural network (MLP), probabilistic neural network (PNN), and support vector machine (SVM). A robust multivariate feature selection strategy using a genetic algorithm was employed. The combinations of spatial filtering using ICA and SLD, temporal filtering using PSD and DWT, and classification methods using LMD, QMD, BSC and SVM provided higher performance than those of other combinations. Utilizing one of the better combinations of ICA, PSD and SVM, the discrimination accuracy was as high as 75%. Further feature analysis showed that beta band EEG activity of the channels over right sensorimotor cortex was most appropriate for discrimination of right and left hand movement intention. Effective combinations of computational methods provide possible classification of human movement intention from single trial EEG. 
Such a method could be the basis for a potential brain-computer interface based on human natural movement, which might reduce the requirement of long-term training. Effective combinations of computational methods can classify human movement intention from single trial EEG with reasonable accuracy.
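
    One of the stage combinations named above, PCA for spatial reduction followed by a linear Mahalanobis distance classifier (LMD), can be sketched on synthetic two-class data standing in for EEG features; the dimensions and class separation are made up.

```python
# Hedged toy version of one pipeline stage combination: PCA reduction
# followed by a linear Mahalanobis distance classifier, on synthetic
# two-class data standing in for EEG features.
import numpy as np

rng = np.random.default_rng(5)
n, d = 200, 32                          # trials per class, channels (synthetic)
left = rng.normal(0.0, 1.0, (n, d)) + 0.8
right = rng.normal(0.0, 1.0, (n, d)) - 0.8
X = np.vstack([left, right])
y = np.array([0] * n + [1] * n)

# spatial reduction: PCA down to 5 components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T

# linear Mahalanobis distance classifier: shared covariance, class means
icov = np.linalg.inv(np.cov(Z.T))
mus = [Z[y == c].mean(axis=0) for c in (0, 1)]

def classify(z):
    return int(np.argmin([(z - m) @ icov @ (z - m) for m in mus]))

acc = np.mean([classify(z) == yi for z, yi in zip(Z, y)])
print(f"training accuracy: {acc:.2f}")
```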

  13. GNSS triple-frequency geometry-free and ionosphere-free track-to-track ambiguities

    NASA Astrophysics Data System (ADS)

    Wang, Kan; Rothacher, Markus

    2015-06-01

    During the last few years, more and more GNSS satellites have become available sending signals on three or even more frequencies. Examples are the GPS Block IIF and the Galileo In-Orbit-Validation (IOV) satellites. Various investigations have been performed to make use of the increasing number of frequencies to find a compromise between eliminating different error sources and minimizing the noise level, including investigations of the triple-frequency geometry-free (GF) and ionosphere-free (IF) linear combinations, which eliminate all the geometry-related errors and the first-order term of the ionospheric delays. In contrast to the double-difference GF and IF ambiguity resolution, the resolution of the so-called track-to-track GF and IF ambiguities between two tracks of a satellite observed by the same station only requires one receiver and one satellite. Most of the remaining errors like receiver and satellite delays (electronics, cables, etc.) are eliminated, if they are not changing rapidly in time, and the noise level is theoretically reduced by a factor of the square root of two compared to double-differences. This paper presents first results concerning track-to-track ambiguity resolution using triple-frequency GF and IF linear combinations based on data from the Multi-GNSS Experiment (MGEX) from April 29 to May 9, 2012 and from December 23 to December 29, 2012. This includes triple-frequency phase and code observations with different combinations of receiver tracking modes.
The results show that the combined track-to-track ambiguities of the best two triple-frequency GF and IF linear combinations can be resolved for the Galileo frequency triplet E1, E5b and E5a: more than 99.6% of the fractional ambiguities for the best linear combination lie within ± 0.03 cycles, and more than 98.8% of those for the second-best linear combination lie within ± 0.2 cycles. The fractional parts of the ambiguities for the GPS frequency triplet L1, L2 and L5 are more disturbed by errors such as the uncalibrated Phase Center Offsets (PCOs) and Phase Center Variations (PCVs), which have not been considered. The best two GF and IF linear combinations between tracks are helpful for detecting problems in data and receivers. Furthermore, resolving the track-to-track ambiguities helps to connect the single-receiver ambiguities on the normal-equation level and to improve ambiguity resolution.
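
    The defining constraints of a triple-frequency GF and IF combination a·L1 + b·L2 + c·L3 (in metres) are that the coefficients sum to zero (geometry-free) and that a/f1² + b/f2² + c/f3² = 0 (first-order ionosphere-free). A hedged sketch solves for such coefficients for the Galileo triplet E1/E5b/E5a under the arbitrary normalization c = 1; the paper's specific "best" combinations are not reproduced.

```python
# Hedged sketch: coefficients of a triple-frequency combination that is
# geometry-free (a + b + c = 0) and first-order ionosphere-free
# (a/f1^2 + b/f2^2 + c/f3^2 = 0), for the Galileo E1/E5b/E5a triplet.
import numpy as np

f = np.array([1575.42, 1207.14, 1176.45])   # E1, E5b, E5a carrier freq. (MHz)

# two linear constraints; fix c = 1 (arbitrary) and solve 2x2 for a, b
A = np.array([[1.0, 1.0],
              [1.0 / f[0]**2, 1.0 / f[1]**2]])
rhs = -np.array([1.0, 1.0 / f[2]**2])
a, b = np.linalg.solve(A, rhs)
coeffs = np.array([a, b, 1.0])

print("coefficients (a, b, c):", coeffs)
print("geometry residual:", coeffs.sum())
print("ionosphere residual:", (coeffs / f**2).sum())
```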

  14. Player's success prediction in rugby union: From youth performance to senior level placing.

    PubMed

    Fontana, Federico Y; Colosio, Alessandro L; Da Lozzo, Giorgio; Pogliaghi, Silvia

    2017-04-01

    The study questioned if and to what extent specific anthropometric and functional characteristics measured in youth draft camps can accurately predict subsequent career progression in rugby union. Original research. Anthropometric and functional characteristics of 531 male players (U16) were retrospectively analysed in relation to senior level team representation at age 21-24. Players were classified as International (Int: National team and international clubs) or National (Nat: 1st, 2nd and other divisions and dropout). Multivariate analysis of variance (one-way MANOVA) tested differences between Int and Nat, along a combination of anthropometric (body mass, height, body fat, fat-free mass) and functional variables (SJ, CMJ, t15m, t30m, VO2max). A discriminant function (DF) was determined to predict group assignment based on the linear combination of variables that best discriminates the groups. Correct level assignment was expressed as % hit rate. A combination of anthropometric and functional characteristics reflects future level assignment (Int vs. Nat). Players' success can be accurately predicted (hit rate=81% and 77% for Int and Nat respectively) by a DF that combines anthropometric and functional variables as measured at ∼15 years of age, percent body fat and speed being the most influential predictors of group stratification. Within a group of 15-year-olds with exceptional physical characteristics, future players' success can be predicted using a linear combination of anthropometric and functional variables, among which a lower percent body fat and higher speed over a 15-m sprint provide the most important predictors of the highest career success. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
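
    A two-group linear discriminant function of the kind described can be sketched with Fisher's construction on synthetic data; the two predictors (percent body fat and a sprint-time stand-in) and their distributions are invented for illustration, not the study's measurements.

```python
# Hedged sketch: a two-group Fisher linear discriminant function assigning
# synthetic "players" to groups. Predictors and distributions are made up.
import numpy as np

rng = np.random.default_rng(6)
n = 120
# columns: percent body fat, 15-m sprint time (s) -- invented distributions
intl = np.column_stack([rng.normal(11.0, 2.0, n), rng.normal(2.45, 0.08, n)])
natl = np.column_stack([rng.normal(14.0, 2.0, n), rng.normal(2.60, 0.08, n)])
X = np.vstack([intl, natl])
y = np.array([1] * n + [0] * n)          # 1 = International, 0 = National

# Fisher discriminant direction: w = Sw^{-1} (mu1 - mu0)
mu1, mu0 = intl.mean(axis=0), natl.mean(axis=0)
Sw = np.cov(intl.T) + np.cov(natl.T)
w = np.linalg.solve(Sw, mu1 - mu0)
thresh = w @ (mu1 + mu0) / 2.0           # midpoint decision threshold
pred = (X @ w > thresh).astype(int)

hit_rate = (pred == y).mean()
print(f"hit rate: {hit_rate:.0%}")
```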

  15. Temporal dynamic of malaria in a suburban area along the Niger River.

    PubMed

    Sissoko, Mahamadou Soumana; Sissoko, Kourane; Kamate, Bourama; Samake, Yacouba; Goita, Siaka; Dabo, Abdoulaye; Yena, Mama; Dessay, Nadine; Piarroux, Renaud; Doumbo, Ogobara K; Gaudart, Jean

    2017-10-23

    Although rainfall and temperature are classically associated with malaria, little is known about other meteorological factors, their variability, and the combinations related to malaria, in association with river height variations. Furthermore, in a suburban area, urbanization and growing population density should be assessed in relation to these environmental factors. The aim of this study was to assess the impact of combined environmental, meteorological and hydrological factors on malaria incidence through time in the context of urbanization. Population observational data were prospectively collected. Clinical malaria was defined as the presence of parasites in addition to clinical symptoms. Meteorological and hydrological factors were measured daily. For each factor, variation indices were estimated. Urbanization was estimated yearly by assessing satellite imagery and field investigations. Principal component analysis was used for dimension reduction and factor combination. Lags between malaria incidences and the main components were assessed by cross-correlation functions. A generalized additive model was used to assess the relative impact of the different environmental components, taking lags into account and modelling non-linear relationships. Change-point analysis was used to determine transmission periods within years. Malaria incidences were dominated by annual periodicity and varied through time without modification of the dynamic, with no impact of urbanization. The main meteorological factor associated with malaria was a combination of evaporation, humidity and rainfall, with a lag of 3 months. The combined temperature factors had a linear impact until reaching high temperatures that limited malaria incidence, with a lag of 3.25 months. Height and variation of the river were related to malaria incidence (with a 6-week lag and no lag, respectively). 
    The study found no decreasing trend in malaria incidence despite adequate access to care and control strategies in accordance with international recommendations. Furthermore, no decreasing trend was observed despite the urbanization of the area. Malaria transmission remained elevated until 3 months after the beginning of the dry season. In addition to evaporation versus humidity/rainfall, the non-linear relationships for temperature and for river height and variation have to be taken into account when implementing malaria control programmes.

  16. Archetypes for Organisational Safety

    NASA Technical Reports Server (NTRS)

    Marais, Karen; Leveson, Nancy G.

    2003-01-01

    We propose a framework using system dynamics to model the dynamic behavior of organizations in accident analysis. Most current accident analysis techniques are event-based and do not adequately capture the dynamic complexity and non-linear interactions that characterize accidents in complex systems. In this paper we propose a set of system safety archetypes that model common safety culture flaws in organizations, i.e., the dynamic behavior of organizations that often leads to accidents. As accident analysis and investigation tools, the archetypes can be used to develop dynamic models that describe the systemic and organizational factors contributing to the accident. The archetypes help clarify why safety-related decisions do not always result in the desired behavior, and how independent decisions in different parts of the organization can combine to impact safety.

  17. Experimental study and analysis of lubricants dispersed with nano Cu and TiO2 in a four-stroke two wheeler

    NASA Astrophysics Data System (ADS)

    Sarma, Pullela K.; Srinivas, Vadapalli; Rao, Vedula Dharma; Kumar, Ayyagari Kiran

    2011-12-01

    The present investigation summarizes detailed experimental studies with standard lubricants of commercial quality known as Racer-4 of Hindustan Petroleum Corporation (India) dispersed with different mass concentrations of nanoparticles of Cu and TiO2. The test bench is fabricated with a four-stroke Hero-Honda motorbike hydraulically loaded at the rear wheel with proper instrumentation to record the fuel consumption, the load on the rear wheel, and the linear velocity. The whole range of data obtained on a stationary bike is subjected to regression analysis to arrive at various relationships between fuel consumption as a function of brake power, linear velocity, and percentage mass concentration of nanoparticles in the lubricant. The empirical relation correlates with the observed data with reasonable accuracy. Further, extension of the analysis by developing a mathematical model has revealed a definite improvement in brake thermal efficiency which ultimately affects the fuel economy by diminishing frictional power in the system with the introduction of nanoparticles into the lubricant. The performance of the engine seems to be better with nano Cu-Racer-4 combination than the one with nano TiO2.

  18. Experimental study and analysis of lubricants dispersed with nano Cu and TiO2 in a four-stroke two wheeler

    PubMed Central

    2011-01-01

    The present investigation summarizes detailed experimental studies with standard lubricants of commercial quality known as Racer-4 of Hindustan Petroleum Corporation (India) dispersed with different mass concentrations of nanoparticles of Cu and TiO2. The test bench is fabricated with a four-stroke Hero-Honda motorbike hydraulically loaded at the rear wheel with proper instrumentation to record the fuel consumption, the load on the rear wheel, and the linear velocity. The whole range of data obtained on a stationary bike is subjected to regression analysis to arrive at various relationships between fuel consumption as a function of brake power, linear velocity, and percentage mass concentration of nanoparticles in the lubricant. The empirical relation correlates with the observed data with reasonable accuracy. Further, extension of the analysis by developing a mathematical model has revealed a definite improvement in brake thermal efficiency which ultimately affects the fuel economy by diminishing frictional power in the system with the introduction of nanoparticles into the lubricant. The performance of the engine seems to be better with nano Cu-Racer-4 combination than the one with nano TiO2. PMID:21711765

  19. Finite Elements Analysis of a Composite Semi-Span Test Article With and Without Discrete Damage

    NASA Technical Reports Server (NTRS)

    Lovejoy, Andrew E.; Jegley, Dawn C. (Technical Monitor)

    2000-01-01

    AS&M Inc. performed finite element analysis, with and without discrete damage, of a composite semi-span test article that represents the Boeing 220-passenger transport aircraft composite semi-span test article. A NASTRAN bulk data file and drawings of the test mount fixtures and semi-span components were utilized to generate the baseline finite element model. In this model, the stringer blades are represented by shell elements, and the stringer flanges are combined with the skin. Numerous modeling modifications and discrete source damage scenarios were applied to the test article model throughout the course of the study. This report details the analysis method and results obtained from the composite semi-span study. Analyses were carried out for three load cases: Braked Roll, 1.0G Down-Bending and 2.5G Up-Bending. These analyses included linear and nonlinear static response, as well as linear and nonlinear buckling response. Results are presented in the form of stress and strain plots, factors of safety for failed elements, buckling loads and modes, deflection prediction tables and plots, and strain gage prediction tables and plots. The collected results are presented within this report for comparison to test results.

  20. Experimental study and analysis of lubricants dispersed with nano Cu and TiO2 in a four-stroke two wheeler.

    PubMed

    Sarma, Pullela K; Srinivas, Vadapalli; Rao, Vedula Dharma; Kumar, Ayyagari Kiran

    2011-03-17

    The present investigation summarizes detailed experimental studies with standard lubricants of commercial quality known as Racer-4 of Hindustan Petroleum Corporation (India) dispersed with different mass concentrations of nanoparticles of Cu and TiO2. The test bench is fabricated with a four-stroke Hero-Honda motorbike hydraulically loaded at the rear wheel with proper instrumentation to record the fuel consumption, the load on the rear wheel, and the linear velocity. The whole range of data obtained on a stationary bike is subjected to regression analysis to arrive at various relationships between fuel consumption as a function of brake power, linear velocity, and percentage mass concentration of nanoparticles in the lubricant. The empirical relation correlates with the observed data with reasonable accuracy. Further, extension of the analysis by developing a mathematical model has revealed a definite improvement in brake thermal efficiency which ultimately affects the fuel economy by diminishing frictional power in the system with the introduction of nanoparticles into the lubricant. The performance of the engine seems to be better with nano Cu-Racer-4 combination than the one with nano TiO2.

  1. The general linear inverse problem - Implication of surface waves and free oscillations for earth structure.

    NASA Technical Reports Server (NTRS)

    Wiggins, R. A.

    1972-01-01

    The discrete general linear inverse problem reduces to a set of m equations in n unknowns. There is generally no unique solution, but we can find k linear combinations of parameters for which restraints are determined. The parameter combinations are given by the eigenvectors of the coefficient matrix. The number k is determined by the ratio of the standard deviations of the observations to the allowable standard deviations in the resulting solution. Various linear combinations of the eigenvectors can be used to determine parameter resolution and information distribution among the observations. Thus we can determine where information comes from among the observations and exactly how it constrains the set of possible models. The application of such analyses to surface-wave and free-oscillation observations indicates that (1) phase, group, and amplitude observations for any particular mode provide basically the same type of information about the model; (2) observations of overtones can enhance the resolution considerably; and (3) the degree of resolution has generally been overestimated for many model determinations made from surface waves.
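    The eigenvector analysis described in this abstract can be sketched numerically: the singular value decomposition of the coefficient matrix yields the parameter combinations the data constrain, and the number k follows from comparing singular values against the noise-to-tolerance ratio. The snippet below is a minimal illustration with a hypothetical function name and a synthetic, nearly rank-deficient matrix, not Wiggins' actual procedure.

    ```python
    import numpy as np

    def resolvable_combinations(G, data_std, model_std):
        """For a discrete linear inverse problem G m = d, return the k right
        singular vectors (parameter combinations) whose singular values exceed
        the noise threshold data_std / model_std, plus the resolution matrix."""
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        k = int(np.sum(s > data_std / model_std))
        Vk = Vt[:k]            # k constrained linear combinations of parameters
        R = Vk.T @ Vk          # model resolution matrix built from them
        return Vk, R

    # Two nearly redundant observations of three parameters: only one
    # combination of parameters is resolvable at this noise level.
    G = np.array([[1.0, 1.00, 0.0],
                  [1.0, 1.01, 0.0]])
    Vk, R = resolvable_combinations(G, data_std=0.1, model_std=1.0)
    print(Vk.shape[0])  # number of constrained parameter combinations
    ```

    The resolution matrix R shows how a recovered model smears the true parameters; off-diagonal structure in R is one way resolution is "generally overestimated" when ignored.
    
    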

  2. Warping of a computerized 3-D atlas to match brain image volumes for quantitative neuroanatomical and functional analysis

    NASA Astrophysics Data System (ADS)

    Evans, Alan C.; Dai, Weiqian; Collins, D. Louis; Neelin, Peter; Marrett, Sean

    1991-06-01

    We describe the implementation, experience and preliminary results obtained with a 3-D computerized brain atlas for topographical and functional analysis of brain sub-regions. A volume-of-interest (VOI) atlas was produced by manual contouring on 64 adjacent 2 mm-thick MRI slices to yield 60 brain structures in each hemisphere which could be adjusted, either by global affine transformation or by local interactive adjustments, to match individual MRI datasets. We have now added a non-linear deformation (warp) capability (Bookstein, 1989) into the procedure for fitting the atlas to the brain data. Specific target points are identified in both atlas and MRI spaces which define a continuous 3-D warp transformation that maps the atlas on to the individual brain image. The procedure was used to fit MRI brain image volumes from 16 young normal volunteers. Regional volume and positional variability were determined, the latter in such a way as to assess the extent to which previous linear models of brain anatomical variability fail to account for the true variation among normal individuals. Using a linear model for atlas deformation yielded 3-D fits of the MRI data which, when pooled across subjects and brain regions, left a residual mis-match of 6-7 mm as compared to the non-linear model. The results indicate that a substantial component of morphometric variability is not accounted for by linear scaling. This has profound implications for applications which employ stereotactic coordinate systems to map individual brains into a common reference frame: quantitative neuroradiology, stereotactic neurosurgery and cognitive mapping of normal brain function with PET. In the latter case, the use of a non-linear deformation algorithm would allow for accurate measurement of individual anatomic variations and the inclusion of such variations in inter-subject averaging methodologies used for cognitive mapping with PET.

  3. On a 3-D singularity element for computation of combined mode stress intensities

    NASA Technical Reports Server (NTRS)

    Atluri, S. N.; Kathiresan, K.

    1976-01-01

    A special three-dimensional singularity element is developed for the computation of combined modes 1, 2, and 3 stress intensity factors, which vary along an arbitrarily curved crack front in three-dimensional linear elastic fracture problems. The finite element method uses a displacement-hybrid finite element model, derived from a modified variational principle of potential energy, with arbitrary element interior displacements, interelement boundary displacements, and element boundary tractions as variables. The special crack-front element used in this analysis contains the square root singularity in strains and stresses; the stress-intensity factors K(1), K(2), and K(3) vary quadratically along the crack front and are solved directly along with the unknown nodal displacements.

  4. Terrorism as a process: a critical review of Moghaddam's "Staircase to Terrorism".

    PubMed

    Lygre, Ragnhild B; Eid, Jarle; Larsson, Gerry; Ranstorp, Magnus

    2011-12-01

    This study reviews empirical evidence for Moghaddam's model "Staircase to Terrorism," which portrays terrorism as a process of six consecutive steps culminating in terrorism. An extensive literature search, where 2,564 publications on terrorism were screened, resulted in 38 articles which were subject to further analysis. The results showed that while most of the theories and processes linked to Moghaddam's model are supported by empirical evidence, the proposed transitions between the different steps are not. These results may question the validity of a linear stepwise model and may suggest that a combination of mechanisms/factors could combine in different ways to produce terrorism. © 2011 The Authors. Scandinavian Journal of Psychology © 2011 The Scandinavian Psychological Associations.

  5. Virtual Assessment of Sex: Linear and Angular Traits of the Mandibular Ramus Using Three-Dimensional Computed Tomography.

    PubMed

    Inci, Ercan; Ekizoglu, Oguzhan; Turkay, Rustu; Aksoy, Sema; Can, Ismail Ozgur; Solmaz, Dilek; Sayin, Ibrahim

    2016-10-01

    Morphometric analysis of the mandibular ramus (MR) provides highly accurate data to discriminate sex. The objective of this study was to demonstrate the utility and accuracy of MR morphometric analysis for sex identification in a Turkish population. Four hundred fifteen Turkish patients (18-60 y; 201 male and 214 female) who had previously had multidetector computed tomography scans of the cranium were included in the study. Multidetector computed tomography images were obtained using three-dimensional reconstructions and a volume-rendering technique, and 8 linear and 3 angular values were measured. Univariate, bivariate, and multivariate discriminant analyses were performed, and the accuracy rates for determining sex were calculated. Mandibular ramus values produced high accuracy rates of 51% to 95.6%. Upper ramus vertical height had the highest rate at 95.6%, and bivariate analysis showed 89.7% to 98.6% accuracy rates with the highest ratios of mandibular flexure upper border and maximum ramus breadth. Stepwise discrimination analysis gave a 99% accuracy rate for all MR variables. Our study showed that the MR, in particular morphometric measures of the upper part of the ramus, can provide valuable data to determine sex in a Turkish population. The method combines both anthropological and radiologic studies.

  6. Electroosmosis of viscoelastic fluids over charge modulated surfaces in narrow confinements

    NASA Astrophysics Data System (ADS)

    Ghosh, Uddipta; Chakraborty, Suman

    2015-06-01

    In the present work, we attempt to analyze the electroosmotic flow of a viscoelastic fluid, following quasi-linear constitutive behavior, over charge modulated surfaces in narrow confinements. We obtain analytical solutions for the flow field for thin electrical double layer (EDL) limit through asymptotic analysis for small Deborah numbers. We show that a combination of matched and regular asymptotic expansion is needed for the thin EDL limit. We subsequently determine the modified Smoluchowski slip velocity for viscoelastic fluids and show that the quasi-linear nature of the constitutive behavior adds to the periodicity of the flow. We also obtain the net throughput in the channel and demonstrate its relative decrement as compared to that of a Newtonian fluid. Our results may have potential implications towards augmenting microfluidic mixing by exploiting electrokinetic transport of viscoelastic fluids over charge modulated surfaces.

  7. Fundamental Analysis of the Linear Multiple Regression Technique for Quantification of Water Quality Parameters from Remote Sensing Data. Ph.D. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Whitlock, C. H., III

    1977-01-01

    Constituents with linear radiance gradients with concentration may be quantified from signals which contain nonlinear atmospheric and surface reflection effects, for both homogeneous and non-homogeneous water bodies, provided accurate data can be obtained and the nonlinearities are constant with wavelength. Statistical parameters must be used which give an indication of bias as well as total squared error to ensure that an equation with an optimum combination of bands is selected. It is concluded that the effect of error in upwelled radiance measurements is to reduce the accuracy of the least square fitting process and to increase the number of points required to obtain a satisfactory fit. The problem of obtaining a multiple regression equation that is extremely sensitive to error is discussed.

  8. Cross Time-Frequency Analysis for Combining Information of Several Sources: Application to Estimation of Spontaneous Respiratory Rate from Photoplethysmography

    PubMed Central

    Peláez-Coca, M. D.; Orini, M.; Lázaro, J.; Bailón, R.; Gil, E.

    2013-01-01

    A methodology that combines information from several nonstationary biological signals is presented. This methodology is based on time-frequency coherence, that quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on quadratic time-frequency distribution, has been used for combining information of several nonstationary biomedical signals. In order to evaluate this methodology, the respiratory rate from the photoplethysmographic (PPG) signal is estimated. The respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal. This suggests that the combination of information from these sources will improve the accuracy of the estimation of the respiratory rate. Another target of this paper is to implement an algorithm which provides a robust estimation. Therefore, respiratory rate was estimated only in those intervals where the features extracted from the PPG signals are linearly coupled. In 38 spontaneous breathing subjects, among which 7 were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with the median error {0.00; 0.98} mHz ({0.00; 0.31}%) and the interquartile range error {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was largely lower than the estimation error obtained without combining different PPG features related to respiration. PMID:24363777
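    The idea of keeping only those frequencies where two derived signals are linearly coupled can be illustrated with ordinary magnitude-squared coherence, a stationary simplification of the time-frequency coherence the paper actually uses. The "amplitude" and "width" series below are synthetic stand-ins sharing a common respiratory drive, not real PPG features.

    ```python
    import numpy as np
    from scipy.signal import coherence

    fs = 10.0                                    # sampling rate, Hz
    t = np.arange(0, 60, 1 / fs)
    resp = np.sin(2 * np.pi * 0.3 * t)           # shared 0.3 Hz "respiratory" drive
    amp = resp + 0.1 * np.random.default_rng(0).standard_normal(t.size)
    width = resp + 0.1 * np.random.default_rng(1).standard_normal(t.size)

    # Magnitude-squared coherence: close to 1 at frequencies where the two
    # features are linearly coupled, low where only independent noise remains.
    f, Cxy = coherence(amp, width, fs=fs, nperseg=128)
    print(f[np.argmax(Cxy)])
    ```

    Restricting the respiratory-rate estimate to intervals (and bands) of high coherence is what makes the combined estimate robust when one PPG feature is corrupted.
    
    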

  9. Optimal monochromatic color combinations for fusion imaging of FDG-PET and diffusion-weighted MR images.

    PubMed

    Kamei, Ryotaro; Watanabe, Yuji; Sagiyama, Koji; Isoda, Takuro; Togao, Osamu; Honda, Hiroshi

    2018-05-23

    To investigate the optimal monochromatic color combination for fusion imaging of FDG-PET and diffusion-weighted MR images (DW) regarding lesion conspicuity of each image. Six linear monochromatic color-maps of red, blue, green, cyan, magenta, and yellow were assigned to each of the FDG-PET and DW images. Total perceptual color differences of the lesions were calculated based on the lightness and chromaticity measured with the photometer. Visual lesion conspicuity was also compared among the PET-only, DW-only and PET-DW-double positive portions with mean conspicuity scores. Statistical analysis was performed with a one-way analysis of variance and Spearman's rank correlation coefficient. Among all the 12 possible monochromatic color-map combinations, the 3 combinations of red/cyan, magenta/green, and red/green produced the highest conspicuity scores. Total color differences between PET-positive and double-positive portions correlated with conspicuity scores (ρ = 0.2933, p < 0.005). Lightness differences showed a significant negative correlation with conspicuity scores between the PET-only and DWI-only positive portions. Chromaticity differences showed a marginally significant correlation with conspicuity scores between DWI-positive and double-positive portions. Monochromatic color combinations can facilitate the visual evaluation of FDG-uptake and diffusivity as well as registration accuracy on the FDG-PET/DW fusion images, when red- and green-colored elements are assigned to FDG-PET and DW images, respectively.

  10. Siting MSW landfill using weighted linear combination and analytical hierarchy process (AHP) methodology in GIS environment (case study: Karaj).

    PubMed

    Moeinaddini, Mazaher; Khorasani, Nematollah; Danehkar, Afshin; Darvishsefat, Ali Asghar; Zienalyan, Mehdi

    2010-05-01

    Selection of a landfill site is a complex process involving many diverse criteria. The purpose of this paper is to evaluate the suitability of the studied site as a landfill for MSW in Karaj. Using the weighted linear combination (WLC) method and spatial cluster analysis (SCA), suitable sites for allocation of a landfill for a 20-year period were identified. For analyzing spatial auto-correlation of the land suitability map layer (LSML), Moran's I was used. Finally, using the analytical hierarchy process (AHP), the most preferred alternative for landfill siting was identified. Main advantages of AHP are the relative ease of handling multiple criteria, ease of understanding, and effective handling of both qualitative and quantitative data. As a result, 6% of the study area is suitable for landfill siting, and the third alternative was identified by AHP as the most preferred for siting the MSW landfill. The ranking of alternatives obtained by applying the WLC approach alone differed from the AHP results. The WLC should be used only for the identification of alternatives, and the AHP for prioritization. We suggest the employed procedure for other similar regions. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
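    The WLC step itself is simply a weighted sum of standardized criterion layers, with weights typically derived from AHP pairwise comparisons. A minimal sketch follows, with illustrative criterion rasters and weights rather than the paper's actual data.

    ```python
    import numpy as np

    def wlc_suitability(layers, weights):
        """Weighted linear combination: layers is (n_criteria, rows, cols) of
        standardized 0-1 scores; weights sum to 1. Returns a suitability map."""
        w = np.asarray(weights).reshape(-1, 1, 1)
        return (np.asarray(layers) * w).sum(axis=0)

    # Tiny 2x2 "rasters" of standardized criterion scores (illustrative only).
    slope   = np.array([[0.9, 0.2], [0.6, 0.8]])
    dist    = np.array([[0.5, 0.9], [0.4, 0.7]])   # distance to settlements
    geology = np.array([[0.8, 0.1], [0.9, 0.6]])

    S = wlc_suitability([slope, dist, geology], [0.5, 0.3, 0.2])
    print(S)
    ```

    Cells of S near 1 are candidate landfill locations; in the paper's workflow these candidates are then clustered (SCA) and the resulting alternatives ranked with AHP.
    
    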

  11. Is the linear modeling technique good enough for optimal form design? A comparison of quantitative analysis models.

    PubMed

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I, grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process.

  12. Is the Linear Modeling Technique Good Enough for Optimal Form Design? A Comparison of Quantitative Analysis Models

    PubMed Central

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I, grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process. PMID:23258961

  13. A non-linear data mining parameter selection algorithm for continuous variables

    PubMed Central

    Razavi, Marianne; Brady, Sean

    2017-01-01

    In this article, we propose a new data mining algorithm, by which one can both capture the non-linearity in data and also find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that would capture complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method that we present here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. This algorithm introduces interpretable parameters by transforming the original inputs, while providing a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least square regression framework. This new automatic variable transformation and model selection method could offer an optimal and stable model that minimizes the mean square error and variability, while combining all possible subset selection methodology with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829

  14. The brain adjusts grip forces differently according to gravity and inertia: a parabolic flight experiment

    PubMed Central

    White, Olivier

    2015-01-01

    In everyday life, one of the most frequent activities involves accelerating and decelerating an object held in precision grip. In many contexts, humans scale and synchronize their grip force (GF), normal to the finger/object contact, in anticipation of the expected tangential load force (LF), resulting from the combination of the gravitational and the inertial forces, and GF and LF are linearly coupled. A few studies have examined how we adjust the parameters (gain and offset) of this linear relationship. However, the question remains open as to how the brain adjusts GF when LF is generated by different combinations of weight and inertia. Here, we designed conditions to generate equivalent magnitudes of LF by independently varying mass and movement frequency. In a control experiment, we directly manipulated gravity in parabolic flights, while other factors remained constant. We show with a simple computational approach that, to adjust GF, the brain is sensitive to how LFs are produced at the fingertips. This provides clear evidence that the analysis of the origin of LF is performed centrally, and not only at the periphery. PMID:25717293
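    The linear GF-LF coupling described above can be written as GF = gain * LF + offset, with the two parameters recoverable by least squares from paired force samples. The sketch below uses synthetic values, not the experiment's data.

    ```python
    import numpy as np

    # Synthetic grip/load force pairs following GF = 0.8 * LF + 1.5 plus noise.
    rng = np.random.default_rng(0)
    lf = rng.uniform(1.0, 6.0, size=50)               # load force, N
    gf = 0.8 * lf + 1.5 + 0.05 * rng.standard_normal(50)

    # Least-squares recovery of the gain and offset of the linear coupling.
    A = np.column_stack([lf, np.ones_like(lf)])
    (gain, offset), *_ = np.linalg.lstsq(A, gf, rcond=None)
    print(round(gain, 2), round(offset, 2))
    ```

    In the study's framing, the interesting question is not this fit itself but whether gain and offset change when the same LF magnitudes arise from different mixes of weight and inertia.
    
    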

  15. Geometric mean for subspace selection.

    PubMed

    Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J

    2009-02-01

    Subspace selection approaches are powerful tools in pattern classification and data visualization. One of the most important subspace approaches is the linear dimensionality reduction step in the Fisher's linear discriminant analysis (FLDA), which has been successfully employed in many fields such as biometrics, bioinformatics, and multimedia information management. However, the linear dimensionality reduction step in FLDA has a critical drawback: for a classification task with c classes, if the dimension of the projected subspace is strictly lower than c - 1, the projection to a subspace tends to merge those classes, which are close together in the original feature space. If separate classes are sampled from Gaussian distributions, all with identical covariance matrices, then the linear dimensionality reduction step in FLDA maximizes the mean value of the Kullback-Leibler (KL) divergences between different classes. Based on this viewpoint, the geometric mean for subspace selection is studied in this paper. Three criteria are analyzed: 1) maximization of the geometric mean of the KL divergences, 2) maximization of the geometric mean of the normalized KL divergences, and 3) the combination of 1 and 2. Preliminary experimental results based on synthetic data, UCI Machine Learning Repository, and handwriting digits show that the third criterion is a potential discriminative subspace selection method, which significantly reduces the class separation problem compared with the linear dimensionality reduction step in FLDA and several of its representative extensions.
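    Criterion 1 above can be sketched for the stated Gaussian setting: with a shared covariance, the KL divergence between two classes reduces to half the Mahalanobis distance between their means, and the criterion is the geometric mean of these pairwise divergences in the projected subspace. The means, covariance, and projection below are illustrative; this evaluates the criterion for a fixed candidate W rather than implementing the paper's optimization.

    ```python
    import numpy as np

    def kl_gauss_shared_cov(mu_i, mu_j, cov):
        """KL divergence between Gaussians with identical covariance:
        0.5 * (mu_i - mu_j)^T cov^{-1} (mu_i - mu_j)."""
        d = mu_i - mu_j
        return 0.5 * d @ np.linalg.solve(cov, d)

    def geometric_mean_kl(means, cov, W):
        """Geometric mean of pairwise KL divergences in the subspace W."""
        pm = [W.T @ m for m in means]
        pc = W.T @ cov @ W
        kls = [kl_gauss_shared_cov(pm[i], pm[j], pc)
               for i in range(len(pm)) for j in range(len(pm)) if i != j]
        return np.exp(np.mean(np.log(kls)))

    means = [np.array([0., 0.]), np.array([2., 0.]), np.array([4., 1.])]
    cov = np.eye(2)
    W = np.array([[1.], [0.]])        # candidate projection onto the first axis
    print(geometric_mean_kl(means, cov, W))
    ```

    Unlike the arithmetic mean that FLDA implicitly maximizes, the geometric mean collapses to zero if any single class pair merges in the subspace, which is exactly the class separation problem the paper targets.
    
    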

  16. [Study on the early detection of Sclerotinia of Brassica napus based on combinational-stimulated bands].

    PubMed

    Liu, Fei; Feng, Lei; Lou, Bing-gan; Sun, Guang-ming; Wang, Lian-ping; He, Yong

    2010-07-01

    The combinational-stimulated bands were used to develop linear and nonlinear calibrations for the early detection of sclerotinia of oilseed rape (Brassica napus L.). Eighty healthy and 100 Sclerotinia leaf samples were scanned, and different preprocessing methods combined with the successive projections algorithm (SPA) were applied to develop partial least squares (PLS) discriminant models, multiple linear regression (MLR) models and least squares-support vector machine (LS-SVM) models. The results indicated that the optimal full-spectrum PLS models were achieved by direct orthogonal signal correction (DOSC), De-trending and Raw spectra, with correct recognition ratios of 100%, 95.7% and 95.7%, respectively. When using combinational-stimulated bands, the optimal linear models were SPA-MLR (DOSC) and SPA-PLS (DOSC) with correct recognition ratios of 100%. All SPA-LSSVM models using DOSC, De-trending and Raw spectra achieved perfect results with recognition of 100%. The overall results demonstrated that it is feasible to use combinational-stimulated bands for the early detection of sclerotinia of oilseed rape, and that DOSC-SPA is a powerful approach for informative wavelength selection. This method supplies a new approach to early detection of sclerotinia and to the development of portable monitoring instruments.

  17. Variations of archived static-weight data and WIM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, C.J.; Gillmann, R.; Kent, P.M.

    1998-12-01

    Using seven-card archived static-weight and weigh-in-motion (WIM) truck data received by FHWA for 1966--1992, the authors examine the fluctuations of four fiducial weight measures reported at weight sites in the 50 states. The reduced 172 MB Class 9 (332000) database was prepared and ordered from 2 CD-ROMs with duplicate records removed. Front-axle weight and gross-vehicle weight (GVW) are combined conceptually by determining the front-axle weight in four quartile GVW categories. The four categories of front-axle weight from the four GVW categories are combined in four ways. Three linear combinations use fixed-coefficient fiducials, and one is the optimal linear combination producing the smallest standard-deviation-to-mean-value ratio. The best combination gives coefficients of variation of 2--3% for samples of 100 trucks, below the expected accuracy of single-event WIM measurements. Time tracking of data shows some high-variation sites have seasonal variations, or linear variations over the time-ordered samples. Modeling of these effects is very site specific but provides a way to reduce high variations. Some automatic calibration schemes would erroneously remove such seasonal or linear variations were they static effects.
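    The "optimal linear combination" idea above has a closed form: minimizing the standard-deviation-to-mean ratio of a weighted sum of category weights gives coefficients proportional to inv(Cov) @ mean of the categories. The sketch below uses synthetic front-axle weights, not the FHWA truck records.

    ```python
    import numpy as np

    # Synthetic front-axle weights (kips) in four GVW quartile categories,
    # for a sample of 100 trucks (illustrative means and spreads).
    rng = np.random.default_rng(0)
    X = rng.normal(loc=[9.0, 9.5, 10.0, 10.5],
                   scale=[0.4, 0.5, 0.6, 0.7], size=(100, 4))

    # Minimizing Var(w^T x) / (w^T mean)^2 yields w proportional to
    # inv(Cov) @ mean; the overall scale of w is arbitrary.
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    w = np.linalg.solve(cov, mu)
    w /= w.sum()                       # fix the scale so weights sum to 1

    y = X @ w
    print(y.std() / y.mean())          # coefficient of variation of the combination
    ```

    By construction this coefficient of variation is no larger than that of any fixed-coefficient fiducial built from the same four categories, which is the comparison the abstract reports.
    
    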

  18. Adaptive rival penalized competitive learning and combined linear predictor model for financial forecast and investment.

    PubMed

    Cheung, Y M; Leung, W M; Xu, L

    1997-01-01

    We propose a prediction model, the Rival Penalized Competitive Learning (RPCL) and Combined Linear Predictor (CLP) method, which involves a set of local linear predictors such that a prediction is made by combining some activated predictors through a gating network (Xu et al., 1994). Furthermore, we present an improved variant, Adaptive RPCL-CLP, that includes an adaptive learning mechanism as well as a data pre- and post-processing scheme. We compare them with some existing models by demonstrating their performance on two real-world financial time series: a China stock price and an exchange-rate series of US Dollar (USD) versus Deutschmark (DEM). Experiments have shown that Adaptive RPCL-CLP not only outperforms the other approaches with the smallest prediction error and training costs, but also brings in considerably high profits in the trading simulation of the foreign exchange market.

  19. Analysis of a Hybrid Wing Body Center Section Test Article

    NASA Technical Reports Server (NTRS)

    Wu, Hsi-Yung T.; Shaw, Peter; Przekop, Adam

    2013-01-01

    The hybrid wing body center section test article is an all-composite structure made of crown, floor, keel, bulkhead, and rib panels utilizing the Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) design concept. The primary goal of this test article is to prove that PRSEUS components are capable of carrying combined loads that are representative of a hybrid wing body pressure cabin design regime. This paper summarizes the analytical approach, analysis results, and failure predictions of the test article. A global finite element model of composite panels, metallic fittings, mechanical fasteners, and the Combined Loads Test System (COLTS) test fixture was used to conduct linear structural strength and stability analyses to validate the specimen under the most critical combination of bending and pressure loading conditions found in the hybrid wing body pressure cabin. Local detail analyses were also performed at locations with high stress concentrations, at Tee-cap noodle interfaces with surrounding laminates, and at fastener locations with high bearing/bypass loads. Failure predictions for different composite and metallic failure modes were made, and nonlinear analyses were also performed to study the structural response of the test article under combined bending and pressure loading. This large-scale specimen test will be conducted at the COLTS facility at the NASA Langley Research Center.

  20. Vienna Contribution to ITRF2014

    NASA Astrophysics Data System (ADS)

    Böhm, Sigrid; Krásná, Hana; Bachmann, Sabine

    2016-12-01

    The next realization of the International Terrestrial Reference System, the ITRF2014, was released at the beginning of 2016. The VLBI input to ITRF2014 was provided by the International VLBI Service for Geodesy and Astrometry (IVS) and consists of a combination of all Analysis Center contributions. One of these single solutions was contributed by the Vienna Special Analysis Center of the Department of Geodesy and Geoinformation at TU Wien. In this paper we describe the characteristics of the Vienna contribution (calculated using the Vienna VLBI Software VieVS) to ITRF2014 and VTRF2014, respectively. We give a documentation of the included sessions and stations as well as some statistical information which shows the performance of the Vienna contribution compared to the other contributions in the IVS combination. In addition, a single TRF solution, VieTRF2014a, which is based on the Vienna input to ITRF2014, is presented and compared to previous TRF solutions. By and large the Vienna contribution does not exhibit any outstanding features when compared to the other submissions, except for the Earth rotation component dUT1, which shows large residuals with respect to the combined solution. The reason for this discrepancy is probably the different parameterization of EOP in VieVS as piecewise linear offsets, necessitating a transformation prior to the combination.
