Sample records for linear combination technique

  1. Ranking Forestry Investments With Parametric Linear Programming

    Treesearch

    Paul A. Murphy

    1976-01-01

    Parametric linear programming is introduced as a technique for ranking forestry investments under multiple constraints; it combines the advantages of simple ranking and linear programming as capital budgeting tools.

  2. A Technique of Treating Negative Weights in WENO Schemes

    NASA Technical Reports Server (NTRS)

    Shi, Jing; Hu, Changqing; Shu, Chi-Wang

    2000-01-01

    High order accurate weighted essentially non-oscillatory (WENO) schemes have recently been developed for finite difference and finite volume methods on both structured and unstructured meshes. A key idea in WENO schemes is a linear combination of lower order fluxes or reconstructions to obtain a high order approximation. The combination coefficients, also called linear weights, are determined by the local geometry of the mesh and the order of accuracy, and may become negative. WENO procedures cannot be applied directly to obtain a stable scheme if negative linear weights are present. Previous strategies for handling this difficulty either regroup the stencils or reduce the order of accuracy to get rid of the negative linear weights. In this paper we present a simple and effective technique for handling negative linear weights without the need to get rid of them.
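
    The splitting step at the heart of this technique admits a compact sketch. The Python sketch below writes each (possibly negative) linear weight as the difference of two positive, normalized groups that can be WENO-weighted separately; the split formula and all weight and stencil values are assumptions of the sketch, not taken from the paper:

      import numpy as np

      def split_weights(gamma):
          """Split linear weights into two positive, normalized groups,
          using the split gamma+ = (gamma + 3|gamma|)/2 commonly reported
          for this class of methods (an assumption of this sketch)."""
          gp = 0.5 * (gamma + 3.0 * np.abs(gamma))   # positive part
          gm = gp - gamma                            # non-negative by construction
          sp, sm = gp.sum(), gm.sum()
          return sp, gp / sp, sm, gm / sm

      gamma = np.array([0.7, -0.2, 0.5])             # hypothetical linear weights
      sp, wp, sm, wm = split_weights(gamma)
      q = np.array([1.3, 0.9, 1.1])                  # candidate stencil values
      # Recombining the two positive groups reproduces the original combination:
      assert np.isclose(sp * (wp @ q) - sm * (wm @ q), gamma @ q)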

  3. Evaluation of aircraft microwave data for locating zones for well stimulation and enhanced gas recovery. [Arkansas Arkoma Basin]

    NASA Technical Reports Server (NTRS)

    Macdonald, H.; Waite, W.; Elachi, C.; Babcock, R.; Konig, R.; Gattis, J.; Borengasser, M.; Tolman, D.

    1980-01-01

    Imaging radar was evaluated as an adjunct to conventional petroleum exploration techniques, especially linear mapping. Linear features were mapped from several remote sensor data sources including stereo photography, enhanced LANDSAT imagery, SLAR radar imagery, enhanced SAR radar imagery, and SAR radar/LANDSAT combinations. Linear feature maps were compared with surface joint data, subsurface and geophysical data, and gas production in the Arkansas part of the Arkoma basin. The best LANDSAT enhanced product for linear detection was found to be a winter scene, band 7, uniform distribution stretch. Of the individual SAR data products, the VH (cross polarized) SAR radar mosaic provides for detection of most linears; however, none of the SAR enhancements is significantly better than the others. Radar/LANDSAT merges may provide better linear detection than a single sensor mapping mode, but because of operator variability, the results are inconclusive. Radar/LANDSAT combinations appear promising as an optimum linear mapping technique, if the advantages and disadvantages of each remote sensor are considered.

  4. LFSPMC: Linear feature selection program using the probability of misclassification

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
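
    As a hedged illustration of the idea (not the LFSPMC program itself), the sketch below searches for the single linear combination that minimizes the one-dimensional probability of misclassification for two classes with assumed means, covariances, and priors; all class parameters are invented:

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      # Two hypothetical classes with known parameters and priors.
      mu = [np.array([0.0, 0.0]), np.array([1.5, 1.0])]
      cov = [np.eye(2), np.array([[2.0, 0.3], [0.3, 1.0]])]
      prior = [0.5, 0.5]

      def misclassification(b):
          """1-D probability of misclassification after projecting onto b."""
          b = b / np.linalg.norm(b)
          y = np.linspace(-10.0, 10.0, 4001)
          dens = [p * norm.pdf(y, m @ b, np.sqrt(b @ C @ b))
                  for p, m, C in zip(prior, mu, cov)]
          # P(correct) integrates the pointwise-largest weighted density.
          p_correct = np.max(dens, axis=0).sum() * (y[1] - y[0])
          return 1.0 - p_correct

      res = minimize(misclassification, x0=np.array([1.0, 0.0]),
                     method="Nelder-Mead")
      print("best linear combination:", res.x / np.linalg.norm(res.x))
      print("estimated misclassification probability:", res.fun)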

  5. A comparison of linear and nonlinear statistical techniques in performance attribution.

    PubMed

    Chan, N H; Genovese, C R

    2001-01-01

    Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks using factors derived from some commonly used cross-sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on the standard linear multifactor model and three nonlinear techniques (model selection, additive models, and neural networks) are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.

  6. Improving biomedical information retrieval by linear combinations of different query expansion techniques.

    PubMed

    Abdulla, Ahmed AbdoAziz Ahmed; Lin, Hongfei; Xu, Bo; Banbhrani, Santosh Kumar

    2016-07-25

    Biomedical literature retrieval is becoming increasingly complex, and there is a fundamental need for advanced information retrieval systems. Information Retrieval (IR) programs scour unstructured materials such as text documents in large reserves of data that are usually stored on computers. IR is related to the representation, storage, and organization of information items, as well as to access. One of the main problems in IR is to determine which documents are relevant to the user's needs and which are not. Under the current regime, users cannot construct queries precisely enough to retrieve particular pieces of data from large reserves of data, and basic information retrieval systems produce low-quality search results. In this paper we present a new technique to refine information retrieval searches so that they better represent the user's information need: several query expansion techniques are applied and linearly combined, two expansion results at a time. Query expansions expand the search query, for example, by finding synonyms and reweighting original terms. They provide significantly more focused, particularized search results than do basic search queries. The retrieval performance is measured by some variants of MAP (Mean Average Precision); according to our experimental results, the combination of the best query expansion results enhances the retrieved documents and outperforms our baseline by 21.06 %, and even outperforms a previous study by 7.12 %. We propose several query expansion techniques and their linear combinations to make user queries more cognizable to search engines and to produce higher-quality search results.
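
    The pairwise linear combination the authors describe reduces to a few lines. The sketch below is a minimal illustration, not the paper's system; the document scores and the mixing weight lambda are hypothetical:

      def combine(scores_a, scores_b, lam=0.6):
          """Linearly combine two expansion results: lam*a + (1-lam)*b."""
          docs = set(scores_a) | set(scores_b)
          return {d: lam * scores_a.get(d, 0.0) +
                     (1.0 - lam) * scores_b.get(d, 0.0)
                  for d in docs}

      synonym_scores = {"doc1": 0.8, "doc2": 0.3}   # one expansion technique
      reweight_scores = {"doc1": 0.5, "doc3": 0.7}  # another expansion technique
      ranked = sorted(combine(synonym_scores, reweight_scores).items(),
                      key=lambda kv: kv[1], reverse=True)
      print(ranked)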

  7. Galerkin finite difference Laplacian operators on isolated unstructured triangular meshes by linear combinations

    NASA Technical Reports Server (NTRS)

    Baumeister, Kenneth J.

    1990-01-01

    The Galerkin weighted residual technique using linear triangular weight functions is employed to develop finite difference formulae in Cartesian coordinates for the Laplacian operator on isolated unstructured triangular grids. The weighted residual coefficients associated with the weak formulation of the Laplacian operator along with linear combinations of the residual equations are used to develop the algorithm. The algorithm was tested for a wide variety of unstructured meshes and found to give satisfactory results.

  8. A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield

    NASA Astrophysics Data System (ADS)

    Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan

    2018-04-01

    In this paper, we propose a hybrid model which is a combination of the multiple linear regression model and the fuzzy c-means method. This research involves the relationship between 20 variates of the topsoil that are analyzed prior to planting of paddy at standard fertilizer rates. Data used were from the multi-location trials for rice carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using the multiple linear regression model and a combination of the multiple linear regression model and the fuzzy c-means method. Analysis of normality and multicollinearity indicates that the data are normally distributed without multicollinearity among the independent variables. Fuzzy c-means analysis clusters the paddy yield into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the multiple linear regression model, with a lower mean square error.
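
    A hedged sketch of such a hybrid follows: a minimal fuzzy c-means loop clusters the observations into two clusters, and a separate linear regression is then fitted in each. The synthetic data stand in for the soil variates; c = 2 clusters and fuzzifier m = 2 mirror the abstract:

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(120, 3))                  # placeholder soil variates
      y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.2, size=120)

      def fuzzy_cmeans(Z, c=2, m=2.0, iters=100):
          U = rng.dirichlet(np.ones(c), size=len(Z))    # fuzzy memberships
          for _ in range(iters):
              W = U ** m
              centers = (W.T @ Z) / W.sum(axis=0)[:, None]
              d = np.linalg.norm(Z[:, None, :] - centers[None], axis=2) + 1e-12
              inv = d ** (-2.0 / (m - 1.0))
              U = inv / inv.sum(axis=1, keepdims=True)  # standard FCM update
          return U

      U = fuzzy_cmeans(np.column_stack([X, y]))
      labels = U.argmax(axis=1)                         # harden memberships
      A = np.column_stack([X, np.ones(len(X))])         # design matrix + intercept
      for k in range(2):
          coef, *_ = np.linalg.lstsq(A[labels == k], y[labels == k], rcond=None)
          print(f"cluster {k}: regression coefficients {coef.round(2)}")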

  9. Principal components colour display of ERTS imagery

    NASA Technical Reports Server (NTRS)

    Taylor, M. M.

    1974-01-01

    In the technique presented, colours are not derived from single bands, but rather from independent linear combinations of the bands. Using a simple model of the processing done by the visual system, three informationally independent linear combinations of the four ERTS bands are mapped onto the three visual colour dimensions of brightness, redness-greenness and blueness-yellowness. The technique permits user-specific transformations which enhance particular features, but this is not usually needed, since a single transformation provides a picture which conveys much of the information implicit in the ERTS data. Examples of experimental vector images with matched individual band images are shown.

  10. Analytical methods in multivariate highway safety exposure data estimation

    DOT National Transportation Integrated Search

    1984-01-01

    Three general analytical techniques which may be of use in extending, enhancing, and combining highway accident exposure data are discussed. The techniques are log-linear modelling, iterative proportional fitting and the expectation maximizati...

  11. Object matching using a locally affine invariant and linear programming techniques.

    PubMed

    Li, Hongsheng; Huang, Xiaolei; He, Lei

    2013-02-01

    In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires far fewer auxiliary variables than other linear programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. Errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
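
    The affine-combination step is easy to make concrete. The sketch below solves for neighbor weights that sum to one by least squares, via the KKT system of the equality-constrained problem; the point and its neighbors are hypothetical, and affinely independent neighbors are assumed:

      import numpy as np

      def affine_weights(x, N):
          """Weights w minimizing ||x - N.T @ w|| subject to sum(w) == 1.
          N holds one neighbor per row; assumes the KKT system is
          nonsingular (add a small ridge otherwise)."""
          k = len(N)
          KKT = np.block([[2.0 * (N @ N.T), np.ones((k, 1))],
                          [np.ones((1, k)), np.zeros((1, 1))]])
          rhs = np.concatenate([2.0 * N @ x, [1.0]])
          return np.linalg.solve(KKT, rhs)[:k]

      x = np.array([1.0, 2.0])                            # template point
      N = np.array([[0.0, 0.0], [2.0, 1.0], [1.0, 4.0]])  # its neighbors
      w = affine_weights(x, N)
      print(w, "reconstruction error:", np.linalg.norm(N.T @ w - x))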

  12. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    PubMed Central

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263
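
    As a small, hedged illustration of one of the sets above (the pure ellipsoidal set), the sketch below writes the robust counterpart of a single LP constraint as a second-order cone constraint. The cvxpy library is assumed, and all data are invented:

      import cvxpy as cp
      import numpy as np

      n = 4
      a_bar = np.array([1.0, 2.0, 0.5, 1.5])   # nominal constraint coefficients
      delta = 0.1 * np.abs(a_bar)              # perturbation scales
      Omega = 1.0                              # ellipsoid size parameter

      x = cp.Variable(n, nonneg=True)
      # Worst case of (a_bar + diag(delta) @ xi) @ x over ||xi||_2 <= Omega:
      robust_lhs = a_bar @ x + Omega * cp.norm(cp.multiply(delta, x), 2)
      prob = cp.Problem(cp.Maximize(cp.sum(x)),
                        [robust_lhs <= 10.0, x <= 5.0])
      prob.solve()
      print("robust solution:", x.value)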

  13. Linearized image reconstruction method for ultrasound modulated electrical impedance tomography based on power density distribution

    NASA Astrophysics Data System (ADS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2017-04-01

    Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been an increasing research interest in hybrid imaging techniques, utilizing couplings of physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, which combines electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, the power density distribution, is adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, by relating the power density change to the change in conductivity, the Jacobian matrix is employed to turn the nonlinear problem into a linear one. The analytic formulation of this Jacobian matrix is derived and its effectiveness is also verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Also, multiple power density distributions are combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method, UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results.
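
    Once such a Jacobian is in hand, the linearized update is a regularized linear solve. The generic sketch below uses a random placeholder Jacobian and data rather than a UMEIT forward model, and shows a Tikhonov solve next to the simpler linear back-projection (the normalization is one common choice, assumed here):

      import numpy as np

      rng = np.random.default_rng(1)
      J = rng.normal(size=(200, 400))      # Jacobian (measurements x pixels)
      dp = rng.normal(size=200)            # measured power-density change

      lam = 1e-2                           # Tikhonov regularization parameter
      dsigma = np.linalg.solve(J.T @ J + lam * np.eye(J.shape[1]), J.T @ dp)

      # Linear back-projection (LBP): transpose-based reconstruction with a
      # simple sensitivity normalization.
      dsigma_lbp = (J.T @ dp) / np.abs(J).sum(axis=0)
      print(dsigma.shape, dsigma_lbp.shape)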

  14. Profiling of barrier capacitance and spreading resistance using a transient linearly increasing voltage technique.

    PubMed

    Gaubas, E; Ceponis, T; Kusakovskij, J

    2011-08-01

    A technique for the combined measurement of barrier capacitance and spreading resistance profiles using a linearly increasing voltage pulse is presented. The technique is based on the measurement and analysis of current transients, due to the barrier and diffusion capacitance, and the spreading resistance, between a needle probe and sample. To control the impact of deep traps in the barrier capacitance, a steady state bias illumination with infrared light was employed. Measurements of the spreading resistance and barrier capacitance profiles using a stepwise positioned probe on cross sectioned silicon pin diodes and pnp structures are presented.

  15. Local numerical modelling of ultrasonic guided waves in linear and nonlinear media

    NASA Astrophysics Data System (ADS)

    Packo, Pawel; Radecki, Rafal; Kijanka, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz; Leamy, Michael J.

    2017-04-01

    Nonlinear ultrasonic techniques provide improved damage sensitivity compared to linear approaches. The combination of attractive properties of guided waves, such as Lamb waves, with unique features of higher harmonic generation provides great potential for characterization of incipient damage, particularly in plate-like structures. Nonlinear ultrasonic structural health monitoring techniques use interrogation signals at frequencies other than the excitation frequency to detect changes in structural integrity. Signal processing techniques used in non-destructive evaluation are frequently supported by modeling and numerical simulations in order to facilitate problem solution. This paper discusses known and newly-developed local computational strategies for simulating elastic waves, and attempts characterization of their numerical properties in the context of linear and nonlinear media. A hybrid numerical approach combining advantages of the Local Interaction Simulation Approach (LISA) and Cellular Automata for Elastodynamics (CAFE) is proposed for unique treatment of arbitrary strain-stress relations. The iteration equations of the method are derived directly from physical principles employing stress and displacement continuity, leading to an accurate description of the propagation in arbitrarily complex media. Numerical analysis of guided wave propagation, based on the newly developed hybrid approach, is presented and discussed in the paper for linear and nonlinear media. Comparisons to Finite Elements (FE) are also discussed.

  16. Novel hybrid linear stochastic with non-linear extreme learning machine methods for forecasting monthly rainfall in a tropical climate.

    PubMed

    Zeynoddin, Mohammad; Bonakdari, Hossein; Azari, Arash; Ebtehaj, Isa; Gharabaghi, Bahram; Riahi Madavar, Hossein

    2018-09-15

    A novel hybrid approach is presented that can more accurately predict monthly rainfall in a tropical climate by integrating a linear stochastic model with a powerful non-linear extreme learning machine method. This new hybrid method was evaluated by considering four general scenarios. In the first scenario, the modeling process is initiated without preprocessing the input data, as a base case, while in the other three scenarios one-step and two-step procedures are utilized to make the model predictions more precise. The scenarios are based on combinations of stationarization techniques (i.e., differencing, seasonal and non-seasonal standardization and spectral analysis) and normality transforms (i.e., Box-Cox, John and Draper, Yeo and Johnson, Johnson, Box-Cox-Mod, log, log standard, and Manly). In scenario 2, which is a one-step scenario, the stationarization methods are employed as preprocessing approaches. In scenarios 3 and 4, different combinations of normality transforms and stationarization methods are considered as preprocessing techniques. In total, 61 sub-scenarios are evaluated, resulting in 11013 models (10785 linear methods, 4 nonlinear models, and 224 hybrid models). The uncertainty of the linear, nonlinear and hybrid models is examined by the Monte Carlo technique. The best preprocessing technique is the utilization of the Johnson normality transform and seasonal standardization, respectively (R2 = 0.99; RMSE = 0.6; MAE = 0.38; RMSRE = 0.1; MARE = 0.06; UI = 0.03; UII = 0.05). The results of the uncertainty analysis indicated the good performance of the proposed technique (d-factor = 0.27; 95PPU = 83.57). Moreover, the results of the proposed methodology were compared with an evolutionary hybrid of the adaptive neuro-fuzzy inference system with the firefly algorithm (ANFIS-FFA), demonstrating that the new hybrid method outperformed the ANFIS-FFA method. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Estimating forest attribute parameters for small areas using nearest neighbors techniques

    Treesearch

    Ronald E. McRoberts

    2012-01-01

    Nearest neighbors techniques have become extremely popular, particularly for use with forest inventory data. With these techniques, a population unit prediction is calculated as a linear combination of observations for a selected number of population units in a sample that are most similar, or nearest, in a space of ancillary variables to the population unit requiring...
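
    The linear-combination form of the prediction is easy to sketch. Below, a target unit's attribute is estimated from its k nearest sample units in the space of ancillary variables, with inverse-distance weights as one common choice; the data are synthetic placeholders:

      import numpy as np

      rng = np.random.default_rng(2)
      X_sample = rng.uniform(size=(50, 3))            # ancillary variables
      y_sample = X_sample @ np.array([2.0, 1.0, 0.5]) # observed attribute
      x_target = np.array([0.4, 0.6, 0.2])            # unit needing a prediction

      k = 5
      d = np.linalg.norm(X_sample - x_target, axis=1)
      nn = np.argsort(d)[:k]                          # k nearest sample units
      w = 1.0 / (d[nn] + 1e-9)
      w /= w.sum()                                    # weights sum to one
      print("prediction:", w @ y_sample[nn])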

  18. Comparison of acrylamide intake from Western and guideline based diets using probabilistic techniques and linear programming.

    PubMed

    Katz, Josh M; Winter, Carl K; Buttrey, Samuel E; Fadel, James G

    2012-03-01

    Western and guideline based diets were compared to determine if dietary improvements resulting from following dietary guidelines reduce acrylamide intake. Acrylamide forms in heat treated foods and is a human neurotoxin and animal carcinogen. Acrylamide intake from the Western diet was estimated with probabilistic techniques using teenage (13-19 years) National Health and Nutrition Examination Survey (NHANES) food consumption estimates combined with FDA data on the levels of acrylamide in a large number of foods. Guideline based diets were derived from NHANES data using linear programming techniques to comport to recommendations from the Dietary Guidelines for Americans, 2005. Whereas the guideline based diets were more properly balanced and rich in consumption of fruits, vegetables, and other dietary components than the Western diets, acrylamide intake (mean±SE) was significantly greater (P<0.001) from consumption of the guideline based diets (0.508±0.003 μg/kg/day) than from consumption of the Western diets (0.441±0.003 μg/kg/day). Guideline based diets contained less acrylamide contributed by French fries and potato chips than Western diets. Overall acrylamide intake, however, was higher in guideline based diets as a result of more frequent breakfast cereal intake. This is believed to be the first example of a risk assessment that combines probabilistic techniques with linear programming and results demonstrate that linear programming techniques can be used to model specific diets for the assessment of toxicological and nutritional dietary components. Copyright © 2011 Elsevier Ltd. All rights reserved.
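
    A toy version of the linear programming step conveys the idea: choose daily servings that satisfy guideline-style constraints while optimizing a dietary component. The food list, nutrient values, and limits below are invented for illustration, not taken from the study:

      import numpy as np
      from scipy.optimize import linprog

      foods = ["cereal", "fries", "fruit"]
      acrylamide = np.array([0.20, 0.50, 0.00])  # ug per serving (hypothetical)
      fiber = np.array([3.0, 2.0, 2.5])          # g per serving
      kcal = np.array([150.0, 300.0, 80.0])

      # Minimize acrylamide intake subject to fiber >= 25 g and kcal <= 2200,
      # with 0 to 6 servings of each food per day.
      res = linprog(c=acrylamide,
                    A_ub=np.vstack([-fiber, kcal]),
                    b_ub=np.array([-25.0, 2200.0]),
                    bounds=[(0, 6)] * len(foods))
      print("servings:", res.x.round(2), "acrylamide (ug/day):", round(res.fun, 3))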

  19. Combining polarimetry and spectropolarimetry techniques in diagnostics of cancer changes in biological tissues

    NASA Astrophysics Data System (ADS)

    Yermolenko, Sergey; Ivashko, Pavlo; Gruia, Ion; Gruia, Maria; Peresunko, Olexander; Zelinska, Natalia; Voloshynskyi, Dmytro; Fedoruk, Olexander; Zimnyakov, Dmitry; Alonova, Marina

    2015-02-01

    The aim of the study is to combine polarimetry and spectropolarimetry techniques for identifying changes of the optical-geometrical structure in different kinds of biotissues with solid tumours. It is found that linear dichroism appears in biotissues (human esophagus, muscle tissue of rats, human prostate tissue, cervical smear) with cancer diseases, the magnitude of which depends on the type of tissue and on the time of cancer process development.

  20. Timber management planning with timber ram and goal programming

    Treesearch

    Richard C. Field

    1978-01-01

    By using goal programming to enhance the linear programming of Timber RAM, multiple decision criteria were incorporated in the timber management planning of a National Forest in the southeastern United States. Combining linear and goal programming capitalizes on the advantages of the two techniques and produces operationally feasible solutions. This enhancement may...

  1. Protein fold recognition using geometric kernel data fusion.

    PubMed

    Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves

    2014-07-01

    Various approaches based on features extracted from protein sequences and often machine learning methods have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry-inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼ 86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
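
    The geometry-inspired alternative to convex linear combinations can be sketched with the matrix geometric mean of two symmetric positive definite kernel matrices; the small kernels below are hypothetical, and scipy is assumed:

      import numpy as np
      from scipy.linalg import inv, sqrtm

      def geometric_mean(K1, K2):
          """Matrix geometric mean:
          K1 # K2 = K1^(1/2) (K1^(-1/2) K2 K1^(-1/2))^(1/2) K1^(1/2)."""
          S = sqrtm(K1)
          Si = inv(S)
          return S @ sqrtm(Si @ K2 @ Si) @ S

      K1 = np.array([[2.0, 0.5], [0.5, 1.0]])
      K2 = np.array([[1.0, 0.2], [0.2, 3.0]])
      print(np.real_if_close(geometric_mean(K1, K2)))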

  2. Linear combination reading program for capture gamma rays

    USGS Publications Warehouse

    Tanner, Allan B.

    1971-01-01

    This program computes a weighting function, Qj, which gives a scalar output value of unity when applied to the spectrum of a desired element and a minimum value (considering statistics) when applied to spectra of materials not containing the desired element. Intermediate values are obtained for materials containing the desired element, in proportion to the amount of the element they contain. The program is written in the BASIC language in a format specific to the Hewlett-Packard 2000A Time-Sharing System, and is an adaptation of an earlier program for linear combination reading for X-ray fluorescence analysis (Tanner and Brinkerhoff, 1971). Following the program is a sample run from a study of the application of the linear combination technique to capture-gamma-ray analysis for calcium (report in preparation).
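
    The weighting-function idea can be sketched as a constrained least-squares problem: force a response of exactly one on the desired element's spectrum while minimizing the mean-square response to background spectra. The synthetic spectra and the small ridge term below are assumptions of this sketch, not the BASIC program's method:

      import numpy as np

      rng = np.random.default_rng(7)
      channels = 64
      s = np.exp(-0.5 * ((np.arange(channels) - 20) / 2.0) ** 2)  # desired element
      B = rng.uniform(size=(5, channels))                         # backgrounds

      C = B.T @ B + 1e-6 * np.eye(channels)   # background response (+ ridge)
      Ci_s = np.linalg.solve(C, s)
      Q = Ci_s / (s @ Ci_s)                   # minimizes Q@C@Q subject to Q@s == 1
      print("response to desired element:", Q @ s)
      print("responses to backgrounds:", (B @ Q).round(3))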

  3. A Comparison of Multivariable Control Design Techniques for a Turbofan Engine Control

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Watts, Stephen R.

    1995-01-01

    This paper compares two previously published design procedures for two different multivariable control design techniques for application to a linear engine model of a jet engine. The two multivariable control design techniques compared were the Linear Quadratic Gaussian with Loop Transfer Recovery (LQG/LTR) and the H-Infinity synthesis. The two control design techniques were used with specific previously published design procedures to synthesize controls which would provide equivalent closed loop frequency response for the primary control loops while assuring adequate loop decoupling. The resulting controllers were then reduced in order to minimize the programming and data storage requirements for a typical implementation. The reduced order linear controllers designed by each method were combined with the linear model of an advanced turbofan engine and the system performance was evaluated for the continuous linear system. Included in the performance analysis are the resulting frequency and transient responses as well as actuator usage and rate capability for each design method. The controls were also analyzed for robustness with respect to structured uncertainties in the unmodeled system dynamics. The two controls were then compared for performance capability and hardware implementation issues.

  4. Discriminant forest classification method and system

    DOEpatents

    Chen, Barry Y.; Hanley, William G.; Lemmond, Tracy D.; Hiller, Lawrence J.; Knapp, David A.; Mugge, Marshall J.

    2012-11-06

    A hybrid machine learning methodology and system for classification that combines classical random forest (RF) methodology with discriminant analysis (DA) techniques to provide enhanced classification capability. A DA technique which uses feature measurements of an object to predict its class membership, such as linear discriminant analysis (LDA) or Andersen-Bahadur linear discriminant technique (AB), is used to split the data at each node in each of its classification trees to train and grow the trees and the forest. When training is finished, a set of n DA-based decision trees of a discriminant forest is produced for use in predicting the classification of new samples of unknown class.

  5. Conceptual design optimization study

    NASA Technical Reports Server (NTRS)

    Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.

    1990-01-01

    The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.

  6. A General Linear Model Approach to Adjusting the Cumulative GPA.

    ERIC Educational Resources Information Center

    Young, John W.

    A general linear model (GLM), using least-squares techniques, was used to develop a criterion measure to replace freshman year grade point average (GPA) in college admission predictive validity studies. Problems with the use of GPA include those associated with the combination of grades from different courses and disciplines into a single measure,…

  7. The measurement of the earth's radiation budget as a problem in information theory - A tool for the rational design of earth observing systems

    NASA Technical Reports Server (NTRS)

    Barkstrom, B. R.

    1983-01-01

    The measurement of the earth's radiation budget has been chosen to illustrate the technique of objective system design. The measurement process is an approximately linear transformation of the original field of radiant exitances, so that linear statistical techniques may be employed. The combination of variability, measurement strategy, and error propagation is presently made with the help of information theory, as suggested by Kondratyev et al. (1975) and Peckham (1974). Covariance matrices furnish the quantitative statement of field variability.

  8. Application of empirical mode decomposition with local linear quantile regression in financial time series forecasting.

    PubMed

    Jaber, Abobaker M; Ismail, Mohd Tahir; Altaher, Alsaidi M

    2014-01-01

    This paper mainly forecasts the daily closing price of stock markets. We propose a two-stage technique that combines the empirical mode decomposition (EMD) with nonparametric methods of local linear quantile (LLQ). We use the proposed technique, EMD-LLQ, to forecast two stock index time series. Detailed experiments are implemented for the proposed method, in which EMD-LLQ, EMD, and Holt-Winter methods are compared. The proposed EMD-LLQ model is determined to be superior to the EMD and Holt-Winter methods in predicting the stock closing prices.

  9. A linear circuit analysis program with stiff systems capability

    NASA Technical Reports Server (NTRS)

    Cook, C. H.; Bavuso, S. J.

    1973-01-01

    Several existing network analysis programs have been modified and combined to employ a variable topological approach to circuit translation. Efficient numerical integration techniques are used for transient analysis.

  10. Proper orthogonal decomposition-based spectral higher-order stochastic estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baars, Woutijn J., E-mail: wbaars@unimelb.edu.au; Tinney, Charles E.

    A unique routine, capable of identifying both linear and higher-order coherence in multiple-input/output systems, is presented. The technique combines two well-established methods: Proper Orthogonal Decomposition (POD) and Higher-Order Spectra Analysis. The latter of these is based on known methods for characterizing nonlinear systems by way of Volterra series. In that, both linear and higher-order kernels are formed to quantify the spectral (nonlinear) transfer of energy between the system's input and output. This reduces essentially to spectral Linear Stochastic Estimation when only first-order terms are considered, and is therefore presented in the context of stochastic estimation as spectral Higher-Order Stochastic Estimation (HOSE). The trade-off to seeking higher-order transfer kernels is that the increased complexity restricts the analysis to single-input/output systems. Low-dimensional (POD-based) analysis techniques are inserted to alleviate this void as POD coefficients represent the dynamics of the spatial structures (modes) of a multi-degree-of-freedom system. The mathematical framework behind this POD-based HOSE method is first described. The method is then tested in the context of jet aeroacoustics by modeling acoustically efficient large-scale instabilities as combinations of wave packets. The growth, saturation, and decay of these spatially convecting wave packets are shown to couple both linearly and nonlinearly in the near-field to produce waveforms that propagate acoustically to the far-field for different frequency combinations.

  11. Burn-injured tissue detection for debridement surgery through the combination of non-invasive optical imaging techniques.

    PubMed

    Heredia-Juesas, Juan; Thatcher, Jeffrey E; Lu, Yang; Squiers, John J; King, Darlene; Fan, Wensheng; DiMaio, J Michael; Martinez-Lorenzo, Jose A

    2018-04-01

    The process of burn debridement is a challenging technique requiring significant skills to identify the regions that need excision and their appropriate excision depths. In order to assist surgeons, a machine learning tool is being developed to provide a quantitative assessment of burn-injured tissue. This paper presents three non-invasive optical imaging techniques capable of distinguishing four kinds of tissue (healthy skin, viable wound bed, shallow burn, and deep burn) during serial burn debridement in a porcine model. All combinations of these three techniques have been studied through a k-fold cross-validation method. In terms of global performance, the combination of all three techniques significantly improves the classification accuracy with respect to just one technique, from 0.42 up to more than 0.76. Furthermore, a non-linear spatial filtering based on the mode of a small neighborhood has been applied as a post-processing technique, in order to improve the performance of the classification. Using this technique, the global accuracy reaches a value close to 0.78 and, for some particular tissues and combination of techniques, the accuracy improves by 13%.
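
    The mode-based post-processing step has a compact generic form: replace each pixel's predicted label by the most frequent label in its neighborhood. The sketch below uses a random 4-class label image as a stand-in for the tissue map and assumes scipy:

      import numpy as np
      from scipy.ndimage import generic_filter

      rng = np.random.default_rng(6)
      labels = rng.integers(0, 4, size=(16, 16))   # placeholder tissue classes

      def neighborhood_mode(window):
          # Most frequent label in the 3x3 window around each pixel.
          return np.bincount(window.astype(int), minlength=4).argmax()

      smoothed = generic_filter(labels, neighborhood_mode, size=3)
      print(smoothed)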

  12. The Use of Shrinkage Techniques in the Estimation of Attrition Rates for Large Scale Manpower Models

    DTIC Science & Technology

    1988-07-27

    autoregressive model combined with a linear program that solves for the coefficients using MAD. But this success has diminished with time (Rowe..."Harrison-Stevens Forecasting and the Multiprocess Dynamic Linear Model", The American Statistician, v.40, pp. 129-135, 1986. 8. Box, G. E. P. and...1950. 40. McCullagh, P. and Nelder, J., Generalized Linear Models, Chapman and Hall, 1983. 41. McKenzie, E., General Exponential Smoothing and the

  13. Vector Adaptive/Predictive Encoding Of Speech

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey; Gersho, Allen

    1989-01-01

    Vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at a coding rate of 9.6 kb/s and of reasonably good quality at 4.8 kb/s. Requires 3 to 4 million multiplications and additions per second. Combines advantages of adaptive/predictive coding and of code-excited linear prediction, which yields speech of high quality but requires 600 million multiplications and additions per second at an encoding rate of 4.8 kb/s. Vector adaptive/predictive coding technique bridges gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.

  14. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface.

    PubMed

    Fisz, Jacek J

    2006-12-07

    The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and accelerates the optimization process considerably because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of the kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to chi(2), obtained from the Taylor series expansion of chi(2), is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones, and it does not apply to model functions which are multi-linear combinations of nonlinear functions.
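
    The separable structure is easy to demonstrate on the biexponential decay mentioned above. In the hedged sketch below, scipy's differential evolution stands in for the paper's GA: it searches the nonlinear lifetimes while the linear amplitudes are recovered at each step by least squares (the MLR step); the data are synthetic:

      import numpy as np
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(3)
      t = np.linspace(0.0, 10.0, 200)
      y = 2.0 * np.exp(-t / 0.8) + 0.7 * np.exp(-t / 4.0)
      y = y + rng.normal(scale=0.01, size=t.size)   # synthetic decay

      def residual(taus):
          basis = np.column_stack([np.exp(-t / tau) for tau in taus])
          amps, *_ = np.linalg.lstsq(basis, y, rcond=None)  # linear step
          return np.sum((basis @ amps - y) ** 2)

      res = differential_evolution(residual, bounds=[(0.1, 2.0), (2.0, 10.0)])
      basis = np.column_stack([np.exp(-t / tau) for tau in res.x])
      amps, *_ = np.linalg.lstsq(basis, y, rcond=None)
      print("lifetimes:", res.x.round(3), "amplitudes:", amps.round(3))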

  15. Polarization and Color Filtering Applied to Enhance Photogrammetric Measurements of Reflective Surfaces

    NASA Technical Reports Server (NTRS)

    Wells, Jeffrey M.; Jones, Thomas W.; Danehy, Paul M.

    2005-01-01

    Techniques for enhancing photogrammetric measurement of reflective surfaces by reducing noise were developed utilizing principles of light polarization. Signal selectivity with polarized light was also compared to signal selectivity using chromatic filters. Combining principles of linear cross polarization and color selectivity enhanced signal-to-noise ratios by as much as 800 fold. More typical improvements with combining polarization and color selectivity were about 100 fold. We review polarization-based techniques and present experimental results comparing the performance of traditional retroreflective targeting materials, cornercube targets returning depolarized light, and color selectivity.

  16. Weighted hybrid technique for recommender system

    NASA Astrophysics Data System (ADS)

    Suriati, S.; Dwiastuti, Meisyarah; Tulus, T.

    2017-12-01

    Recommender systems have become very popular and play an important role in information systems and webpages nowadays. A recommender system tries to predict which items a user may like based on his activity on the system. There are some familiar techniques to build a recommender system, such as content-based filtering and collaborative filtering. Content-based filtering does not involve opinions from humans to make the prediction, while collaborative filtering does, so collaborative filtering can predict more accurately. However, collaborative filtering cannot give predictions for items which have never been rated by any user. In order to cover the drawbacks of each approach with the advantages of the other, both approaches can be combined with an approach known as the hybrid technique. The hybrid technique used in this work is the weighted technique, in which the prediction score is a linear combination of the scores gained by the combined techniques. The purpose of this work is to show how a weighted hybrid technique combining content-based filtering and item-based collaborative filtering can work in a movie recommender system, and to compare the performance when both approaches are combined against each approach working alone. Three experiments are done in this work, combining both techniques with different parameters. The result shows that the weighted hybrid technique does not substantially boost performance, but it helps to give prediction scores for unrated movies that are impossible to recommend using collaborative filtering alone.

  17. Quantitative structure-activity relationship study of P2X7 receptor inhibitors using combination of principal component analysis and artificial intelligence methods.

    PubMed

    Ahmadi, Mehdi; Shahlaei, Mohsen

    2015-01-01

    P2X7 antagonist activity for a set of 49 molecules of the P2X7 receptor antagonists, derivatives of purine, was modeled with the aid of chemometric and artificial intelligence techniques. The activity of these compounds was estimated by means of combination of principal component analysis (PCA), as a well-known data reduction method, genetic algorithm (GA), as a variable selection technique, and artificial neural network (ANN), as a non-linear modeling method. First, a linear regression, combined with PCA, (principal component regression) was operated to model the structure-activity relationships, and afterwards a combination of PCA and ANN algorithm was employed to accurately predict the biological activity of the P2X7 antagonist. PCA preserves as much of the information as possible contained in the original data set. Seven most important PC's to the studied activity were selected as the inputs of ANN box by an efficient variable selection method, GA. The best computational neural network model was a fully-connected, feed-forward model with 7-7-1 architecture. The developed ANN model was fully evaluated by different validation techniques, including internal and external validation, and chemical applicability domain. All validations showed that the constructed quantitative structure-activity relationship model suggested is robust and satisfactory.

  18. Quantitative structure–activity relationship study of P2X7 receptor inhibitors using combination of principal component analysis and artificial intelligence methods

    PubMed Central

    Ahmadi, Mehdi; Shahlaei, Mohsen

    2015-01-01

    P2X7 antagonist activity for a set of 49 molecules of the P2X7 receptor antagonists, derivatives of purine, was modeled with the aid of chemometric and artificial intelligence techniques. The activity of these compounds was estimated by means of combination of principal component analysis (PCA), as a well-known data reduction method, genetic algorithm (GA), as a variable selection technique, and artificial neural network (ANN), as a non-linear modeling method. First, a linear regression, combined with PCA, (principal component regression) was operated to model the structure–activity relationships, and afterwards a combination of PCA and ANN algorithm was employed to accurately predict the biological activity of the P2X7 antagonist. PCA preserves as much of the information as possible contained in the original data set. Seven most important PC's to the studied activity were selected as the inputs of ANN box by an efficient variable selection method, GA. The best computational neural network model was a fully-connected, feed-forward model with 7−7−1 architecture. The developed ANN model was fully evaluated by different validation techniques, including internal and external validation, and chemical applicability domain. All validations showed that the constructed quantitative structure–activity relationship model suggested is robust and satisfactory. PMID:26600858

  19. [Relation between Body Height and Combined Length of Manubrium and Mesosternum of Sternum Measured by CT-VRT in Southwest Han Population].

    PubMed

    Luo, Ying-zhen; Tu, Meng; Fan, Fei; Zheng, Jie-qian; Yang, Ming; Li, Tao; Zhang, Kui; Deng, Zhen-hua

    2015-06-01

    To establish the linear regression equation between body height and the combined length of the manubrium and mesosternum of the sternum measured by the CT volume rendering technique (CT-VRT) in a southwest Han population. One hundred and sixty subjects, including 80 males and 80 females, were selected from a southwest Han population for routine CT-VRT (reconstruction thickness 1 mm) examination. The lengths of both the manubrium and the mesosternum were recorded, and the combined length of the manubrium and mesosternum was taken as the sum of the two. The sex-specific linear regression equations between the combined length of the manubrium and mesosternum and the real body height of each subject were deduced. The sex-specific simple linear regression equations between the combined length of the manubrium and mesosternum (x3) and body height (y) were established (male: y = 135.000+2.118 x3; female: y = 120.790+2.808 x3). Both equations showed statistical significance (P < 0.05) with 100% predictive accuracy. CT-VRT is an effective method for measurement of the index of the sternum, and the combined length of the manubrium and mesosternum from CT-VRT can be used for body height estimation in a southwest Han population.

  20. An Approach for Automatic Generation of Adaptive Hypermedia in Education with Multilingual Knowledge Discovery Techniques

    ERIC Educational Resources Information Center

    Alfonseca, Enrique; Rodriguez, Pilar; Perez, Diana

    2007-01-01

    This work describes a framework that combines techniques from Adaptive Hypermedia and Natural Language processing in order to create, in a fully automated way, on-line information systems from linear texts in electronic format, such as textbooks. The process is divided into two steps: an "off-line" processing step, which analyses the source text,…

  1. Combining angular differential imaging and accurate polarimetry with SPHERE/IRDIS to characterize young giant exoplanets

    NASA Astrophysics Data System (ADS)

    van Holstein, Rob G.; Snik, Frans; Girard, Julien H.; de Boer, Jozua; Ginski, C.; Keller, Christoph U.; Stam, Daphne M.; Beuzit, Jean-Luc; Mouillet, David; Kasper, Markus; Langlois, Maud; Zurlo, Alice; de Kok, Remco J.; Vigan, Arthur

    2017-09-01

    Young giant exoplanets emit infrared radiation that can be linearly polarized up to several percent. This linear polarization can trace: 1) the presence of atmospheric cloud and haze layers, 2) spatial structure, e.g. cloud bands and rotational flattening, 3) the spin axis orientation and 4) particle sizes and cloud top pressure. We introduce a novel high-contrast imaging scheme that combines angular differential imaging (ADI) and accurate near-infrared polarimetry to characterize self-luminous giant exoplanets. We implemented this technique at VLT/SPHERE-IRDIS and developed the corresponding observing strategies, the polarization calibration and the data-reduction approaches. The combination of ADI and polarimetry is challenging, because the field rotation required for ADI negatively affects the polarimetric performance. By combining ADI and polarimetry we can characterize planets that can be directly imaged with a very high signal-to-noise ratio. We use the IRDIS pupil-tracking mode and combine ADI and principal component analysis to reduce speckle noise. We take advantage of IRDIS' dual-beam polarimetric mode to eliminate differential effects that severely limit the polarimetric sensitivity (flat-fielding errors, differential aberrations and seeing), and thus further suppress speckle noise. To correct for instrumental polarization effects, we apply a detailed Mueller matrix model that describes the telescope and instrument and that has an absolute polarimetric accuracy <= 0.1%. Using this technique we have observed the planets of HR 8799 and the (sub-stellar) companion PZ Tel B. Unfortunately, we do not detect a polarization signal in a first analysis. We estimate preliminary 1σ upper limits on the degree of linear polarization of ~ 1% and ~ 0.1% for the planets of HR 8799 and PZ Tel B, respectively. The achieved sub-percent sensitivity and accuracy show that our technique has great promise for characterizing exoplanets through direct-imaging polarimetry.

  2. Numerical methods in Markov chain modeling

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef; Stewart, William J.

    1989-01-01

    Several methods for computing stationary probability distributions of Markov chains are described and compared. The main linear algebra problem consists of computing an eigenvector of a sparse, usually nonsymmetric, matrix associated with a known eigenvalue. It can also be cast as a problem of solving a homogeneous singular linear system. Several methods based on combinations of Krylov subspace techniques are presented. The performance of these methods on some realistic problems are compared.
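
    The core linear-algebra problem reduces, in the simplest case, to finding the left eigenvector of the transition matrix for eigenvalue one. The sketch below applies plain power iteration to a small example chain (far simpler than the Krylov methods discussed; the chain is invented):

      import numpy as np

      P = np.array([[0.9, 0.1, 0.0],
                    [0.2, 0.7, 0.1],
                    [0.0, 0.3, 0.7]])   # row-stochastic transition matrix

      pi = np.ones(3) / 3.0
      for _ in range(1000):
          pi = pi @ P                   # left eigenvector (power) iteration
      print("stationary distribution:", pi.round(4))
      assert np.allclose(pi @ P, pi, atol=1e-8)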

  3. Remote detection of electronic devices

    DOEpatents

    Judd, Stephen L [Los Alamos, NM; Fortgang, Clifford M [Los Alamos, NM; Guenther, David C [Los Alamos, NM

    2012-09-25

    An apparatus and method for detecting solid-state electronic devices are described. Non-linear junction detection techniques are combined with spread-spectrum encoding and cross correlation to increase the range and sensitivity of the non-linear junction detection and to permit the determination of the distances of the detected electronics. Nonlinear elements are detected by transmitting a signal at a chosen frequency and detecting higher harmonic signals that are returned from responding devices.
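
    The ranging idea combines a pseudo-random code with cross correlation, which a few lines can illustrate. The sketch models the device response simply as a weak, delayed copy of the code in noise; the code length, delay, and amplitudes are invented:

      import numpy as np

      rng = np.random.default_rng(8)
      code = rng.choice([-1.0, 1.0], size=1024)    # PN spreading code
      true_delay = 137                             # round-trip delay in samples

      received = np.zeros(2048)
      received[true_delay:true_delay + code.size] = 0.05 * code  # weak return
      received += rng.normal(scale=0.1, size=received.size)      # noise

      xcorr = np.correlate(received, code, mode="valid")
      print("estimated delay:", int(np.argmax(xcorr)), "samples")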

  4. Fiber-based Coherent Lidar for Target Ranging, Velocimetry, and Atmospheric Wind Sensing

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Pierrottet, Diego

    2006-01-01

    By employing a combination of optical heterodyne and linear frequency modulation techniques and utilizing state-of-the-art fiber optic technologies, highly efficient, compact and reliable lidar suitable for operation in a space environment is being developed.

  5. Application of Mathematical Signal Processing Techniques to Mission Systems. (l’Application des techniques mathematiques du traitement du signal aux systemes de conduite des missions)

    DTIC Science & Technology

    1999-11-01

    represents the linear time invariant (LTI) response of the combined analysis/synthesis system while the second represents the aliasing introduced into...effectively to implement voice scrambling systems based on time-frequency permutation. The most general form of such a system is shown in Fig. 22 where...

  6. Fast correction approach for wavefront sensorless adaptive optics based on a linear phase diversity technique.

    PubMed

    Yue, Dan; Nie, Haitao; Li, Ye; Ying, Changsheng

    2018-03-01

    Wavefront sensorless (WFSless) adaptive optics (AO) systems have been widely studied in recent years. To reach optimum results, such systems require an efficient correction method. This paper presents a fast wavefront correction approach for a WFSless AO system based mainly on the linear phase diversity (PD) technique. The fast closed-loop control algorithm is set up based on the linear relationship between the drive voltage of the deformable mirror (DM) and the far-field images of the system, which is obtained through the linear PD algorithm combined with the influence function of the DM. A large number of phase screens under different turbulence strengths are simulated to test the performance of the proposed method. The numerical simulation results show that the method has a fast convergence rate and strong correction ability: a few correction iterations achieve good correction results and effectively improve the imaging quality of the system while requiring fewer CCD measurements.

  7. Autoregressive linear least square single scanning electron microscope image signal-to-noise ratio estimation.

    PubMed

    Sim, Kok Swee; NorHisham, Syafiq

    2016-11-01

    A technique based on a linear Least Squares Regression (LSR) model is applied to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. In order to test the accuracy of this technique for SNR estimation, a number of SEM images are initially corrupted with white noise. The autocorrelation functions (ACF) of the original and the corrupted SEM images are formed to serve as the reference point for estimating the SNR value of the corrupted image. The LSR technique is then compared with three existing techniques known as nearest neighbourhood, first-order interpolation, and the combination of both nearest neighbourhood and first-order interpolation. The actual and the estimated SNR values of all these techniques are then calculated for comparison purposes. It is shown that the LSR technique attains the highest accuracy compared to the other three existing techniques, as the absolute difference between the actual and the estimated SNR value is relatively small. SCANNING 38:771-782, 2016. © 2016 Wiley Periodicals, Inc.
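
    The underlying ACF trick admits a short sketch: white noise inflates only the zero-lag autocorrelation, so a linear least-squares fit through nearby lags, extrapolated back to lag zero, estimates the noise-free peak. The one-dimensional synthetic signal below stands in for an SEM image line, and the choice of fitted lags is an assumption:

      import numpy as np

      rng = np.random.default_rng(4)
      signal = np.sin(np.linspace(0.0, 8.0 * np.pi, 512))
      noisy = signal + rng.normal(scale=0.3, size=signal.size)

      x = noisy - noisy.mean()
      acf = np.correlate(x, x, mode="full")[x.size - 1:] / x.size

      lags = np.arange(1, 6)                  # fit lags 1..5, skipping lag 0
      coef = np.polyfit(lags, acf[1:6], 1)    # linear least-squares fit
      r0_hat = np.polyval(coef, 0.0)          # extrapolated noise-free peak
      snr = r0_hat / (acf[0] - r0_hat)
      print("estimated SNR:", round(snr, 2),
            "true SNR:", round(signal.var() / 0.3 ** 2, 2))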

  8. Wavelet packets for multi- and hyper-spectral imagery

    NASA Astrophysics Data System (ADS)

    Benedetto, J. J.; Czaja, W.; Ehler, M.; Flake, C.; Hirn, M.

    2010-01-01

    State of the art dimension reduction and classification schemes in multi- and hyper-spectral imaging rely primarily on the information contained in the spectral component. To better capture the joint spatial and spectral data distribution we combine the Wavelet Packet Transform with the linear dimension reduction method of Principal Component Analysis. Each spectral band is decomposed by means of the Wavelet Packet Transform and we consider a joint entropy across all the spectral bands as a tool to exploit the spatial information. Dimension reduction is then applied to the Wavelet Packets coefficients. We present examples of this technique for hyper-spectral satellite imaging. We also investigate the role of various shrinkage techniques to model non-linearity in our approach.

  9. Comparison of different modelling approaches of drive train temperature for the purposes of wind turbine failure detection

    NASA Astrophysics Data System (ADS)

    Tautz-Weinert, J.; Watson, S. J.

    2016-09-01

    Effective condition monitoring techniques for wind turbines are needed to improve maintenance processes and reduce operational costs. Normal behaviour modelling of temperatures with information from other sensors can help to detect wear processes in drive trains. In a case study, modelling of bearing and generator temperatures is investigated with operational data from the SCADA systems of more than 100 turbines. The focus is here on automated training and testing on a farm level to enable an on-line system, which will detect failures without human interpretation. Modelling based on linear combinations, artificial neural networks, adaptive neuro-fuzzy inference systems, support vector machines and Gaussian process regression is compared. The selection of suitable modelling inputs is discussed with cross-correlation analyses and a sensitivity study, which reveals that the investigated modelling techniques react in different ways to an increased number of inputs. The case study highlights advantages of modelling with linear combinations and artificial neural networks in a feedforward configuration.
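
    A linear-combination baseline of the kind compared here fits in a few lines: regress a component temperature on other SCADA signals over a healthy period, then monitor the residual. The signals and the three-sigma alarm rule below are synthetic illustrations, not the study's data:

      import numpy as np

      rng = np.random.default_rng(9)
      n = 1000
      power = rng.uniform(0.0, 2000.0, n)       # kW
      ambient = rng.uniform(-5.0, 30.0, n)      # deg C
      temp = 0.01 * power + 0.8 * ambient + 20.0 + rng.normal(scale=0.5, size=n)

      A = np.column_stack([power, ambient, np.ones(n)])
      coef, *_ = np.linalg.lstsq(A, temp, rcond=None)   # train on healthy data

      residual = temp - A @ coef
      alarms = np.abs(residual) > 3.0 * residual.std()  # residual monitoring
      print("coefficients:", coef.round(3), "alarms:", int(alarms.sum()))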

  10. Remote sensing and GIS-based prediction and assessment of copper-gold resources in Thailand

    NASA Astrophysics Data System (ADS)

    Yang, Shasha; Wang, Gongwen; Du, Wenhui; Huang, Luxiong

    2014-03-01

    Quantitative integration of geological information is a frontier and hotspot of prospecting decision research worldwide. The formation of large-scale Cu-Au deposits is influenced by complicated geological events and controlled by various geological factors (strata, structure and alteration). In this paper, using a copper-gold deposit district in Thailand as a case study, geological anomaly theory is applied together with the typical copper-gold metallogenic model; ETM+ remote sensing images, geological maps and the mineral geology database of the study area are combined using GIS techniques. These techniques yield ore-forming information, including geological information (strata, linear-ring faults, intrusions) and remote sensing information (hydroxyl alteration, iron alteration, linear-ring structure), from which Cu-Au prospect targets are identified using a weights-of-evidence model. The results show that remote sensing and geological data can be combined to rapidly predict and assess mineral resources for exploration in a regional metallogenic belt.

  11. Prediction of monthly rainfall in Victoria, Australia: Clusterwise linear regression approach

    NASA Astrophysics Data System (ADS)

    Bagirov, Adil M.; Mahmood, Arshad; Barton, Andrew

    2017-05-01

    This paper develops the Clusterwise Linear Regression (CLR) technique for the prediction of monthly rainfall. CLR is a combination of clustering and regression techniques. It is formulated as an optimization problem, and an incremental algorithm is designed to solve it. The algorithm is applied to predict monthly rainfall in Victoria, Australia, using rainfall data with five input meteorological variables over the period 1889-2014 from eight geographically diverse weather stations. The prediction performance of the CLR method is evaluated by comparing observed and predicted rainfall values using four measures of forecast accuracy. The proposed method is also compared with CLR under the maximum likelihood framework solved by the expectation-maximization algorithm, as well as with multiple linear regression, artificial neural networks and support vector machines for regression. The results demonstrate that the proposed algorithm outperforms the other methods in most locations.
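
    For illustration, a toy clusterwise linear regression in numpy that alternates between assigning samples to the best-fitting line and refitting each line; this local heuristic (which may need restarts) is a stand-in for, not a reproduction of, the paper's incremental algorithm:

```python
import numpy as np

def clusterwise_lr(X, y, k=2, iters=50, seed=0):
    """Toy clusterwise linear regression: alternately assign each sample to
    the cluster whose line fits it best, then refit each cluster's line."""
    rng = np.random.default_rng(seed)
    Xb = np.column_stack([X, np.ones(len(X))])   # append intercept column
    labels = rng.integers(0, k, len(X))
    coef = np.zeros((k, Xb.shape[1]))
    for _ in range(iters):
        for j in range(k):                       # refit each cluster's line
            mask = labels == j
            if mask.any():
                coef[j], *_ = np.linalg.lstsq(Xb[mask], y[mask], rcond=None)
        err = (Xb @ coef.T - y[:, None]) ** 2    # squared error under each line
        labels = err.argmin(axis=1)              # reassign samples
    return coef, labels

# Two interleaved linear regimes standing in for distinct rainfall regimes.
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (400, 1))
y = np.where(X[:, 0] > 0, 3 * X[:, 0], -2 * X[:, 0]) + rng.normal(0, 0.05, 400)
coef, labels = clusterwise_lr(X, y, k=2)
print(np.round(coef, 2))        # ideally ~[[3, 0], [-2, 0]] up to cluster order
```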

  12. Is the linear modeling technique good enough for optimal form design? A comparison of quantitative analysis models.

    PubMed

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I (QTTI), grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of the performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process.

  13. Is the Linear Modeling Technique Good Enough for Optimal Form Design? A Comparison of Quantitative Analysis Models

    PubMed Central

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I (QTTI), grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of the performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process. PMID:23258961

  14. Combined slope ratio analysis and linear-subtraction: An extension of the Pearce ratio method

    NASA Astrophysics Data System (ADS)

    De Waal, Sybrand A.

    1996-07-01

    A new technique, called combined slope ratio analysis, has been developed by extending the Pearce element ratio or conserved-denominator method (Pearce, 1968) to its logical conclusions. If two stoichiometric substances are mixed and certain chemical components are uniquely contained in one of the two mixing substances, then, by treating these unique components as conserved, the composition of the substance not containing the relevant component can be calculated accurately within the limits imposed by analytical and geological error. The calculated composition can then be subjected to rigorous statistical testing using the linear-subtraction method recently advanced by Woronow (1994). Application of combined slope ratio analysis to the rocks of the Uwekahuna Laccolith, Hawaii, USA, and the lavas of the 1959 summit eruption of Kilauea Volcano, Hawaii, USA, yields results that are consistent with field observations.

  15. Identity method for particle number fluctuations and correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorenstein, M. I.

    An incomplete particle identification distorts the observed event-by-event fluctuations of the hadron chemical composition in nucleus-nucleus collisions. A new experimental technique called the identity method was recently proposed. It eliminated the misidentification problem for one specific combination of the second moments in a system of two hadron species. In the present paper, this method is extended to calculate all the second moments in a system with an arbitrary number of hadron species. Special linear combinations of the second moments are introduced. These combinations are presented in terms of single-particle variables and can be found experimentally from event-by-event averaging. The mathematical problem is then reduced to solving a system of linear equations. The effect of incomplete particle identification is fully eliminated from the final results.

  16. Development of a computer technique for the prediction of transport aircraft flight profile sonic boom signatures. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Coen, Peter G.

    1991-01-01

    A new computer technique for the analysis of transport aircraft sonic boom signature characteristics was developed. This new technique, based on linear theory methods, combines the previously separate equivalent area and F function development with a signature propagation method using a single geometry description. The new technique was implemented in a stand-alone computer program and was incorporated into an aircraft performance analysis program. Through these implementations, both configuration designers and performance analysts are given new capabilities to rapidly analyze an aircraft's sonic boom characteristics throughout the flight envelope.

  17. Multiscale morphological filtering for analysis of noisy and complex images

    NASA Astrophysics Data System (ADS)

    Kher, A.; Mitra, S.

    Images acquired with passive sensing techniques suffer from illumination variations and poor local contrast that create major difficulties in interpretation and identification tasks. On the other hand, images acquired with active sensing techniques based on monochromatic illumination are degraded by speckle noise. Mathematical morphology offers elegant techniques to handle a wide range of image degradation problems. Unlike linear filters, morphological filters do not blur edges and hence maintain higher image resolution. Their rich mathematical framework facilitates the design and analysis of these filters as well as their hardware implementation. Morphological filters are easier to implement and are more cost effective and efficient than several conventional linear filters. Morphological filters that remove speckle noise while maintaining high resolution and preserving thin image regions, which are particularly vulnerable to speckle noise, were developed and applied to SAR imagery. These filters used a combination of linear (one-dimensional) structuring elements in different (typically four) orientations. Although this approach preserves more detail than simple morphological filters using two-dimensional structuring elements, the limited orientations of the one-dimensional elements only approximate the fine details of the region boundaries. A more robust filter designed recently overcomes the limitation of fixed orientations; it uses a combination of concave and convex structuring elements. Morphological operators are also useful in extracting features from visible and infrared imagery. A multiresolution image pyramid obtained with successive filtering and a subsampling process aids in the removal of illumination variations and enhances local contrast. A morphology-based interpolation scheme was also introduced to reduce intensity discontinuities created in any morphological filtering task. The generality of morphological filtering techniques in extracting information from a wide variety of images obtained with active and passive sensing techniques is discussed. Such techniques are particularly useful in obtaining more information from the fusion of complex images acquired by different sensors such as SAR, visible, and infrared.
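
    A minimal scipy sketch of the multi-orientation idea: grey-level openings with 1-D line structuring elements in four orientations, combined by a per-pixel maximum (the element length and test image are assumptions):

```python
import numpy as np
from scipy import ndimage

def directional_opening(img, length=9):
    """Grey-level opening with 1-D line structuring elements at 0, 45, 90
    and 135 degrees; the per-pixel maximum of the four openings preserves
    thin linear structures while removing speckle."""
    L = length
    horiz = np.ones((1, L), bool)
    vert = np.ones((L, 1), bool)
    diag = np.eye(L, dtype=bool)
    anti = np.fliplr(diag)
    openings = [ndimage.grey_opening(img, footprint=fp)
                for fp in (horiz, vert, diag, anti)]
    return np.max(openings, axis=0)

# Speckled test image with one thin bright line.
rng = np.random.default_rng(0)
img = rng.exponential(1.0, (128, 128))        # speckle-like background
img[64, 20:110] += 10.0                       # thin horizontal structure
filtered = directional_opening(img)
print(filtered[64, 60] > filtered[32, 60])    # line survives, background drops
```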

  18. Multiscale Morphological Filtering for Analysis of Noisy and Complex Images

    NASA Technical Reports Server (NTRS)

    Kher, A.; Mitra, S.

    1993-01-01

    Images acquired with passive sensing techniques suffer from illumination variations and poor local contrast that create major difficulties in interpretation and identification tasks. On the other hand, images acquired with active sensing techniques based on monochromatic illumination are degraded by speckle noise. Mathematical morphology offers elegant techniques to handle a wide range of image degradation problems. Unlike linear filters, morphological filters do not blur edges and hence maintain higher image resolution. Their rich mathematical framework facilitates the design and analysis of these filters as well as their hardware implementation. Morphological filters are easier to implement and are more cost effective and efficient than several conventional linear filters. Morphological filters that remove speckle noise while maintaining high resolution and preserving thin image regions, which are particularly vulnerable to speckle noise, were developed and applied to SAR imagery. These filters used a combination of linear (one-dimensional) structuring elements in different (typically four) orientations. Although this approach preserves more detail than simple morphological filters using two-dimensional structuring elements, the limited orientations of the one-dimensional elements only approximate the fine details of the region boundaries. A more robust filter designed recently overcomes the limitation of fixed orientations; it uses a combination of concave and convex structuring elements. Morphological operators are also useful in extracting features from visible and infrared imagery. A multiresolution image pyramid obtained with successive filtering and a subsampling process aids in the removal of illumination variations and enhances local contrast. A morphology-based interpolation scheme was also introduced to reduce intensity discontinuities created in any morphological filtering task. The generality of morphological filtering techniques in extracting information from a wide variety of images obtained with active and passive sensing techniques is discussed. Such techniques are particularly useful in obtaining more information from the fusion of complex images acquired by different sensors such as SAR, visible, and infrared.

  19. Linear-sweep voltammetry of a soluble redox couple in a cylindrical electrode

    NASA Technical Reports Server (NTRS)

    Weidner, John W.

    1991-01-01

    An approach is described for using the linear sweep voltammetry (LSV) technique to study the kinetics of flooded porous electrodes by treating a porous electrode as a collection of identical, noninterconnected cylindrical pores filled with electrolyte. This assumption makes it possible to study the behavior of this ideal electrode as that of a single pore. Alternatively, for an electrode of a given pore-size distribution, it is possible to predict the performance of different pore sizes and then combine the performance values.

  20. An independent Cepheid distance scale: Current status

    NASA Technical Reports Server (NTRS)

    Barnes, T. G., III

    1980-01-01

    An independent distance scale for Cepheid variables is discussed. The apparent magnitude and the visual surface brightness, inferred from an appropriate color index, are used to determine the angular diameter variation of the Cepheid. When combined with the linear displacement curve obtained from the integrated radial velocity curve, the distance and linear radius are determined. The attractiveness of the method is its complete independence of all other stellar distance scales, even though a number of practical difficulties currently exist in implementing the technique.

  1. A computational study on convolutional feature combination strategies for grade classification in colon cancer using fluorescence microscopy data

    NASA Astrophysics Data System (ADS)

    Chowdhury, Aritra; Sevinsky, Christopher J.; Santamaria-Pang, Alberto; Yener, Bülent

    2017-03-01

    The cancer diagnostic workflow is typically performed by highly specialized and trained pathologists, for whom analysis is expensive both in terms of time and money. This work focuses on grade classification in colon cancer. The analysis is performed over 3 protein markers, namely E-cadherin, beta-actin and collagen IV. In addition, we also use a virtual Hematoxylin and Eosin (HE) stain. This study compares various ways in which we can manipulate the information from the 4 different images of the tissue samples and arrive at a coherent and unified response based on the data at our disposal. Pre-trained convolutional neural networks (CNNs) are the method of choice for feature extraction. The AlexNet architecture trained on the ImageNet database is used for this purpose. We extract a 4096-dimensional feature vector corresponding to the 6th layer in the network. A linear SVM is used to classify the data. The information from the 4 different images pertaining to a particular tissue sample is combined using the following techniques: soft voting, hard voting, multiplication, addition, linear combination, concatenation and multi-channel feature extraction. We observe that we obtain better results in general when we use a linear combination of the feature representations. We use 5-fold cross validation to perform the experiments. The best results are obtained when the various features are linearly combined, resulting in a mean accuracy of 91.27%.
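
    Two of the combination strategies (concatenation and linear combination) can be sketched with scikit-learn as follows; the feature matrices, labels and weights are random stand-ins for the CNN features, not the study's data:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, d = 200, 64                      # stand-ins for per-image CNN features
stains = [rng.normal(size=(n, d)) for _ in range(4)]   # 4 images per sample
y = rng.integers(0, 2, n)           # random labels, mechanics only

# Concatenation: one long feature vector per tissue sample.
concat = np.hstack(stains)

# Linear combination: weighted sum of the four feature vectors.
w = [0.4, 0.3, 0.2, 0.1]            # hypothetical weights
lincomb = sum(wi * s for wi, s in zip(w, stains))

for name, X in [("concatenation", concat), ("linear combination", lincomb)]:
    acc = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")     # ~0.5 here, since the labels are random
```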

  2. Technique and outcomes of laparoscopic-combined linear stapler and hand-sutured side-to-side esophagojejunostomy with Roux-en-Y reconstruction as a treatment modality in patients undergoing proximal gastrectomy for benign and malignant disease of the gastroesophageal junction.

    PubMed

    Esquivel, Carlos M; Ampudia, Carolina; Fridman, Abraham; Moon, Rena; Szomstein, Samuel; Rosenthal, Raul J

    2014-02-01

    Circular stapler and hand-sutured esophagojejunostomy has been the most popular technique in patients undergoing proximal gastrectomy with Roux-en-Y reconstruction for disease processes of the gastroesophageal junction. In recent years, with the advent of laparoscopic bariatric surgical techniques and refined linear stapler cutters, surgeons have developed the linear stapler side-to-side technique as a valid option. The aim of this study is to describe our technique and review the outcomes of Roux-en-Y reconstruction with linear staplers after laparoscopic proximal gastrectomy for malignant and benign disease. After Internal Review Board approval and with adherence to the Health Insurance Portability and Accountability Act guidelines, a retrospective review of a prospectively collected database was conducted. A total of 14 patients underwent laparoscopic proximal gastric resection at our institution during a 3-year period from January 2008 to January 2011. Sex, body mass index, prior surgeries, complications of the prior surgery, intraoperative complications, pathologic findings, postoperative complications, hospital stay, and outpatient follow-up were recorded in the preoperative and postoperative periods. Our patient population consisted of 9 women and 5 men, with a mean age of 45.42 years and a mean body mass index of 35.64 kg/m². The indications for proximal gastrectomy were: in 4 patients, a leak at the angle of His secondary to sleeve gastrectomy for morbid obesity; in 1 patient, a stricture after vertical banded gastroplasty; in 1 patient, revision of an eroded gastric band; in 1 patient, revision of an eroded mesh secondary to hiatal hernia repair; in 1 patient, conversion of a failed Nissen; in 3 patients, total gastrectomy due to stage 2 gastric cancer; and in 1 patient, a gastrointestinal stromal tumor. There were no intraoperative complications. All procedures were completed laparoscopically. The mean operative time was 137.16 minutes. The mean hospital stay was 7.6 days. One patient had a postoperative stricture at the esophagojejunal anastomosis that required multiple dilatations. All patients with gastric cancer are free of tumor recurrence. Laparoscopic proximal gastrectomy with Roux-en-Y reconstruction through a combined side-to-side linear stapler and hand-sewn esophagojejunal anastomosis appears to be a feasible and safe approach.

  3. Theoretical explanation of the polarization-converting system achieved by beam shaping and combination technique and its performance under high power conditions

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Li, Xiao; Shang, YaPing; Xu, XiaoJun

    2015-10-01

    Compared with traditional solid-state lasers, the fiber laser has clear advantages and broad applications in remote welding, 3D cutting and national defense. However, owing to thermal effects in the gain medium, nonlinear effects, stress birefringence and other negative factors, it is very difficult to obtain high-power linearly polarized output from a single laser. To overcome these limitations, a polarization-converting system is designed in this paper using a beam shaping and combination technique, which transforms naturally polarized laser light into linearly polarized light in real time, resolving the difficulty of generating high-power linearly polarized output from fiber lasers. The principle by which a Gaussian beam is converted into a hollow beam on passing through two axicons, and the combination of the Gaussian beam and the hollow beam, are discussed. In the experimental verification, the energy conversion efficiency reached 93.1%, with a remarkable enhancement of the extinction ratio from 3% to 98%, benefiting from the high conversion efficiency of the axicons; the system worked well under high-power conditions and maintained excellent far-field divergence. The experimental observations also agreed well with the simulation. The experiments prove that this polarization-converting system does not affect the laser structure, is easy to control, requires no feedback or control system, and is stable and reliable. It can readily be applied to the polarization conversion of high-power lasers.

  4. Calculative techniques for transonic flows about certain classes of wing body combinations

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.; Spreiter, J. R.

    1972-01-01

    Procedures based on the method of local linearization and transonic equivalence rule were developed for predicting properties of transonic flows about certain classes of wing-body combinations. The procedures are applicable to transonic flows with free stream Mach number in the ranges near one, below the lower critical and above the upper critical. Theoretical results are presented for surface and flow field pressure distributions for both lifting and nonlifting situations.

  5. Application of machine learning techniques to analyse the effects of physical exercise in ventricular fibrillation.

    PubMed

    Caravaca, Juan; Soria-Olivas, Emilio; Bataller, Manuel; Serrano, Antonio J; Such-Miquel, Luis; Vila-Francés, Joan; Guerrero, Juan F

    2014-02-01

    This work presents the application of machine learning techniques to analyse the influence of physical exercise on the physiological properties of the heart during ventricular fibrillation. To this end, different kinds of classifiers (linear and neural models) are used to discriminate between trained and sedentary rabbit hearts. The use of these classifiers in combination with a wrapper feature selection algorithm makes it possible to extract knowledge about the most relevant features of the problem. The results show that neural models outperform linear classifiers (better performance indices and better dimensionality reduction). The most relevant features for describing the benefits of physical exercise are those related to myocardial heterogeneity, mean activation rate and activation complexity. © 2013 Published by Elsevier Ltd.
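
    A hedged scikit-learn sketch of wrapper feature selection with a linear and a neural classifier; the synthetic features merely stand in for the electrogram-derived variables of the study:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for features of trained vs sedentary hearts.
X, y = make_classification(n_samples=120, n_features=12, n_informative=4,
                           random_state=0)

for clf in (LogisticRegression(max_iter=1000),
            MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                          random_state=0)):
    # Wrapper selection: greedily add the features that help this classifier.
    sfs = SequentialFeatureSelector(clf, n_features_to_select=4, cv=5)
    sfs.fit(X, y)
    acc = cross_val_score(clf, sfs.transform(X), y, cv=5).mean()
    print(type(clf).__name__, np.flatnonzero(sfs.get_support()), f"{acc:.3f}")
```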

  6. A New Stochastic Equivalent Linearization Implementation for Prediction of Geometrically Nonlinear Vibrations

    NASA Technical Reports Server (NTRS)

    Muravyov, Alexander A.; Turner, Travis L.; Robinson, Jay H.; Rizzi, Stephen A.

    1999-01-01

    In this paper, the problem of random vibration of geometrically nonlinear MDOF structures is considered. The solutions obtained by application of two different versions of a stochastic linearization method are compared with exact (F-P-K) solutions. The formulation of a relatively new version of the stochastic linearization method (the energy-based version) is generalized to the MDOF system case. Also, a new method for the determination of nonlinear stiffness coefficients for MDOF structures is demonstrated. This method, in combination with the equivalent linearization technique, is implemented in a new computer program. Results in terms of root-mean-square (RMS) displacements obtained using the new program and an existing in-house code are compared for two examples of beam-like structures.
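
    For a single-DOF special case, the equivalent linearization fixed-point iteration can be sketched as below; the Duffing coefficients and white-noise level are illustrative assumptions, not the paper's MDOF formulation:

```python
import numpy as np

def duffing_rms(c=0.2, k=1.0, eps=0.5, S0=0.01, tol=1e-10):
    """Equivalent linearization of x'' + c*x' + k*x + eps*x**3 = w(t),
    where w is Gaussian white noise with two-sided PSD S0.

    Gaussian closure gives k_eq = k + 3*eps*sigma2, while the linear SDOF
    response variance is sigma2 = pi*S0/(c*k_eq); iterate to a fixed point.
    """
    sigma2 = np.pi * S0 / (c * k)            # start from the linear system
    while True:
        k_eq = k + 3.0 * eps * sigma2        # equivalent linear stiffness
        new = np.pi * S0 / (c * k_eq)        # variance of linearized system
        if abs(new - sigma2) < tol:
            return np.sqrt(new), k_eq
        sigma2 = new

rms, k_eq = duffing_rms()
print(f"RMS displacement {rms:.4f}, equivalent stiffness {k_eq:.4f}")
```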

  7. Performance of signal-to-noise ratio estimation for scanning electron microscope using autocorrelation Levinson-Durbin recursion model.

    PubMed

    Sim, K S; Lim, M S; Yeap, Z X

    2016-07-01

    A new technique to quantify the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images is proposed, known as the autocorrelation Levinson-Durbin recursion (ACLDR) model. To test the performance of this technique, an SEM image is corrupted with noise. The autocorrelation functions of the original image and the noisy image are formed, and the signal spectrum based on the autocorrelation function of the image is derived. ACLDR is then used as an SNR estimator to quantify the signal spectrum of the noisy image. The SNR values of the original image and the quantified image are calculated. ACLDR is then compared with three existing techniques: nearest neighbourhood, first-order linear interpolation, and nearest neighbourhood combined with first-order linear interpolation. It is shown that the ACLDR model achieves higher accuracy in SNR estimation. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

  8. Amplitude Frequency Response Measurement: A Simple Technique

    ERIC Educational Resources Information Center

    Satish, L.; Vora, S. C.

    2010-01-01

    A simple method is described to combine a modern function generator and a digital oscilloscope to configure a setup that can directly measure the amplitude frequency response of a system. This is achieved by synchronously triggering both instruments, with the function generator operated in the "Linear-Sweep" frequency mode, while the oscilloscope…

  9. Plant cell wall characterization using scanning probe microscopy techniques

    PubMed Central

    Yarbrough, John M; Himmel, Michael E; Ding, Shi-You

    2009-01-01

    Lignocellulosic biomass is today considered a promising renewable resource for bioenergy production. A combined chemical and biological process is currently under consideration for the conversion of polysaccharides from plant cell wall materials, mainly cellulose and hemicelluloses, to simple sugars that can be fermented to biofuels. Native plant cellulose forms nanometer-scale microfibrils that are embedded in a polymeric network of hemicelluloses, pectins, and lignins; this explains, in part, the recalcitrance of biomass to deconstruction. The chemical and structural characteristics of these plant cell wall constituents remain largely unknown today. Scanning probe microscopy techniques, particularly atomic force microscopy and its application in characterizing plant cell wall structure, are reviewed here. We also further discuss future developments based on scanning probe microscopy techniques that combine linear and nonlinear optical techniques to characterize plant cell wall nanometer-scale structures, specifically apertureless near-field scanning optical microscopy and coherent anti-Stokes Raman scattering microscopy. PMID:19703302

  10. Using Chaotic System in Encryption

    NASA Astrophysics Data System (ADS)

    Findik, Oğuz; Kahramanli, Şirzat

    In this paper, chaotic systems and the RSA encryption algorithm are combined to develop an encryption algorithm that meets modern standards. E. Lorenz's weather-forecast equations, which are used to simulate non-linear systems, are utilized to create a chaotic map; these equations can be used to generate random numbers. To achieve up-to-date standards and support both online and offline use, a new encryption technique that combines chaotic systems and the RSA encryption algorithm has been developed; the combination of the two forms the encryption system.
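
    A minimal sketch of using the Lorenz equations as a chaotic pseudo-random byte source (illustrative only; this is not the paper's algorithm and is not cryptographically secure):

```python
import numpy as np

def lorenz_bytes(n, x0=(1.0, 1.0, 1.0), sigma=10.0, rho=28.0,
                 beta=8.0 / 3.0, dt=0.01, burn=1000):
    """Generate pseudo-random bytes from the Lorenz system."""
    x, y, z = x0
    out = bytearray()
    for i in range(burn + n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz   # Euler step
        if i >= burn:
            # Keep low-order digits of the state, which mix fastest.
            out.append(int(abs(x) * 1e6) % 256)
    return bytes(out)

stream = lorenz_bytes(16)
print(stream.hex())
```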

  11. A feasibility study for compressed sensing combined phase contrast MR angiography reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Hoon; Hong, Cheol-Pyo; Lee, Man-Woo; Han, Bong-Soo

    2012-02-01

    Phase contrast magnetic resonance angiography (PC MRA) is a technique for measuring flow velocity and visualizing vessels simultaneously. PC MRA requires long scan times because each of the flow-encoding gradients, which are of bipolar type, must be acquired to reconstruct the angiographic image; image acquisition takes even longer on a low-tesla MRI system. In this study, we evaluated the feasibility of compressed sensing (CS) reconstruction of PC MRA data acquired on a low-tesla MRI system. We used a non-linear reconstruction algorithm known as Bregman iteration for the CS image reconstruction and validated the usefulness of the CS-combined PC MRA reconstruction technique. The CS-reconstructed PC MRA images provide a level of image quality similar to that of fully sampled reconstructions. Although our results used half the sampling ratio and did not rely on specialized hardware or methods that improve the temporal resolution of MR image acquisition, such as parallel imaging reconstruction with a phased-array coil or non-Cartesian trajectories, we believe that the CS-combined PC MRA technique will help increase temporal resolution on low-tesla MRI systems.

  12. Gynecomastia: glandular-liposculpture through a single transaxillary one hole incision.

    PubMed

    Lee, Yung Ki; Lee, Jun Hee; Kang, Sang Yoon

    2018-04-01

    Gynecomastia is characterized by the benign proliferation of breast tissue in men. Herein, we present a new method for the treatment of gynecomastia, using ultrasound-assisted liposuction with both conventional and reverse-cutting-edge-tip cannulas in combination with a pull-through lipectomy technique with pituitary forceps through a single transaxillary incision. Thirty patients were treated with this technique at the authors' institution from January 2010 to January 2015. Ten patients had been treated with conventional surgical excision of the glandular/fibrous breast tissue combined with liposuction through a periareolar incision before January 2010. Medical records, clinical photographs and linear analog scale scores were analyzed to compare the surgical results and complications. The patients rated their own cosmetic outcomes on a linear analog scale; the mean overall score indicated a good or high level of satisfaction. There were no incidences of skin necrosis, hematoma, infection or scar contracture; however, one case each of seroma and nipple inversion did occur. Operative time was reduced overall with the new technique, since it is relatively simple and straightforward. According to the evaluation by four independent researchers, the patients treated with the new technique showed statistically significant improvements in scar and nipple-areolar complex (NAC) deformity compared with those treated using the conventional method. Glandular liposculpture through a single transaxillary incision is an efficient and safe technique that can provide aesthetically satisfying and consistent results.

  13. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.

  14. Linear Estimation of Particle Bulk Parameters from Multi-Wavelength Lidar Measurements

    NASA Technical Reports Server (NTRS)

    Veselovskii, Igor; Dubovik, Oleg; Kolgotin, A.; Korenskiy, M.; Whiteman, D. N.; Allakhverdiev, K.; Huseyinoglu, F.

    2012-01-01

    An algorithm for the linear estimation of aerosol bulk properties such as particle volume, effective radius and complex refractive index from multiwavelength lidar measurements is presented. The approach uses the fact that the total aerosol concentration can be well approximated as a linear combination of aerosol characteristics measured by multiwavelength lidar. Therefore, the aerosol concentration can be estimated from lidar measurements without the need to derive the size distribution, which entails more sophisticated procedures. The definition of the coefficients required for the linear estimates is based on an expansion of the particle size distribution in terms of the measurement kernels. Once the coefficients are established, the approach permits fast retrieval of aerosol bulk properties compared with the full regularization technique. In addition, the straightforward estimation of bulk properties stabilizes the inversion, making it more resistant to noise in the optical data. Numerical tests demonstrate that, for data sets containing three aerosol backscattering and two extinction coefficients (so-called 3+2 data), the uncertainties in the retrieval of particle volume and surface area are below 45% when the random uncertainties in the input data are below 20%. Moreover, using linear estimates allows reliable retrievals even when the number of input data is reduced. To evaluate the approach, the results obtained using this technique are compared with those based on the previously developed full inversion scheme that relies on a regularization procedure. Both techniques were applied to data measured by multiwavelength lidar at NASA/GSFC, and the results obtained with the two methods from the same observations are in good agreement. At the same time, the high speed of the retrieval using linear estimates makes the method preferable for generating aerosol information from extended lidar observations. To demonstrate the efficiency of the method, an extended time series of observations acquired in Turkey in May 2010 was processed using the linear estimates technique, permitting, for what we believe to be the first time, temporal-height distributions of particle parameters.
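
    Once the coefficients are fixed, the linear-estimate step reduces to one dot product per observation. A hedged numpy sketch, with synthetic stand-ins for the kernel-derived coefficients and the 3+2 optical data:

```python
import numpy as np

# Hypothetical training set: rows are 3 backscatter + 2 extinction
# coefficients ("3+2" data) for aerosols of known volume concentration.
rng = np.random.default_rng(0)
true_c = np.array([0.8, -0.3, 0.5, 1.2, 0.4])    # made-up kernel weights
D = rng.lognormal(size=(50, 5))                  # synthetic optical data
v = D @ true_c + rng.normal(0, 0.02, 50)         # volumes ~ linear in data

# Fit the linear-estimate coefficients once...
c_hat, *_ = np.linalg.lstsq(D, v, rcond=None)

# ...then a bulk property for a new observation is a single dot product,
# with no size-distribution retrieval or regularization needed.
d_new = rng.lognormal(size=5)
print("estimated volume:", float(d_new @ c_hat))
```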

  15. Machine learning-based methods for prediction of linear B-cell epitopes.

    PubMed

    Wang, Hsin-Wei; Pai, Tun-Wen

    2014-01-01

    B-cell epitope prediction facilitates immunologists in designing peptide-based vaccines, diagnostic tests, disease prevention and treatment, and antibody production. In comparison with T-cell epitope prediction, the performance of variable-length B-cell epitope prediction is still unsatisfactory. Fortunately, thanks to increasingly available verified epitope databases, bioinformaticians can apply machine learning-based algorithms to all curated data to design improved prediction tools for biomedical researchers. Here, we review related epitope prediction papers, especially those on linear B-cell epitope prediction. It should be noted that a combination of selected propensity scales and statistics of epitope residues with machine learning-based tools formulates a general way of constructing linear B-cell epitope prediction systems. It is also observed from most comparison results that the kernel method of the support vector machine (SVM) classifier outperforms other machine learning-based approaches. Hence, in this chapter, in addition to reviewing recently published papers, we introduce the fundamentals of B-cell epitopes and SVM techniques. An example of a linear B-cell prediction system based on physicochemical features and amino acid combinations is also illustrated in detail.

  16. Application of Multiregressive Linear Models, Dynamic Kriging Models and Neural Network Models to Predictive Maintenance of Hydroelectric Power Systems

    NASA Astrophysics Data System (ADS)

    Lucifredi, A.; Mazzieri, C.; Rossi, M.

    2000-05-01

    Since the operational conditions of a hydroelectric unit can vary within a wide range, the monitoring system must be able to distinguish between variations of the monitored variable caused by changes in operating conditions and those due to the onset and progression of failures and misoperations. The paper aims to identify the best technique to adopt for the monitoring system. Three different methods have been implemented and compared. Two of them use statistical techniques: the first, multiple linear regression, expresses the monitored variable as a linear function of the process parameters (independent variables), while the second, the dynamic kriging technique, is a modified multiple linear regression that represents the monitored variable as a linear combination of the process variables in such a way as to minimize the variance of the estimation error. The third is based on neural networks. Tests have shown that the monitoring system based on the kriging technique is not affected by several problems common to the other two models: the requirement of a large amount of data for tuning (both for training the neural network and for defining the optimal plane for the multiple regression), not only in the start-up phase but also after a trivial maintenance operation involving the substitution of machinery components that directly affect the observed variable; and the need for different models to describe satisfactorily the different operating ranges of the plant. The monitoring system based on the kriging statistical technique overcomes these difficulties: it does not require a large amount of data for tuning and is immediately operational (given two points, a third can be estimated immediately), and the model follows the system without adapting itself to it. The results of the experiments performed indicate that a model based on a neural network or on multiple linear regression is not optimal, and that a different approach is needed to reduce the amount of work in the learning phase, using, when available, all the information stored during the initial phase of the plant to build the reference baseline and elaborating the raw information where appropriate. A mixed approach using the kriging statistical technique and neural network techniques could optimize the result.

  17. Quantification of Liver Proton-Density Fat Fraction in a 7.1 Tesla Preclinical MR System: Impact of the Fitting Technique

    PubMed Central

    Mahlke, C; Hernando, D; Jahn, C; Cigliano, A; Ittermann, T; Mössler, A; Kromrey, ML; Domaska, G; Reeder, SB; Kühn, JP

    2016-01-01

    Purpose: To investigate the feasibility of estimating the proton-density fat fraction (PDFF) using a 7.1 Tesla magnetic resonance imaging (MRI) system and to compare the accuracy of liver fat quantification using different fitting approaches. Materials and Methods: Fourteen leptin-deficient ob/ob mice and eight intact controls were examined in a 7.1 Tesla animal scanner using a 3-dimensional six-echo chemical shift-encoded pulse sequence. Confounder-corrected PDFF was calculated using magnitude fitting (magnitude data alone) and combined fitting (complex and magnitude data). Differences between fitting techniques were compared using Bland-Altman analysis. In addition, PDFFs derived with both reconstructions were correlated with histopathological fat content and triglyceride mass fraction using linear regression analysis. Results: The PDFFs determined using the two reconstructions correlated very strongly (r=0.91). However, a small mean bias between reconstructions demonstrated divergent results (3.9%; CI 2.7%-5.1%). For both reconstructions, there was a linear correlation with histopathology (combined fitting: r=0.61; magnitude fitting: r=0.64) and triglyceride content (combined fitting: r=0.79; magnitude fitting: r=0.70). Conclusion: Liver fat quantification using MRI-derived PDFF at 7.1 Tesla is feasible. PDFF correlates strongly with histopathologically determined fat and with triglyceride content. However, small differences between PDFF reconstruction techniques may impair the robustness and reliability of the biomarker at 7.1 Tesla. PMID:27197806

  18. Comparison of stability and control parameters for a light, single-engine, high-winged aircraft using different flight test and parameter estimation techniques

    NASA Technical Reports Server (NTRS)

    Suit, W. T.; Cannaday, R. L.

    1979-01-01

    The longitudinal and lateral stability and control parameters for a high-wing general aviation airplane are examined. Estimates obtained from flight data at various flight conditions within the normal operating range of the aircraft are presented. Two estimation techniques are used: an output error technique (maximum likelihood) and an equation error technique (linear regression). The longitudinal static parameters are estimated from climbing, descending, and quasi-steady-state flight data. The lateral excitations involve a combination of rudder and aileron inputs. The sensitivity of the aircraft modes of motion to variations in the parameter estimates is discussed.

  19. A Method for Calculating Strain Energy Release Rates in Preliminary Design of Composite Skin/Stringer Debonding Under Multi-Axial Loading

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Minguet, Pierre J.; OBrien, T. Kevin

    1999-01-01

    Three simple procedures were developed to determine strain energy release rates, G, in composite skin/stringer specimens for various combinations of uniaxial and biaxial (in-plane/out-of-plane) loading conditions. These procedures may be used for parametric design studies in such a way that only a few finite element computations are necessary for a study of many load combinations. The results were compared with mixed-mode strain energy release rates calculated directly from nonlinear two-dimensional plane-strain finite element analyses using the virtual crack closure technique. The first procedure involved solving for three unknown parameters needed to determine the energy release rates. Good agreement was obtained when the external loads were used in the derived expression. This superposition technique is applicable only if the structure exhibits linear load/deflection behavior. Consequently, a second technique was derived that is applicable in the case of nonlinear load/deformation behavior. It involved calculating six unknown parameters from a set of six simultaneous linear equations, with data from six nonlinear analyses, to determine the energy release rates. This procedure was not time efficient and hence less appealing. A third procedure was developed to calculate mixed-mode energy release rates as a function of delamination length. It required only one nonlinear finite element analysis of the specimen with a single delamination length to obtain a reference solution for the energy release rates and the scale factors. The delamination was then extended in three separate linear models of the local area in the vicinity of the delamination, subjected to unit loads, to obtain the distribution of G with delamination length. Although additional modeling effort is required to create the sub-models, this local technique is efficient for parametric studies.

  20. A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2011-01-01

    An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis technique. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process if it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to a convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance is used to illustrate the application of the original and modified tare load prediction method. During the analysis of the data both the iterative and the non-iterative analysis technique were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.

  1. A unified development of several techniques for the representation of random vectors and data sets

    NASA Technical Reports Server (NTRS)

    Bundick, W. T.

    1973-01-01

    Linear vector space theory is used to develop a general representation of a set of data vectors or random vectors by linear combinations of orthonormal vectors such that the mean squared error of the representation is minimized. The orthonormal vectors are shown to be the eigenvectors of an operator. The general representation is applied to several specific problems involving the use of the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions; and the common properties of these representations are developed.
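
    A compact numpy sketch of the representation: the top-m eigenvectors of the sample covariance form the orthonormal basis, and the discarded eigenvalues give the minimized mean squared error (the data here are an arbitrary synthetic example):

```python
import numpy as np

def kl_basis(X, m):
    """Top-m eigenvectors of the sample covariance; among all rank-m linear
    representations this basis minimizes the mean squared error."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(X)
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    U = vecs[:, ::-1][:, :m]              # top-m eigenvectors
    mse = vals[::-1][m:].sum()            # discarded variance = expected MSE
    return U, mse

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) @ np.diag([3.0, 1.0, 0.1])
U, mse = kl_basis(X, 2)
X_hat = (X - X.mean(0)) @ U @ U.T + X.mean(0)   # rebuild from 2 coordinates
print(f"expected MSE {mse:.4f}, "
      f"empirical {np.mean(np.sum((X - X_hat) ** 2, axis=1)):.4f}")
```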

  2. Finite-dimensional linear approximations of solutions to general irregular nonlinear operator equations and equations with quadratic operators

    NASA Astrophysics Data System (ADS)

    Kokurin, M. Yu.

    2010-11-01

    A general scheme for improving approximate solutions to irregular nonlinear operator equations in Hilbert spaces is proposed and analyzed in the presence of errors. A modification of this scheme designed for equations with quadratic operators is also examined. The technique of universal linear approximations of irregular equations is combined with the projection onto finite-dimensional subspaces of a special form. It is shown that, for finite-dimensional quadratic problems, the proposed scheme provides information about the global geometric properties of the intersections of quadrics.

  3. A General Multidimensional Model for the Measurement of Cultural Differences.

    ERIC Educational Resources Information Center

    Olmedo, Esteban L.; Martinez, Sergio R.

    A multidimensional model for measuring cultural differences (MCD) based on factor analytic theory and techniques is proposed. The model assumes that a cultural space may be defined by means of a relatively small number of orthogonal dimensions which are linear combinations of a much larger number of cultural variables. Once a suitable,…

  4. A Simple Demonstration of Atomic and Molecular Orbitals Using Circular Magnets

    ERIC Educational Resources Information Center

    Chakraborty, Maharudra; Mukhopadhyay, Subrata; Das, Ranendu Sekhar

    2014-01-01

    A quite simple and inexpensive technique is described here to represent the approximate shapes of atomic orbitals and the molecular orbitals formed by them following the principles of the linear combination of atomic orbitals (LCAO) method. Molecular orbitals of a few simple molecules can also be pictorially represented. Instructors can employ the…

  5. Difference-Equation/Flow-Graph Circuit Analysis

    NASA Technical Reports Server (NTRS)

    Mcvey, I. M.

    1988-01-01

    A numerical technique enables rapid, approximate analyses of electronic circuits containing linear and nonlinear elements. It has been practiced in a variety of computer languages on large and small computers; for sufficiently simple circuits, programmable hand calculators can be used. Although some combinations of circuit elements make numerical solutions diverge, the technique enables quick identification of divergence and correction of circuit models so that solutions converge.
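
    A minimal example of the difference-equation approach for an RC low-pass stage; the component values and step size are arbitrary assumptions (the update diverges if the step exceeds 2RC, the kind of divergence the technique flags):

```python
# Forward-Euler difference equation for an RC low-pass circuit:
# v[n+1] = v[n] + (dt / (R * C)) * (v_in[n] - v[n]).
R, C, dt = 1e3, 1e-6, 1e-5          # 1 kohm, 1 uF, 10 us step
v, v_in = 0.0, 1.0                  # step input applied at t = 0
for n in range(500):                # simulate 5 ms (five time constants)
    v += dt / (R * C) * (v_in - v)
print(f"v(5 ms) = {v:.3f} V")       # ~1 - exp(-5) of the step, about 0.993 V
```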

  6. Spatial effect of new municipal solid waste landfill siting using different guidelines.

    PubMed

    Ahmad, Siti Zubaidah; Ahamad, Mohd Sanusi S; Yusoff, Mohd Suffian

    2014-01-01

    Proper implementation of landfill siting with the right regulations and constraints can prevent undesirable long-term effects. Different countries have their own guidelines on criteria for new landfill sites. In this article, we perform a comparative study of municipal solid waste landfill siting criteria stated in the policies and guidelines of eight different constitutional bodies from Malaysia, Australia, India, the U.S.A., Europe, China and the Middle East, and the World Bank. Subsequently, a geographic information system (GIS) multi-criteria evaluation model was applied to determine suitable new landfill sites under the different criterion parameters, using a constraint mapping technique and weighted linear combination. Application of the Macro Modeler provided in the GIS-IDRISI Andes software helps in building and executing multi-step models. In addition, the analytic hierarchy process technique was included to determine the criterion weights from the decision maker's preferences as part of the weighted linear combination procedure. The differences in the spatial results for suitable sites signify that dissimilarities in guideline specifications and requirements affect the decision-making process.
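
    A hedged sketch of the AHP-weighted linear combination step; the pairwise comparison matrix and criterion layers are invented placeholders, not the study's data:

```python
import numpy as np

# Pairwise comparison matrix for three hypothetical criteria
# (distance to water, distance to roads, slope); Saaty's 1-9 scale.
A = np.array([[1.0,   3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, vals.real.argmax()])   # principal eigenvector
w = w / w.sum()                            # normalized AHP criterion weights

# Weighted linear combination over standardized criterion maps (0..1),
# masked by binary constraints from the constraint-mapping step.
criteria = np.random.rand(3, 100, 100)     # stand-in suitability layers
constraint = np.ones((100, 100))           # 1 = allowed, 0 = excluded
suitability = constraint * np.tensordot(w, criteria, axes=1)
print(w.round(3), suitability.shape)
```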

  7. Parallel iterative solution for h and p approximations of the shallow water equations

    USGS Publications Warehouse

    Barragy, E.J.; Walters, R.A.

    1998-01-01

    A p finite element scheme and parallel iterative solver are introduced for a modified form of the shallow water equations. The governing equations are the three-dimensional shallow water equations. After a harmonic decomposition in time and rearrangement, the resulting equations are a complex Helmholtz problem for surface elevation and a complex momentum equation for the horizontal velocity. Both equations are nonlinear, and the resulting system is solved using Picard iteration combined with a preconditioned biconjugate gradient (PBCG) method for the linearized subproblems. A subdomain-based parallel preconditioner is developed which uses incomplete LU factorization with thresholding (ILUT) within subdomains, overlapping ILUT factorizations for subdomain boundaries, and under-relaxed iteration for the resulting block system. The method builds on techniques successfully applied to linear elements by introducing ordering and condensation techniques to handle uniform p refinement. The combined methods show good performance for a range of p (element order), h (element size), and N (number of processors). Performance and scalability results are presented for a field-scale problem where up to 512 processors are used. © 1998 Elsevier Science Ltd. All rights reserved.
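
    A small-scale stand-in for the linearized subproblems, using SciPy's incomplete LU with thresholding (spilu) as the preconditioner and a stabilized biconjugate gradient solver; the test matrix is an arbitrary assumption:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spilu, LinearOperator, bicgstab

# Sparse non-symmetric test system standing in for a linearized subproblem.
n = 1000
A = diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU with thresholding (ILUT) as a preconditioner.
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = bicgstab(A, b, M=M)
print("converged" if info == 0 else f"info={info}",
      "residual:", np.linalg.norm(b - A @ x))
```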

  8. A computerized symbolic integration technique for development of triangular and quadrilateral composite shallow-shell finite elements

    NASA Technical Reports Server (NTRS)

    Anderson, C. M.; Noor, A. K.

    1975-01-01

    Computerized symbolic integration was used in conjunction with group-theoretic techniques to obtain analytic expressions for the stiffness, geometric stiffness, consistent mass, and consistent load matrices of composite shallow shell structural elements. The elements are shear flexible and have variable curvature. A stiffness (displacement) formulation was used with the fundamental unknowns consisting of both the displacement and rotation components of the reference surface of the shell. The triangular elements have six and ten nodes; the quadrilateral elements have four and eight nodes and can have internal degrees of freedom associated with displacement modes which vanish along the edges of the element (bubble modes). The stiffness, geometric stiffness, consistent mass, and consistent load coefficients are expressed as linear combinations of integrals (over the element domain) whose integrands are products of shape functions and their derivatives. The evaluation of the elemental matrices is divided into two separate problems - determination of the coefficients in the linear combination and evaluation of the integrals. The integrals are performed symbolically by using the symbolic-and-algebraic-manipulation language MACSYMA. The efficiency of using symbolic integration in the element development is demonstrated by comparing the number of floating-point arithmetic operations required in this approach with those required by a commonly used numerical quadrature technique.
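
    A modern analogue of the MACSYMA step, using SymPy to integrate products of linear shape functions exactly over a reference triangle (a far simpler element than the paper's shear-flexible shells):

```python
import sympy as sp

x, y = sp.symbols("x y")
# Linear shape functions on the reference triangle with vertices
# (0,0), (1,0), (0,1).
N = [1 - x - y, x, y]

# Consistent mass matrix entries: integrals of products of shape functions,
# evaluated symbolically (exactly) rather than by numerical quadrature.
M = sp.Matrix(3, 3, lambda i, j: sp.integrate(
        sp.integrate(N[i] * N[j], (y, 0, 1 - x)), (x, 0, 1)))
print(M)   # diagonal entries 1/12, off-diagonal entries 1/24
```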

  9. Progress in multidisciplinary design optimization at NASA Langley

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.

    1993-01-01

    Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help the US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations and sequential linear programming.

  10. Nonlinear random response prediction using MSC/NASTRAN

    NASA Technical Reports Server (NTRS)

    Robinson, J. H.; Chiang, C. K.; Rizzi, S. A.

    1993-01-01

    An equivalent linearization technique was incorporated into MSC/NASTRAN to predict the nonlinear random response of structures by means of Direct Matrix Abstract Programming (DMAP) modifications and inclusion of the nonlinear differential stiffness module inside the iteration loop. An iterative process was used to determine the rms displacements. Numerical results obtained for validation on simple plates and beams are in good agreement with existing solutions in both the linear and linearized regions. The versatility of the implementation will enable the analyst to determine the nonlinear random responses for complex structures under combined loads. The thermo-acoustic response of a hexagonal thermal protection system panel is used to highlight some of the features of the program.

  11. A study of methods to predict and measure the transmission of sound through the walls of light aircraft. Integration of certain singular boundary element integrals for applications in linear acoustics

    NASA Technical Reports Server (NTRS)

    Zimmerle, D.; Bernhard, R. J.

    1985-01-01

    An alternative method for performing singular boundary element integrals for applications in linear acoustics is discussed. The method separates the integral of the characteristic solution into a singular and a nonsingular part. The singular portion is integrated with a combination of analytic and numerical techniques, while the nonsingular portion is integrated with standard Gaussian quadrature. The method may be generalized to many types of subparametric elements. The integrals over elements containing the root node are considered, and the characteristic solutions for linear acoustic problems are examined. The method may be generalized to most characteristic solutions.
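
    A hedged numerical sketch of the singular/nonsingular split for the 3-D Helmholtz kernel over a flat square element with the collocation point at its centre; the closed form for the 1/(4*pi*r) part is standard, and the element size and wavenumber are assumptions:

```python
import numpy as np

def kernel_integral(k, a, n=20):
    """Integrate the Helmholtz kernel e^{ikr}/(4*pi*r) over the square
    element [-a, a]^2 with the collocation point at its centre.

    Split:  1/(4*pi*r)              -> analytic closed form,
            (e^{ikr} - 1)/(4*pi*r)  -> bounded, Gauss-Legendre quadrature.
    """
    singular = 2.0 * a * np.log(1.0 + np.sqrt(2.0)) / np.pi   # exact value
    t, w = np.polynomial.legendre.leggauss(n)
    xs, ws = a * t, a * w
    X, Y = np.meshgrid(xs, xs)
    W = np.outer(ws, ws)
    r = np.hypot(X, Y)          # never zero at Gauss nodes for even n
    regular = np.sum(W * (np.exp(1j * k * r) - 1.0) / (4.0 * np.pi * r))
    return singular + regular

print(kernel_integral(k=2.0, a=0.1))
```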

  12. Computer-Aided Diagnosis System for Alzheimer's Disease Using Different Discrete Transform Techniques.

    PubMed

    Dessouky, Mohamed M; Elrashidy, Mohamed A; Taha, Taha E; Abdelkader, Hatem M

    2016-05-01

    Discrete transform techniques such as the discrete cosine transform (DCT), discrete sine transform (DST), discrete wavelet transform (DWT), and mel-scale frequency cepstral coefficients (MFCCs) are powerful feature extraction tools. This article presents a proposed computer-aided diagnosis (CAD) system that uses these transforms to extract the most effective and significant features of Alzheimer's disease (AD). A linear support vector machine is used as the classifier. Experimental results show that the proposed CAD system using the MFCC technique for AD recognition substantially improves system performance while requiring only a small number of significant extracted features, as compared with CAD systems based on DCT, DST, DWT, and hybrid combinations of the different transforms. © The Author(s) 2015.
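
    A minimal sketch of such a transform-feature plus linear-SVM pipeline is shown below on synthetic one-dimensional data; the data, sizes, and number of retained DCT coefficients are illustrative assumptions, not the article's settings.

        import numpy as np
        from scipy.fft import dct
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n, length, kept = 200, 256, 20
        y = rng.integers(0, 2, n)                      # toy class labels
        t = np.linspace(0, 1, length)
        # class 1 carries a low-frequency component on top of noise
        X = rng.normal(0, 1, (n, length)) + np.outer(y, np.sin(8 * np.pi * t))

        feats = dct(X, norm="ortho", axis=1)[:, :kept] # low-order DCT features
        Xtr, Xte, ytr, yte = train_test_split(feats, y, random_state=0)
        clf = LinearSVC(dual=False).fit(Xtr, ytr)
        print("test accuracy:", clf.score(Xte, yte))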

  13. Non-destructive analysis of sensory traits of dry-cured loins by MRI-computer vision techniques and data mining.

    PubMed

    Caballero, Daniel; Antequera, Teresa; Caro, Andrés; Ávila, María Del Mar; G Rodríguez, Pablo; Perez-Palacios, Trinidad

    2017-07-01

    Magnetic resonance imaging (MRI) combined with computer vision techniques has been proposed as an alternative or complementary technique for determining the quality parameters of food in a non-destructive way. The aim of this work was to analyze the sensory attributes of dry-cured loins using this approach. To this end, different MRI acquisition sequences (spin echo, gradient echo and turbo 3D), algorithms for MRI analysis (GLCM, NGLDM, GLRLM and GLCM-NGLDM-GLRLM) and predictive data mining techniques (multiple linear regression and isotonic regression) were tested. The correlation coefficient (R) and mean absolute error (MAE) were used to validate the prediction results. The combination of spin echo, GLCM and isotonic regression produced the most accurate results. In addition, the MRI data from dry-cured loins seem to be more suitable than the data from fresh loins. The application of predictive data mining techniques to computational texture features from the MRI data of loins enables the determination of the sensory traits of dry-cured loins in a non-destructive way. © 2016 Society of Chemical Industry.
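
    The reported best pipeline, texture features from a gray-level co-occurrence matrix fed to isotonic regression, can be sketched as follows on toy images, with scikit-image and scikit-learn standing in for the authors' software; all data here are synthetic placeholders for the MRI slices and sensory scores.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.isotonic import IsotonicRegression

        rng = np.random.default_rng(1)

        def contrast_feature(img):
            # one GLCM texture feature (distance 1, angle 0, 64 gray levels)
            glcm = graycomatrix(img, distances=[1], angles=[0],
                                levels=64, symmetric=True, normed=True)
            return graycoprops(glcm, "contrast")[0, 0]

        # Toy "slices": increasing noise amplitude mimics a texture trend
        feats, scores = [], []
        for k in range(30):
            img = rng.normal(32, 2 + 0.3 * k, (64, 64)).clip(0, 63)
            feats.append(contrast_feature(img.astype(np.uint8)))
            scores.append(0.2 * k + rng.normal(0, 0.5))  # monotone trait

        iso = IsotonicRegression(out_of_bounds="clip").fit(feats, scores)
        print("predicted trait for one slice:", iso.predict([feats[10]])[0])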

  14. Detector power linearity requirements and verification techniques for TMI direct detection receivers

    NASA Technical Reports Server (NTRS)

    Reinhardt, Victor S. (Inventor); Shih, Yi-Chi (Inventor); Toth, Paul A. (Inventor); Reynolds, Samuel C. (Inventor)

    1997-01-01

    A system (36, 98) for determining the linearity of an RF detector (46, 106). A first technique involves combining two RF signals from two stable local oscillators (38, 40) to form a modulated RF signal having a beat frequency, and applying the modulated RF signal to a detector (46) being tested. The output of the detector (46) is applied to a low frequency spectrum analyzer (48) such that the relationship between the power levels of the first and second harmonics of the beat frequency, as generated by the detector (46), is measured by the spectrum analyzer (48) to determine the linearity of the detector (46). In a second technique, an RF signal from a local oscillator (100) is applied to a detector (106) being tested through a first attenuator (102) and a second attenuator (104). The output voltage of the detector (106) is measured when the first attenuator (102) is set to a particular attenuation value and the second attenuator (104) is switched between first and second attenuation values. The output voltage of the detector (106) is measured again when the first attenuator (102) is set to another attenuation value and the second attenuator (104) is again switched between the first and second attenuation values. The relationship between these voltage outputs determines the linearity of the detector (106).

  15. Beyond endoscopic assessment in inflammatory bowel disease: real-time histology of disease activity by non-linear multimodal imaging

    NASA Astrophysics Data System (ADS)

    Chernavskaia, Olga; Heuke, Sandro; Vieth, Michael; Friedrich, Oliver; Schürmann, Sebastian; Atreya, Raja; Stallmach, Andreas; Neurath, Markus F.; Waldner, Maximilian; Petersen, Iver; Schmitt, Michael; Bocklitz, Thomas; Popp, Jürgen

    2016-07-01

    Assessing disease activity is a prerequisite for adequate treatment of inflammatory bowel diseases (IBD) such as Crohn’s disease and ulcerative colitis. In addition to endoscopic mucosal healing, histologic remission poses a promising end-point of IBD therapy. However, evaluating histological remission carries the risk of complications due to the acquisition of biopsies and delays diagnosis because of tissue processing procedures. In this regard, non-linear multimodal imaging techniques might serve as an unparalleled tool for real-time evaluation of microscopic IBD activity in the endoscopy unit. In this study, tissue sections were investigated using the non-linear multimodal microscopy combination of coherent anti-Stokes Raman scattering (CARS), two-photon excited autofluorescence (TPEF) and second-harmonic generation (SHG). After the measurements, a gold-standard assessment of histological indices was carried out based on a conventional H&E stain. Subsequently, various geometry- and intensity-related features were extracted from the multimodal images, and an optimized feature set was utilized to predict histological index levels with a linear classifier. Because the prediction is automated, the time to diagnosis is decreased. Therefore, non-linear multimodal imaging may provide a real-time diagnosis of IBD activity suited to assist clinical decision making within the endoscopy unit.

  16. Feature combinations and the divergence criterion

    NASA Technical Reports Server (NTRS)

    Decell, H. P., Jr.; Mayekar, S. M.

    1976-01-01

    Classifying large quantities of multidimensional remotely sensed agricultural data requires efficient and effective classification techniques and the construction of certain transformations of a dimension reducing, information preserving nature. The construction of transformations that minimally degrade information (i.e., class separability) is described. Linear dimension reducing transformations for multivariate normal populations are presented. Information content is measured by divergence.

  17. Prediction of HDR quality by combining perceptually transformed display measurements with machine learning

    NASA Astrophysics Data System (ADS)

    Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott

    2017-09-01

    We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit-depth and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model based solely on physically measured display characteristics, and a perceptual model that transforms the physical parameters using human visual system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICT-CP), which consists of the PQ luminance non-linearity (ST2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model predicts subjective quality better than the physical model and that SVM predicts better than linear regression. The significance and contribution of each display parameter were investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated, and we found that models based on the PQ non-linearity performed better.

  18. Simultaneously driven linear and nonlinear spatial encoding fields in MRI.

    PubMed

    Gallichan, Daniel; Cocosco, Chris A; Dewdney, Andrew; Schultz, Gerrit; Welz, Anna; Hennig, Jürgen; Zaitsev, Maxim

    2011-03-01

    Spatial encoding in MRI is conventionally achieved by the application of switchable linear encoding fields. The general concept of the recently introduced PatLoc (Parallel Imaging Technique using Localized Gradients) encoding is to use nonlinear fields to achieve spatial encoding. Relaxing the requirement that the encoding fields must be linear may lead to improved gradient performance or reduced peripheral nerve stimulation. In this work, a custom-built insert coil capable of generating two independent quadratic encoding fields was driven with high-performance amplifiers within a clinical MR system. In combination with the three linear encoding fields, the combined hardware is capable of independently manipulating five spatial encoding fields. With the linear z-gradient used for slice-selection, there remain four separate channels to encode a 2D-image. To compare trajectories of such multidimensional encoding, the concept of a local k-space is developed. Through simulations, reconstructions using six gradient-encoding strategies were compared, including Cartesian encoding separately or simultaneously on both PatLoc and linear gradients as well as two versions of a radial-based in/out trajectory. Corresponding experiments confirmed that such multidimensional encoding is practically achievable and demonstrated that the new radial-based trajectory offers the PatLoc property of variable spatial resolution while maintaining finite resolution across the entire field-of-view. Copyright © 2010 Wiley-Liss, Inc.

  19. Koopman Operator Framework for Time Series Modeling and Analysis

    NASA Astrophysics Data System (ADS)

    Surana, Amit

    2018-01-01

    We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator, which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations, or model forms, based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be identified directly from data using techniques for computing Koopman spectral properties, without requiring explicit knowledge of the generative model. We also introduce different notions of distance on the space of such model forms, which are essential for model comparison and clustering. We employ the space of Koopman model forms equipped with distance, in conjunction with classical machine learning techniques, to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification and for time series forecasting/anomaly detection in a power grid application.
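
    One standard way to identify such a linear (Koopman-style) model form directly from snapshot data is a least-squares operator fit, as in dynamic mode decomposition; the sketch below uses an assumed toy signal rather than the paper's applications.

        # Fit a finite linear operator A minimizing ||Y - A X||_F over
        # snapshot pairs, then read off its spectrum (a generic sketch of
        # the data-driven Koopman/DMD idea, not the paper's model forms).
        import numpy as np

        dt, n = 0.05, 400
        t = np.arange(n) * dt
        z = np.exp(-0.1 * t) * np.cos(2.0 * t)        # damped oscillation
        snap = np.vstack([z[:-1], np.gradient(z, dt)[:-1]])  # crude 2-D state

        X, Y = snap[:, :-1], snap[:, 1:]              # snapshot pairs
        A = Y @ np.linalg.pinv(X)                     # least-squares operator
        eig = np.linalg.eigvals(A)
        print("continuous-time eigenvalues:", np.log(eig) / dt)  # ~ -0.1 +/- 2j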

  20. Geopotential Error Analysis from Satellite Gradiometer and Global Positioning System Observables on Parallel Architecture

    NASA Technical Reports Server (NTRS)

    Schutz, Bob E.; Baker, Gregory A.

    1997-01-01

    The recovery of a high resolution geopotential from satellite gradiometer observations motivates the examination of high performance computational techniques. The primary subject matter addresses specifically the use of satellite gradiometer and GPS observations to form and invert the normal matrix associated with a large degree and order geopotential solution. Memory resident and out-of-core parallel linear algebra techniques along with data parallel batch algorithms form the foundation of the least squares application structure. A secondary topic includes the adoption of object oriented programming techniques to enhance modularity and reusability of code. Applications implementing the parallel and object oriented methods successfully calculate the degree variance for a degree and order 110 geopotential solution on 32 processors of the Cray T3E. The memory resident gradiometer application exhibits an overall application performance of 5.4 Gflops, and the out-of-core linear solver exhibits an overall performance of 2.4 Gflops. The combination solution derived from a sun-synchronous gradiometer orbit produces average geoid height variances of 17 millimeters.
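
    The normal-equation step that the paper distributes across processors can be sketched serially as follows, with toy sizes and random stand-ins for the gradiometer partial derivatives.

        # Accumulate the normal matrix N = A^T A and right-hand side
        # b = A^T y in blocks of observations, then solve by Cholesky
        # factorization (serial toy version of the distributed computation).
        import numpy as np
        from scipy.linalg import cho_factor, cho_solve

        rng = np.random.default_rng(0)
        n_par, block, n_blocks = 50, 200, 10
        x_true = rng.normal(size=n_par)

        N = np.zeros((n_par, n_par))
        b = np.zeros(n_par)
        for _ in range(n_blocks):               # one observation block at a time
            A = rng.normal(size=(block, n_par)) # stand-in for the partials
            y = A @ x_true + 0.01 * rng.normal(size=block)
            N += A.T @ A
            b += A.T @ y

        x_hat = cho_solve(cho_factor(N), b)     # invert the normal system
        print("max coefficient error:", np.abs(x_hat - x_true).max())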

  1. Digital processing of array seismic recordings

    USGS Publications Warehouse

    Ryall, Alan; Birtill, John

    1962-01-01

    This technical letter contains a brief review of the operations which are involved in digital processing of array seismic recordings by the methods of velocity filtering, summation, cross-multiplication and integration, and by combinations of these operations (the "UK Method" and multiple correlation). Examples are presented of analyses by the several techniques on array recordings which were obtained by the U.S. Geological Survey during chemical and nuclear explosions in the western United States. Seismograms are synthesized using actual noise and Pn-signal recordings, such that the signal-to-noise ratio, onset time and velocity of the signal are predetermined for the synthetic record. These records are then analyzed by summation, cross-multiplication, multiple correlation and the UK technique, and the results are compared. For all of the examples presented, analyses by the non-linear techniques of multiple correlation and cross-multiplication of the traces on an array recording are preferred to analyses by the linear operations involved in summation and the UK Method.

  2. Geopotential error analysis from satellite gradiometer and global positioning system observables on parallel architectures

    NASA Astrophysics Data System (ADS)

    Baker, Gregory Allen

    The recovery of a high resolution geopotential from satellite gradiometer observations motivates the examination of high performance computational techniques. The primary subject matter addresses specifically the use of satellite gradiometer and GPS observations to form and invert the normal matrix associated with a large degree and order geopotential solution. Memory resident and out-of-core parallel linear algebra techniques along with data parallel batch algorithms form the foundation of the least squares application structure. A secondary topic includes the adoption of object oriented programming techniques to enhance modularity and reusability of code. Applications implementing the parallel and object oriented methods successfully calculate the degree variance for a degree and order 110 geopotential solution on 32 processors of the Cray T3E. The memory resident gradiometer application exhibits an overall application performance of 5.4 Gflops, and the out-of-core linear solver exhibits an overall performance of 2.4 Gflops. The combination solution derived from a sun-synchronous gradiometer orbit produces average geoid height variances of 17 millimeters.

  3. Application of third molar development and eruption models in estimating dental age in Malay sub-adults.

    PubMed

    Mohd Yusof, Mohd Yusmiaidil Putera; Cauwels, Rita; Deschepper, Ellen; Martens, Luc

    2015-08-01

    Third molar development (TMD) has been widely utilized as a radiographic method for dental age estimation. Using the same radiograph of the same individual, third molar eruption (TME) information can be incorporated into the TMD regression model. This study aims to evaluate the performance of dental age estimation for the individual-method models and for the combined model (TMD and TME), based on classical multiple linear regression and principal component regression. A sample of 705 digital panoramic radiographs of Malay sub-adults aged between 14.1 and 23.8 years was collected. The techniques described by Gleiser and Hunt (modified by Kohler) and by Olze were employed to stage TMD and TME, respectively. The data were divided to develop the three respective models under the two regression approaches. The trained models were then validated on the test sample, and the accuracy of age prediction was compared between models. The coefficient of determination (R²) and root mean square error (RMSE) were calculated. In both genders, adjusted R² increased in the linear regressions of the combined model as compared with the individual models. An overall decrease in RMSE was detected in the combined model as compared with TMD (0.03-0.06) and TME (0.2-0.8). In principal component regression, the combined model exhibited low adjusted R² and high RMSE, except in males. Dental age is thus better estimated using the combined model with multiple linear regression. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  4. A simplified diagnostic model of orographic rainfall for enhancing satellite-based rainfall estimates in data-poor regions

    USGS Publications Warehouse

    Funk, Christopher C.; Michaelsen, Joel C.

    2004-01-01

    An extension of Sinclair's diagnostic model of orographic precipitation (“VDEL”) is developed for use in data-poor regions to enhance rainfall estimates. This extension (VDELB) combines a 2D linearized internal gravity wave calculation with the dot product of the terrain gradient and surface wind to approximate terrain-induced vertical velocity profiles. Slope, wind speed, and stability determine the velocity profile, with either sinusoidal or vertically decaying (evanescent) solutions possible. These velocity profiles replace the parameterized functions in the original VDEL, creating VDELB, a diagnostic accounting for buoyancy effects. A further extension (VDELB*) uses an on/off constraint derived from reanalysis precipitation fields. A validation study over 365 days in the Pacific Northwest suggests that VDELB* can best capture seasonal and geographic variations. A new statistical data-fusion technique is presented and is used to combine VDELB*, reanalysis, and satellite rainfall estimates in southern Africa. The technique, matched filter regression (MFR), sets the variance of the predictors equal to their squared correlation with observed gauge data and predicts rainfall based on the first principal component of the combined data. In the test presented here, mean absolute errors from the MFR technique were 35% lower than the satellite estimates alone. VDELB assumes a linear solution to the wave equations and a Boussinesq atmosphere, and it may give unrealistic responses under extreme conditions. Nonetheless, the results presented here suggest that diagnostic models, driven by reanalysis data, can be used to improve satellite rainfall estimates in data-sparse regions.
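
    The matched filter regression step can be sketched as follows; the scaling rule and calibration shown are one plausible reading of the description above, and all data are synthetic.

        # Matched filter regression (MFR) sketch: standardize each predictor,
        # rescale it so its variance equals its squared correlation with the
        # gauge data, then predict from the first principal component of the
        # rescaled predictors (interpretation of the description; toy data).
        import numpy as np

        rng = np.random.default_rng(0)
        n = 300
        gauge = rng.gamma(2.0, 5.0, n)                   # "observed" rainfall
        preds = np.column_stack([                        # satellite, model, ...
            gauge + rng.normal(0, 4, n),
            0.5 * gauge + rng.normal(0, 8, n),
            rng.normal(0, 1, n),                         # uninformative input
        ])

        z = (preds - preds.mean(0)) / preds.std(0)       # standardize
        r = np.array([np.corrcoef(z[:, j], gauge)[0, 1]
                      for j in range(z.shape[1])])
        zs = z * r                                       # column variance -> r^2

        _, _, vt = np.linalg.svd(zs - zs.mean(0), full_matrices=False)
        pc1 = (zs - zs.mean(0)) @ vt[0]                  # first principal comp.
        a, b0 = np.polyfit(pc1, gauge, 1)                # calibrate to gauge
        print("MAE:", np.abs(a * pc1 + b0 - gauge).mean())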

  5. Near-infrared Raman spectroscopy for estimating biochemical changes associated with different pathological conditions of cervix

    NASA Astrophysics Data System (ADS)

    Daniel, Amuthachelvi; Prakasarao, Aruna; Ganesan, Singaravelu

    2018-02-01

    The molecular level changes associated with oncogenesis precede the morphological changes in cells and tissues; hence, molecular level diagnosis would promote early diagnosis of the disease. Raman spectroscopy is capable of providing specific spectral signatures of the various biomolecules present in cells and tissues under various pathological conditions. The aim of this work is to develop a non-linear multi-class statistical methodology for discrimination of normal, neoplastic and malignant cells/tissues. The tissues were classified as normal, pre-malignant and malignant by employing Principal Component Analysis followed by an Artificial Neural Network (PC-ANN). The overall accuracy achieved was 99%. Further, to gain insight into the quantitative biochemical composition of the normal, neoplastic and malignant tissues, a linear combination of the spectra of the major biochemicals was fit to the measured Raman spectra of the tissues by a non-negative least squares technique. This analysis confirms the changes in major biomolecules such as lipids, nucleic acids, actin, glycogen and collagen associated with the different pathological conditions. To study the efficacy of this technique in comparison with histopathology, we utilized Principal Component Analysis followed by Linear Discriminant Analysis (PC-LDA) to discriminate well differentiated, moderately differentiated and poorly differentiated squamous cell carcinoma with an accuracy of 94.0%. The results demonstrate that Raman spectroscopy has the potential to complement the well-established technique of histopathology.
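
    Fitting a measured spectrum as a non-negative linear combination of reference spectra reduces to a standard non-negative least squares problem; the sketch below uses invented Gaussian "reference spectra" in place of the real biochemical spectra.

        import numpy as np
        from scipy.optimize import nnls

        wn = np.linspace(600, 1800, 400)                 # wavenumbers (cm^-1)
        band = lambda c, w: np.exp(-0.5 * ((wn - c) / w) ** 2)
        refs = np.column_stack([band(1004, 8),           # narrow ring mode
                                band(1450, 15),          # CH-bend-like band
                                band(1660, 20)])         # amide-I-like band

        true_w = np.array([0.7, 1.2, 0.4])
        measured = refs @ true_w \
            + 0.01 * np.random.default_rng(0).normal(size=wn.size)

        weights, rnorm = nnls(refs, measured)            # non-negative fit
        print("recovered composition:", weights.round(3))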

  6. Eutrophic water purification efficiency using a combination of hydrodynamic cavitation and ozonation on a pilot scale.

    PubMed

    Li, Wei-Xin; Tang, Chuan-Dong; Wu, Zhi-Lin; Wang, Wei-Min; Zhang, Yu-Feng; Zhao, Yi; Cravotto, Giancarlo

    2015-04-01

    This paper presents the purification of eutrophic water using a combination of hydrodynamic cavitation (HC) and ozonation (O3) at a continuous flow of 0.8 m³ h⁻¹ on a pilot scale. The maximum removal rate of chlorophyll a using O3 alone and the HC/O3 combination was 62.3 and 78.8%, respectively, under optimal conditions, where the ozone utilization efficiency was 64.5 and 94.8% and total energy consumption was 8.89 and 8.25 kWh m⁻³, respectively. Thus, the removal rate of chlorophyll a and the ozone utilization efficiency were improved by 26.5% and 46.9%, respectively, by the combined technique, while total energy consumption was reduced by 7.2%. Turbidity decreased linearly with the chlorophyll a removal rate, but no linear relationship existed between the removal of COD or UV254 and that of chlorophyll a. As expected, the suction-cavitation-assisted O3 exhibited higher energy efficiency than the extrusion-cavitation-assisted O3 and O3-alone methods.

  7. Modal characteristics of a simplified brake rotor model using semi-analytical Rayleigh Ritz method

    NASA Astrophysics Data System (ADS)

    Zhang, F.; Cheng, L.; Yam, L. H.; Zhou, L. M.

    2006-10-01

    Emphasis in this paper is given to the modal characteristics of a brake rotor utilized in an automotive disc brake system. The brake rotor is modeled as a combined structure comprising an annular plate connected to a segment of cylindrical shell by distributed artificial springs. Modal analysis shows the existence of three types of modes for the combined structure, depending on the involvement of each substructure. A decomposition technique is proposed, allowing each mode of the combined structure to be decomposed into a linear combination of the individual substructure modes. It is shown that the decomposition coefficients provide a direct and systematic means of carrying out modal classification and quantification.

  8. Locally linear embedding: dimension reduction of massive protostellar spectra

    NASA Astrophysics Data System (ADS)

    Ward, J. L.; Lumsden, S. L.

    2016-09-01

    We present the results of the application of locally linear embedding (LLE) to reduce the dimensionality of dereddened and continuum-subtracted near-infrared spectra, using a combination of models and real spectra of massive protostars selected from the Red MSX Source survey database. A brief comparison is also made with two other dimension reduction techniques, principal component analysis (PCA) and Isomap, using the same set of spectra, as well as with a more advanced form of LLE, Hessian locally linear embedding. We find that whilst LLE certainly has its limitations, it significantly outperforms both PCA and Isomap in classification of spectra based on the presence/absence of emission lines and provides a valuable tool for classification and analysis of large spectral data sets.
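
    A minimal version of such a comparison is easy to reproduce, with scikit-learn standing in for the authors' code and a toy nonlinear data set in place of protostellar spectra.

        import numpy as np
        from sklearn.datasets import make_s_curve
        from sklearn.decomposition import PCA
        from sklearn.manifold import LocallyLinearEmbedding

        X, color = make_s_curve(n_samples=1000, random_state=0)

        models = [
            ("LLE", LocallyLinearEmbedding(n_neighbors=12, n_components=2)),
            ("Hessian LLE", LocallyLinearEmbedding(n_neighbors=12,
                                                   n_components=2,
                                                   method="hessian")),
            ("PCA", PCA(n_components=2)),
        ]
        for name, model in models:
            Y = model.fit_transform(X)    # 2-D embedding of the 3-D data
            print(name, "embedding shape:", Y.shape)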

  9. A study of fast ionic conductors by positron annihilation

    NASA Astrophysics Data System (ADS)

    Wang, Yung-Yu; Yang, Ju-Hua; Pan, Xiao-Liang; Lei, Zhen-Xi

    1988-06-01

    New fast ionic conductor systems of LiCl-LiF-B2O3 and LiF-B2O3 were studied using the positron annihilation technique. It was found that the mid-life intensity I2 in positron annihilation has a linear relationship with the logarithm of the material's electrical conductivity, log σ. This result, combined with measurements of the linear annihilation parameter, indicated that the voids between microcrystals and network phases provide more transfer paths in the micro-crystalline LiF-LiCl-B2O3 system, which leads to the improved electrical conductivity of this type of material.

  10. Intensity Mapping Foreground Cleaning with Generalized Needlet Internal Linear Combination

    NASA Astrophysics Data System (ADS)

    Olivari, L. C.; Remazeilles, M.; Dickinson, C.

    2018-05-01

    Intensity mapping (IM) is a new observational technique for surveying the large-scale structure of matter using spectral emission lines. IM observations are contaminated by instrumental noise and astrophysical foregrounds, the latter being at least three orders of magnitude larger than the signals being sought. In this work, we apply the Generalized Needlet Internal Linear Combination (GNILC) method to subtract radio foregrounds and to recover the cosmological HI and CO signals within the IM context. For the HI IM case, we find that GNILC can reconstruct the HI plus noise power spectra with 7.0% accuracy for z = 0.13 - 0.48 (960 - 1260 MHz) and ℓ ≲ 400, while for the CO IM case, it can reconstruct the CO plus noise power spectra with 6.7% accuracy for z = 2.4 - 3.4 (26 - 34 GHz) and ℓ ≲ 3000.
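
    At the core of any internal linear combination method are the weights w = C⁻¹a / (aᵀC⁻¹a), which minimize the output variance subject to unit response to the searched signal; GNILC adds needlet localization and a data-driven signal subspace on top of this. A plain-ILC sketch on synthetic multifrequency maps:

        import numpy as np

        rng = np.random.default_rng(0)
        n_freq, n_pix = 8, 5000
        a = np.ones(n_freq)                    # signal mixing vector (flat here)
        signal = rng.normal(0, 1, n_pix)
        # bright rank-1 "foreground" with a sloped frequency spectrum
        fg = np.outer(np.linspace(3, 30, n_freq), rng.normal(0, 10, n_pix))
        noise = rng.normal(0, 0.5, (n_freq, n_pix))
        maps = np.outer(a, signal) + fg + noise

        C = np.cov(maps)                       # empirical frequency covariance
        Ci_a = np.linalg.solve(C, a)
        w = Ci_a / (a @ Ci_a)                  # ILC weights
        rec = w @ maps                         # foreground-cleaned map
        print("residual rms:", np.std(rec - signal))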

  11. Novel RF and microwave components employing ferroelectric and solid-state tunable capacitors for multi-functional wireless communication systems

    NASA Astrophysics Data System (ADS)

    Tombak, Ali

    The recent advancement in wireless communications demands an ever increasing improvement in the system performance and functionality with a reduced size and cost. This thesis demonstrates novel RF and microwave components based on ferroelectric and solid-state based tunable capacitor (varactor) technologies for the design of low-cost, small-size and multi-functional wireless communication systems. These include tunable lumped element VHF filters based on ferroelectric varactors, a beam-steering technique which, unlike conventional systems, does not require separate power divider and phase shifters, and a predistortion linearization technique that uses a varactor based tunable R-L-C resonator. Among various ferroelectric materials, Barium Strontium Titanate (BST) is actively being studied for the fabrication of high performance varactors at RF and microwave frequencies. BST based tunable capacitors are presented with typical tunabilities of 4.2:1 with the application of 5 to 10 V DC bias voltages and typical loss tangents in the range of 0.003--0.009 at VHF frequencies. Tunable lumped element lowpass and bandpass VHF filters based on BST varactors are also demonstrated with tunabilities of 40% and 57%, respectively. A new beam-steering technique is developed based on the extended resonance power dividing technique. Phased arrays based on this technique do not require separate power divider and phase shifters. Instead, the power division and phase shifting circuits are combined into a single circuit, which utilizes tunable capacitors. This results in a substantial reduction in the circuit complexity and cost. Phased arrays based on this technique can be employed in mobile multimedia services and automotive collision avoidance radars. A 2-GHz 4-antenna and a 10-GHz 8-antenna extended resonance phased arrays are demonstrated with scan ranges of 20 degrees and 18 degrees, respectively. A new predistortion linearization technique for the linearization of RF/microwave power amplifiers is also presented. This technique utilizes a varactor based tunable R-L-C resonator in shunt configuration. Due to the small number of circuit elements required, linearizers based on this technique offer low-cost and simple circuitry, hence can be utilized in handheld and cellular applications. A 1.8 GHz power amplifier with 9 dB gain is linearized using this technique. The linearizer improves the output 1-dB compression point of the power amplifier from 21 to 22.8 dBm. Adjacent channel power ratio (ACPR) is improved approximately 11 dB at an output RF power level of 17.5 dBm. The thesis is concluded by summarizing the main achievements and discussing the future work directions.

  12. Space construction base control system

    NASA Technical Reports Server (NTRS)

    Kaczynski, R. F.

    1979-01-01

    Several approaches for an attitude control system are studied and developed for a large space construction base that is structurally flexible. Digital simulations were obtained using the following techniques: (1) the multivariable Nyquist array method combined with closed loop pole allocation, (2) the linear quadratic regulator method. Equations for the three-axis simulation using the multilevel control method were generated and are presented. Several alternate control approaches are also described. A technique is demonstrated for obtaining the dynamic structural properties of a vehicle which is constructed of two or more submodules of known dynamic characteristics.
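
    The linear quadratic regulator design step mentioned above can be sketched for a single toy mode, with illustrative numbers rather than the study's vehicle model.

        # LQR gain u = -Kx from the continuous algebraic Riccati equation,
        # for a double-integrator (rigid-body-like) mode.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[0.0, 1.0], [0.0, 0.0]])
        B = np.array([[0.0], [1.0]])
        Q = np.diag([10.0, 1.0])               # state weighting
        R = np.array([[0.1]])                  # control effort weighting

        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)        # optimal gain
        print("LQR gain:", K)
        print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))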

  13. Localized surface plasmon resonances in nanostructures to enhance nonlinear vibrational spectroscopies: towards an astonishing molecular sensitivity

    PubMed Central

    2014-01-01

    Vibrational transitions contain some of the richest fingerprints of molecules and materials, providing considerable physicochemical information. They can be characterized by different spectroscopies and, alternatively, by several imaging techniques that reach sub-microscopic spatial resolution. In the quest to push the detection limit ever forward and to lower the number of vibrational oscillators needed for a reliable signal or imaging contrast, surface plasmon resonances (SPR) are extensively used to increase the local field close to the oscillators. Another approach is based on maximizing the collective response of the excited vibrational oscillators through molecular coherence. Both features are often naturally combined in vibrational nonlinear optical techniques. In this framework, this paper reviews the main achievements of the two most common vibrational nonlinear optical spectroscopies, namely surface-enhanced sum-frequency generation (SE-SFG) and surface-enhanced coherent anti-Stokes Raman scattering (SE-CARS). They can be considered the nonlinear counterparts and/or combinations of the linear surface-enhanced infrared absorption (SEIRA) and surface-enhanced Raman scattering (SERS) techniques, respectively, which are themselves branchings of the conventional IR and spontaneous Raman spectroscopies. Compared with their linear equivalents, these nonlinear vibrational spectroscopies have proved to reach higher sensitivity, down to the single molecule level, opening the way to astonishing perspectives for molecular analysis. PMID:25551056

  14. Construction of Ligand Group Orbitals for Polyatomics and Transition-Metal Complexes Using an Intuitive Symmetry-Based Approach

    ERIC Educational Resources Information Center

    Johnson, Adam R.

    2013-01-01

    A molecular orbital (MO) diagram, especially its frontier orbitals, explains the bonding and reactivity of a chemical compound. It is therefore important for students to learn how to construct one. The traditional methods used to derive these diagrams rely on linear algebra techniques to combine ligand orbitals into symmetry-adapted linear combinations…

  15. HYDRORECESSION: A toolbox for streamflow recession analysis

    NASA Astrophysics Data System (ADS)

    Arciniega, S.

    2015-12-01

    Streamflow recession curves are hydrological signatures that allow the relationship between groundwater storage and baseflow and/or low flows to be studied at the catchment scale. Recent studies have shown that streamflow recession analysis can be quite sensitive to the combination of different models, extraction techniques and parameter estimation methods. In order to better characterize streamflow recession curves, new methodologies combining multiple approaches have been recommended. The HYDRORECESSION toolbox, presented here, is a Matlab graphical user interface developed to analyse streamflow recession time series, with tools for parameterizing linear and nonlinear storage-outflow relationships through four of the most useful recession models (Maillet, Boussinesq, Coutagne and Wittenberg). The toolbox includes four parameter-fitting techniques (linear regression, lower envelope, data binning and mean squared error) and three different methods for extracting hydrograph recession segments (Vogel, Brutsaert and Aksoy). In addition, the toolbox has a module that separates the baseflow component from the observed hydrograph using the inverse reservoir algorithm. Potential applications of HYDRORECESSION include model parameter analysis, hydrological regionalization and classification, baseflow index estimation, catchment-scale recharge and low-flow modelling, among others. HYDRORECESSION is freely available for non-commercial and academic purposes.
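
    One of the fitting tasks the toolbox automates, estimating a and b in -dQ/dt = aQᵇ by linear regression in log space, can be sketched as follows on a synthetic recession limb (Python used here for illustration; the toolbox itself is Matlab).

        import numpy as np

        a_true, b_true, dt = 0.02, 1.5, 1.0    # "nonlinear reservoir" truth
        Q = [10.0]
        while Q[-1] > 0.5:                     # generate a recession limb
            Q.append(Q[-1] - dt * a_true * Q[-1] ** b_true)
        Q = np.array(Q)

        dQdt = -(Q[1:] - Q[:-1]) / dt          # finite-difference recession rate
        Qmid = 0.5 * (Q[1:] + Q[:-1])
        # log(-dQ/dt) = log(a) + b*log(Q): a straight line in log space
        b_fit, log_a = np.polyfit(np.log(Qmid), np.log(dQdt), 1)
        print(f"fitted a = {np.exp(log_a):.4f}, b = {b_fit:.3f}")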

  16. Minimally Invasive Ponto Surgery compared to the linear incision technique without soft tissue reduction for bone conduction hearing implants: study protocol for a randomized controlled trial.

    PubMed

    Calon, Tim G A; van Hoof, Marc; van den Berge, Herbert; de Bruijn, Arthur J G; van Tongeren, Joost; Hof, Janny R; Brunings, Jan Wouter; Jonhede, Sofia; Anteunis, Lucien J C; Janssen, Miranda; Joore, Manuela A; Holmberg, Marcus; Johansson, Martin L; Stokroos, Robert J

    2016-11-09

    In recent years, less invasive surgical techniques with soft tissue preservation for bone conduction hearing implants (BCHI) have been introduced, such as the linear incision technique combined with a punch. Results using this technique seem favorable in terms of the rate of peri-abutment dermatitis (PAD), esthetics, and preservation of skin sensibility. Recently, a new standardized surgical technique for BCHI placement, the Minimally Invasive Ponto Surgery (MIPS) technique, has been developed by Oticon Medical AB (Askim, Sweden). This technique aims to standardize surgery by using a novel surgical instrumentation kit and to minimize soft tissue trauma. A multicenter randomized controlled trial is designed to compare the MIPS technique to the linear incision technique with soft tissue preservation. The primary investigation center is Maastricht University Medical Center. Sixty-two participants will be included, with a 2-year follow-up period. Parameters are introduced to quantify factors such as loss of skin sensibility, dehiscence of the skin next to the abutment, skin overgrowth, and cosmetic results. A new type of sampling method is incorporated to aid in the estimation of complications. To gain further understanding of PAD, swabs and skin biopsies are collected during follow-up visits for evaluation of the bacterial profile and inflammatory cytokine expression. The primary objective of the study is to compare the incidence of PAD during the first 3 months after BCHI placement. Secondary objectives include the assessment of parameters related to surgery, wound healing, pain, loss of sensibility of the skin around the implant, implant extrusion rate, implant stability measurements, dehiscence of the skin next to the abutment, and esthetic appeal. Tertiary objectives include assessment of other factors related to PAD and a health economic evaluation. This is the first trial to compare the recently developed MIPS technique to the linear incision technique with soft tissue preservation for BCHI surgery. The newly introduced parameters and sampling method will aid in the prediction of results and complications after BCHI placement. Registered at the CCMO register in the Netherlands on 24 November 2014: NL50072.068.14. Retrospectively registered on 21 April 2015 at ClinicalTrials.gov: NCT02438618. This trial is sponsored by Oticon Medical AB.

  17. A three-dimensional FEM-DEM technique for predicting the evolution of fracture in geomaterials and concrete

    NASA Astrophysics Data System (ADS)

    Zárate, Francisco; Cornejo, Alejandro; Oñate, Eugenio

    2018-07-01

    This paper extends to three dimensions (3D) the computational technique developed by the authors in 2D for predicting the onset and evolution of fracture in a finite element mesh in a simple manner, based on combining the finite element method with the discrete element method (DEM) approach (Zárate and Oñate in Comput Part Mech 2(3):301-314, 2015). Once a crack is detected at an element edge, discrete elements are generated at the adjacent element vertexes and a simple DEM mechanism is considered in order to follow the evolution of the crack. The combination of the DEM with simple four-noded linear tetrahedron elements correctly captures the onset of fracture and its evolution, as shown in several 3D examples of application.

  18. Visualization of Global Sensitivity Analysis Results Based on a Combination of Linearly Dependent and Independent Directions

    NASA Technical Reports Server (NTRS)

    Davies, Misty D.; Gundy-Burlet, Karen

    2010-01-01

    A useful technique for the validation and verification of complex flight systems is Monte Carlo Filtering, a global sensitivity analysis that tries to find the inputs and ranges that are most likely to lead to a subset of the outputs. A thorough exploration of the parameter space for complex integrated systems may require thousands of experiments and hundreds of controlled and measured variables. Tools for analyzing this space often have limitations caused by the numerical problems associated with high dimensionality and by the assumption that all of the dimensions are independent. To combat both of these limitations, we propose a technique that uses a combination of the original variables with the derived variables obtained during a principal component analysis.
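
    A bare-bones version of Monte Carlo Filtering can be sketched as follows: sample the inputs, split the runs by an output criterion, and rank each input by how much its two conditional distributions differ (toy model and thresholds assumed).

        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(0)
        n = 5000
        X = rng.uniform(-1, 1, (n, 3))                  # three uncertain inputs
        y = 4 * X[:, 0] ** 2 + 0.5 * X[:, 1]            # input 2 is inert

        behavioral = y > 2.0                            # output subset of interest
        for j in range(X.shape[1]):
            stat, _ = ks_2samp(X[behavioral, j], X[~behavioral, j])
            print(f"input {j}: KS statistic = {stat:.3f}")  # input 0 dominates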

  19. Opto-electronic characterization of third-generation solar cells.

    PubMed

    Neukom, Martin; Züfle, Simon; Jenatsch, Sandra; Ruhstaller, Beat

    2018-01-01

    We present an overview of opto-electronic characterization techniques for solar cells, including light-induced charge extraction by linearly increasing voltage, impedance spectroscopy, transient photovoltage, charge extraction and more. Guidelines for the interpretation of experimental results are derived based on charge drift-diffusion simulations of solar cells with common performance limitations. We investigate how nonidealities such as charge injection barriers, traps and low mobilities, among others, manifest themselves in each of the studied cell characterization techniques. Moreover, comprehensive parameter extraction for an organic bulk-heterojunction solar cell comprising PCDTBT:PC70BM is demonstrated. The simulations reproduce measured results of 9 different experimental techniques. Parameter correlation is minimized due to the combination of various techniques. Thereby a route to comprehensive and accurate parameter extraction is identified.

  20. Flood Nowcasting With Linear Catchment Models, Radar and Kalman Filters

    NASA Astrophysics Data System (ADS)

    Pegram, Geoff; Sinclair, Scott

    A pilot study using real-time rainfall data as input to a parsimonious linear distributed flood forecasting model is presented. The aim of the study is to deliver an operational system capable of producing flood forecasts, in real time, for the Mgeni and Mlazi catchments near the city of Durban in South Africa. The forecasts can be made at time steps which are of the order of a fraction of the catchment response time. To this end, the model is formulated in finite difference form in an equation similar to an Auto Regressive Moving Average (ARMA) model; it is this formulation which provides the required computational efficiency. The ARMA equation is a discretely coincident form of the state-space equations that govern the response of an arrangement of linear reservoirs. This results in a functional relationship between the reservoir response constants and the ARMA coefficients, which guarantees stationarity of the ARMA model. Input to the model is a combined "best estimate" spatial rainfall field, derived from a combination of weather RADAR and satellite rainfield estimates with point rainfall given by a network of telemetering raingauges. Several strategies are employed to overcome the uncertainties associated with forecasting. Principal among these are the use of optimal (double Kalman) filtering techniques to update the model states and parameters in response to current streamflow observations, and the application of short-term forecasting techniques to provide future estimates of the rainfield as input to the model.
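
    The discretely coincident idea can be illustrated with a single linear reservoir, whose exact discretization is an AR(1)-type recursion with a coefficient fixed by the reservoir constant (a sketch under the assumption of a constant input over each time step).

        # Linear reservoir S' = -k*S + i, Q = k*S discretizes exactly to
        # Q_t = c*Q_{t-1} + (1 - c)*i_t with c = exp(-k*dt), so 0 < c < 1
        # guarantees stationarity of the resulting ARMA-type model.
        import numpy as np

        k, dt = 0.3, 1.0
        c = np.exp(-k * dt)               # reservoir constant -> AR coefficient

        rng = np.random.default_rng(0)
        rain = rng.gamma(0.5, 2.0, 200)   # toy rainfall input series
        Q = np.zeros_like(rain)
        for t in range(1, rain.size):
            Q[t] = c * Q[t - 1] + (1 - c) * rain[t]
        print("AR(1) coefficient:", c, "stationary:", 0 < c < 1)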

  1. Taming contact line instability for pattern formation

    PubMed Central

    Deblais, A.; Harich, R.; Colin, A.; Kellay, H.

    2016-01-01

    Coating surfaces with different fluids is prone to instability producing inhomogeneous films and patterns. The contact line between the coating fluid and the surface to be coated is host to different instabilities, limiting the use of a variety of coating techniques. Here we take advantage of the instability of a receding contact line towards cusp and droplet formation to produce linear patterns of variable spacings. We stabilize the instability of the cusps towards droplet formation by using polymer solutions that inhibit this secondary instability and give rise to long slender cylindrical filaments. We vary the speed of deposition to change the spacing between these filaments. The combination of the two gives rise to linear patterns into which different colloidal particles can be embedded, long DNA molecules can be stretched and particles filtered by size. The technique is therefore suitable to prepare anisotropic structures with variable properties. PMID:27506626

  2. Taming contact line instability for pattern formation.

    PubMed

    Deblais, A; Harich, R; Colin, A; Kellay, H

    2016-08-10

    Coating surfaces with different fluids is prone to instability producing inhomogeneous films and patterns. The contact line between the coating fluid and the surface to be coated is host to different instabilities, limiting the use of a variety of coating techniques. Here we take advantage of the instability of a receding contact line towards cusp and droplet formation to produce linear patterns of variable spacings. We stabilize the instability of the cusps towards droplet formation by using polymer solutions that inhibit this secondary instability and give rise to long slender cylindrical filaments. We vary the speed of deposition to change the spacing between these filaments. The combination of the two gives rise to linear patterns into which different colloidal particles can be embedded, long DNA molecules can be stretched and particles filtered by size. The technique is therefore suitable to prepare anisotropic structures with variable properties.

  3. A Hydrodynamic Instability Is Used to Create Aesthetically Appealing Patterns in Painting

    PubMed Central

    Zetina, Sandra; Godínez, Francisco A.; Zenit, Roberto

    2015-01-01

    Painters often acquire a deep empirical knowledge of the way in which paints and inks behave. Through experimentation and practice, they can control the way in which fluids move and deform to create textures and images. David Alfaro Siqueiros, a recognized Mexican muralist, invented an accidental painting technique to create new and unexpected textures. When layers of paint of different colors are poured onto a horizontal surface, the paints infiltrate one another, creating patterns of aesthetic value. In this investigation, we reproduce the technique in a controlled manner. We found that for the correct color combination, the dual viscous layer becomes Rayleigh-Taylor unstable: the density mismatch of the two color paints drives the formation of a spotted pattern. Experiments and a linear instability analysis were conducted to understand the properties of the process. We also argue that this flow configuration can be used to study the linear properties of this instability. PMID:25942586

  4. Probabilistic vs linear blending approaches to shared control for wheelchair driving.

    PubMed

    Ezeh, Chinemelu; Trautman, Pete; Devigne, Louise; Bureau, Valentin; Babel, Marie; Carlson, Tom

    2017-07-01

    Some people with severe mobility impairments are unable to operate powered wheelchairs reliably and effectively using commercially available interfaces. This has sparked a body of research into "smart wheelchairs", which assist users to drive safely and create opportunities for them to use alternative interfaces. Various "shared control" techniques have been proposed to provide an appropriate level of assistance that is satisfactory and acceptable to the user. Most shared control techniques employ a traditional strategy called linear blending (LB), where the user's commands and the wheelchair's autonomous commands are combined in some proportion. In this paper, however, we implement a more generalised form of shared control called probabilistic shared control (PSC). This probabilistic formulation improves the accuracy of modelling the interaction between the user and the wheelchair by taking uncertainty in the interaction into account. We demonstrate the practical success of PSC over LB in terms of safety, particularly for novice users.
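
    The contrast between the two arbitration strategies can be caricatured in one dimension by treating the user and autonomy commands as Gaussians; this is a conceptual sketch, not the paper's full PSC formulation.

        # Linear blending with a fixed weight vs. precision-weighted fusion,
        # where whichever source is currently more certain dominates.
        u_user, var_user = 0.8, 0.4     # noisy, uncertain user input
        u_auto, var_auto = 0.2, 0.05    # confident autonomous planner

        alpha = 0.5                     # fixed arbitration weight (LB)
        u_lb = alpha * u_user + (1 - alpha) * u_auto

        # product of the two Gaussians: weights proportional to precision
        w_user = (1 / var_user) / (1 / var_user + 1 / var_auto)
        u_psc = w_user * u_user + (1 - w_user) * u_auto

        print(f"linear blend: {u_lb:.3f}, probabilistic fusion: {u_psc:.3f}")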

  5. Nonlinear aeroservoelastic analysis of a controlled multiple-actuated-wing model with free-play

    NASA Astrophysics Data System (ADS)

    Huang, Rui; Hu, Haiyan; Zhao, Yonghui

    2013-10-01

    In this paper, the effects of structural nonlinearity due to free-play in both the leading-edge and trailing-edge outboard control surfaces on the linear flutter control system are analyzed for an aeroelastic model of a three-dimensional multiple-actuated wing. The free-play nonlinearities in the control surfaces are modeled theoretically using the fictitious mass approach. The nonlinear aeroelastic equations of the presented model can be divided into nine sub-linear modal-based aeroelastic equations according to the different combinations of deflections of the leading-edge and trailing-edge outboard control surfaces, and the nonlinear aeroelastic responses can be computed from these sub-linear aeroelastic systems. To demonstrate the effects of nonlinearity on the linear flutter control system, a single-input/single-output controller and a multi-input/multi-output controller are designed based on unconstrained optimization techniques. The numerical results indicate that the free-play nonlinearity can lead to either limit cycle oscillations or divergent motions when the linear control system is implemented.

  6. Physical lumping methods for developing linear reduced models for high speed propulsion systems

    NASA Technical Reports Server (NTRS)

    Immel, S. M.; Hartley, Tom T.; Deabreu-Garcia, J. Alex

    1991-01-01

    In gasdynamic systems, information travels in one direction for supersonic flow and in both directions for subsonic flow. A shock occurs at the transition from supersonic to subsonic flow. Thus, to simulate these systems, any simulation method implemented for the quasi-one-dimensional Euler equations must have the ability to capture the shock. In this paper, a technique combining both backward and central differencing is presented. The equations are subsequently linearized about an operating point and formulated into a linear state space model. After proper implementation of the boundary conditions, the model order is reduced from 123 to less than 10 using the Schur method of balancing. Simulations comparing frequency and step response of the reduced order model and the original system models are presented.

  7. Development and Validation of Chemometric Spectrophotometric Methods for Simultaneous Determination of Simvastatin and Nicotinic Acid in Binary Combinations.

    PubMed

    Alahmad, Shoeb; Elfatatry, Hamed M; Mabrouk, Mokhtar M; Hammad, Sherin F; Mansour, Fotouh R

    2018-01-01

    The development and introduction of combined therapies represent a challenge for analysis, owing to severe overlapping of the components' UV spectra in the case of spectroscopy, and to the requirement of long, tedious and costly separation techniques in the case of chromatography. Quality control laboratories have to develop and validate suitable analytical procedures in order to assay such multicomponent preparations. New spectrophotometric methods for the simultaneous determination of simvastatin (SIM) and nicotinic acid (NIA) in binary combinations were developed. These methods are based on chemometric treatment of the data; the applied techniques are multivariate methods including classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS). In these techniques, the concentration data matrix was prepared using synthetic mixtures containing SIM and NIA dissolved in ethanol. The corresponding absorbance data matrix was obtained by measuring the absorbance at 12 wavelengths in the range 216-240 nm at 2 nm intervals in the zero-order spectra. The spectrophotometric procedures do not require any separation step. The accuracy, precision and linearity ranges of the methods were determined and validated by analyzing synthetic mixtures containing the studied drugs. The developed chemometric spectrophotometric methods were applied to the simultaneous determination of simvastatin and nicotinic acid in their synthetic binary mixtures and in their mixtures with possible excipients present in the tablet dosage form. The validation was performed successfully; the methods were shown to be accurate, linear, precise and simple, and can be used routinely for the determination of these drugs in dosage forms. Copyright © Bentham Science Publishers.
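
    The calibration idea can be sketched with scikit-learn's PLS regression on synthetic two-component mixture spectra; the bands and concentrations below are invented stand-ins, chosen to overlap severely as in the real SIM/NIA spectra.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        wl = np.arange(216, 241, 2.0)                    # 216-240 nm, 2 nm steps
        peak = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)
        eps = np.column_stack([peak(230, 8),             # overlapping "spectra"
                               peak(226, 9)])

        rng = np.random.default_rng(0)
        C = rng.uniform(0.1, 1.0, (40, 2))               # training concentrations
        A = C @ eps.T + 0.002 * rng.normal(size=(40, wl.size))

        pls = PLSRegression(n_components=2).fit(A, C)
        mix = np.array([0.6, 0.3]) @ eps.T               # an "unknown" mixture
        print("predicted concentrations:",
              pls.predict(mix[None, :])[0].round(3))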

  8. Investigation of Periodic Nuclear Decay Data with Spectral Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Javorsek, D.; Sturrock, P.; Buncher, J.; Fischbach, E.; Gruenwald, T.; Hoft, A.; Horan, T.; Jenkins, J.; Kerford, J.; Lee, R.; Mattes, J.; Morris, D.; Mudry, R.; Newport, J.; Petrelli, M.; Silver, M.; Stewart, C.; Terry, B.; Willenberg, H.

    2009-12-01

    We provide the results from a spectral analysis of nuclear decay experiments displaying unexplained periodic fluctuations. The analyzed data was from 56Mn decay reported by the Children's Nutrition Research Center in Houston, 32Si decay reported by an experiment performed at the Brookhaven National Laboratory, and 226Ra decay reported by an experiment performed at the Physikalisch-Technische-Bundesanstalt in Germany. All three data sets possess the same primary frequency mode consisting of an annual period. Additionally a spectral comparison of the local ambient temperature, atmospheric pressure, relative humidity, Earth-Sun distance, and the plasma speed and latitude of the heliospheric current sheet (HCS) was performed. Following analysis of these six possible causal factors, their reciprocals, and their linear combinations, a possible link between nuclear decay rate fluctuations and the linear combination of the HCS latitude and 1/R motivates searching for a possible mechanism with such properties.

  9. Estimation of annual energy production using dynamic wake meandering in combination with ambient CFD solutions

    NASA Astrophysics Data System (ADS)

    Hahn, S.; Machefaux, E.; Hristov, Y. V.; Albano, M.; Threadgill, R.

    2016-09-01

    In the present study, a combination of the standalone dynamic wake meandering (DWM) model with Reynolds-averaged Navier-Stokes (RANS) CFD solutions for ambient ABL flows is introduced, and its predictive performance for annual energy production (AEP) is evaluated against Vestas’ SCADA data for six operating wind farms over semi-complex terrain under neutral conditions. The performances of conventional linear and quadratic wake superposition techniques are also compared, together with an in-house implementation of successive hierarchical merging approaches. As compared to our standard procedure based on the Jensen model in WindPRO, the overall results are promising, leading to a significant improvement in AEP accuracy for four of the six sites. While the conventional linear superposition shows the best performance for the improved four sites, the hierarchical square superposition shows the least deteriorated result for the other two sites.
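
    The two classical superposition rules being compared combine the fractional velocity deficits of overlapping wakes either linearly or in root-sum-square fashion, e.g.:

        import numpy as np

        U_inf = 10.0                       # ambient wind speed, m/s (assumed)
        deficits = np.array([0.25, 0.15])  # fractional deficits from two wakes

        lin = U_inf * (1 - deficits.sum())                   # linear rule
        quad = U_inf * (1 - np.sqrt((deficits ** 2).sum()))  # quadratic rule
        print(f"linear: {lin:.2f} m/s, quadratic: {quad:.2f} m/s")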

  10. Predictor-based control for an inverted pendulum subject to networked time delay.

    PubMed

    Ghommam, J; Mnif, F

    2017-03-01

    The inverted pendulum is considered a special class of underactuated mechanical system with two degrees of freedom and a single control input. This mechanical configuration makes it possible to transform the underactuated system into a nonlinear system referred to as the normal form, for which control design techniques for stabilization are well known. In the presence of time delays, these control techniques may result in inadequate behavior and may even cause finite escape time in the controlled system. In this paper, a constructive method is presented to design a controller for an inverted pendulum characterized by a time-delayed balance control. First, the partial feedback linearization control for the inverted pendulum is modified and coupled with a state predictor to compensate for the delay. Several coordinate transformations are applied to transform the estimated partially linearized system into an upper-triangular form. Second, nested saturation and backstepping techniques are combined to derive the control law of the transformed system, completing the design of the whole control input. The effectiveness of the proposed technique is illustrated by numerical simulations. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  11. Shape component analysis: structure-preserving dimension reduction on biological shape spaces.

    PubMed

    Lee, Hao-Chih; Liao, Tao; Zhang, Yongjie Jessica; Yang, Ge

    2016-03-01

    Quantitative shape analysis is required by a wide range of biological studies across diverse scales, ranging from molecules to cells and organisms. In particular, high-throughput and systems-level studies of biological structures and functions have started to produce large volumes of complex high-dimensional shape data, whose analysis and understanding require dimension-reduction techniques. We have developed a technique for non-linear dimension reduction of 2D and 3D biological shape representations on their Riemannian spaces. A key feature of this technique is that it preserves distances between different shapes in an embedded low-dimensional shape space. We demonstrate an application of this technique by combining it with non-linear mean-shift clustering on the Riemannian spaces for unsupervised clustering of shapes of cellular organelles and proteins. Source code and data for reproducing the results of this article are freely available at https://github.com/ccdlcmu/shape_component_analysis_Matlab. The implementation was made in MATLAB and is supported on MS Windows, Linux and Mac OS. Contact: geyang@andrew.cmu.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  12. Aerofoil broadband and tonal noise modelling using stochastic sound sources and incorporated large scale fluctuations

    NASA Astrophysics Data System (ADS)

    Proskurov, S.; Darbyshire, O. R.; Karabasov, S. A.

    2017-12-01

    The present work discusses modifications to the stochastic Fast Random Particle Mesh (FRPM) method featuring both tonal and broadband noise sources. The technique relies on the combination of incorporated vortex-shedding resolved flow available from Unsteady Reynolds-Averaged Navier-Stokes (URANS) simulation with the fine-scale turbulence FRPM solution generated via the stochastic velocity fluctuations in the context of vortex sound theory. In contrast to the existing literature, our method encompasses a unified treatment of broadband and tonal acoustic noise sources at the source level, thus accounting for linear source interference as well as possible non-linear source interaction effects. Once the sound sources are determined, the Acoustic Perturbation Equations (APE-4) are solved in the time domain for the sound propagation. Results of the method's application to two aerofoil benchmark cases, with both sharp and blunt trailing edges, are presented. In each case, the importance of individual linear and non-linear noise sources was investigated. Several new key features related to the unsteady implementation of the method were tested and incorporated. Encouraging results have been obtained for benchmark test cases using the new technique, which is believed to be potentially applicable to other airframe noise problems where both tonal and broadband parts are important.

  13. Statistics based sampling for controller and estimator design

    NASA Astrophysics Data System (ADS)

    Tenne, Dirk

    The purpose of this research is the development of statistical design tools for robust feed-forward/feedback controllers and nonlinear estimators. This dissertation is threefold and addresses three topics: nonlinear estimation, target tracking and robust control. To develop statistically robust controllers and nonlinear estimation algorithms, research has been performed to extend existing techniques, which propagate the statistics of the state, to achieve higher order accuracy. The so-called unscented transformation has been extended to capture higher order moments. Furthermore, higher order moment update algorithms based on a truncated power series have been developed. The proposed techniques are tested on various benchmark examples. In addition, the unscented transformation has been utilized to develop a three-dimensional geometrically constrained target tracker. The proposed planar circular prediction algorithm has been developed in a local coordinate framework, which is amenable to extension of the tracking algorithm to three-dimensional space. This tracker combines the predictions of a circular prediction algorithm and a constant velocity filter by utilizing Covariance Intersection. This combined prediction can be updated with the subsequent measurement using a linear estimator. The proposed technique is illustrated on a 3D benchmark trajectory, which includes coordinated turns and straight line maneuvers. The third part of this dissertation addresses the design of controllers that incorporate knowledge of parametric uncertainties and their distributions. The parameter distributions are approximated by a finite set of points which are calculated by the unscented transformation. This set of points is used to design robust controllers which minimize a statistical performance measure of the plant over the domain of uncertainty, consisting of a combination of the mean and variance. The proposed technique is illustrated on three benchmark problems. The first relates to the design of prefilters for a linear and nonlinear spring-mass-dashpot system and the second applies a feedback controller to a hovering helicopter. Lastly, the statistical robust controller design is devoted to a concurrent feed-forward/feedback controller structure for a high-speed, low-tension tape drive.
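
    For context, a compact sketch of the baseline unscented transformation that the dissertation extends to higher-order moments; the nonlinear map and the input mean and covariance below are illustrative assumptions.

      import numpy as np

      # Standard unscented transformation: propagate mean and covariance of x
      # through a nonlinear map f using 2n+1 deterministic sigma points.
      def unscented_transform(mean, cov, f, kappa=1.0):
          n = mean.size
          S = np.linalg.cholesky((n + kappa) * cov)           # matrix square root
          sigma = np.vstack([mean, mean + S.T, mean - S.T])   # 2n+1 sigma points
          w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
          w[0] = kappa / (n + kappa)
          y = np.array([f(p) for p in sigma])
          y_mean = w @ y
          y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
          return y_mean, y_cov

      m, P = np.array([1.0, 0.5]), np.diag([0.1, 0.2])        # assumed statistics
      ym, yP = unscented_transform(m, P, lambda x: np.array([x[0] * x[1], x[0] ** 2]))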

  14. Glucose concentration measured by the hybrid coherent anti-Stokes Raman-scattering technique

    NASA Astrophysics Data System (ADS)

    Wang, Xi; Zhang, Aihua; Zhi, Miaochan; Sokolov, Alexei V.; Welch, George R.

    2010-01-01

    We investigate the possibility of using a hybrid coherent anti-Stokes Raman scattering technique for noninvasive monitoring of blood glucose levels. Our technique combines instantaneous coherent excitation of several characteristic molecular vibrations with subsequent probing of these vibrations by an optimally shaped, time-delayed, narrowband laser pulse. This pulse configuration mitigates the nonresonant four-wave mixing background while maximizing the Raman-resonant signal and allows rapid and highly specific detection even in the presence of multiple scattering. Under certain conditions we find that the measured signal is linearly proportional to the glucose concentration due to optical interference with the residual background light, which allows reliable detection of spectral signatures down to medically relevant glucose levels.

  15. An Optimized Integrator Windup Protection Technique Applied to a Turbofan Engine Control

    NASA Technical Reports Server (NTRS)

    Watts, Stephen R.; Garg, Sanjay

    1995-01-01

    This paper introduces a new technique for providing memoryless integrator windup protection which utilizes readily available optimization software tools. This integrator windup protection synthesis provides a concise methodology for creating integrator windup protection for each actuation system loop independently while assuring both controller and closed loop system stability. The individual actuation system loops' integrator windup protection can then be combined to provide integrator windup protection for the entire system. This technique is applied to an H∞-based multivariable control designed for a linear model of an advanced afterburning turbofan engine. The resulting transient characteristics are examined for the integrated system while encountering single and multiple actuation limits.

  16. A FORTRAN program for the analysis of linear continuous and sample-data systems

    NASA Technical Reports Server (NTRS)

    Edwards, J. W.

    1976-01-01

    A FORTRAN digital computer program which performs the general analysis of linearized control systems is described. State variable techniques are used to analyze continuous, discrete, and sampled data systems. Analysis options include the calculation of system eigenvalues, transfer functions, root loci, root contours, frequency responses, power spectra, and transient responses for open- and closed-loop systems. A flexible data input format allows the user to define systems in a variety of representations. Data may be entered by inputting explicit data matrices or matrices constructed in user-written subroutines, by specifying transfer function block diagrams, or by using a combination of these methods.
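
    A brief sketch of the same kind of state-variable analyses, in Python rather than the program's FORTRAN; the plant matrices are an assumed second-order example and SciPy's signal module stands in for the program's routines.

      import numpy as np
      from scipy import signal

      # Assumed example system in state-space form (A, B, C, D).
      A = np.array([[0.0, 1.0], [-4.0, -0.4]])
      B = np.array([[0.0], [1.0]])
      C = np.array([[1.0, 0.0]])
      D = np.array([[0.0]])

      eigvals = np.linalg.eigvals(A)           # system eigenvalues (poles)
      sys = signal.StateSpace(A, B, C, D)
      num, den = signal.ss2tf(A, B, C, D)      # transfer function coefficients
      w, mag, phase = signal.bode(sys)         # frequency response
      t, y = signal.step(sys)                  # transient (step) response
      print(eigvals)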

  17. Simultaneous structural and control optimization via linear quadratic regulator eigenstructure assignment

    NASA Technical Reports Server (NTRS)

    Becus, G. A.; Lui, C. Y.; Venkayya, V. B.; Tischler, V. A.

    1987-01-01

    A method for simultaneous structural and control design of large flexible space structures (LFSS) to reduce vibration generated by disturbances is presented. Desired natural frequencies and damping ratios for the closed loop system are achieved by using a combination of linear quadratic regulator (LQR) synthesis and numerical optimization techniques. The state and control weighting matrices (Q and R) are expressed in terms of structural parameters such as mass and stiffness. The design parameters are selected by numerical optimization so as to minimize the weight of the structure and to achieve the desired closed-loop eigenvalues. An illustrative example of the design of a two-bar truss is presented.
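
    A minimal LQR sketch under assumed plant matrices (not the truss model); in the paper's approach, Q and R would be parameterized by the structural mass and stiffness terms rather than fixed as below.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      def lqr(A, B, Q, R):
          P = solve_continuous_are(A, B, Q, R)    # solve the Riccati equation
          K = np.linalg.solve(R, B.T @ P)         # optimal state-feedback gain
          return K, np.linalg.eigvals(A - B @ K)  # closed-loop eigenvalues

      # Assumed mass-spring example plant.
      A = np.array([[0.0, 1.0], [-2.0, -0.1]])
      B = np.array([[0.0], [1.0]])
      K, closed_loop_eigs = lqr(A, B, Q=np.diag([10.0, 1.0]), R=np.array([[1.0]]))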

  18. Blue fingerprint in spectrum of cancer change of biotissues

    NASA Astrophysics Data System (ADS)

    Yermolenko, Sergey B.

    2010-11-01

    This paper combines optical and biochemical techniques to identify cell membrane transformation during the growth and development of an experimental solid tumour. In all cases studied, linear dichroism appears in cancerous biotissues (the human esophagus, the muscle tissue of rats, prostate tissue), and its magnitude depends on the type of tissue and on the duration of the cancer process. As linear dichroism is absent in healthy tissues, the results may have diagnostic value for detecting and estimating the stage of cancer development.

  19. Spectropolarimetry features of biotissue's malignant changes

    NASA Astrophysics Data System (ADS)

    Gruia, I.; Yermolenko, S. B.; Gavrila, C.; Ivashko, P. V.; Gruia, M. I.

    2010-11-01

    This paper combines optical and biochemical techniques to identify cell membrane transformation during the growth and development of an experimental solid tumour. In all cases studied, linear dichroism appears in cancerous biotissues (the human esophagus, the muscle tissue of rats, prostate tissue), and its magnitude depends on the type of tissue and on the duration of the cancer process. As linear dichroism is absent in healthy tissues, the results may have diagnostic value for detecting and estimating the stage of cancer development.

  20. Travel Demand Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Southworth, Frank; Garrow, Dr. Laurie

    This chapter describes the principal types of both passenger and freight demand models in use today, providing a brief history of model development supported by references to a number of popular texts on the subject, and directing the reader to papers covering some of the more recent technical developments in the area. Over the past half century a variety of methods have been used to estimate and forecast travel demands, drawing concepts from economic/utility maximization theory, transportation system optimization and spatial interaction theory, using and often combining solution techniques as varied as Box-Jenkins methods, non-linear multivariate regression, non-linear mathematical programming, and agent-based microsimulation.

  1. Nonlinear aeroacoustic characterization of Helmholtz resonators with a local-linear neuro-fuzzy network model

    NASA Astrophysics Data System (ADS)

    Förner, K.; Polifke, W.

    2017-10-01

    The nonlinear acoustic behavior of Helmholtz resonators is characterized by a data-based reduced-order model, which is obtained by a combination of high-resolution CFD simulation and system identification. It is shown that even in the nonlinear regime, a linear model is capable of describing the reflection behavior at a particular amplitude with quantitative accuracy. This observation motivates the choice of a local-linear model structure for this study, which consists of a network of parallel linear submodels. A so-called fuzzy-neuron layer distributes the input signal over the linear submodels, depending on the root mean square of the particle velocity at the resonator surface. The resulting model structure is referred to as a local-linear neuro-fuzzy network. System identification techniques are used to estimate the free parameters of this model from training data. The training data are generated by CFD simulations of the resonator, with persistent acoustic excitation over a wide range of frequencies and sound pressure levels. The estimated nonlinear, reduced-order models show good agreement with CFD and experimental data over a wide range of amplitudes for several test cases.

  2. Nonlinear secret image sharing scheme.

    PubMed

    Shin, Sang-Ho; Lee, Gil-Je; Yoo, Kee-Young

    2014-01-01

    Over the past decade, most secret image sharing schemes have been proposed using Shamir's technique, which is based on linear combination polynomial arithmetic. Although Shamir-based secret image sharing schemes are efficient and scalable for various environments, they are exposed to security threats such as the Tompa-Woll attack. Renvall and Ding proposed a secret sharing technique based on nonlinear combination polynomial arithmetic to address this threat, but it is hard to apply to secret image sharing. In this paper, we propose a (t, n)-threshold nonlinear secret image sharing scheme with a steganography concept. In order to achieve a suitable and secure secret image sharing scheme, we adapt a modified LSB embedding technique with the XOR Boolean algebra operation, define a new variable m, and change the range of the prime p in the sharing procedure. Efficiency and security of the proposed scheme are evaluated using the embedding capacity and PSNR: the average PSNR and embedding capacity are 44.78 dB and 1.74t⌈log2 m⌉ bits per pixel (bpp), respectively.
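
    For context, a minimal sketch of the classical Shamir (t, n) scheme that underlies the linear approach discussed above; the toy prime field is an assumption for illustration, whereas a real scheme would use a large prime.

      import random

      P = 251  # toy prime field; real schemes use a much larger prime

      def make_shares(secret, t, n):
          # The secret is the constant term of a random degree t-1 polynomial.
          coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
          return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
                  for x in range(1, n + 1)]

      def recover(shares):
          # Lagrange interpolation at x = 0 over the prime field.
          secret = 0
          for i, (xi, yi) in enumerate(shares):
              num = den = 1
              for j, (xj, _) in enumerate(shares):
                  if i != j:
                      num = num * (-xj) % P
                      den = den * (xi - xj) % P
              secret = (secret + yi * num * pow(den, P - 2, P)) % P
          return secret

      shares = make_shares(secret=123, t=3, n=5)
      assert recover(shares[:3]) == 123   # any 3 of the 5 shares suffice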

  3. Nonlinear Secret Image Sharing Scheme

    PubMed Central

    Shin, Sang-Ho; Yoo, Kee-Young

    2014-01-01

    Over the past decade, most secret image sharing schemes have been proposed using Shamir's technique, which is based on linear combination polynomial arithmetic. Although Shamir-based secret image sharing schemes are efficient and scalable for various environments, they are exposed to security threats such as the Tompa-Woll attack. Renvall and Ding proposed a secret sharing technique based on nonlinear combination polynomial arithmetic to address this threat, but it is hard to apply to secret image sharing. In this paper, we propose a (t, n)-threshold nonlinear secret image sharing scheme with a steganography concept. In order to achieve a suitable and secure secret image sharing scheme, we adapt a modified LSB embedding technique with the XOR Boolean algebra operation, define a new variable m, and change the range of the prime p in the sharing procedure. Efficiency and security of the proposed scheme are evaluated using the embedding capacity and PSNR: the average PSNR and embedding capacity are 44.78 dB and 1.74t⌈log2⁡m⌉ bits per pixel (bpp), respectively. PMID:25140334

  4. Parallel SOR methods with a parabolic-diffusion acceleration technique for solving an unstructured-grid Poisson equation on 3D arbitrary geometries

    NASA Astrophysics Data System (ADS)

    Zapata, M. A. Uh; Van Bang, D. Pham; Nguyen, K. D.

    2016-05-01

    This paper presents a parallel algorithm for the finite-volume discretisation of the Poisson equation on three-dimensional arbitrary geometries. The proposed method is formulated by using a 2D horizontal block domain decomposition and interprocessor data communication techniques with the message passing interface. The horizontal unstructured-grid cells are reordered according to the neighbouring relations and decomposed into blocks using a load-balanced distribution to give all processors an equal number of elements. In this algorithm, two parallel successive over-relaxation methods are presented: a multi-colour ordering technique for unstructured grids based on distributed memory, and a block method using a reordering index following ideas similar to the partitioning of structured grids. In all cases, the parallel algorithms are combined with an iterative acceleration solver. This solver is based on a parabolic-diffusion equation introduced to obtain faster solutions of the linear systems arising from the discretisation. Numerical results are given to evaluate the performance of the methods, showing speedups better than linear.
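
    A serial sketch of the multi-colour idea on a structured grid (red-black ordering, assumed unit-square Poisson problem with zero Dirichlet boundaries): points of one colour depend only on points of the other colour, so each half-sweep could be updated in parallel.

      import numpy as np

      # Red-black SOR for -laplace(u) = f on an (n x n) interior grid.
      n, h, omega = 64, 1.0 / 65, 1.5
      f = np.ones((n + 2, n + 2))
      u = np.zeros((n + 2, n + 2))      # boundary rows/columns stay zero

      for _ in range(500):
          for colour in (0, 1):         # red points, then black points
              for i in range(1, n + 1):
                  j0 = 1 + (i + colour) % 2
                  u[i, j0:n + 1:2] = (1 - omega) * u[i, j0:n + 1:2] + omega * 0.25 * (
                      u[i - 1, j0:n + 1:2] + u[i + 1, j0:n + 1:2]
                      + u[i, j0 - 1:n:2] + u[i, j0 + 1:n + 2:2]
                      + h * h * f[i, j0:n + 1:2])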

  5. A spectral reflectance estimation technique using multispectral data from the Viking lander camera

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Huck, F. O.

    1976-01-01

    A technique is formulated for constructing spectral reflectance curve estimates from multispectral data obtained with the Viking lander camera. The multispectral data are limited to six spectral channels in the wavelength range from 0.4 to 1.1 micrometers, and most of these channels exhibit appreciable out-of-band response. The output of each channel is expressed as a linear (integral) function of the (known) solar irradiance, atmospheric transmittance, and camera spectral responsivity, and of the (unknown) spectral reflectance. This produces six equations which are used to determine the coefficients in a representation of the spectral reflectance as a linear combination of known basis functions. Natural cubic spline reflectance estimates are produced for a variety of materials that can be reasonably expected to occur on Mars. In each case the dominant reflectance features are accurately reproduced, but small-period features are lost due to the limited number of channels. This technique may be a valuable aid in selecting the number of spectral channels and their responsivity shapes when designing a multispectral imaging system.
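
    A schematic version of the estimation step, with made-up Gaussian channel responsivities and a sine basis standing in for the real irradiance/transmittance/responsivity products and the natural cubic splines: six channel readings become a small linear system for the basis coefficients.

      import numpy as np

      wl = np.linspace(0.4, 1.1, 200)       # wavelength grid, micrometers
      basis = np.array([np.sin((k + 1) * np.pi * (wl - 0.4) / 0.7) for k in range(6)])
      centers = np.linspace(0.5, 1.0, 6)
      resp = np.exp(-((wl - centers[:, None]) / 0.05) ** 2)  # six channel bands

      # Each channel reading is a linear functional of the reflectance, so
      # the basis expansion turns six readings into a 6 x 6 linear system.
      A = resp @ basis.T * (wl[1] - wl[0])
      true_coeffs = np.array([0.5, 0.2, 0.1, 0.0, 0.05, 0.0])
      readings = A @ true_coeffs            # simulated channel outputs
      coeffs, *_ = np.linalg.lstsq(A, readings, rcond=None)
      reflectance_estimate = coeffs @ basis # reconstructed spectral curve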

  6. Globally convergent techniques in nonlinear Newton-Krylov

    NASA Technical Reports Server (NTRS)

    Brown, Peter N.; Saad, Youcef

    1989-01-01

    Some convergence theory is presented for nonlinear Krylov subspace methods. The basic idea of these methods is to use variants of Newton's iteration in conjunction with a Krylov subspace method for solving the Jacobian linear systems. These methods are variants of inexact Newton methods where the approximate Newton direction is taken from a subspace of small dimensions. The main focus is to analyze these methods when they are combined with global strategies such as linesearch techniques and model trust region algorithms. Most of the convergence results are formulated for projection onto general subspaces rather than just Krylov subspaces.
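
    As an illustration of this class of methods, the sketch below uses SciPy's newton_krylov, an inexact Newton iteration with a Krylov linear solver and Armijo linesearch globalization; the residual function is an assumed toy reaction-diffusion problem, not one from the paper.

      import numpy as np
      from scipy.optimize import newton_krylov

      def residual(u):
          # Assumed 1D nonlinear residual with Dirichlet boundary conditions.
          r = np.empty_like(u)
          r[1:-1] = u[:-2] - 2 * u[1:-1] + u[2:] + 0.1 * np.exp(u[1:-1])
          r[0], r[-1] = u[0], u[-1]
          return r

      u0 = np.zeros(50)
      sol = newton_krylov(residual, u0, method='gmres', line_search='armijo')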

  7. Opto-electronic characterization of third-generation solar cells

    PubMed Central

    Jenatsch, Sandra

    2018-01-01

    We present an overview of opto-electronic characterization techniques for solar cells, including light-induced charge extraction by linearly increasing voltage, impedance spectroscopy, transient photovoltage, charge extraction and more. Guidelines for the interpretation of experimental results are derived based on charge drift-diffusion simulations of solar cells with common performance limitations. We investigate how nonidealities such as charge injection barriers, traps and low mobilities manifest themselves in each of the studied characterization techniques. Moreover, comprehensive parameter extraction for an organic bulk-heterojunction solar cell comprising PCDTBT:PC70BM is demonstrated. The simulations reproduce measured results of 9 different experimental techniques. Parameter correlation is minimized due to the combination of various techniques. Thereby a route to comprehensive and accurate parameter extraction is identified. PMID:29707069

  8. Analysis of lithology: Vegetation mixes in multispectral images

    NASA Technical Reports Server (NTRS)

    Adams, J. B.; Smith, M.; Adams, J. D.

    1982-01-01

    Discrimination and identification of lithologies from multispectral images is discussed. Rock/soil identification can be facilitated by removing the component of the signal in the images that is contributed by the vegetation. Mixing models were developed to predict the spectra of combinations of pure end members, and those models were refined using laboratory measurements of real mixtures. Models in use include a simple linear (checkerboard) mix, granular mixing, semi-transparent coatings, and combinations of the above. The use of interactive computer techniques that allow quick comparison of the spectrum of a pixel stack (in a multiband set) with laboratory spectra is discussed.

  9. LPV Controller Interpolation for Improved Gain-Scheduling Control Performance

    NASA Technical Reports Server (NTRS)

    Wu, Fen; Kim, SungWan

    2002-01-01

    In this paper, a new gain-scheduling control design approach is proposed by combining LPV (linear parameter-varying) control theory with interpolation techniques. Improvement of gain-scheduled controllers is achieved through local synthesis of Lyapunov functions and continuous construction of a global Lyapunov function by interpolation. It is shown that this combined LPV control design scheme is capable of improving closed-loop performance by building on local performance improvements. The gain of the LPV controller also changes continuously across the parameter space. The advantages of the newly proposed LPV control are demonstrated through a detailed AMB controller design example.

  10. Serenity: A subsystem quantum chemistry program.

    PubMed

    Unsleber, Jan P; Dresselhaus, Thomas; Klahr, Kevin; Schnieders, David; Böckers, Michael; Barton, Dennis; Neugebauer, Johannes

    2018-05-15

    We present the new quantum chemistry program Serenity. It implements a wide variety of functionalities with a focus on subsystem methodology. The modular code structure in combination with publicly available external tools and particular design concepts ensures extensibility and robustness with a focus on the needs of a subsystem program. Several important features of the program are exemplified with sample calculations with subsystem density-functional theory, potential reconstruction techniques, a projection-based embedding approach and combinations thereof with geometry optimization, semi-numerical frequency calculations and linear-response time-dependent density-functional theory.

  11. Sufficient Dimension Reduction for Longitudinally Measured Predictors

    PubMed Central

    Pfeiffer, Ruth M.; Forzani, Liliana; Bura, Efstathia

    2013-01-01

    We propose a method to combine several predictors (markers) that are measured repeatedly over time into a composite marker score without assuming a model and only requiring a mild condition on the predictor distribution. Assuming that the first and second moments of the predictors can be decomposed into a time and a marker component via a Kronecker product structure, which accommodates the longitudinal nature of the predictors, we develop first moment sufficient dimension reduction techniques to replace the original markers with linear transformations that contain sufficient information for the regression of the predictors on the outcome. These linear combinations can then be combined into a score that has better predictive performance than a score built under a general model that ignores the longitudinal structure of the data. Our methods can be applied to either continuous or categorical outcome measures. In simulations we focus on binary outcomes and show that our method outperforms existing alternatives using the AUC, the area under the receiver operating characteristic (ROC) curve, as a summary measure of the discriminatory ability of a single continuous diagnostic marker for binary disease outcomes. PMID:22161635

  12. Unequal-Arm Interferometry and Ranging in Space

    NASA Technical Reports Server (NTRS)

    Tinto, Massimo

    2005-01-01

    Space-borne interferometric gravitational wave detectors, sensitive in the low-frequency (millihertz) band, will fly in the next decade. In these detectors the spacecraft-to-spacecraft light-travel-times will necessarily be unequal, time-varying, and (due to aberration) have different time delays on up- and down-links. By using knowledge of the inter-spacecraft light-travel-times and their time evolution it is possible to cancel in post-processing the otherwise dominant laser phase noise and obtain a variety of interferometric data combinations sensitive to gravitational radiation. This technique, which has been named Time-Delay Interferometry (TDI), can be implemented with constellations of three or more formation-flying spacecraft that coherently track each other. As an example application we consider the Laser Interferometer Space Antenna (LISA) mission and show that TDI combinations can be synthesized by properly time-shifting and linearly combining the phase measurements performed on board the three spacecraft. Since TDI exactly suppresses the laser noises when the delays coincide with the light-travel-times, we then show that TDI can also be used for estimating the time-delays needed for its implementation. This is done by performing a post-processing non-linear minimization procedure, which provides an effective, powerful, and simple way of measuring the inter-spacecraft light-travel-times. This processing technique, named Time-Delay Interferometric Ranging (TDIR), is highly accurate in estimating the time-delays and allows TDI to be successfully implemented without the need of a dedicated ranging subsystem.

  13. SC-GRAPPA: Self-constraint noniterative GRAPPA reconstruction with closed-form solution.

    PubMed

    Ding, Yu; Xue, Hui; Ahmad, Rizwan; Ting, Samuel T; Simonetti, Orlando P

    2012-12-01

    Parallel MRI (pMRI) reconstruction techniques are commonly used to reduce scan time by undersampling the k-space data. GRAPPA, a k-space based pMRI technique, is widely used clinically because of its robustness. In GRAPPA, the missing k-space data are estimated by solving a set of linear equations; however, this set of equations does not take advantage of the correlations within the missing k-space data. All k-space data in a neighborhood acquired from a phased-array coil are correlated. The correlation can be estimated easily as a self-constraint condition, and formulated as an extra set of linear equations to improve the performance of GRAPPA. The authors propose a modified k-space based pMRI technique called self-constraint GRAPPA (SC-GRAPPA) which combines the linear equations of GRAPPA with these extra equations to solve for the missing k-space data. Since SC-GRAPPA utilizes a least-squares solution of the linear equations, it has a closed-form solution that does not require an iterative solver. The SC-GRAPPA equation was derived by incorporating GRAPPA as a prior estimate. SC-GRAPPA was tested in a uniform phantom and two normal volunteers. MR real-time cardiac cine images with acceleration rate 5 and 6 were reconstructed using GRAPPA and SC-GRAPPA. SC-GRAPPA showed a significantly lower artifact level, and a greater than 10% overall signal-to-noise ratio (SNR) gain over GRAPPA, with more significant SNR gain observed in low-SNR regions of the images. SC-GRAPPA offers improved pMRI reconstruction, and is expected to benefit clinical imaging applications in the future.
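
    A toy sketch of the closed-form solve described above: the GRAPPA fitting equations and the extra self-constraint equations are stacked into one least-squares system, so no iterative solver is needed. Random matrices stand in for the k-space data, and the relative weight is an assumed value.

      import numpy as np

      rng = np.random.default_rng(0)
      A1, b1 = rng.standard_normal((200, 30)), rng.standard_normal(200)  # GRAPPA fit
      A2, b2 = rng.standard_normal((80, 30)), rng.standard_normal(80)    # constraints
      lam = 0.5  # relative weight of the self-constraint equations (assumed)

      # Stack both sets of linear equations and solve in closed form.
      A = np.vstack([A1, lam * A2])
      b = np.concatenate([b1, lam * b2])
      x, *_ = np.linalg.lstsq(A, b, rcond=None)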

  14. Expanding the occupational health methodology: A concatenated artificial neural network approach to model the burnout process in Chinese nurses.

    PubMed

    Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming

    2016-01-01

    Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is a feasible way to model the burnout process, (2) sensitivity analysis is a prolific method to study the relative importance of predictor variables and (3) the relationships among variables involved in the development of burnout and its consequences are to different degrees non-linear. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method to analyse non-linear relationships and, in combination with sensitivity analysis, is superior to linear methods.

  15. Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations

    NASA Astrophysics Data System (ADS)

    Mitry, Mina

    Often, computationally expensive engineering simulations can impede the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high-dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as to a model of a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
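
    A minimal sketch of the linear variant under synthetic data: PCA compresses the high-dimensional outputs and radial basis functions map design inputs to the reduced coordinates; the data shapes, component count and RBF settings are all assumptions.

      import numpy as np
      from sklearn.decomposition import PCA
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(1)
      X = rng.uniform(size=(40, 3))               # 40 designs, 3 parameters
      Y = np.sin(X @ rng.uniform(size=(3, 500)))  # 500-dimensional outputs

      pca = PCA(n_components=5).fit(Y)
      Z = pca.transform(Y)                        # reduced-order coordinates
      rbf = RBFInterpolator(X, Z)                 # inputs -> reduced coordinates

      x_new = np.array([[0.3, 0.6, 0.1]])
      y_pred = pca.inverse_transform(rbf(x_new))  # surrogate prediction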

  16. A Hybrid Method for Opinion Finding Task (KUNLP at TREC 2008 Blog Track)

    DTIC Science & Technology

    2008-11-01

    …retrieve relevant documents. For the Opinion Retrieval subtask, we propose a hybrid model of a lexicon-based approach and a machine learning approach for estimating and ranking opinionated documents. For the Polarized Opinion Retrieval subtask, we employ machine learning for predicting polarity and a linear combination technique for ranking polar documents. The hybrid model utilizes both the lexicon-based approach and the machine learning approach…

  17. Nulling Hall-Effect Current-Measuring Circuit

    NASA Technical Reports Server (NTRS)

    Sullender, Craig C.; Vazquez, Juan M.; Berru, Robert I.

    1993-01-01

    Circuit measures electrical current via combination of Hall-effect-sensing and magnetic-field-nulling techniques. Known current generated by feedback circuit adjusted until it causes cancellation or near cancellation of magnetic field produced in toroidal ferrite core by current measured. Remaining magnetic field measured by Hall-effect sensor. Circuit puts out analog signal and digital signal proportional to current measured. Accuracy of measurement does not depend on linearity of sensing components.

  18. Application and Miniaturization of Linear and Nonlinear Raman Microscopy for Biomedical Imaging

    NASA Astrophysics Data System (ADS)

    Mittal, Richa

    Current diagnostics for several disorders rely on surgical biopsy or evaluation of ex vivo bodily fluids, which have numerous drawbacks. We evaluated the potential for vibrational techniques (both linear and nonlinear Raman) as a reliable and noninvasive diagnostic tool. Raman spectroscopy is an optical technique for molecular analysis that has been used extensively in various biomedical applications. Based on demonstrated capabilities of Raman spectroscopy we evaluated the potential of the technique for providing a noninvasive diagnosis of mucopolysaccharidosis (MPS). These studies show that Raman spectroscopy can detect subtle changes in tissue biochemistry. In applications where sub-micrometer visualization of tissue compositional change is required, a transition from spectroscopy to high quality imaging is necessary. Nonlinear vibrational microscopy is sensitive to the same molecular vibrations as linear Raman, but features fast imaging capabilities. Coherent Raman scattering when combined with other nonlinear optical (NLO) techniques (like two-photon excited fluorescence and second harmonic generation) forms a collection of advanced optical techniques that provide noninvasive chemical contrast at submicron resolution. This capability to examine tissues without external molecular agents is driving the NLO approach towards clinical applications. However, the unique imaging capabilities of NLO microscopy are accompanied by complex instrument requirements. Clinical examination requires portable imaging systems for rapid inspection of tissues. Optical components utilized in NLO microscopy would then need substantial miniaturization and optimization to enable in vivo use. The challenges in designing compact microscope objective lenses and laser beam scanning mechanisms are discussed. The development of multimodal NLO probes for imaging oral cavity tissue is presented. Our prototype has been examined for ex vivo tissue imaging based on intrinsic fluorescence and SHG contrast. These studies show a potential for multiphoton compact probes to be used for real time imaging in the clinic.

  19. Load compensation in a lean burn natural gas vehicle

    NASA Astrophysics Data System (ADS)

    Gangopadhyay, Anupam

    A new multivariable PI tuning technique, intended primarily for regulation purposes, is developed in this research. Design guidelines are developed based on closed-loop stability. The new multivariable design is applied in a natural gas vehicle to combine idle and A/F ratio control loops. This results in better recovery during low idle operation of a vehicle under external step torques. A powertrain model of a natural gas engine is developed and validated for steady-state and transient operation. The nonlinear model has three states: engine speed, intake manifold pressure and fuel fraction in the intake manifold. The model includes the effect of fuel partial pressure in the intake manifold filling and emptying dynamics. Due to the inclusion of fuel fraction as a state, fuel flow rate into the cylinders is also accurately modeled. A linear system identification is performed on the nonlinear model. The linear model structure is predicted analytically from the nonlinear model, and the coefficients of the predicted transfer function are shown to be functions of key physical parameters in the plant. Simulations of linear system and model parameter identification are shown to converge to the predicted values of the model coefficients. The multivariable controller developed in this research can be designed in an algebraic fashion once the plant model is known. It is thus possible to implement the multivariable PI design in an adaptive fashion, combining the controller with the identified plant model on-line. This results in a self-tuning regulator (STR) type controller whose underlying design criterion is the multivariable tuning technique developed in this research.

  20. An SVM-based solution for fault detection in wind turbines.

    PubMed

    Santos, Pedro; Villa, Luisa F; Reñones, Aníbal; Bustillo, Andres; Maudes, Jesús

    2015-03-09

    Research into fault diagnosis in machines with a wide range of variable loads and speeds, such as wind turbines, is of great industrial interest. Analysis of the power signals emitted by wind turbines for the diagnosis of mechanical faults in their mechanical transmission chain is insufficient. A successful diagnosis requires the inclusion of accelerometers to evaluate vibrations. This work presents a multi-sensory system for fault diagnosis in wind turbines, combined with a data-mining solution for the classification of the operational state of the turbine. The selected sensors are accelerometers, in which vibration signals are processed using angular resampling techniques and electrical, torque and speed measurements. Support vector machines (SVMs) are selected for the classification task, including two traditional and two promising new kernels. This multi-sensory system has been validated on a test-bed that simulates the real conditions of wind turbines with two fault typologies: misalignment and imbalance. Comparison of SVM performance with the results of artificial neural networks (ANNs) shows that linear kernel SVM outperforms other kernels and ANNs in terms of accuracy, training and tuning times. The suitability and superior performance of linear SVM is also experimentally analyzed, to conclude that this data acquisition technique generates linearly separable datasets.
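
    A bare-bones sketch of the classification stage with a linear-kernel SVM; the synthetic features below are stand-ins for the processed vibration and electrical/torque/speed measurements, and the fault label is a toy construction.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(2)
      X = rng.standard_normal((300, 12))               # stand-in feature vectors
      y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # toy fault label

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      clf = SVC(kernel='linear').fit(X_tr, y_tr)       # linear kernel, as in the paper
      print(accuracy_score(y_te, clf.predict(X_te)))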

  1. Numerical techniques for solving nonlinear instability problems in smokeless tactical solid rocket motors. [finite difference technique

    NASA Technical Reports Server (NTRS)

    Baum, J. D.; Levine, J. N.

    1980-01-01

    The selection of a satisfactory numerical method for calculating the propagation of steep-fronted, shock-like waveforms in a solid rocket motor combustion chamber is discussed. A number of different numerical schemes were evaluated by comparing the results obtained for three problems: the shock tube problem, the linear wave equation, and nonlinear wave propagation in a closed tube. The most promising method, a combination of the Lax-Wendroff, hybrid and artificial compression techniques, was incorporated into an existing nonlinear instability program. The capability of the modified program to treat steep-fronted wave instabilities in low-smoke tactical motors was verified by solving a number of motor test cases with disturbance amplitudes as high as 80% of the mean pressure.

  2. Combined chamber-tower approach: Using eddy covariance measurements to cross-validate carbon fluxes modeled from manual chamber campaigns

    NASA Astrophysics Data System (ADS)

    Brümmer, C.; Moffat, A. M.; Huth, V.; Augustin, J.; Herbst, M.; Kutsch, W. L.

    2016-12-01

    Manual carbon dioxide flux measurements with closed chambers at scheduled campaigns are a versatile method to study management effects at small scales in multiple-plot experiments. The eddy covariance technique has the advantage of quasi-continuous measurements but requires large homogeneous areas of a few hectares. To evaluate the uncertainties associated with interpolating from individual campaigns to the whole vegetation period, we installed both techniques at an agricultural site in Northern Germany. The presented comparison covers two cropping seasons, winter oilseed rape in 2012/13 and winter wheat in 2013/14. Modeling half-hourly carbon fluxes from campaigns is commonly performed based on non-linear regressions for the light response and respiration. The daily averages of net CO2 modeled from chamber data deviated from eddy covariance measurements in the range of ±5 g C m⁻² day⁻¹. To understand the observed differences and to disentangle the effects, we performed four additional setups (expert versus default settings of the non-linear-regression-based algorithm, purely empirical modeling with artificial neural networks versus non-linear regressions, cross-validating using eddy covariance measurements as campaign fluxes, weekly versus monthly scheduling of campaigns) to model the half-hourly carbon fluxes for the whole vegetation period. The good agreement of the seasonal course of net CO2 at plot and field scale for our agricultural site demonstrates that both techniques are robust and yield consistent results at seasonal time scale even for a managed ecosystem with high temporal dynamics in the fluxes. This allows combining the respective advantages of factorial experiments at plot scale with dense time series data at field scale. Furthermore, the information from the quasi-continuous eddy covariance measurements can be used to derive vegetation proxies to support the interpolation of carbon fluxes in-between the manual chamber campaigns.

  3. Input/output properties of the lateral vestibular nucleus

    NASA Technical Reports Server (NTRS)

    Boyle, R.; Bush, G.; Ehsanian, R.

    2004-01-01

    This article reviews work in three species (squirrel monkey, cat, and rat) studying the inputs to and outputs from the lateral vestibular nucleus (LVN). Different electrophysiological shock paradigms were used to determine the synaptic inputs derived from thick- to thin-diameter vestibular nerve afferents. Angular and linear mechanical stimulations were used to activate and study the combined and individual contributions of inner ear organs and neck afferents. The spatio-temporal properties of LVN neurons in the decerebrated rat were studied in response to dynamic acceleration inputs using sinusoidal linear translation in the horizontal head plane. Outputs were evaluated using antidromic identification techniques, and identified LVN neurons were intracellularly injected with biocytin and their morphology studied.

  4. Self-aligned grating couplers on template-stripped metal pyramids via nanostencil lithography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klemme, Daniel J.; Johnson, Timothy W.; Mohr, Daniel A.

    2016-05-23

    We combine nanostencil lithography and template stripping to create self-aligned patterns about the apex of ultrasmooth metal pyramids with high throughput. Three-dimensional patterns such as spiral and asymmetric linear gratings, which can couple incident light into a hot spot at the tip, are presented as examples of this fabrication method. Computer simulations demonstrate that spiral and linear diffraction grating patterns are both effective at coupling light to the tip. The self-aligned stencil lithography technique can be useful for integrating plasmonic couplers with sharp metallic tips for applications such as near-field optical spectroscopy, tip-based optical trapping, plasmonic sensing, and heat-assisted magnetic recording.

  5. Higher Order Time Integration Schemes for the Unsteady Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The efficiency gains obtained using higher-order implicit Runge-Kutta schemes as compared with the second-order accurate backward difference schemes for the unsteady Navier-Stokes equations are investigated. Three different algorithms for solving the nonlinear system of equations arising at each timestep are presented. The first algorithm (NMG) is a pseudo-time-stepping scheme which employs a non-linear full approximation storage (FAS) agglomeration multigrid method to accelerate convergence. The other two algorithms are based on inexact Newton methods. The linear system arising at each Newton step is solved using iterative/Krylov techniques, and left preconditioning is used to accelerate convergence of the linear solvers. One of the methods (LMG) uses Richardson's iterative scheme for solving the linear system at each Newton step, while the other (PGMRES) uses the Generalized Minimal Residual method. Results demonstrating the relative superiority of these Newton-based schemes are presented. Efficiency gains as high as a factor of 10 are obtained by combining the higher-order time integration schemes with the more efficient nonlinear solvers.

  6. Mixed linear-non-linear inversion of crustal deformation data: Bayesian inference of model, weighting and regularization parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, Jun'ichi; Johnson, Kaj M.

    2010-06-01

    We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
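
    The mixed strategy can be pictured with a toy interseismic model (an assumed arctangent slip profile with synthetic data, not the paper's setup): the linearly entering amplitudes are solved analytically by least squares inside each step of a Metropolis random walk over the single non-linear parameter.

      import numpy as np

      rng = np.random.default_rng(3)
      x = np.linspace(-50.0, 50.0, 80)       # station positions, km (assumed)
      sigma = 0.1                            # assumed data noise level

      def G(d):
          # Design matrix for the linear parameters, given locking depth d.
          return np.column_stack([np.arctan(x / d), np.ones_like(x)])

      obs = G(12.0) @ np.array([5.0, 1.0]) + sigma * rng.standard_normal(x.size)

      def log_like(d):
          Gd = G(d)
          m, *_ = np.linalg.lstsq(Gd, obs, rcond=None)   # analytical linear step
          r = obs - Gd @ m
          return -0.5 * (r @ r) / sigma ** 2

      d, ll, samples = 8.0, None, []
      ll = log_like(d)
      for _ in range(5000):                  # Metropolis walk over d only
          d_new = d + 0.5 * rng.standard_normal()
          if d_new > 0:
              ll_new = log_like(d_new)
              if np.log(rng.uniform()) < ll_new - ll:
                  d, ll = d_new, ll_new
          samples.append(d)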

  7. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties, then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule-base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.

  8. Adaptive neuro fuzzy inference system-based power estimation method for CMOS VLSI circuits

    NASA Astrophysics Data System (ADS)

    Vellingiri, Govindaraj; Jayabalan, Ramesh

    2018-03-01

    Recent advancements in very large scale integration (VLSI) technologies have made it feasible to integrate millions of transistors on a single chip. This greatly increases the circuit complexity, and hence there is a growing need for less-tedious and low-cost power estimation techniques. The proposed work employs a Back-Propagation Neural Network (BPNN) and an Adaptive Neuro Fuzzy Inference System (ANFIS), which are capable of estimating the power precisely for complementary metal oxide semiconductor (CMOS) VLSI circuits, without requiring any knowledge of circuit structure and interconnections. The application of ANFIS to power estimation is relatively new. Power estimation using ANFIS is carried out by creating initial FIS models using hybrid optimisation and back-propagation (BP) techniques employing constant and linear methods. It is inferred that ANFIS with the hybrid optimisation technique employing the linear method produces better results, with testing error varying from 0% to 0.86% compared to BPNN, as it takes the initial fuzzy model and tunes it by means of a hybrid technique combining gradient descent BP and mean least-squares optimisation algorithms. ANFIS is best suited for the power estimation application, with a low RMSE of 0.0002075 and a high coefficient of determination (R) of 0.99961.

  9. Accelerated Slice Encoding for Metal Artifact Correction

    PubMed Central

    Hargreaves, Brian A.; Chen, Weitian; Lu, Wenmiao; Alley, Marcus T.; Gold, Garry E.; Brau, Anja C. S.; Pauly, John M.; Pauly, Kim Butts

    2010-01-01

    Purpose To demonstrate accelerated imaging with artifact reduction near metallic implants and different contrast mechanisms. Materials and Methods Slice-encoding for metal artifact correction (SEMAC) is a modified spin echo sequence that uses view-angle tilting and slice-direction phase encoding to correct both in-plane and through-plane artifacts. Standard spin echo trains and short-TI inversion recovery (STIR) allow efficient PD-weighted imaging with optional fat suppression. A completely linear reconstruction allows incorporation of parallel imaging and partial Fourier imaging. The SNR effects of all reconstructions were quantified in one subject. 10 subjects with different metallic implants were scanned using SEMAC protocols, all with scan times below 11 minutes, as well as with standard spin echo methods. Results The SNR using standard acceleration techniques is unaffected by the linear SEMAC reconstruction. In all cases with implants, accelerated SEMAC significantly reduced artifacts compared with standard imaging techniques, with no additional artifacts from acceleration techniques. The use of different contrast mechanisms allowed differentiation of fluid from other structures in several subjects. Conclusion SEMAC imaging can be combined with standard echo-train imaging, parallel imaging, partial-Fourier imaging and inversion recovery techniques to offer flexible image contrast with a dramatic reduction of metal-induced artifacts in scan times under 11 minutes. PMID:20373445

  10. Accelerated slice encoding for metal artifact correction.

    PubMed

    Hargreaves, Brian A; Chen, Weitian; Lu, Wenmiao; Alley, Marcus T; Gold, Garry E; Brau, Anja C S; Pauly, John M; Pauly, Kim Butts

    2010-04-01

    To demonstrate accelerated imaging with both artifact reduction and different contrast mechanisms near metallic implants. Slice-encoding for metal artifact correction (SEMAC) is a modified spin echo sequence that uses view-angle tilting and slice-direction phase encoding to correct both in-plane and through-plane artifacts. Standard spin echo trains and short-TI inversion recovery (STIR) allow efficient PD-weighted imaging with optional fat suppression. A completely linear reconstruction allows incorporation of parallel imaging and partial Fourier imaging. The signal-to-noise ratio (SNR) effects of all reconstructions were quantified in one subject. Ten subjects with different metallic implants were scanned using SEMAC protocols, all with scan times below 11 minutes, as well as with standard spin echo methods. The SNR using standard acceleration techniques is unaffected by the linear SEMAC reconstruction. In all cases with implants, accelerated SEMAC significantly reduced artifacts compared with standard imaging techniques, with no additional artifacts from acceleration techniques. The use of different contrast mechanisms allowed differentiation of fluid from other structures in several subjects. SEMAC imaging can be combined with standard echo-train imaging, parallel imaging, partial-Fourier imaging, and inversion recovery techniques to offer flexible image contrast with a dramatic reduction of metal-induced artifacts in scan times under 11 minutes.

  11. Combining textual and visual information for image retrieval in the medical domain.

    PubMed

    Gkoufas, Yiannis; Morou, Anna; Kalamboukis, Theodore

    2011-01-01

    In this article we have assembled the experience obtained from our participation in the imageCLEF evaluation task over the past two years. The use of linear combinations for image retrieval has been explored by combining visual and textual sources of images. From our experiments we conclude that a mixed retrieval technique that applies both textual and visual retrieval in an interchangeably repeated manner improves the performance while overcoming the scalability limitations of visual retrieval. In particular, the mean average precision (MAP) increased from 0.01 to 0.15 and 0.087 for the 2009 and 2010 data, respectively, when content-based image retrieval (CBIR) is performed on the top 1000 results from textual retrieval based on natural language processing (NLP).
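
    A minimal sketch of the linear score combination used in such mixed retrieval: normalized textual and visual similarity scores are fused per image with a weight alpha; the weight and scores below are assumed illustrative values, with alpha typically chosen on validation data.

      def combine(text_scores, visual_scores, alpha=0.7):
          # Weighted linear combination of two score dictionaries, ranked
          # by the fused score; missing scores default to zero.
          fused = {}
          for doc in set(text_scores) | set(visual_scores):
              fused[doc] = (alpha * text_scores.get(doc, 0.0)
                            + (1.0 - alpha) * visual_scores.get(doc, 0.0))
          return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

      ranking = combine({'img1': 0.9, 'img2': 0.4}, {'img2': 0.8, 'img3': 0.5})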

  12. Classification of smoke tainted wines using mid-infrared spectroscopy and chemometrics.

    PubMed

    Fudge, Anthea L; Wilkinson, Kerry L; Ristic, Renata; Cozzolino, Daniel

    2012-01-11

    In this study, the suitability of mid-infrared (MIR) spectroscopy, combined with principal component analysis (PCA) and linear discriminant analysis (LDA), was evaluated as a rapid analytical technique to identify smoke tainted wines. Control (i.e., unsmoked) and smoke-affected wines (260 in total) from experimental and commercial sources were analyzed by MIR spectroscopy and chemometrics. The concentrations of guaiacol and 4-methylguaiacol were also determined using gas chromatography-mass spectrometry (GC-MS), as markers of smoke taint. LDA models correctly classified 61% of control wines and 70% of smoke-affected wines. Classification rates were found to be influenced by the extent of smoke taint (based on GC-MS and informal sensory assessment), as well as qualitative differences in wine composition due to grape variety and oak maturation. Overall, the potential application of MIR spectroscopy combined with chemometrics as a rapid analytical technique for screening smoke-affected wines was demonstrated.
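
    A minimal sketch of the chemometric pipeline described above: PCA compresses the spectra and LDA separates tainted from control wines. The spectra here are synthetic stand-ins with an injected toy "smoke" band, not MIR data.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(4)
      spectra = rng.standard_normal((260, 800))    # 260 wines, 800 wavenumbers
      taint = rng.integers(0, 2, size=260)
      spectra[taint == 1, 100:120] += 0.5          # injected 'smoke' band (toy)

      model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
      print(cross_val_score(model, spectra, taint, cv=5).mean())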

  13. Reproducibility of EEG-fMRI results in a patient with fixation-off sensitivity.

    PubMed

    Formaggio, Emanuela; Storti, Silvia Francesca; Galazzo, Ilaria Boscolo; Bongiovanni, Luigi Giuseppe; Cerini, Roberto; Fiaschi, Antonio; Manganotti, Paolo

    2014-07-01

    Blood oxygenation level-dependent (BOLD) activation associated with interictal epileptiform discharges in a patient with fixation-off sensitivity (FOS) was studied using a combined electroencephalography-functional magnetic resonance imaging (EEG-fMRI) technique. An automatic approach for combined EEG-fMRI analysis and a subject-specific hemodynamic response function was used to improve general linear model analysis of the fMRI data. The EEG showed the typical features of FOS, with continuous epileptiform discharges during elimination of central vision by eye opening and closing and fixation; modification of this pattern was clearly visible and recognizable. During all 3 recording sessions EEG-fMRI activations indicated a BOLD signal decrease related to epileptiform activity in the parietal areas. This study can further our understanding of this EEG phenomenon and can provide some insight into the reliability of the EEG-fMRI technique in localizing the irritative zone.

  14. Multi-time-step ahead daily and hourly intermittent reservoir inflow prediction by artificial intelligent techniques using lumped and distributed data

    NASA Astrophysics Data System (ADS)

    Jothiprakash, V.; Magar, R. B.

    2012-07-01

    In this study, artificial intelligence (AI) techniques such as artificial neural networks (ANN), adaptive neuro-fuzzy inference systems (ANFIS) and linear genetic programming (LGP) are used to predict daily and hourly multi-time-step ahead intermittent reservoir inflow. To illustrate the applicability of AI techniques, the intermittent Koyna river watershed in Maharashtra, India is chosen as a case study. Based on the observed daily and hourly rainfall and reservoir inflow, various types of time-series, cause-effect and combined models are developed with lumped and distributed input data. Further, model performance is evaluated using various performance criteria. From the results, it is found that the LGP models are superior to the ANN and ANFIS models, especially in predicting the peak inflows at both daily and hourly time-steps. A detailed comparison of the overall performance indicated that the combined input model (combination of rainfall and inflow) performed better with both lumped and distributed input data. The lumped input data models performed slightly better, owing to reduced noise in the data, the training approach, and appropriate selection of network architecture, inputs, and training-testing ratios of the data set. The slightly poorer performance of the distributed data models is due to larger variations and a smaller number of observed values.

  15. Efficient linear phase contrast in scanning transmission electron microscopy with matched illumination and detector interferometry

    DOE PAGES

    Ophus, Colin; Ciston, Jim; Pierce, Jordan; ...

    2016-02-29

    The ability to image light elements in soft matter at atomic resolution enables unprecedented insight into the structure and properties of molecular heterostructures and beam-sensitive nanomaterials. In this study, we introduce a scanning transmission electron microscopy technique combining a pre-specimen phase plate designed to produce a probe with structured phase with a high-speed direct electron detector to generate nearly linear contrast images with high efficiency. We demonstrate this method by using both experiment and simulation to simultaneously image the atomic-scale structure of weakly scattering amorphous carbon and strongly scattering gold nanoparticles. Our method demonstrates strong contrast for both materials, making it a promising candidate for structural determination of heterogeneous soft/hard matter samples even at low electron doses comparable to traditional phase-contrast transmission electron microscopy. Ultimately, simulated images demonstrate the extension of this technique to the challenging problem of structural determination of biological material at the surface of inorganic crystals.

  16. Tracking performance under time sharing conditions with a digit processing task: A feedback control theory analysis. [attention sharing effect on operator performance

    NASA Technical Reports Server (NTRS)

    Gopher, D.; Wickens, C. D.

    1975-01-01

    A one-dimensional compensatory tracking task and a digit-processing reaction-time task were combined in a three-phase experiment designed to investigate tracking performance under time sharing. Adaptive techniques, elaborate feedback devices, and on-line standardization procedures were used to adjust task difficulty to the ability of each individual subject and to manipulate time-sharing demands. Feedback control analysis techniques were employed to describe tracking performance. The experimental results show that when the dynamics of a system are constrained such that man-machine system stability is no longer a major concern, the operator tends to adopt a first-order control describing function, even with tracking systems of higher order. Attention diversion to a concurrent task leads to an increase in remnant level, or nonlinear power. This decrease in linearity is reflected both in the output magnitude spectra of the subjects and in the linear fit of the amplitude-ratio functions.

  17. Efficient linear phase contrast in scanning transmission electron microscopy with matched illumination and detector interferometry.

    PubMed

    Ophus, Colin; Ciston, Jim; Pierce, Jordan; Harvey, Tyler R; Chess, Jordan; McMorran, Benjamin J; Czarnik, Cory; Rose, Harald H; Ercius, Peter

    2016-02-29

    The ability to image light elements in soft matter at atomic resolution enables unprecedented insight into the structure and properties of molecular heterostructures and beam-sensitive nanomaterials. In this study, we introduce a scanning transmission electron microscopy technique combining a pre-specimen phase plate designed to produce a probe with structured phase with a high-speed direct electron detector to generate nearly linear contrast images with high efficiency. We demonstrate this method by using both experiment and simulation to simultaneously image the atomic-scale structure of weakly scattering amorphous carbon and strongly scattering gold nanoparticles. Our method demonstrates strong contrast for both materials, making it a promising candidate for structural determination of heterogeneous soft/hard matter samples even at low electron doses comparable to traditional phase-contrast transmission electron microscopy. Simulated images demonstrate the extension of this technique to the challenging problem of structural determination of biological material at the surface of inorganic crystals.

  18. High fidelity, radiation tolerant analog-to-digital converters

    NASA Technical Reports Server (NTRS)

    Wang, Charles Chang-I (Inventor); Linscott, Ivan Richard (Inventor); Inan, Umran S. (Inventor)

    2012-01-01

    Techniques for an analog-to-digital converter (ADC) using a pipeline architecture include a linearization technique for a spurious-free dynamic range (SFDR) over 80 decibels. In some embodiments, sampling rates exceed a megahertz. According to a second approach, a switched-capacitor circuit is configured for correct operation in a high-radiation environment. In one embodiment, the combination yields a high-fidelity ADC (>88 decibel SFDR) while sampling at 5 megahertz and consuming <60 milliwatts. Furthermore, even though it is manufactured in a commercial 0.25-µm CMOS technology (1 µm = 10⁻⁶ meters), it maintains this performance in harsh radiation environments. Specifically, the stated performance is sustained through a highest tested 2 megarad(Si) total dose, and the ADC displays no latchup up to a highest tested linear energy transfer of 63 million electron volts square centimeters per milligram at elevated temperature (131 degrees C) and supply (2.7 volts, versus 2.5 volts nominal).

  19. Programmable growth of branched silicon nanowires using a focused ion beam.

    PubMed

    Jun, Kimin; Jacobson, Joseph M

    2010-08-11

    Although significant progress has been made in being able to spatially define the position of material layers in vapor-liquid-solid (VLS) grown nanowires, less work has been carried out in deterministically defining the positions of nanowire branching points to facilitate more complicated structures beyond simple 1D wires. Work to date has focused on the growth of randomly branched nanowire structures. Here we develop a means for programmably designating nanowire branching points by means of focused ion beam-defined VLS catalytic points. This technique is repeatable without losing fidelity, allowing multiple rounds of branching-point definition followed by branch growth, resulting in complex structures. The single-crystal nature of this approach allows us to describe the resulting structures with linear combinations of base vectors in three-dimensional (3D) space. Finally, by etching the resulting 3D-defined wire structures, branched nanotubes were fabricated with interconnected nanochannels inside. We believe that the techniques developed here should comprise a useful tool for extending linear VLS nanowire growth to generalized 3D wire structures.

  20. Efficient linear phase contrast in scanning transmission electron microscopy with matched illumination and detector interferometry

    PubMed Central

    Ophus, Colin; Ciston, Jim; Pierce, Jordan; Harvey, Tyler R.; Chess, Jordan; McMorran, Benjamin J.; Czarnik, Cory; Rose, Harald H.; Ercius, Peter

    2016-01-01

    The ability to image light elements in soft matter at atomic resolution enables unprecedented insight into the structure and properties of molecular heterostructures and beam-sensitive nanomaterials. In this study, we introduce a scanning transmission electron microscopy technique combining a pre-specimen phase plate designed to produce a probe with structured phase with a high-speed direct electron detector to generate nearly linear contrast images with high efficiency. We demonstrate this method by using both experiment and simulation to simultaneously image the atomic-scale structure of weakly scattering amorphous carbon and strongly scattering gold nanoparticles. Our method demonstrates strong contrast for both materials, making it a promising candidate for structural determination of heterogeneous soft/hard matter samples even at low electron doses comparable to traditional phase-contrast transmission electron microscopy. Simulated images demonstrate the extension of this technique to the challenging problem of structural determination of biological material at the surface of inorganic crystals. PMID:26923483

  1. Experimental demonstration of deep frequency modulation interferometry.

    PubMed

    Isleif, Katharina-Sophie; Gerberding, Oliver; Schwarze, Thomas S; Mehmet, Moritz; Heinzel, Gerhard; Cervantes, Felipe Guzmán

    2016-01-25

    Experiments for space and ground-based gravitational wave detectors often require a large dynamic range interferometric position readout of test masses with 1 pm/√Hz precision over long time scales. Heterodyne interferometer schemes that achieve such precisions are available, but they require complex optical set-ups, limiting their scalability for multiple channels. This article presents the first experimental results on deep frequency modulation interferometry, a new technique that combines sinusoidal laser frequency modulation in unequal-arm-length interferometers with a non-linear fit algorithm. We have tested the technique in a Michelson and a Mach-Zehnder interferometer topology, respectively, demonstrated continuous phase tracking of a moving mirror, and achieved a performance equivalent to a displacement sensitivity of 250 pm/√Hz at 1 mHz between the phase measurements of two photodetectors monitoring the same optical signal. By performing time-series fitting of the extracted interference signals, we measured that the non-linearity of the laser frequency modulation is on the order of 2% for the laser source used.
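
    The non-linear fit at the heart of the readout can be pictured as follows: the photodetector voltage is modeled as a cosine whose argument is itself sinusoidally modulated, and the interferometric phase is one of the fitted parameters. The signal model, modulation frequency, and starting values below are illustrative assumptions, not the authors' implementation.

    ```python
    # Hedged sketch: least-squares extraction of the interferometric phase
    # from a deep-frequency-modulated signal (all parameters are toy values).
    import numpy as np
    from scipy.optimize import curve_fit

    fm = 1.0e3                                   # modulation frequency, Hz (assumed)
    t = np.arange(0, 5e-3, 1e-6)

    def dfm_signal(t, amp, m, psi, phi):
        """Detector voltage: amplitude, modulation depth m, phases psi and phi."""
        return amp * np.cos(phi + m * np.cos(2*np.pi*fm*t + psi))

    true = (1.0, 6.0, 0.3, 1.2)
    v = dfm_signal(t, *true) + np.random.default_rng(3).normal(0, 0.01, t.size)

    p0 = (0.9, 5.5, 0.2, 1.0)                    # the fit needs a reasonable start
    popt, _ = curve_fit(dfm_signal, t, v, p0=p0)
    print("recovered interferometric phase:", round(popt[3], 3))
    ```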

  2. DOUBLE ENDOR with a linearly and a circularly polarized radiofrequency field

    NASA Astrophysics Data System (ADS)

    Schweiger, A.; Rudin, M.; Forrer, J.; Günthard, Hs. H.

    The combination of two spectroscopic techniques, DOUBLE ENDOR and ENDOR with a circularly polarized radiofrequency field (CP-ENDOR), is described. With this new method, termed CP-DOUBLE ENDOR, the selective induction of transitions of different types of nuclei and of different paramagnetic species allows a drastic reduction in the number of observed ENDOR lines. With this technique, analysis of hitherto uninterpretable ENDOR spectra often becomes possible. The experimental setup of the CP-DOUBLE ENDOR spectrometer is described. The advantage of using circularly polarized rf fields in DOUBLE ENDOR spectroscopy is illustrated by two applications to transition metal complexes in single crystals.

  3. Reduced basis technique for evaluating the sensitivity coefficients of the nonlinear tire response

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Tanner, John A.; Peters, Jeanne M.

    1992-01-01

    An efficient reduced-basis technique is proposed for calculating the sensitivity of nonlinear tire response to variations in the design variables. The tire is modeled using a 2-D, moderate rotation, laminated anisotropic shell theory, including the effects of variation in material and geometric parameters. The vector of structural response and its first-order and second-order sensitivity coefficients are each expressed as a linear combination of a small number of basis vectors. The effectiveness of the basis vectors used in approximating the sensitivity coefficients is demonstrated by a numerical example involving the Space Shuttle nose-gear tire, which is subjected to uniform inflation pressure.

  4. Optical pre-clinical diagnostics of the cervical tissues malignant changing

    NASA Astrophysics Data System (ADS)

    Yermolenko, Sergey; Voloshynskyi, Dmytro; Fedoruk, Olexander; Gruia, Ion; Zimnyakov, Dmitry

    2014-08-01

    This work is directed to the investigation of the scope of the technique of laser polarimetry of oncological changes of the human prostate and cervical tissues under the conditions of multiple scattering, which presents a more general and real experimental clinical situation. This study is combining polarimetry and spectropolarimetry techniques for identifying the changes of optical-geometrical structure in different kinds of biotissues with solid tumours. It is researched that a linear dichroism appears in biotissues (human esophagus, muscle tissue of rats, human prostate tissue, cervical smear) with cancer diseases, magnitude of which depends on the type of the tissue and on the time of cancer process development.

  5. Chaos as an intermittently forced linear system.

    PubMed

    Brunton, Steven L; Brunton, Bingni W; Proctor, Joshua L; Kaiser, Eurika; Kutz, J Nathan

    2017-05-30

    Understanding the interplay of order and disorder in chaos is a central challenge in modern quantitative science. Approximate linear representations of nonlinear dynamics have long been sought, driving considerable interest in Koopman theory. We present a universal, data-driven decomposition of chaos as an intermittently forced linear system. This work combines delay embedding and Koopman theory to decompose chaotic dynamics into a linear model in the leading delay coordinates with forcing by low-energy delay coordinates; this is called the Hankel alternative view of Koopman (HAVOK) analysis. This analysis is applied to the Lorenz system and real-world examples including Earth's magnetic field reversal and measles outbreaks. In each case, forcing statistics are non-Gaussian, with long tails corresponding to rare intermittent forcing that precedes switching and bursting phenomena. The forcing activity demarcates coherent phase space regions where the dynamics are approximately linear from those that are strongly nonlinear. The huge amount of data generated in fields like neuroscience or finance calls for effective strategies that mine data to reveal underlying dynamics. Here Brunton et al. develop a data-driven technique to analyze chaotic systems and predict their dynamics in terms of a forced linear model.
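
    A compact sketch of the HAVOK recipe on the Lorenz system follows: stack delayed copies of one measured coordinate into a Hankel matrix, take its SVD, and regress the time derivatives of the leading delay coordinates on themselves, treating the last retained coordinate as forcing. The delay count and truncation rank are illustrative choices, not the paper's.

    ```python
    # Hedged sketch of a HAVOK-style decomposition (toy parameter choices).
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, s, sigma=10.0, rho=28.0, beta=8/3):
        x, y, z = s
        return [sigma*(y - x), x*(rho - z) - y, x*y - beta*z]

    dt = 0.01
    sol = solve_ivp(lorenz, (0, 100), [1.0, 1.0, 1.0],
                    t_eval=np.arange(0, 100, dt))
    x = sol.y[0]                                   # single measured coordinate

    q, r = 100, 15                                 # delays, truncation rank
    H = np.array([x[i:i + len(x) - q] for i in range(q)])   # Hankel matrix
    U, S, Vt = np.linalg.svd(H, full_matrices=False)
    V = Vt[:r].T                                   # leading delay coordinates

    # d/dt [v_1..v_{r-1}] ~ A [v_1..v_{r-1}] + B v_r, with v_r as forcing.
    dV = np.gradient(V, dt, axis=0)
    coeffs, *_ = np.linalg.lstsq(V, dV[:, :r - 1], rcond=None)
    A, B = coeffs[:r - 1].T, coeffs[r - 1]
    print("linear part:", A.shape, " forcing coefficients:", B[:3].round(3))
    ```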

  6. Solid Freeform Fabrication Symposium Proceedings Held in Austin, Texas on August 9-11, 1993

    DTIC Science & Technology

    1993-09-01

    between the accuracy and the size of the geometric description. Highly non-linear surfaces, such as those that comprise turbine blades, manifolds...flange. The fan blades were modeled using different surfacing techniques. Seven blades are then combined with the rotor to make the completed fan. Figure...successfully cast in aluminum, titanium, beryllium-copper, and stainless steel, with RMS surface finish as low as 1 micrometer, without any subsequent

  7. Advanced analysis technique for the evaluation of linear alternators and linear motors

    NASA Technical Reports Server (NTRS)

    Holliday, Jeffrey C.

    1995-01-01

    A method for the mathematical analysis of linear alternator and linear motor devices and designs is described, and an example of its use is included. The technique seeks to surpass other methods of analysis by including more rigorous treatment of phenomena normally omitted or coarsely approximated such as eddy braking, non-linear material properties, and power losses generated within structures surrounding the device. The technique is broadly applicable to linear alternators and linear motors involving iron yoke structures and moving permanent magnets. The technique involves the application of Amperian current equivalents to the modeling of the moving permanent magnet components within a finite element formulation. The resulting steady state and transient mode field solutions can simultaneously account for the moving and static field sources within and around the device.

  8. Multi-objective experimental design for (13)C-based metabolic flux analysis.

    PubMed

    Bouvin, Jeroen; Cajot, Simon; D'Huys, Pieter-Jan; Ampofo-Asiama, Jerry; Anné, Jozef; Van Impe, Jan; Geeraerd, Annemie; Bernaerts, Kristel

    2015-10-01

    (13)C-based metabolic flux analysis is an excellent technique to resolve fluxes in the central carbon metabolism, but costs can be significant when using specialized tracers. This work presents a framework for cost-effective design of (13)C-tracer experiments, illustrated on two different networks. Linear and non-linear optimal input mixtures are computed for networks for Streptomyces lividans and a carcinoma cell line. If only glucose tracers are considered as labeled substrate for a carcinoma cell line or S. lividans, the best parameter estimation accuracy is obtained by mixtures containing high amounts of 1,2-(13)C2 glucose combined with uniformly labeled glucose. Experimental designs are evaluated based on a linear (D-criterion) and a non-linear (S-criterion) approach. Both approaches generate almost the same input mixture; however, the linear approach is favored due to its low computational effort. The high amount of 1,2-(13)C2 glucose in the optimal designs coincides with a high experimental cost, which is further increased when labeling is introduced in glutamine and aspartate tracers. Multi-objective optimization makes it possible to assess experimental quality and cost at the same time and can reveal excellent compromise experiments. For example, the combination of 100% 1,2-(13)C2 glucose with 100% position-one-labeled glutamine and the combination of 100% 1,2-(13)C2 glucose with 100% uniformly labeled glutamine perform equally well for the carcinoma cell line, but the first mixture offers a decrease in cost of $120 per ml-scale cell culture experiment. We demonstrated the validity of a multi-objective linear approach to optimal experimental design for the non-linear problem of (13)C-metabolic flux analysis. Tools and a workflow are provided to perform multi-objective design. The effortless calculation of the D-criterion can be exploited for high-throughput screening of possible (13)C-tracers, while the illustrated benefit of multi-objective design should stimulate its application within the field of (13)C-based metabolic flux analysis.
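
    A toy version of the D-criterion screening step is sketched below: candidate tracer mixtures are ranked by the determinant of a surrogate Fisher information matrix assembled from per-tracer sensitivity matrices. The sensitivities and the additive mixture model are loud simplifications; real (13)C-MFA derives them from the isotopomer balance equations.

    ```python
    # Hedged sketch: D-criterion ranking of tracer mixtures (toy surrogate).
    import numpy as np
    from itertools import product

    rng = np.random.default_rng(4)
    tracers = ["1,2-13C2 glucose", "U-13C glucose", "unlabeled glucose"]
    S = {name: rng.normal(size=(8, 3)) for name in tracers}  # toy sensitivities

    def d_criterion(fractions):
        """det of a surrogate Fisher information for a given mixture."""
        F = sum(f * Si.T @ Si for f, Si in zip(fractions, S.values()))
        return np.linalg.det(F)

    grid = np.linspace(0.0, 1.0, 11)                 # 10% mixture steps
    candidates = [f for f in product(grid, repeat=3) if abs(sum(f) - 1) < 1e-6]
    best = max(candidates, key=d_criterion)
    print("best mixture fractions:", dict(zip(tracers, best)))
    ```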

  9. Recent applications of multivariate data analysis methods in the authentication of rice and the most analyzed parameters: A review.

    PubMed

    Maione, Camila; Barbosa, Rommel Melgaço

    2018-01-24

    Rice is one of the most important staple foods around the world. Authentication of rice is one of the most frequently addressed concerns in the present literature, covering recognition of its geographical origin and variety, certification of organic rice and many other issues. Good results have been achieved by multivariate data analysis and data mining techniques when combined with specific parameters for ascertaining authenticity and many other useful characteristics of rice, such as quality and yield. This paper presents a review of recent research on discrimination and authentication of rice using multivariate data analysis and data mining techniques. We found that data obtained from image processing, molecular and atomic spectroscopy, elemental fingerprinting, genetic markers, molecular content and other sources are promising for determining geographical origin, variety and other aspects of rice, and are widely used in combination with multivariate data analysis techniques. Principal component analysis and linear discriminant analysis are the preferred methods, but several other classification techniques, such as support vector machines and artificial neural networks, are also frequent in these studies and show high performance for discrimination of rice.

  10. Autonomous Guidance of Agile Small-scale Rotorcraft

    NASA Technical Reports Server (NTRS)

    Mettler, Bernard; Feron, Eric

    2004-01-01

    This report describes a guidance system for agile vehicles based on a hybrid closed-loop model of the vehicle dynamics. The hybrid model represents the vehicle dynamics through a combination of linear time-invariant control modes and pre-programmed, finite-duration maneuvers. This particular hybrid structure can be realized through a control system that combines trim controllers and a maneuvering control logic; the former enable precise trajectory tracking, and the latter enables trajectories at the edge of the vehicle's capabilities. The closed-loop model is much simpler than the full vehicle equations of motion, yet it can capture a broad range of dynamic behaviors. It also supports a consistent link between the physical layer and the decision-making layer. The trajectory generation was formulated as an optimization problem using mixed-integer linear programming, solved in a receding-horizon fashion, and several techniques to improve the computational tractability were investigated. Simulation experiments using NASA Ames' R-50 model show that this approach fully exploits the vehicle's agility.
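
    To make the mixed-integer formulation concrete, the sketch below solves a deliberately tiny MILP with scipy.optimize.milp: two continuous controls plus a binary maneuver flag that gates one of them through a big-M constraint. The cost vector, constraints, and big-M value are toy assumptions; the actual guidance MILP is far larger.

    ```python
    # Hedged sketch: a toy MILP mixing continuous controls and a binary flag.
    import numpy as np
    from scipy.optimize import Bounds, LinearConstraint, milp

    c = np.array([1.0, 2.0, 5.0])       # cost on [x0, x1, b]
    # x0 + x1 >= 4 (reach the target); x1 - 10*b <= 0 (x1 allowed only
    # when the maneuver flag b is set, with big-M = 10).
    A = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, -10.0]])
    cons = LinearConstraint(A, lb=[4.0, -np.inf], ub=[np.inf, 0.0])

    res = milp(c, constraints=cons,
               integrality=[0, 0, 1],                          # b is integer
               bounds=Bounds([0, 0, 0], [np.inf, np.inf, 1]))  # b binary
    print("solution:", res.x, " cost:", res.fun)
    ```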

  11. Linear combination of atomic orbitals calculation of the Auger neutralization rate of He⁺ on Al(111), (100), and (110) surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valdes, Diego; Blanco, J.M.; Monreal, R.C.

    2005-06-15

    We develop a theory of the Auger neutralization rate of ions on solid surfaces in which the matrix elements for the transition are calculated by means of a linear combination of atomic orbitals technique. We apply the theory to the calculation of the Auger rate of He⁺ on unreconstructed Al(111), (100), and (110) surfaces, assuming He⁺ to approach these surfaces at high-symmetry positions, and compare the results with those of the jellium model. Although there are substantial differences between the Auger rates calculated with the two kinds of approaches, those differences tend to compensate when evaluating the integral along the ion trajectory and, consequently, are of minor influence on some physical magnitudes, such as the ion survival probability, for perpendicular energies larger than 100 eV. We find that many atoms contribute to the Auger process, and small effects of lateral corrugation are registered.

  12. A simplified computer program for the prediction of the linear stability behavior of liquid propellant combustors

    NASA Technical Reports Server (NTRS)

    Mitchell, C. E.; Eckert, K.

    1979-01-01

    A program for predicting the linear stability of liquid propellant rocket engines is presented. The underlying model assumptions and analytical steps necessary for understanding the program and its input and output are also given. The rocket engine is modeled as a right circular cylinder with an injector with a concentrated combustion zone, a nozzle, finite mean flow, and an acoustic admittance, or the sensitive time lag theory. The resulting partial differential equations are combined into two governing integral equations by the use of the Green's function method. These equations are solved using a successive approximation technique for the small amplitude (linear) case. The computational method used as well as the various user options available are discussed. Finally, a flow diagram, sample input and output for a typical application and a complete program listing for program MODULE are presented.

  13. Hysteresis in column systems

    NASA Astrophysics Data System (ADS)

    Ivanyi, P.; Ivanyi, A.

    2015-02-01

    In this paper one column of a telescopic construction of a bell tower is investigated. The hinges at the support of the column and at the connecting joint between the upper and lower columns are modelled with rotational springs. The characteristics of the springs are assumed to be non-linear, and their hysteresis is represented with the Preisach hysteresis model. The masses of the columns and of the bell with its fly are concentrated at the top of the column. The tolling process is simulated with a cyclic load. The elements of the column are considered completely rigid. The time iteration of the non-linear equations of motion is evaluated by the Crank-Nicolson scheme, and the non-linear hysteresis is handled by the fixed-point technique. The numerical simulation of the dynamic system is carried out under different combinations of soft, medium and hard hysteresis properties of the hinges.

  14. Reinventing the Accelerator for the High Energy Frontier

    ScienceCinema

    Rosenzweig, James [UCLA, Los Angeles, California, United States

    2017-12-09

    The history of discovery in high-energy physics has been intimately connected with progress in methods of accelerating particles for the past 75 years. This remains true today, as the post-LHC era in particle physics will require significant innovation and investment in a superconducting linear collider. The choice of the linear collider as the next-generation discovery machine, and the selection of superconducting technology has rather suddenly thrown promising competing techniques -- such as very large hadron colliders, muon colliders, and high-field, high frequency linear colliders -- into the background. We discuss the state of such conventional options, and the likelihood of their eventual success. We then follow with a much longer view: a survey of a new, burgeoning frontier in high energy accelerators, where intense lasers, charged particle beams, and plasmas are all combined in a cross-disciplinary effort to reinvent the accelerator from its fundamental principles on up.

  15. Quantification of precipitation measurement discontinuity induced by wind shields on national gauges

    USGS Publications Warehouse

    Yang, Daqing; Goodison, Barry E.; Metcalfe, John R.; Louie, Paul; Leavesley, George H.; Emerson, Douglas G.; Hanson, Clayton L.; Golubev, Valentin S.; Elomaa, Esko; Gunther, Thilo; Pangburn, Timothy; Kang, Ersi; Milkovic, Janja

    1999-01-01

    Various combinations of wind shields and national precipitation gauges commonly used in countries of the northern hemisphere have been studied in this paper, using the combined intercomparison data collected at 14 sites during the World Meteorological Organization's (WMO) Solid Precipitation Measurement Intercomparison Project. The results show that wind shields improve gauge catch of precipitation, particularly for snow. Shielded gauges, on average, measure 20–70% more snow than unshielded gauges. Without a doubt, the use of wind shields on precipitation gauges has introduced a significant discontinuity into precipitation records, particularly in cold and windy regions. This discontinuity is not constant; it varies with wind speed, temperature, and precipitation type. Adjustment for this discontinuity is necessary to obtain homogeneous precipitation data for climate change and hydrological studies. The relation of the relative catch ratio (RCR, the ratio of shielded-gauge to unshielded-gauge measurements) to wind speed and temperature has been developed for Alter and Tretyakov wind shields. Strong linear relations between measurements of shielded and unshielded gauges have also been found for different precipitation types. The linear relation does not fully account for the varying effect of wind and temperature on gauge catch: overadjustment may occur at sites with lower wind speeds, and underadjustment at stations with higher wind speeds. The RCR technique is anticipated to be more applicable over a wide range of climate conditions. The RCR technique and the linear relation have been tested at selected WMO intercomparison stations, and reasonable agreement between the adjusted amounts and the shielded-gauge measurements was obtained at most of the sites. Test application of the developed methodologies to a regional or national network is therefore recommended to further evaluate their applicability in different climate conditions. A significant increase in precipitation amounts is expected from the adjustment, particularly in high latitudes and other cold regions, which will have a meaningful impact on climate variation and change analyses.
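
    The adjustment logic can be pictured with a short sketch: fit the relative catch ratio against wind speed and temperature, then scale an unshielded-gauge total by the predicted RCR for the event conditions. The linear form and all numbers below are illustrative assumptions; the study derives gauge- and shield-specific relations from the intercomparison data.

    ```python
    # Hedged sketch: RCR regression and catch adjustment (synthetic data).
    import numpy as np

    rng = np.random.default_rng(5)
    wind = rng.uniform(0.0, 8.0, 200)        # wind speed during events, m/s
    temp = rng.uniform(-20.0, 0.0, 200)      # air temperature, deg C
    rcr = 1.0 + 0.08*wind - 0.005*temp + rng.normal(0, 0.05, 200)

    # Least-squares fit: RCR = a + b*wind + c*temp
    X = np.column_stack([np.ones_like(wind), wind, temp])
    coef, *_ = np.linalg.lstsq(X, rcr, rcond=None)

    event = np.array([1.0, 5.0, -10.0])      # [1, wind, temp] for one event
    unshielded_catch = 12.4                  # mm, hypothetical gauge total
    adjusted = unshielded_catch * (event @ coef)
    print(f"fit coefficients: {coef.round(3)}, adjusted catch: {adjusted:.1f} mm")
    ```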

  16. Methodological concerns for determining power output in the jump squat.

    PubMed

    Cormie, Prue; Deane, Russell; McBride, Jeffrey M

    2007-05-01

    The purpose of this study was to investigate the validity of power measurement techniques during the jump squat (JS) utilizing various combinations of a force plate and linear position transducer (LPT) devices. Nine men with at least 6 months of prior resistance training experience participated in this acute investigation. One repetition maximums (1RM) in the squat were determined, followed by JS testing under 2 loading conditions (30% of 1RM [JS30] and 90% of 1RM [JS90]). Three different techniques were used simultaneously in data collection: (a) 1 linear position transducer (1-LPT); (b) 1 linear position transducer and a force plate (1-LPT + FP); and (c) 2 linear position transducers and a force plate (2-LPT + FP). Vertical velocity-, force-, and power-time curves were calculated for each lift using these methodologies and were compared. Peak force and peak power were overestimated by 1-LPT in both JS30 and JS90 compared with 2-LPT + FP and 1-LPT + FP (p
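
    The core computation behind the combined force-plate-plus-LPT (1-LPT + FP) method can be sketched briefly: bar velocity comes from differentiating the transducer displacement, and instantaneous power is the product of the measured ground reaction force and that velocity. The sampling rate and both signals below are synthetic assumptions.

    ```python
    # Hedged sketch: power from a force plate and one position transducer.
    import numpy as np

    fs = 1000.0                                # sampling rate, Hz (assumed)
    t = np.arange(0.0, 1.0, 1.0/fs)
    position = 0.4 * np.sin(np.pi * t)         # bar displacement, m (synthetic)
    grf = 1800 + 600*np.sin(2*np.pi*t)         # vertical ground reaction force, N

    velocity = np.gradient(position, 1.0/fs)   # central-difference velocity
    power = grf * velocity                     # instantaneous power, W
    print(f"peak power: {power.max():.0f} W, peak force: {grf.max():.0f} N")
    ```

    With a single transducer alone (1-LPT), force must instead be inferred from the twice-differentiated displacement and the system mass, which is one plausible source of the overestimation reported above.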

  17. [Efficiency of combined methods of hemorrhoid treatment using HAL-RAR and laser destruction].

    PubMed

    Rodoman, G V; Kornev, L V; Shalaeva, T I; Malushenko, R N

    2017-01-01

    To develop a combined method of treatment of hemorrhoids using arterial ligation under Doppler control and laser destruction of internal and external hemorrhoids. The study included 100 patients with chronic hemorrhoids of stages II and III. The combined HAL-laser method was used in the study group, the HAL-RAR technique in control group 1, and closed hemorrhoidectomy with a linear stapler in control group 2. A comparative evaluation of the results in the groups was performed. The combined method overcomes the drawbacks of traditional surgical treatment and the limitations in eliminating the external component that are inherent to HAL-RAR. Moreover, it has higher efficiency in treating hemorrhoids of stages II-III compared with HAL-RAR and is equally safe and well tolerated by patients. The method does not increase the risk of recurrence, and reduces the incidence of complications and the time of disability.

  18. Design, test, and evaluation of three active flutter suppression controllers

    NASA Technical Reports Server (NTRS)

    Adams, William M., Jr.; Christhilf, David M.; Waszak, Martin R.; Mukhopadhyay, Vivek; Srinathkumar, S.

    1992-01-01

    Three control law design techniques for flutter suppression are presented. Each technique uses multiple control surfaces and/or sensors. The first method uses traditional tools (such as pole/zero loci and Nyquist diagrams) to produce a controller that has minimal complexity and is sufficiently robust to handle plant uncertainty. The second procedure uses linear combinations of several accelerometer signals and dynamic compensation to synthesize the modal rate of the critical mode for feedback to the distributed control surfaces. The third technique starts with a minimum-energy linear quadratic Gaussian controller, iteratively modifies intensity matrices corresponding to input and output noise, and applies controller order reduction to achieve a low-order, robust controller. The resulting designs were implemented digitally and tested subsonically on the active flexible wing wind-tunnel model in the Langley Transonic Dynamics Tunnel. Only the traditional pole/zero loci design was sufficiently robust to errors in the nominal plant to successfully suppress flutter during the test; it provided simultaneous suppression of symmetric and antisymmetric flutter with a 24-percent increase in attainable dynamic pressure. Posttest analyses are shown which illustrate the problems encountered with the other laws.

  19. A Linear Relationship between Crystal Size and Fragment Binding Time Observed Crystallographically: Implications for Fragment Library Screening Using Acoustic Droplet Ejection

    PubMed Central

    Birone, Claire; Brown, Maria; Hernandez, Jesus; Neff, Sherry; Williams, Daniel; Allaire, Marc; Orville, Allen M.; Sweet, Robert M.; Soares, Alexei S.

    2014-01-01

    High throughput screening technologies such as acoustic droplet ejection (ADE) greatly increase the rate at which X-ray diffraction data can be acquired from crystals. One promising high throughput screening application of ADE is to rapidly combine protein crystals with fragment libraries. In this approach, each fragment soaks into a protein crystal either directly on data collection media or on a moving conveyor belt which then delivers the crystals to the X-ray beam. By simultaneously handling multiple crystals combined with fragment specimens, these techniques relax the automounter duty-cycle bottleneck that currently prevents optimal exploitation of third generation synchrotrons. Two factors limit the speed and scope of projects that are suitable for fragment screening using techniques such as ADE. Firstly, in applications where the high throughput screening apparatus is located inside the X-ray station (such as the conveyor belt system described above), the speed of data acquisition is limited by the time required for each fragment to soak into its protein crystal. Secondly, in applications where crystals are combined with fragments directly on data acquisition media (including both of the ADE methods described above), the maximum time that fragments have to soak into crystals is limited by evaporative dehydration of the protein crystals during the fragment soak. Here we demonstrate that both of these problems can be minimized by using small crystals, because the soak time required for a fragment hit to attain high occupancy depends approximately linearly on crystal size. PMID:24988328

  20. A linear relationship between crystal size and fragment binding time observed crystallographically: implications for fragment library screening using acoustic droplet ejection.

    PubMed

    Cole, Krystal; Roessler, Christian G; Mulé, Elizabeth A; Benson-Xu, Emma J; Mullen, Jeffrey D; Le, Benjamin A; Tieman, Alanna M; Birone, Claire; Brown, Maria; Hernandez, Jesus; Neff, Sherry; Williams, Daniel; Allaire, Marc; Orville, Allen M; Sweet, Robert M; Soares, Alexei S

    2014-01-01

    High throughput screening technologies such as acoustic droplet ejection (ADE) greatly increase the rate at which X-ray diffraction data can be acquired from crystals. One promising high throughput screening application of ADE is to rapidly combine protein crystals with fragment libraries. In this approach, each fragment soaks into a protein crystal either directly on data collection media or on a moving conveyor belt which then delivers the crystals to the X-ray beam. By simultaneously handling multiple crystals combined with fragment specimens, these techniques relax the automounter duty-cycle bottleneck that currently prevents optimal exploitation of third generation synchrotrons. Two factors limit the speed and scope of projects that are suitable for fragment screening using techniques such as ADE. Firstly, in applications where the high throughput screening apparatus is located inside the X-ray station (such as the conveyor belt system described above), the speed of data acquisition is limited by the time required for each fragment to soak into its protein crystal. Secondly, in applications where crystals are combined with fragments directly on data acquisition media (including both of the ADE methods described above), the maximum time that fragments have to soak into crystals is limited by evaporative dehydration of the protein crystals during the fragment soak. Here we demonstrate that both of these problems can be minimized by using small crystals, because the soak time required for a fragment hit to attain high occupancy depends approximately linearly on crystal size.

  1. Validity of linear measurements of the jaws using ultralow-dose MDCT and the iterative techniques of ASIR and MBIR.

    PubMed

    Al-Ekrish, Asma'a A; Al-Shawaf, Reema; Schullian, Peter; Al-Sadhan, Ra'ed; Hörmann, Romed; Widmann, Gerlig

    2016-10-01

    To assess the comparability of linear measurements of dental implant sites recorded from multidetector computed tomography (MDCT) images obtained using a standard-dose filtered backprojection (FBP) technique with those from various ultralow doses combined with FBP, adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR) techniques. The results of the study may contribute to MDCT dose optimization for dental implant site imaging. MDCT scans of two cadavers were acquired using a standard reference protocol and four ultralow-dose test protocols (TP). The volume CT dose index of the different dose protocols ranged from a maximum of 30.48-36.71 mGy to a minimum of 0.44-0.53 mGy. All scans were reconstructed using FBP, ASIR-50, ASIR-100, and MBIR, and either a bone or standard reconstruction kernel. Linear measurements were recorded from standardized images of the jaws by two examiners. Intra- and inter-examiner reliability of the measurements was analyzed using Cronbach's alpha and inter-item correlation. Agreement between the measurements obtained with the reference-dose/FBP protocol and each of the test protocols was determined with Bland-Altman plots and linear regression. Statistical significance was set at a P-value of 0.05. No systematic variation was found between the linear measurements obtained with the reference protocol and the other imaging protocols. The only exceptions were TP3/ASIR-50 (bone kernel) and TP4/ASIR-100 (bone and standard kernels). The mean measurement differences between these three protocols and the reference protocol were within ±0.1 mm, with the 95% confidence interval limits being within ±1.15 mm. A nearly 97.5% reduction in dose did not significantly affect the height and width measurements of edentulous jaws, regardless of the reconstruction algorithm used.
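
    The agreement analysis used above reduces to a short computation: for paired measurements, the bias is the mean difference and the 95% limits of agreement are the bias plus or minus 1.96 standard deviations of the differences. The paired values below are synthetic stand-ins.

    ```python
    # Hedged sketch: Bland-Altman agreement statistics for paired measurements.
    import numpy as np

    rng = np.random.default_rng(6)
    ref_mm = rng.uniform(5.0, 25.0, 40)                 # reference protocol, mm
    test_mm = ref_mm + rng.normal(0.02, 0.4, 40)        # ultralow-dose protocol

    diff = test_mm - ref_mm
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)                       # 95% limits of agreement
    print(f"bias = {bias:+.2f} mm, limits of agreement = "
          f"[{bias - loa:.2f}, {bias + loa:.2f}] mm")
    ```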

  2. Assessing the impact of non-tidal atmospheric loading on a Kalman filter-based terrestrial reference frame

    NASA Astrophysics Data System (ADS)

    Abbondanza, Claudio; Altamimi, Zuheir; Chin, Toshio; Collilieux, Xavier; Dach, Rolf; Gross, Richard; Heflin, Michael; König, Rolf; Lemoine, Frank; Macmillan, Dan; Parker, Jay; van Dam, Tonie; Wu, Xiaoping

    2014-05-01

    The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, we assess the impact of non-tidal atmospheric loading (NTAL) corrections on the TRF computation. Focusing on the a-posteriori approach, (i) the NTAL model derived from the National Centre for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations; (ii) adopting a Kalman-filter based approach, two distinct linear TRFs are estimated combining the 4 SG solutions with (corrected TRF solution) and without the NTAL displacements (standard TRF solution). Linear fits (offset and atmospheric velocity) of the NTAL displacements removed during step (i) are estimated accounting for the station position discontinuities introduced in the SG solutions and adopting different weighting strategies. The NTAL-derived (atmospheric) velocity fields are compared to those obtained from the TRF reductions during step (ii). The consistency between the atmospheric and the TRF-derived velocity fields is examined. We show how the presence of station position discontinuities in SG solutions degrades the agreement between the velocity fields and compare the effect of different weighting structure adopted while estimating the linear fits to the NTAL displacements. Finally, we evaluate the effect of restoring the atmospheric velocities determined through the linear fits of the NTAL displacements to the single-technique linear reference frames obtained by stacking the standard SG SINEX files. Differences between the velocity fields obtained restoring the NTAL displacements and the standard stacked linear reference frames are discussed.

  3. Determination of stress intensity factors for interface cracks under mixed-mode loading

    NASA Technical Reports Server (NTRS)

    Naik, Rajiv A.; Crews, John H., Jr.

    1992-01-01

    A simple technique was developed using conventional finite element analysis to determine stress intensity factors, K1 and K2, for interface cracks under mixed-mode loading. The technique involves the calculation of crack tip stresses using non-singular finite elements; these stresses are then combined and used in a linear regression procedure to calculate K1 and K2. The technique was demonstrated by calculating K1 and K2 for three different bimaterial combinations. For the normal loading case, the K's were within 2.6 percent of an exact solution. The normalized K's under shear loading were shown to be related to the normalized K's under normal loading. Based on these relations, a simple equation was derived for calculating K1 and K2 under mixed-mode loading from knowledge of the K's under normal loading. The equation was verified by computing the K's for a mixed-mode case with equal normal and shear loading; the correlation between exact and finite element solutions is within 3.7 percent. This study provides a simple procedure to compute the K2/K1 ratio, which has been used to characterize the stress state at the crack tip for various combinations of materials and loadings. Tests conducted over a range of K2/K1 ratios could be used to fully characterize interface fracture toughness.
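
    The regression step can be illustrated with a homogeneous toy problem: stresses sampled ahead of the crack tip follow sigma = K / sqrt(2*pi*r), so a least-squares fit against the known 1/sqrt(r) basis recovers K. The interface-crack case couples K1 and K2 through an oscillatory index, which this sketch deliberately omits; all numbers are synthetic.

    ```python
    # Hedged sketch: recovering a stress intensity factor by linear regression.
    import numpy as np

    K1_true = 2.0e6                            # Pa*sqrt(m), assumed value
    r = np.linspace(0.5e-3, 5e-3, 20)          # sampling radii ahead of the tip
    noise = 1 + 0.02*np.random.default_rng(7).normal(size=r.size)
    sigma_yy = K1_true / np.sqrt(2*np.pi*r) * noise   # "computed" tip stresses

    basis = 1.0 / np.sqrt(2*np.pi*r)           # known radial dependence
    K1_fit = (basis @ sigma_yy) / (basis @ basis)     # one-parameter least squares
    print(f"recovered K1 = {K1_fit:.3e} Pa*sqrt(m)")
    ```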

  4. Efficient QoS-aware Service Composition

    NASA Astrophysics Data System (ADS)

    Alrifai, Mohammad; Risse, Thomas

    Web service composition requests are usually combined with end-to-end QoS requirements, which are specified in terms of non-functional properties (e.g. response time, throughput and price). The goal of QoS-aware service composition is to find the best combination of services such that their aggregated QoS values meet these end-to-end requirements. Local selection techniques are very efficient but fall short in handling global QoS constraints. Global optimization techniques, on the other hand, can handle global constraints, but their poor performance renders them inappropriate for applications with dynamic and real-time requirements. In this paper we address this problem and propose a solution that combines global optimization with local selection techniques to achieve better performance. The proposed solution consists of two steps: first we use mixed integer linear programming (MILP) to find the optimal decomposition of global QoS constraints into local constraints; second, we use local search to find the best web services that satisfy these local constraints. Unlike existing MILP-based global planning solutions, the size of the MILP model in our case is much smaller and independent of the number of available services, which yields faster computation and better scalability. Preliminary experiments have been conducted to evaluate the performance of the proposed solution.

  5. Constructing an Efficient Self-Tuning Aircraft Engine Model for Control and Health Management Applications

    NASA Technical Reports Server (NTRS)

    Armstrong, Jeffrey B.; Simon, Donald L.

    2012-01-01

    Self-tuning aircraft engine models can be applied for control and health management applications. The self-tuning feature of these models minimizes the mismatch between any given engine and the underlying engineering model describing an engine family. This paper provides details of the construction of a self-tuning engine model centered on a piecewise linear Kalman filter design. Starting from a nonlinear transient aerothermal model, a piecewise linear representation is first extracted. The linearization procedure creates a database of trim vectors and state-space matrices that are subsequently scheduled for interpolation based on engine operating point. A series of steady-state Kalman gains can next be constructed from a reduced-order form of the piecewise linear model. Reduction of the piecewise linear model to an observable dimension with respect to available sensed engine measurements can be achieved using either a subset or an optimal linear combination of "health" parameters, which describe engine performance. The resulting piecewise linear Kalman filter is then implemented for faster-than-real-time processing of sensed engine measurements, generating outputs appropriate for trending engine performance, estimating both measured and unmeasured parameters for control purposes, and performing on-board gas-path fault diagnostics. Computational efficiency is achieved by designing multidimensional interpolation algorithms that exploit the shared scheduling of multiple trim vectors and system matrices. An example application illustrates the accuracy of a self-tuning piecewise linear Kalman filter model when applied to a nonlinear turbofan engine simulation. Additional discussions focus on the issue of transient response accuracy and the advantages of a piecewise linear Kalman filter in the context of validation and verification. The techniques described provide a framework for constructing efficient self-tuning aircraft engine models from complex nonlinear simulations.
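
    Two pieces of that construction lend themselves to a short sketch: interpolating the scheduled matrices and trim vectors by operating point, and computing a steady-state Kalman gain for the interpolated model. The two-point schedule, matrices, discretization, and noise covariances below are toy placeholders, not engine data.

    ```python
    # Hedged sketch: piecewise linear model interpolation plus a steady-state
    # Kalman gain (all values are toy placeholders).
    import numpy as np
    from scipy.linalg import solve_discrete_are

    ops = np.array([0.0, 1.0])                       # operating-point schedule
    A_sched = np.array([[[-1.0, 0.1], [0.0, -2.0]],
                        [[-1.5, 0.2], [0.1, -2.5]]])
    trim_sched = np.array([[1.0, 0.5], [1.4, 0.7]])  # scheduled trim vectors

    def interp_model(op):
        """Linearly interpolate the scheduled A matrix and trim vector."""
        w = np.clip((op - ops[0]) / (ops[-1] - ops[0]), 0.0, 1.0)
        return ((1 - w)*A_sched[0] + w*A_sched[1],
                (1 - w)*trim_sched[0] + w*trim_sched[1])

    A, x_trim = interp_model(0.3)
    Ad = np.eye(2) + 0.02 * A                  # crude discretization, dt = 0.02 s
    C = np.array([[1.0, 0.0]])                 # one sensed measurement
    Q, R = 0.01*np.eye(2), np.array([[0.1]])   # assumed noise covariances
    P = solve_discrete_are(Ad.T, C.T, Q, R)    # estimation Riccati equation
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    print("trim at op 0.3:", x_trim, " steady-state Kalman gain:", K.ravel())
    ```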

  6. Orthogonal feeding techniques for tapered slot antennas

    NASA Technical Reports Server (NTRS)

    Lee, Richard Q.; Simons, Rainee N.

    1998-01-01

    For arrays in a "brick" configuration, there are electrical and mechanical advantages to feeding the antenna from a substrate perpendicular to the antenna substrate. Different techniques have been proposed for exciting patch antennas using such a feed structure. Recently, an aperture-coupled dielectric resonator antenna using a perpendicular feed substrate has been demonstrated to have very good power coupling efficiency. For a two-dimensional rectangular array with tapered slot antenna elements, a power-combining network on a perpendicular substrate is generally required to couple power to or from the array. In this paper, we describe two aperture-coupled techniques for coupling microwave power from a linearly tapered slot antenna (LTSA) to a microstrip feed on a perpendicular substrate. In addition, we present measured results for return losses and radiation patterns.

  7. Slaughterhouse Wastewater Treatment by Combined Chemical Coagulation and Electrocoagulation Process

    PubMed Central

    Bazrafshan, Edris; Kord Mostafapour, Ferdos; Farzadkia, Mehdi; Ownagh, Kamal Aldin; Mahvi, Amir Hossein

    2012-01-01

    Slaughterhouse wastewater contains various and high amounts of organic matter (e.g., proteins, blood, fat and lard). In order to produce an effluent suitable for stream discharge, chemical coagulation and electrocoagulation techniques have been particularly explored at the laboratory pilot scale for organic compounds removal from slaughterhouse effluent. The purpose of this work was to investigate the feasibility of treating cattle-slaughterhouse wastewater by combined chemical coagulation and electrocoagulation process to achieve the required standards. The influence of the operating variables such as coagulant dose, electrical potential and reaction time on the removal efficiencies of major pollutants was determined. The rate of removal of pollutants linearly increased with increasing doses of PACl and applied voltage. COD and BOD5 removal of more than 99% was obtained by adding 100 mg/L PACl and applied voltage 40 V. The experiments demonstrated the effectiveness of chemical and electrochemical techniques for the treatment of slaughterhouse wastewaters. Consequently, combined processes are inferred to be superior to electrocoagulation alone for the removal of both organic and inorganic compounds from cattle-slaughterhouse wastewater. PMID:22768233

  8. Slaughterhouse wastewater treatment by combined chemical coagulation and electrocoagulation process.

    PubMed

    Bazrafshan, Edris; Kord Mostafapour, Ferdos; Farzadkia, Mehdi; Ownagh, Kamal Aldin; Mahvi, Amir Hossein

    2012-01-01

    Slaughterhouse wastewater contains various and high amounts of organic matter (e.g., proteins, blood, fat and lard). In order to produce an effluent suitable for stream discharge, chemical coagulation and electrocoagulation techniques have been particularly explored at the laboratory pilot scale for organic compounds removal from slaughterhouse effluent. The purpose of this work was to investigate the feasibility of treating cattle-slaughterhouse wastewater by combined chemical coagulation and electrocoagulation process to achieve the required standards. The influence of the operating variables such as coagulant dose, electrical potential and reaction time on the removal efficiencies of major pollutants was determined. The rate of removal of pollutants linearly increased with increasing doses of PACl and applied voltage. COD and BOD(5) removal of more than 99% was obtained by adding 100 mg/L PACl and applied voltage 40 V. The experiments demonstrated the effectiveness of chemical and electrochemical techniques for the treatment of slaughterhouse wastewaters. Consequently, combined processes are inferred to be superior to electrocoagulation alone for the removal of both organic and inorganic compounds from cattle-slaughterhouse wastewater.

  9. Resonant-spin-ordering of vortex cores in interacting mesomagnets

    NASA Astrophysics Data System (ADS)

    Jain, Shikha

    2013-03-01

    Magnetic systems of interacting vortex-state elements have a dynamically reconfigurable ground state, characterized by the relative polarities and chiralities of the individual disks, and a correspondingly controllable spectrum of collective excitation modes that determines the microwave absorption of the crystal. The development of effective methods for dynamic control of the ground state in this vortex-type magnonic crystal is of interest from both fundamental and technological viewpoints. Control of vortex chirality has been demonstrated previously using various techniques; however, control and manipulation of vortex polarities remain challenging. In this work, we present a robust and efficient way of selecting the ground-state configuration of interacting magnetic elements using a resonant-spin-ordering approach. This is achieved by driving the system from the linear regime of constant vortex gyrations to the non-linear regime of vortex-core reversals at a fixed excitation frequency of one of the coupled modes. Subsequently reducing the excitation field to the linear regime stabilizes the system in a polarity combination whose resonant frequency is decoupled from the initialization frequency. We have utilized the resonant approach to transition between the two polarity combinations (parallel or antiparallel) in a model system of connected dot pairs, which may form the building blocks of vortex-based magnonic crystals. Taking a step further, we have extended the technique by studying a many-particle system for its potential as spin-torque oscillators or logic devices. Work at Argonne was supported by the U.S. DOE, Office of BES, under Contract No. DE-AC02-06CH11357. This work was in part supported by grant DMR-1015175 from the U.S. National Science Foundation and by a contract from the U.S. Army TARDEC and RDECOM.

  10. Discovering Optimum Method to Extract Depth Information for Nearshore Coastal Waters from SENTINEL-2A - Case Study: Nayband Bay, Iran

    NASA Astrophysics Data System (ADS)

    Kabiri, K.

    2017-09-01

    The capabilities of Sentinel-2A imagery to determine bathymetric information in shallow coastal waters were examined. Two Sentinel-2A images (acquired in February and March 2016 in calm weather and relatively low turbidity) were selected from Nayband Bay, located in the northern Persian Gulf. In addition, a precise and accurate bathymetric map of the study area was obtained and used both for calibrating the models and for validating the results. Traditional linear and ratio transform techniques, as well as a novel integrated method, were employed to determine depth values. All possible combinations of the three bands (Band 2: blue (458-523 nm), Band 3: green (543-578 nm), and Band 4: red (650-680 nm); spatial resolution: 10 m) were considered (11 options) for the traditional linear and ratio transform techniques, together with 10 model options for the integrated method. The accuracy of each model was assessed by comparing the determined bathymetric information with field-measured values; correlation coefficients (R²) and root mean square errors (RMSE) at the validation points were calculated for all models and both satellite images. Compared with the linear transform method, the ratio transformation using a combination of all three bands yielded more accurate results (R²(Mar) = 0.795, R²(Feb) = 0.777, RMSE(Mar) = 1.889 m, RMSE(Feb) = 2.039 m). Although most of the integrated transform methods (specifically the one including all bands and band ratios) yielded the highest accuracy, the improvements were not significant; hence the ratio transformation was selected as the optimum method.
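
    The band-ratio transform admits a compact sketch: depth is modeled as a linear function of the ratio of log-scaled reflectances in two bands, with the two coefficients calibrated against surveyed depths (the form popularized by Stumpf et al.). The reflectance model, scaling constant, and survey data below are synthetic assumptions.

    ```python
    # Hedged sketch: ratio-transform bathymetry calibration (synthetic data).
    import numpy as np

    rng = np.random.default_rng(8)
    depth_true = rng.uniform(1.0, 20.0, 300)             # surveyed depths, m
    blue = 0.12*np.exp(-0.08*depth_true) + 0.01          # toy band reflectances
    green = 0.10*np.exp(-0.12*depth_true) + 0.01

    n = 1000.0                                           # fixed scaling constant
    ratio = np.log(n*blue) / np.log(n*green)

    # Calibrate depth = m1*ratio + m0 against the survey by least squares
    Xd = np.column_stack([ratio, np.ones_like(ratio)])
    (m1, m0), *_ = np.linalg.lstsq(Xd, depth_true, rcond=None)

    pred = m1*ratio + m0
    rmse = np.sqrt(np.mean((pred - depth_true)**2))
    print(f"m1 = {m1:.1f}, m0 = {m0:.1f}, RMSE = {rmse:.2f} m")
    ```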

  11. A numerical algorithm for optimal feedback gains in high dimensional linear quadratic regulator problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1991-01-01

    A hybrid method for computing the feedback gains in the linear quadratic regulator problem is proposed. The method, which combines a Chandrasekhar-type system with a Newton-Kleinman iteration using variable-acceleration-parameter Smith schemes, is formulated to compute the feedback gains directly rather than through solutions of an associated Riccati equation. The hybrid method is particularly appropriate for large dimensional systems such as those arising in approximating infinite-dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of these ideas is presented.
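
    The Newton-Kleinman half of the hybrid can be shown in a few lines: each iteration solves a Lyapunov equation for the cost of the current stabilizing gain and then updates the gain. The system matrices and starting gain below are toy values, and the paper's Chandrasekhar initialization and Smith-scheme acceleration are omitted.

    ```python
    # Hedged sketch: Newton-Kleinman iteration for the LQR gain (toy system).
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    A = np.array([[0.0, 1.0], [-2.0, -0.5]])
    B = np.array([[0.0], [1.0]])
    Q, R = np.eye(2), np.array([[1.0]])

    K = np.array([[0.0, 1.0]])            # initial gain; A - B K must be stable
    for _ in range(20):
        Ak = A - B @ K
        # Solve (A - B K)' P + P (A - B K) = -(Q + K' R K)
        P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ P)   # improved gain R^{-1} B' P
    print("converged feedback gain:", K.round(4))
    ```

    Each step costs one Lyapunov solve, which is what makes accelerating that solve (e.g., with Smith-type schemes) attractive for large systems.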

  12. An application of locally linear model tree algorithm with combination of feature selection in credit scoring

    NASA Astrophysics Data System (ADS)

    Siami, Mohammad; Gholamian, Mohammad Reza; Basiri, Javad

    2014-10-01

    Nowadays, credit scoring is one of the most important topics in the banking sector. Credit scoring models have been widely used to facilitate the process of credit assessment. In this paper, an application of the locally linear model tree algorithm (LOLIMOT) was tested to evaluate its performance in predicting customers' credit status. The algorithm was adapted to the credit scoring domain by means of data fusion and feature selection techniques. Two real-world credit data sets - Australian and German - from the UCI machine learning database were selected to demonstrate the performance of the new classifier. The analytical results indicate that the improved LOLIMOT significantly increases the prediction accuracy.

  13. Financial Distress Prediction using Linear Discriminant Analysis and Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Santoso, Noviyanti; Wibowo, Wahyu

    2018-03-01

    Financial distress is the early stage before bankruptcy. Bankruptcies caused by financial distress can be foreseen from the financial statements of a company. The ability to predict financial distress has become an important research topic because it can provide early warning for the company. In addition, predicting financial distress is also beneficial for investors and creditors. This research builds a financial distress prediction model for industrial companies in Indonesia by comparing the performance of Linear Discriminant Analysis (LDA) and the Support Vector Machine (SVM) combined with a variable selection technique. The results show that the prediction model based on hybrid Stepwise-SVM obtains a better balance among fitting ability, generalization ability and model stability than the other models.
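
    A minimal sketch of the comparison described above, using scikit-learn on synthetic data in place of the Indonesian financial statements; the forward SequentialFeatureSelector is a stand-in for the paper's stepwise variable selection, and all sizes and parameters are assumptions.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Stand-in for the financial-ratio data (features mimic financial indicators).
    X, y = make_classification(n_samples=300, n_features=12, n_informative=5,
                               random_state=0)

    svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    models = {
        "LDA": LinearDiscriminantAnalysis(),
        # Forward stepwise selection wrapped around the SVM ("Stepwise-SVM").
        "Stepwise-SVM": make_pipeline(
            SequentialFeatureSelector(svm, n_features_to_select=5,
                                      direction="forward"),
            StandardScaler(), SVC(kernel="rbf")),
    }
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name}: CV accuracy = {acc:.3f}")
    ```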

  14. Reduction of a linear complex model for respiratory system during Airflow Interruption.

    PubMed

    Jablonski, Ireneusz; Mroczka, Janusz

    2010-01-01

    The paper presents a methodology for reducing a complex model to its simpler version - an identifiable inverse model. Its main tool is a numerical procedure of sensitivity analysis (structural and parametric) applied to the forward linear equivalent designed for the conditions of the interrupter experiment. The final result - the reduced analog for the interrupter technique - is especially noteworthy, as it fills a major gap in occlusional measurements, which typically use simple one- or two-element physical representations. The proposed reduced electrical circuit, a structural combination of resistive, inertial and elastic properties, can be regarded as a candidate for reliable reconstruction and quantification (in the time and frequency domains) of the dynamical behavior of the respiratory system in response to a quasi-step excitation by valve closure.

  15. An analysis of a nonlinear instability in the implementation of a VTOL control system

    NASA Technical Reports Server (NTRS)

    Weber, J. M.

    1982-01-01

    The contributions to the nonlinear behavior and unstable response of the model-following yaw control system of a VTOL aircraft during hover were determined. The system was designed as a state-rate-feedback implicit model follower that provided yaw rate command/heading hold capability and used combined full-authority parallel and limited-authority series servo actuators to generate an input to the yaw reaction control system of the aircraft. Both linear and nonlinear system models, as well as describing-function linearization techniques, were used to determine the influence of input magnitude and bandwidth, series servo authority, and system bandwidth on the control system instability. Results of the analysis describe stability boundaries as a function of these system design characteristics.

  16. New bounds and estimates for porous media with a rigid perfectly plastic matrix

    NASA Astrophysics Data System (ADS)

    Bilger, Nicolas; Auslender, François; Bornert, Michel; Masson, Renaud

    We derive new rigorous bounds and self-consistent estimates for the effective yield surface of porous media with a rigid perfectly plastic matrix and a microstructure similar to Hashin's composite spheres assemblage. These results arise from a homogenisation technique that combines a pattern-based modelling for linear composite materials and a variational formulation for nonlinear media. To cite this article: N. Bilger et al., C. R. Mecanique 330 (2002) 127-132.

  17. A note on the effects of viscosity on the stability of a trailing-line vortex

    NASA Technical Reports Server (NTRS)

    Duck, Peter W.; Khorrami, Mehdi R.

    1992-01-01

    The linear stability of the Batchelor (1964) vortex is examined with emphasis on new viscous modes recently found numerically by Khorrami (1991). Unlike the previously reported inviscid modes of instability, these modes are destabilized by viscosity and exhibit small growth rates at large Reynolds numbers. The analysis presented here uses a combination of asymptotic and numerical techniques. The results confirm the existence of the additional modes of instability due to viscosity.

  18. Lenslet array processors.

    PubMed

    Glaser, I

    1982-04-01

    By combining a lenslet array with masks it is possible to obtain a noncoherent optical processor capable of computing in parallel generalized 2-D discrete linear transformations. We present here an analysis of such lenslet array processors (LAP). The effects of several errors, including optical aberrations, diffraction, vignetting, and geometrical and mask errors, are calculated, and guidelines for the optical design of LAP are derived. Using these results, both the ultimate and practical performances of LAP are compared with those of competing techniques.

  19. Dimensional accuracy of pickup implant impression: an in vitro comparison of novel modular versus standard custom trays.

    PubMed

    Simeone, Piero; Valentini, Pier Paolo; Pizzoferrato, Roberto; Scudieri, Folco

    2011-01-01

    The purpose of this in vitro study was to compare the dimensional accuracy of the pickup impression technique using a modular individual tray (MIT) and using a standard individual tray (ST) for multiple internal-connection implants. The roles of both material and geometric misfits were considered. First, because the MIT relies on the stiffness and elasticity of the acrylic resin material, a preliminary investigation of the resin volume contraction during curing and polymerization was done. Then, two sets of specimens were tested to compare the accuracy of the MIT (test group) to that of the ST (control group). The linear and angular displacements of the transfer copings were measured and compared during three different stages of the impression procedure. Experimental measurements were performed with a computerized coordinate measuring machine. The curing dynamics of the acrylic resin were strongly dependent on the physical properties of the acrylic material and the powder/liquid ratio. Specifically, an increase in the powder/liquid ratio accelerated resin polymerization (curing time decreased by 70%) and reduced the final volume contraction by 45%. However, the total shrinkage never exceeded the elastic limits of the material; hence, it did not affect the copings' stability. In the test group, linear errors were reduced by 55% and angular errors by 65%. Linear and angular displacements of the transfer copings were significantly reduced with the MIT technique, which led to higher dimensional accuracy than in the ST group. The MIT approach, in combination with a thin and uniform amount of acrylic resin in the pickup impression technique, showed no significant permanent distortions with multiple misaligned internal-connection implants compared to the ST technique.

  20. An SVM-Based Solution for Fault Detection in Wind Turbines

    PubMed Central

    Santos, Pedro; Villa, Luisa F.; Reñones, Aníbal; Bustillo, Andres; Maudes, Jesús

    2015-01-01

    Research into fault diagnosis in machines with a wide range of variable loads and speeds, such as wind turbines, is of great industrial interest. Analysis of the power signals emitted by wind turbines is insufficient on its own for the diagnosis of mechanical faults in their transmission chain; a successful diagnosis requires the inclusion of accelerometers to evaluate vibrations. This work presents a multi-sensory system for fault diagnosis in wind turbines, combined with a data-mining solution for the classification of the operational state of the turbine. The selected sensors are accelerometers, whose vibration signals are processed using angular resampling techniques, together with electrical, torque and speed measurements. Support vector machines (SVMs) are selected for the classification task, including two traditional and two promising new kernels. This multi-sensory system has been validated on a test-bed that simulates the real conditions of wind turbines with two fault typologies: misalignment and imbalance. Comparison of SVM performance with the results of artificial neural networks (ANNs) shows that the linear-kernel SVM outperforms the other kernels and the ANNs in terms of accuracy, training and tuning times. The suitability and superior performance of the linear SVM is also analyzed experimentally, leading to the conclusion that this data acquisition technique generates linearly separable datasets. PMID:25760051
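
    The angular resampling step mentioned above converts vibration samples taken uniformly in time into samples uniform in shaft angle, so that speed variations no longer smear fault-related spectral lines. A small self-contained sketch with a synthetic ramped-speed signal (all signal parameters are illustrative):

    ```python
    import numpy as np

    fs = 10_000.0                       # sample rate [Hz]
    t = np.arange(0, 2.0, 1 / fs)       # 2 s record
    speed_hz = 20.0 + 5.0 * t           # shaft speed ramps 20 -> 30 Hz (variable load)
    angle = 2 * np.pi * np.cumsum(speed_hz) / fs   # shaft angle [rad] vs time

    # Vibration locked to the 3rd shaft order plus noise.
    vib = np.sin(3 * angle) + 0.1 * np.random.default_rng(0).standard_normal(t.size)

    # Resample to N samples per revolution, uniform in angle.
    N = 256
    revs = angle[-1] / (2 * np.pi)
    angle_grid = np.linspace(0, angle[-1], int(revs) * N, endpoint=False)
    vib_angle = np.interp(angle_grid, angle, vib)

    # In the order spectrum the 3rd order appears as a sharp line, despite the
    # speed variation that would smear a plain time-domain FFT.
    spectrum = np.abs(np.fft.rfft(vib_angle)) / vib_angle.size
    orders = np.fft.rfftfreq(vib_angle.size, d=1.0 / N)   # cycles per revolution
    print("dominant order:", orders[np.argmax(spectrum[1:]) + 1])
    ```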

  1. Techniques for detumbling a disabled space base

    NASA Technical Reports Server (NTRS)

    Kaplan, M. H.

    1973-01-01

    Techniques and conceptual devices for carrying out detumbling operations are examined, and progress in the development of these concepts is discussed. Devices which reduce tumble to simple spin through active linear motion of a small mass are described, together with a Module for Automatic Dock and Detumble (MADD) that could perform an orbital transfer from the shuttle in order to track and dock at a preselected point on the distressed craft. Once docked, MADD could apply torques by firing thrusters to detumble the passive vehicle. Optimum combinations of mass-motion and external devices for various situations should be developed. The need for completely formulating the automatic control logic of MADD is also emphasized.

  2. Efficient Translation of LTL Formulae into Buchi Automata

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Lerda, Flavio

    2001-01-01

    Model checking is a fully automated technique for checking that a system satisfies a set of required properties. With explicit-state model checkers, properties are typically defined in linear-time temporal logic (LTL), and are translated into Büchi automata in order to be checked. This report presents how we have combined and improved existing techniques to obtain an efficient LTL to Büchi automata translator. In particular, we optimize the core of existing tableau-based approaches to generate significantly smaller automata. Our approach has been implemented and is being released as part of the Java PathFinder software (JPF), an explicit-state model checker under development at the NASA Ames Research Center.

  3. Generation of continuously rotating polarization by combining cross-polarizations and its application in surface structuring.

    PubMed

    Lam, Billy; Zhang, Jihua; Guo, Chunlei

    2017-08-01

    In this study, we develop a simple but highly effective technique that generates a continuously varying polarization within a laser beam. This is achieved by having orthogonal linear polarizations on each side of the beam. By simply focusing such a laser beam, we can attain a gradually and continuously changing polarization within the entire Rayleigh range due to diffraction. To demonstrate this polarization distribution, we apply this laser beam onto a metal surface and create a continuously rotating laser induced periodic surface structure pattern. This technique provides a very effective way to produce complex surface structures that may potentially find applications, such as polarization modulators and metasurfaces.

  4. A Fast Solver for Implicit Integration of the Vlasov--Poisson System in the Eulerian Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrett, C. Kristopher; Hauck, Cory D.

    In this paper, we present a domain decomposition algorithm to accelerate the solution of Eulerian-type discretizations of the linear, steady-state Vlasov equation. The steady-state solver then forms a key component in the implementation of fully implicit or nearly fully implicit temporal integrators for the nonlinear Vlasov--Poisson system. The solver relies on a particular decomposition of phase space that enables the use of sweeping techniques commonly used in radiation transport applications. The original linear system for the phase space unknowns is then replaced by a smaller linear system involving only unknowns on the boundary between subdomains, which can then be solved efficiently with Krylov methods such as GMRES. Steady-state solves are combined to form an implicit Runge--Kutta time integrator, and the Vlasov equation is coupled self-consistently to the Poisson equation via a linearized procedure or a nonlinear fixed-point method for the electric field. Finally, numerical results for standard test problems demonstrate the efficiency of the domain decomposition approach when compared to the direct application of an iterative solver to the original linear system.
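
    A toy version of the boundary (Schur complement) solve described above can be written with SciPy: the subdomain block is factored once, the interface operator is applied matrix-free, and GMRES runs only on the small interface system. The block sizes and sparsity pattern are arbitrary placeholders, not the paper's discretization.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import LinearOperator, gmres, splu

    # Eliminate the subdomain unknowns x1 of a block system [[A, B], [C, D]] and
    # solve (D - C A^{-1} B) x2 = f2 - C A^{-1} f1 with GMRES, applying S matrix-free.
    rng = np.random.default_rng(1)
    n1, n2 = 200, 40
    A = sp.eye(n1) * 4 + sp.random(n1, n1, density=0.02, random_state=1)
    B = sp.random(n1, n2, density=0.05, random_state=2)
    C = sp.random(n2, n1, density=0.05, random_state=3)
    D = sp.eye(n2) * 4

    Alu = splu(A.tocsc())                      # factor the subdomain block once

    def schur_matvec(x2):
        return D @ x2 - C @ Alu.solve(B @ x2)

    S = LinearOperator((n2, n2), matvec=schur_matvec)
    f1, f2 = rng.standard_normal(n1), rng.standard_normal(n2)
    rhs = f2 - C @ Alu.solve(f1)

    x2, info = gmres(S, rhs)                   # interface solve (info == 0 on success)
    x1 = Alu.solve(f1 - B @ x2)                # back-substitute interior unknowns
    print("GMRES converged:", info == 0)
    ```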

  5. Plasmonic modes in nanowire dimers: A study based on the hydrodynamic Drude model including nonlocal and nonlinear effects

    NASA Astrophysics Data System (ADS)

    Moeferdt, Matthias; Kiel, Thomas; Sproll, Tobias; Intravaia, Francesco; Busch, Kurt

    2018-02-01

    A combined analytical and numerical study of the modes in two distinct plasmonic nanowire systems is presented. The computations are based on a discontinuous Galerkin time-domain approach, and a fully nonlinear and nonlocal hydrodynamic Drude model for the metal is utilized. In the linear regime, these computations demonstrate the strong influence of nonlocality on the field distributions as well as on the scattering and absorption spectra. Based on these results, second-harmonic-generation efficiencies are computed over a frequency range that covers all relevant modes of the linear spectra. In order to interpret the physical mechanisms that lead to corresponding field distributions, the associated linear quasielectrostatic problem is solved analytically via conformal transformation techniques. This provides an intuitive classification of the linear excitations of the systems that is then applied to the full Maxwell case. Based on this classification, group theory facilitates the determination of the selection rules for the efficient excitation of modes in both the linear and nonlinear regimes. This leads to significantly enhanced second-harmonic generation via judiciously exploiting the system symmetries. These results regarding the mode structure and second-harmonic generation are of direct relevance to other nanoantenna systems.

  7. Performance of Ti-multilayer coated tool during machining of MDN431 alloyed steel

    NASA Astrophysics Data System (ADS)

    Badiger, Pradeep V.; Desai, Vijay; Ramesh, M. R.

    2018-04-01

    Turbine forgings and other components are required to have high resistance to corrosion and oxidation, which is why they are highly alloyed with Ni and Cr. Midhani manufactures one such material, MDN431, a hard-to-machine steel with high hardness and strength. PVD-coated inserts offer an answer to this problem through state-of-the-art coating of the WC tool. Machinability studies were carried out on MDN431 steel using uncoated and Ti-multilayer-coated WC tool inserts with the Taguchi optimization technique. In the present investigation, speed (398-625 rpm), feed (0.093-0.175 mm/rev), and depth of cut (0.2-0.4 mm) were varied according to a Taguchi L9 orthogonal array, and the cutting forces and surface roughness (Ra) were subsequently measured. The obtained results were optimized using the Taguchi technique for cutting forces and surface roughness, and a linear-fit regression model was developed for the combination of the input variables. The experimental results were compared, and the developed model was found to be adequate, as supported by proof trials. Cutting force and surface roughness depend linearly on speed, feed and depth of cut for the uncoated insert, whereas for the coated insert they depend inversely on speed and depth of cut. The machined surfaces produced by the coated and uncoated inserts during machining of MDN431 were studied using an optical profilometer.

  8. Stimulation and inhibition of bacterial growth by caffeine dependent on chloramphenicol and a phenolic uncoupler--a ternary toxicity study using microfluid segment technique.

    PubMed

    Cao, Jialan; Kürsten, Dana; Schneider, Steffen; Köhler, J Michael

    2012-10-01

    A droplet-based microfluidic technique for the fast generation of three-dimensional concentration spaces within nanoliter segments was introduced. The technique was applied to evaluate the effect of two selected antibiotic substances on the toxicity and activation of bacterial growth by caffeine. To this end, a three-dimensional concentration space was completely addressed by generating large sequences of about 1150 well-separated microdroplets containing 216 different combinations of concentrations. To evaluate the toxicity of the ternary mixtures, a time-resolved miniaturized optical double-endpoint detection unit, using a microflow-through fluorimeter and a two-channel microflow-through photometer, was used for the simultaneous analysis of changes in the endogenous cellular fluorescence signal and in the cell density of E. coli cultivated inside 500 nL microfluid segments. Both endpoints supplied similar results for the dose-related cellular response. Strong non-linear combination effects, concentration-dependent stimulation and the formation of activity summits on isobolographic maps were determined. The results reflect a complex response of growing bacterial cultures in dependence on the combined effectors. A strong caffeine-induced enhancement of bacterial growth was found at sublethal chloramphenicol and sublethal 2,4-dinitrophenol concentrations. The reliability of the method was proved by a high redundancy of fluidic experiments. The results indicate the importance of multi-parameter investigations for toxicological studies and prove the potential of the microsegmented flow technique for such requirements.

  9. Mixed H∞ and passive control for linear switched systems via hybrid control approach

    NASA Astrophysics Data System (ADS)

    Zheng, Qunxian; Ling, Youzhu; Wei, Lisheng; Zhang, Hongbin

    2018-03-01

    This paper investigates the mixed H∞ and passive control problem for linear switched systems based on a hybrid control strategy. To solve this problem, first, a new performance index is proposed. This performance index can be viewed as a mixed weighted H∞ and passivity performance. Then, hybrid controllers are used to stabilise the switched systems. The hybrid controllers consist of dynamic output-feedback controllers for every subsystem and state updating controllers at the switching instants. The design of the state updating controllers depends not only on the pre-switching and post-switching subsystems, but also on the measurable output signal. The hybrid controllers proposed in this paper include some existing ones as special cases. Combining the multiple Lyapunov functions approach with the average dwell time technique, new sufficient conditions are obtained. Under the new conditions, the closed-loop linear switched systems are globally uniformly asymptotically stable with a mixed H∞ and passivity performance index. Moreover, the desired hybrid controllers can be constructed by solving a set of linear matrix inequalities. Finally, a numerical example and a practical example are given.

  10. Investigation of the effects of external current systems on the MAGSAT data utilizing grid cell modeling techniques

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M. (Principal Investigator)

    1982-01-01

    The feasibility of modeling magnetic fields due to certain electrical currents flowing in the Earth's ionosphere and magnetosphere was investigated. A method was devised to carry out forward modeling of the magnetic perturbations that arise from space currents. The procedure utilizes a linear current element representation of the distributed electrical currents. The finite thickness elements are combined into loops which are in turn combined into cells having their base in the ionosphere. In addition to the extensive field modeling, additional software was developed for the reduction and analysis of the MAGSAT data in terms of the external current effects. Direct comparisons between the models and the MAGSAT data are possible.

  11. Is 3D true non linear traveltime tomography reasonable ?

    NASA Astrophysics Data System (ADS)

    Herrero, A.; Virieux, J.

    2003-04-01

    Data sets requiring 3D analysis tools in the context of seismic exploration (both onshore and offshore experiments) or natural seismicity (micro-seismicity surveys or post-event measurements) are more and more numerous. Classical linearized tomographies, and also earthquake localization codes, need an accurate 3D background velocity model. However, if the medium is complex and a priori information is not available, a 1D analysis is not able to provide an adequate background velocity image. Moreover, the design of the acquisition layouts is often intrinsically 3D, which renders even 2D approaches difficult, especially in natural seismicity cases. Thus, the solution relies on the use of a true non-linear 3D approach, which allows exploration of the model space and identification of an optimal velocity image. The problem then becomes practical, and its feasibility depends on the available computing resources (memory and time). In this presentation, we show that tackling a 3D traveltime tomography problem with an extensive non-linear approach, combining fast traveltime estimators based on level-set methods with optimization techniques such as a multiscale strategy, is feasible. Moreover, because the management of inhomogeneous inversion parameters is easier in a non-linear approach, we describe how to perform a joint non-linear inversion for the seismic velocities and the source locations.

  12. Phylogenetic mixtures and linear invariants for equal input models.

    PubMed

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
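
    For a fixed stationary distribution pi, the equal input model's rate matrix is Q = mu(1 pi^T - I), and because 1 pi^T is idempotent the transition matrix has a simple closed form. The sketch below, with an arbitrary pi, checks this closed form against a matrix exponential; it reduces to the Felsenstein 1981 model when pi has four entries.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Equal input model: off-diagonal rate into state j is mu * pi_j, so
    # Q = mu * (1 pi^T - I) and P(t) = e^{-mu t} I + (1 - e^{-mu t}) 1 pi^T.
    pi = np.array([0.1, 0.2, 0.3, 0.4])        # fixed stationary distribution
    mu, t = 1.0, 0.7
    k = pi.size

    Q = mu * (np.outer(np.ones(k), pi) - np.eye(k))
    P_expm = expm(Q * t)
    P_closed = (np.exp(-mu * t) * np.eye(k)
                + (1 - np.exp(-mu * t)) * np.outer(np.ones(k), pi))

    print("max deviation:", np.abs(P_expm - P_closed).max())   # ~1e-16
    # Within each column j, all off-diagonal entries equal (1 - e^{-mu t}) * pi_j,
    # the kind of linear structure that underlies the model's linear invariants.
    ```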

  13. Locally linear regression for pose-invariant face recognition.

    PubMed

    Chai, Xiujuan; Shan, Shiguang; Chen, Xilin; Gao, Wen

    2007-07-01

    The variation of facial appearance due to viewpoint (pose) degrades face recognition systems considerably, which is one of the bottlenecks in face recognition. One possible solution is to generate a virtual frontal view from any given nonfrontal view to obtain a virtual gallery/probe face. Following this idea, this paper proposes a simple but efficient novel locally linear regression (LLR) method, which generates the virtual frontal view from a given nonfrontal face image. We first justify the basic assumption of the paper that there exists an approximate linear mapping between a nonfrontal face image and its frontal counterpart. Then, by formulating the estimation of the linear mapping as a prediction problem, we present the regression-based solution, i.e., globally linear regression. To improve the prediction accuracy in the case of coarse alignment, LLR is further proposed. In LLR, we first perform dense sampling in the nonfrontal face image to obtain many overlapped local patches. Then, the linear regression technique is applied to each small patch for the prediction of its virtual frontal patch. Through the combination of all these patches, the virtual frontal view is generated. The experimental results on the CMU PIE database show a distinct advantage of the proposed method over the Eigen light-field method.

  14. Image quality improvement in cone-beam CT using the super-resolution technique.

    PubMed

    Oyama, Asuka; Kumagai, Shinobu; Arai, Norikazu; Takata, Takeshi; Saikawa, Yusuke; Shiraishi, Kenshiro; Kobayashi, Takenori; Kotoku, Jun'ichi

    2018-04-05

    This study was conducted to improve cone-beam computed tomography (CBCT) image quality using the super-resolution technique, a method of inferring a high-resolution image from a low-resolution image. This technique uses two matrices, so-called dictionaries, constructed respectively from high-resolution and low-resolution image bases. For this study, a CBCT image, as a low-resolution image, is represented as a linear combination of atoms, the image bases in the low-resolution dictionary. The corresponding super-resolution image was inferred by multiplying the coefficients with the high-resolution dictionary atoms extracted from planning CT images. To evaluate the proposed method, we computed the root mean square error (RMSE) and structural similarity (SSIM). The resulting RMSE and SSIM between the super-resolution images and the planning CT images were, respectively, as much as 0.81 and 1.29 times better than those obtained without using the super-resolution technique. In this way, the super-resolution technique improved the CBCT image quality.
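
    The dictionary-based inference described above can be sketched with a coupled pair of dictionaries and orthogonal matching pursuit: code the low-resolution patch over the low-res dictionary, then apply the same sparse coefficients to the high-res atoms. Random atoms and a simple average-pooling degradation stand in for dictionaries learned from planning-CT/CBCT patch pairs.

    ```python
    import numpy as np
    from sklearn.linear_model import orthogonal_mp

    rng = np.random.default_rng(0)
    n_atoms = 128
    D_high = rng.standard_normal((64, n_atoms))      # 8x8 high-res patch atoms

    # 2x2 average-pooling operator mapping 8x8 patches to 4x4 (the "degradation").
    pool = np.zeros((16, 64))
    for r in range(4):
        for c in range(4):
            for dr in range(2):
                for dc in range(2):
                    pool[r * 4 + c, (2 * r + dr) * 8 + (2 * c + dc)] = 0.25
    D_low = pool @ D_high                            # paired low-res dictionary

    coef_true = np.zeros(n_atoms)
    coef_true[[5, 40, 99]] = [1.5, -2.0, 0.8]        # ground-truth sparse code
    x_high = D_high @ coef_true                      # "unknown" high-res patch
    y_low = pool @ x_high                            # observed low-res patch

    norms = np.linalg.norm(D_low, axis=0)            # OMP prefers unit-norm atoms
    code = orthogonal_mp(D_low / norms, y_low, n_nonzero_coefs=3)
    x_sr = D_high @ (code / norms)                   # inferred super-resolution patch

    print("relative error:", np.linalg.norm(x_sr - x_high) / np.linalg.norm(x_high))
    ```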

  15. Techniques for the Enhancement of Linear Predictive Speech Coding in Adverse Conditions

    NASA Astrophysics Data System (ADS)

    Wrench, Alan A.

    Available from UMI in association with The British Library. Requires signed TDF. The Linear Prediction model was first applied to speech two and a half decades ago. Since then it has been the subject of intense research and continues to be one of the principal tools in the analysis of speech. Its mathematical tractability makes it a suitable subject for study and its proven success in practical applications makes the study worthwhile. The model is known to be unsuited to speech corrupted by background noise. This has led many researchers to investigate ways of enhancing the speech signal prior to Linear Predictive analysis. In this thesis this body of work is extended. The chosen application is low bit-rate (2.4 kbits/sec) speech coding. For this task the performance of the Linear Prediction algorithm is crucial because there is insufficient bandwidth to encode the error between the modelled speech and the original input. A review of the fundamentals of Linear Prediction and an independent assessment of the relative performance of methods of Linear Prediction modelling are presented. A new method is proposed which is fast and facilitates stability checking, however, its stability is shown to be unacceptably poorer than existing methods. A novel supposition governing the positioning of the analysis frame relative to a voiced speech signal is proposed and supported by observation. The problem of coding noisy speech is examined. Four frequency domain speech processing techniques are developed and tested. These are: (i) Combined Order Linear Prediction Spectral Estimation; (ii) Frequency Scaling According to an Aural Model; (iii) Amplitude Weighting Based on Perceived Loudness; (iv) Power Spectrum Squaring. These methods are compared with the Recursive Linearised Maximum a Posteriori method. Following on from work done in the frequency domain, a time domain implementation of spectrum squaring is developed. In addition, a new method of power spectrum estimation is developed based on the Minimum Variance approach. This new algorithm is shown to be closely related to Linear Prediction but produces slightly broader spectral peaks. Spectrum squaring is applied to both the new algorithm and standard Linear Prediction and their relative performance is assessed. (Abstract shortened by UMI.).
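
    The autocorrelation method of Linear Prediction that underlies such coders reduces to a symmetric Toeplitz (Yule-Walker) system, which SciPy can solve with a Levinson-type routine. A minimal sketch on a synthetic two-formant frame; all signal parameters are illustrative.

    ```python
    import numpy as np
    from scipy.linalg import solve_toeplitz

    rng = np.random.default_rng(0)
    fs, p = 8000, 10
    t = np.arange(2048) / fs
    # Synthetic voiced-like frame: two "formants" plus a little noise.
    x = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
    x = x + 0.01 * rng.standard_normal(x.size)
    x *= np.hamming(x.size)

    # Biased autocorrelation estimates r[0..p].
    r = np.array([np.dot(x[:x.size - k], x[k:]) for k in range(p + 1)]) / x.size

    a = solve_toeplitz(r[:p], r[1:p + 1])     # Levinson-style Toeplitz solve
    lpc = np.concatenate(([1.0], -a))         # A(z) = 1 - sum_k a_k z^{-k}
    residual_energy = r[0] - np.dot(a, r[1:p + 1])
    print("LPC polynomial:", np.round(lpc, 3), " residual energy:", residual_energy)
    ```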

  16. Electronic structure studies of La2CuO4

    NASA Astrophysics Data System (ADS)

    Wachs, A. L.; Turchi, P. E. A.; Jean, Y. C.; Wetzler, K. H.; Howell, R. H.; Fluss, M. J.; Harshman, D. R.; Remeika, J. P.; Cooper, A. S.; Fleming, R. M.

    1988-07-01

    We report results of positron-electron momentum-distribution measurements of single-crystal La2CuO4 using two-dimensional angular correlation of positron-annihilation-radiation techniques. The data contain two components: a large (~85%), isotropic corelike electron contribution and a remaining, anisotropic valence-electron contribution modeled using a linear combination of atomic orbitals-molecular orbital method and a localized ion scheme, within the independent-particle model approximation. This work suggests a ligand-field Hamiltonian to be justified for describing the electronic properties of perovskite materials.

  17. A method for the analysis of nonlinearities in aircraft dynamic response to atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Sidwell, K.

    1976-01-01

    An analytical method is developed which combines the equivalent linearization technique for the analysis of the response of nonlinear dynamic systems with the amplitude modulated random process (Press model) for atmospheric turbulence. The method is initially applied to a bilinear spring system. The analysis of the response shows good agreement with exact results obtained by the Fokker-Planck equation. The method is then applied to an example of control-surface displacement limiting in an aircraft with a pitch-hold autopilot.
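
    Equivalent (statistical) linearization replaces the nonlinear restoring force f(x) with k_eq*x, where minimizing the mean-square error under a zero-mean Gaussian response assumption gives k_eq = E[f(X)X]/E[X^2]. A sketch for a bilinear spring, with illustrative parameter values:

    ```python
    import numpy as np
    from scipy.integrate import quad

    k1, k2, d = 1.0, 0.3, 1.0       # initial stiffness, post-yield stiffness, knee

    def f(x):
        # Bilinear spring force: slope k1 inside |x| <= d, slope k2 beyond.
        return np.where(np.abs(x) <= d, k1 * x,
                        np.sign(x) * (k1 * d + k2 * (np.abs(x) - d)))

    def k_eq(sigma):
        # k_eq = E[f(X) X] / E[X^2] for X ~ N(0, sigma^2).
        phi = lambda x: np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
        num, _ = quad(lambda x: f(x) * x * phi(x), -10 * sigma, 10 * sigma)
        return num / sigma**2

    for sigma in (0.5, 1.0, 2.0):
        print(f"sigma = {sigma}: k_eq = {k_eq(sigma):.4f}")
    # Small responses stay on the k1 branch (k_eq -> k1); large ones approach k2.
    ```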

  18. Identification of temporal variations in mental workload using locally-linear-embedding-based EEG feature reduction and support-vector-machine-based clustering and classification techniques.

    PubMed

    Yin, Zhong; Zhang, Jianhua

    2014-07-01

    Identifying the abnormal changes of mental workload (MWL) over time is quite crucial for preventing the accidents due to cognitive overload and inattention of human operators in safety-critical human-machine systems. It is known that various neuroimaging technologies can be used to identify the MWL variations. In order to classify MWL into a few discrete levels using representative MWL indicators and small-sized training samples, a novel EEG-based approach by combining locally linear embedding (LLE), support vector clustering (SVC) and support vector data description (SVDD) techniques is proposed and evaluated by using the experimentally measured data. The MWL indicators from different cortical regions are first elicited by using the LLE technique. Then, the SVC approach is used to find the clusters of these MWL indicators and thereby to detect MWL variations. It is shown that the clusters can be interpreted as the binary class MWL. Furthermore, a trained binary SVDD classifier is shown to be capable of detecting slight variations of those indicators. By combining the two schemes, a SVC-SVDD framework is proposed, where the clear-cut (smaller) cluster is detected by SVC first and then a subsequent SVDD model is utilized to divide the overlapped (larger) cluster into two classes. Finally, three-class MWL levels (low, normal and high) can be identified automatically. The experimental data analysis results are compared with those of several existing methods. It has been demonstrated that the proposed framework can lead to acceptable computational accuracy and has the advantages of both unsupervised and supervised training strategies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  19. An option for delta-shaped gastroduodenostomy in totally laparoscopic distal gastrectomy for gastric cancer: A single-layer suturing technique for the stapler entry hole using knotless barbed sutures combined with the application of additional knotted sutures.

    PubMed

    Tokuhara, Takaya; Nakata, Eiji; Tenjo, Toshiyuki; Kawai, Isao; Kondo, Keisaku; Ueda, Hirofumi; Tomioka, Atsushi

    2018-01-01

    We report an option for delta-shaped gastroduodenostomy in totally laparoscopic distal gastrectomy (TLDG) for gastric cancer. We detail a single-layer suturing technique for the endoscopic linear stapler entry hole using knotless barbed sutures combined with the application of additional knotted sutures. From June 2013 to February 2017, we performed TLDG with delta-shaped gastroduodenostomy in 20 patients with gastric cancer. The linear stapler was closed and fired to attach the posterior walls of the remnant stomach and the duodenum together. After creating a good view of the greater curvature side of the entry hole for the stapler by retracting the knotted suture on the lesser curvature side toward the ventral side, we performed single-layer entire-thickness continuous suturing of this hole using a 15-cm-long barbed suture running from the greater curvature side to the lesser curvature side. We placed the second and third stitches between the seromuscular layer of the remnant stomach and the entire-thickness layer of the duodenum while suturing the duodenal mucosa as minutely as possible. In addition, we routinely added one or two entire-thickness knotted sutures at the site near the greater curvature side. We placed similar additional knotted sutures at the site with a broad pitch. TLDG with this reconstruction technique was successfully performed in all patients with no occurrences of anastomotic leakage or intraabdominal abscess around the anastomosis. It is suggested that this method can be one option for delta-shaped gastroduodenostomy in TLDG due to its cost-effectiveness and feasibility.

  1. Single-shot imaging with higher-dimensional encoding using magnetic field monitoring and concomitant field correction.

    PubMed

    Testud, Frederik; Gallichan, Daniel; Layton, Kelvin J; Barmet, Christoph; Welz, Anna M; Dewdney, Andrew; Cocosco, Chris A; Pruessmann, Klaas P; Hennig, Jürgen; Zaitsev, Maxim

    2015-03-01

    PatLoc (Parallel Imaging Technique using Localized Gradients) accelerates imaging and introduces a resolution variation across the field-of-view. Higher-dimensional encoding employs more spatial encoding magnetic fields (SEMs) than the corresponding image dimensionality requires, e.g. by applying two quadratic and two linear spatial encoding magnetic fields to reconstruct a 2D image. Images acquired with higher-dimensional single-shot trajectories can exhibit strong artifacts and geometric distortions. In this work, the source of these artifacts is analyzed and a reliable correction strategy is derived. A dynamic field camera was built for encoding field calibration. Concomitant fields of linear and nonlinear spatial encoding magnetic fields were analyzed. A combined basis consisting of spherical harmonics and concomitant terms was proposed and used for encoding field calibration and image reconstruction. A good agreement between the analytical solution for the concomitant fields and the magnetic field simulations of the custom-built PatLoc SEM coil was observed. Substantial image quality improvements were obtained using a dynamic field camera for encoding field calibration combined with the proposed combined basis. The importance of trajectory calibration for single-shot higher-dimensional encoding is demonstrated using the combined basis including spherical harmonics and concomitant terms, which treats the concomitant fields as an integral part of the encoding. © 2014 Wiley Periodicals, Inc.

  2. Diffuse Optical Tomography for Brain Imaging: Continuous Wave Instrumentation and Linear Analysis Methods

    NASA Astrophysics Data System (ADS)

    Giacometti, Paolo; Diamond, Solomon G.

    Diffuse optical tomography (DOT) is a functional brain imaging technique that measures cerebral blood oxygenation and blood volume changes. This technique is particularly useful in human neuroimaging measurements because of the coupling between neural and hemodynamic activity in the brain. DOT is a multichannel imaging extension of near-infrared spectroscopy (NIRS). NIRS uses laser sources and light detectors on the scalp to obtain noninvasive hemodynamic measurements from spectroscopic analysis of the remitted light. This review explains how NIRS data analysis is performed using a combination of the modified Beer-Lambert law (MBLL) and the diffusion approximation to the radiative transport equation (RTE). Laser diodes, photodiode detectors, and optical terminals that contact the scalp are the main components in most NIRS systems. Placing multiple sources and detectors over the surface of the scalp allows for tomographic reconstructions that extend the individual measurements of NIRS into DOT. Mathematically arranging the DOT measurements into a linear system of equations that can be inverted provides a way to obtain tomographic reconstructions of hemodynamics in the brain.
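
    The MBLL step reduces to a small linear solve: optical density changes at two wavelengths are a linear combination of the two hemoglobin concentration changes, weighted by extinction coefficients, pathlength, and the differential pathlength factor. In the sketch below every numeric value (extinction coefficients, DPFs, separation, OD changes) is an illustrative placeholder, not a tabulated constant.

    ```python
    import numpy as np

    # Modified Beer-Lambert law: delta_OD(lambda) =
    #   (eps_HHb(lambda)*d[HHb] + eps_HbO2(lambda)*d[HbO2]) * d * DPF(lambda)
    eps = np.array([[2.10, 0.35],                 # 690 nm: [eps_HHb, eps_HbO2]
                    [0.70, 1.00]])                # 830 nm  (assumed units 1/(mM*cm))
    dpf = np.array([6.0, 5.0])                    # differential pathlength factors
    d = 3.0                                       # source-detector separation [cm]

    delta_od = np.array([0.012, 0.018])           # measured OD changes

    A = eps * (d * dpf)[:, None]                  # 2x2 system matrix
    delta_c = np.linalg.solve(A, delta_od)        # [d[HHb], d[HbO2]]
    print(f"delta[HHb] = {delta_c[0]:.4f} mM, delta[HbO2] = {delta_c[1]:.4f} mM")
    ```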

  3. An in-depth stability analysis of nonuniform FDTD combined with novel local implicitization techniques

    NASA Astrophysics Data System (ADS)

    Van Londersele, Arne; De Zutter, Daniël; Vande Ginste, Dries

    2017-08-01

    This work focuses on efficient full-wave solutions of multiscale electromagnetic problems in the time domain. Three local implicitization techniques are proposed and carefully analyzed in order to relax the traditional time step limit of the Finite-Difference Time-Domain (FDTD) method on a nonuniform, staggered, tensor product grid: Newmark, Crank-Nicolson (CN) and Alternating-Direction-Implicit (ADI) implicitization. All of them are applied in preferable directions, alike Hybrid Implicit-Explicit (HIE) methods, as to limit the rank of the sparse linear systems. Both exponential and linear stability are rigorously investigated for arbitrary grid spacings and arbitrary inhomogeneous, possibly lossy, isotropic media. Numerical examples confirm the conservation of energy inside a cavity for a million iterations if the time step is chosen below the proposed, relaxed limit. Apart from the theoretical contributions, new accomplishments such as the development of the leapfrog Alternating-Direction-Hybrid-Implicit-Explicit (ADHIE) FDTD method and a less stringent Courant-like time step limit for the conventional, fully explicit FDTD method on a nonuniform grid, have immediate practical applications.

  4. Insights from Classifying Visual Concepts with Multiple Kernel Learning

    PubMed Central

    Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki

    2012-01-01

    Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques allow one to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on the benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (Accessed 2012 Jun 25). PMID:22936970
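
    The unweighted sum-kernel baseline mentioned above is easy to reproduce with a precomputed Gram matrix; learning the kernel weights is what full MKL adds. A sketch on a small public dataset, with the kernel choices and gamma values as arbitrary assumptions:

    ```python
    from sklearn.datasets import load_digits
    from sklearn.metrics.pairwise import chi2_kernel, rbf_kernel
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Two "views" of the data: an RBF kernel and a chi2 kernel, summed with
    # uniform (unlearned) weights.
    def sum_kernel(A, B):
        return rbf_kernel(A, B, gamma=1e-3) + chi2_kernel(A, B, gamma=1e-3)

    clf = SVC(kernel="precomputed")
    clf.fit(sum_kernel(X_tr, X_tr), y_tr)
    acc = clf.score(sum_kernel(X_te, X_tr), y_te)
    print(f"sum-kernel SVM accuracy: {acc:.3f}")
    ```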

  5. Fast global image smoothing based on weighted least squares.

    PubMed

    Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N

    2014-12-01

    This paper presents an efficient technique for performing spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a runtime comparable to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of the local filtering approaches. Our method also achieves high-quality results matching the state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. Besides, considering the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
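
    The 1D building block of this approach is a weighted least squares problem whose normal equations are tridiagonal and hence solvable in linear time. A sketch with edge-stopping weights derived from the signal itself; the weight function and parameters are illustrative choices, not the paper's exact formulation.

    ```python
    import numpy as np
    from scipy.linalg import solve_banded

    # Minimize sum_i (u_i - f_i)^2 + lam * sum_i w_i (u_{i+1} - u_i)^2;
    # the normal equations form a tridiagonal system.
    def wls_smooth_1d(f, lam=20.0, sigma=0.1):
        n = f.size
        w = np.exp(-np.abs(np.diff(f)) / sigma)      # small weight across edges
        diag = np.ones(n)
        diag[:-1] += lam * w
        diag[1:] += lam * w
        off = -lam * w
        ab = np.zeros((3, n))                        # banded storage
        ab[0, 1:] = off                              # superdiagonal
        ab[1] = diag
        ab[2, :-1] = off                             # subdiagonal
        return solve_banded((1, 1), ab, f)

    # Noisy step signal: smoothing flattens the noise but preserves the edge.
    rng = np.random.default_rng(0)
    f = np.concatenate([np.zeros(100), np.ones(100)]) + 0.1 * rng.standard_normal(200)
    u = wls_smooth_1d(f)
    print("noise std before/after:", f[:90].std().round(3), u[:90].std().round(3))
    ```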

  6. The design of a turboshaft speed governor using modern control techniques

    NASA Technical Reports Server (NTRS)

    Delosreyes, G.; Gouchoe, D. R.

    1986-01-01

    The objectives of this program were: to verify the model of off-schedule compressor variable geometry in the T700 turboshaft engine nonlinear model; to evaluate the use of the pseudo-random binary noise (PRBN) technique for obtaining engine frequency response data; and to design a high performance power turbine speed governor using modern control methods. Reduction of T700 engine test data generated at NASA-Lewis indicated that the off-schedule variable geometry effects were accurately modeled. Analysis also showed that the PRBN technique combined with the maximum likelihood model identification method produced a Bode frequency response that was as accurate as the response obtained from standard sinewave testing methods. The frequency response verified the accuracy of linear models consisting of engine partial derivatives used for design. A power turbine governor was designed using the Linear Quadratic Regulator (LQR) method of full state feedback control. A Kalman filter observer was used to estimate helicopter main rotor blade velocity. Compared to the baseline T700 power turbine speed governor, the LQR governor reduced droop by up to 25 percent for a 490 shaft horsepower transient in 0.1 sec simulating a wind gust, and by up to 85 percent for a 700 shaft horsepower transient in 0.5 sec simulating a large collective pitch angle transient.

  7. Efficient Kriging Algorithms

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess

    2011-01-01

    More efficient versions of an interpolation method, called kriging, have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best linear unbiased estimator and is suitable for interpolation of scattered data points. Kriging has long been used in the geostatistics and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in missing data from any single location. To arrive at the faster algorithms, a sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
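
    For context, a dense ordinary kriging solve looks as follows; it is exactly this O(n^3) linear system that tapering, FMM, and nearest-neighbor searching are meant to avoid. The Gaussian covariance model and the sample values are illustrative assumptions.

    ```python
    import numpy as np

    def gauss_cov(h, sill=1.0, rng_=2.0):
        # Gaussian covariance model as a function of separation distance h.
        return sill * np.exp(-(h / rng_) ** 2)

    def ordinary_kriging(xs, zs, x0):
        n = xs.size
        # Ordinary-kriging system: covariances plus the unbiasedness constraint
        # (Lagrange multiplier in the last row/column).
        K = np.ones((n + 1, n + 1))
        K[:n, :n] = gauss_cov(np.abs(xs[:, None] - xs[None, :]))
        K[n, n] = 0.0
        k = np.ones(n + 1)
        k[:n] = gauss_cov(np.abs(xs - x0))
        w = np.linalg.solve(K, k)
        return np.dot(w[:n], zs)           # weights sum to 1 by construction

    xs = np.array([0.0, 1.0, 2.5, 4.0])    # sample locations
    zs = np.array([1.0, 2.0, 0.5, 1.5])    # sampled values
    print("interpolated z(1.7) =", round(ordinary_kriging(xs, zs, 1.7), 4))
    ```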

  8. ParaExp Using Leapfrog as Integrator for High-Frequency Electromagnetic Simulations

    NASA Astrophysics Data System (ADS)

    Merkel, M.; Niyonzima, I.; Schöps, S.

    2017-12-01

    Recently, ParaExp was proposed for the time integration of linear hyperbolic problems. It splits the time interval of interest into subintervals and computes the solution on each subinterval in parallel. The overall solution is decomposed into a particular solution defined on each subinterval with zero initial conditions and a homogeneous solution propagated by the matrix exponential applied to the initial conditions. The efficiency of the method depends on fast approximations of this matrix exponential based on recent results from numerical linear algebra. This paper deals with the application of ParaExp in combination with Leapfrog to electromagnetic wave problems in time domain. Numerical tests are carried out for a simple toy problem and a realistic spiral inductor model discretized by the Finite Integration Technique.

  9. VizieR Online Data Catalog: HARPS timeseries data for HD41248 (Jenkins+, 2014)

    NASA Astrophysics Data System (ADS)

    Jenkins, J. S.; Tuomi, M.

    2017-05-01

    We modeled the HARPS radial velocities of HD 41248 by adopting the analysis techniques and the statistical model applied in Tuomi et al. (2014, arXiv:1405.2016). This model contains Keplerian signals, a linear trend, a moving average component with exponential smoothing, and linear correlations with activity indices, namely, BIS, FWHM, and the chromospheric activity S index. We applied our statistical model outlined above to the full data set of radial velocities for HD 41248, combining the previously published data in Jenkins et al. (2013ApJ...771...41J) with the newly published data in Santos et al. (2014, J/A+A/566/A35), giving rise to a total time series of 223 HARPS (Mayor et al. 2003Msngr.114...20M) velocities. (1 data file).

  10. [Research on the methods for multi-class kernel CSP-based feature extraction].

    PubMed

    Wang, Jinjia; Zhang, Lingzhi; Hu, Bei

    2012-04-01

    To relax the presumption of strictly linear patterns in common spatial patterns (CSP), we studied the kernel CSP (KCSP). A new multi-class KCSP (MKCSP) approach is proposed in this paper, which combines the kernel approach with the multi-class CSP technique. In this approach, we used kernel spatial patterns for each class against all others, and extracted signal components specific to one condition from EEG data sets of multiple conditions. Then we performed classification using a logistic linear classifier. Dataset IIIa from the Brain-Computer Interface (BCI) Competition III was used in the experiment. The experiment demonstrates that this approach can decompose raw EEG signals into spatial patterns extracted from multi-class single-trial EEG, and can obtain good classification results.

  11. Scanning electron microscopy combined with image processing technique: Analysis of microstructure, texture and tenderness in Semitendinous and Gluteus Medius bovine muscles.

    PubMed

    Pieniazek, Facundo; Messina, Valeria

    2016-11-01

    In this study, the effects of freeze drying on the microstructure, texture, and tenderness of Semitendinous and Gluteus Medius bovine muscles were analyzed by applying Scanning Electron Microscopy combined with image analysis. Samples were imaged by Scanning Electron Microscopy at different magnifications (250, 500, and 1,000×). Texture parameters were measured with a texture analyzer and by image analysis, and tenderness by Warner-Bratzler shear force. Significant differences (p < 0.05) were obtained for image and instrumental texture features, and a linear correlation was fitted between the instrumental and image features. Image texture features calculated from the Gray Level Co-occurrence Matrix (homogeneity, contrast, entropy, correlation and energy) at 1,000× in both muscles had high correlations with instrumental features (chewiness, hardness, cohesiveness, and springiness). Tenderness showed a positive correlation in both muscles with the image features energy and homogeneity. Combining Scanning Electron Microscopy with image analysis can be a useful tool for analyzing quality parameters in meat. SCANNING 38:727-734, 2016. © 2016 Wiley Periodicals, Inc.
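
    Several of the GLCM features named above (homogeneity, contrast, energy, correlation) can be computed with scikit-image; the sketch below runs on a synthetic 8-level image rather than an SEM micrograph, and the distance and angle choices are assumptions. Older scikit-image versions spell the functions greycomatrix/greycoprops.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    rng = np.random.default_rng(0)
    img = rng.integers(0, 8, size=(64, 64)).astype(np.uint8)   # 8 gray levels

    # Co-occurrence matrices at distance 1 for horizontal and vertical offsets.
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=8, symmetric=True, normed=True)
    for prop in ("homogeneity", "contrast", "energy", "correlation"):
        vals = graycoprops(glcm, prop).ravel()
        print(f"{prop}: {vals.round(4)}")
    ```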

  12. Combining experimental techniques with non-linear numerical models to assess the sorption of pesticides on soils

    NASA Astrophysics Data System (ADS)

    Magga, Zoi; Tzovolou, Dimitra N.; Theodoropoulou, Maria A.; Tsakiroglou, Christos D.

    2012-03-01

    The risk assessment of groundwater pollution by pesticides may be based on pesticide sorption and biodegradation kinetic parameters estimated with inverse modeling of datasets from either batch or continuous-flow soil column experiments. In the present work, a chemically non-equilibrium and non-linear 2-site sorption model is incorporated into solute transport models to invert the datasets of batch and soil column experiments, and to estimate the kinetic sorption parameters for two pesticides: N-phosphonomethyl glycine (glyphosate) and 2,4-dichlorophenoxy-acetic acid (2,4-D). When coupling the 2-site sorption model with the 2-region transport model, the soil column datasets enable us to estimate, in addition to the kinetic sorption parameters, the mass-transfer coefficients associated with solute diffusion between mobile and immobile regions. In order to improve the reliability of the models and of the kinetic parameter values, a stepwise strategy is required that combines batch and continuous-flow tests with adequate true-to-the-mechanism analytical or numerical models, and that decouples the kinetics of purely reactive sorption steps from physical mass-transfer processes.
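
    A batch-mode sketch of a 2-site nonlinear sorption model of the kind described: one site fraction equilibrates instantaneously with a Freundlich isotherm while the other is kinetically rate-limited. All parameter values are illustrative, and the initial aqueous concentration is taken as the value after the instantaneous sites have equilibrated.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Equilibrium sites: s1 = f*Kf*C^n.  Kinetic sites: ds2/dt = alpha*((1-f)*Kf*C^n - s2).
    Kf, n, f, alpha = 2.0, 0.8, 0.4, 0.05   # Freundlich Kf, exponent, eq. fraction, rate
    m, V, C0 = 5.0, 1.0, 10.0               # soil mass [g], volume [L], C0 [mg/L]

    def rhs(t, y):
        C, s2 = y
        # Mass balance V*dC/dt + m*(ds1/dt + ds2/dt) = 0 with s1 = f*Kf*C^n gives:
        ds2 = alpha * ((1 - f) * Kf * C**n - s2)
        dC = -m * ds2 / (V + m * f * Kf * n * C**(n - 1))
        return [dC, ds2]

    sol = solve_ivp(rhs, (0, 200), [C0, 0.0], rtol=1e-8)
    C_end, s2_end = sol.y[:, -1]
    print(f"C(200 h) = {C_end:.3f} mg/L, kinetic-site s2 = {s2_end:.3f} mg/g")
    ```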

  13. Semi-implicit integration factor methods on sparse grids for high-dimensional systems

    NASA Astrophysics Data System (ADS)

    Wang, Dongyong; Chen, Weitao; Nie, Qing

    2015-07-01

    Numerical methods for partial differential equations in high-dimensional spaces are often limited by the curse of dimensionality. Though the sparse grid technique, based on a one-dimensional hierarchical basis through tensor products, is popular for handling challenges such as those associated with spatial discretization, the stability conditions on time step size due to temporal discretization, such as those associated with high-order derivatives in space and stiff reactions, remain. Here, we incorporate the sparse grids with the implicit integration factor method (IIF) that is advantageous in terms of stability conditions for systems containing stiff reactions and diffusions. We combine IIF, in which the reaction is treated implicitly and the diffusion is treated explicitly and exactly, with various sparse grid techniques based on the finite element and finite difference methods and a multi-level combination approach. The overall method is found to be efficient in terms of both storage and computational time for solving a wide range of PDEs in high dimensions. In particular, the IIF with the sparse grid combination technique is flexible and effective in solving systems that may include cross-derivatives and non-constant diffusion coefficients. Extensive numerical simulations in both linear and nonlinear systems in high dimensions, along with applications of diffusive logistic equations and Fokker-Planck equations, demonstrate the accuracy, efficiency, and robustness of the new methods, indicating potential broad applications of the sparse grid-based integration factor method.

  14. Modeling Pan Evaporation for Kuwait by Multiple Linear Regression

    PubMed Central

    Almedeij, Jaber

    2012-01-01

    Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of available meteorological data of temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements of substantial continuity coverage, within a period of 17 years between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. Multiple linear regression technique is used with a procedure of variable selection for fitting the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed in order to linearize the existing curvilinear patterns of the data by using power and exponential functions, respectively. The evaporation models suggested with the best variable combinations were shown to produce results that are in a reasonable agreement with observation values. PMID:23226984

  15. Design and characterization of a linear Hencken-type burner

    NASA Astrophysics Data System (ADS)

    Campbell, M. F.; Bohlin, G. A.; Schrader, P. E.; Bambha, R. P.; Kliewer, C. J.; Johansson, K. O.; Michelsen, H. A.

    2016-11-01

    We have designed and constructed a Hencken-type burner that produces a 38-mm-long linear laminar partially premixed co-flow diffusion flame. This burner was designed to produce a linear flame for studies of soot chemistry, combining the benefit of the conventional Hencken burner's laminar flames with the advantage of the slot burner's geometry for optical measurements requiring a long interaction distance. It is suitable for measurements using optical imaging diagnostics, line-of-sight optical techniques, or off-axis optical-scattering methods requiring either a long or short path length through the flame. This paper presents details of the design and operation of this new burner. We also provide characterization information for flames produced by this burner, including relative flow-field velocities obtained using hot-wire anemometry, temperatures along the centerline extracted using direct one-dimensional coherent Raman imaging, soot volume fractions along the centerline obtained using laser-induced incandescence and laser extinction, and transmission electron microscopy images of soot thermophoretically sampled from the flame.

  16. Linear and nonlinear interpretation of the direct strike lightning response of the NASA F106B thunderstorm research aircraft

    NASA Technical Reports Server (NTRS)

    Rudolph, T. H.; Perala, R. A.

    1983-01-01

    The objective of the work reported here is to develop a methodology by which electromagnetic measurements of inflight lightning strike data can be understood and extended to other aircraft. A linear and time invariant approach based on a combination of Fourier transform and three dimensional finite difference techniques is demonstrated. This approach can obtain the lightning channel current in the absence of the aircraft for given channel characteristic impedance and resistive loading. The model is applied to several measurements from the NASA F106B lightning research program. A non-linear three dimensional finite difference code has also been developed to study the response of the F106B to a lightning leader attachment. This model includes three species air chemistry and fluid continuity equations and can incorporate an experimentally based streamer formulation. Calculated responses are presented for various attachment locations and leader parameters. The results are compared qualitatively with measured inflight data.

  17. Secretory immunoglobulin purification from whey by chromatographic techniques.

    PubMed

    Matlschweiger, Alexander; Engelmaier, Hannah; Himmler, Gottfried; Hahn, Rainer

    2017-08-15

    Secretory immunoglobulins (SIg) are a major fraction of the mucosal immune system and represent potential drug candidates. So far, platform technologies for their purification do not exist. SIg from animal whey was used as a model to develop a simple, efficient and potentially generic chromatographic purification process. Several chromatographic stationary phases were tested. A combination of two anion-exchange steps resulted in the highest purity. The key step was the use of a small-pore anion exchanger operated in flow-through mode. Diffusion of SIg into the resin particles was significantly hindered, while the main impurities, IgG and serum albumin, were bound. In this step, initial purity was increased from 66% to 89% with a step yield of 88%. In a second anion-exchange step using giga-porous material, SIg was captured and purified by step or linear gradient elution to obtain fractions with purities >95%. For the step gradient elution, the step yield of highly pure SIg was 54%. Elution of SIgA and SIgM with a linear gradient resulted in step yields of 56% and 35%, respectively. Overall yields for both anion-exchange steps were 43% for the combination of flow-through and step elution mode. Combination of flow-through and linear gradient elution mode resulted in a yield of 44% for SIgA and 39% for SIgM. The proposed process allows the purification of biologically active SIg from animal whey at preparative scale. For future applications, the process can easily be adopted for purification of recombinant secretory immunoglobulin species. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Analysis of calibration materials to improve dual-energy CT scanning for petrophysical applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ayyalasomavaiula, K.; McIntyre, D.; Jain, J.

    2011-01-01

    Dual-energy CT scanning is a rapidly emerging imaging technique employed in the non-destructive evaluation of various materials. Although CT (Computerized Tomography) has been used for characterizing rocks and for visualizing and quantifying multiphase flow through rocks for over 25 years, most of the scanning is done at a voltage setting above 100 kV to take advantage of the Compton scattering (CS) effect, which responds to density changes. Below 100 kV the photoelectric effect (PE) is dominant; it responds to the effective atomic number (Zeff), which is directly related to the photoelectric factor. Using the combination of the two effects helps in better characterization of reservoir rocks. The most common technique for dual-energy CT scanning relies on homogeneous calibration standards to produce the most accurate decoupled data. However, the use of calibration standards with impurities increases the probability of error in the reconstructed data and results in poor rock characterization. This work combines the ICP-OES (inductively coupled plasma optical emission spectroscopy) and LIBS (laser-induced breakdown spectroscopy) analytical techniques to quantify the type and level of impurities in a set of commercially purchased calibration standards used in dual-energy scanning. Zeff values for the calibration standards, with and without the impurity data, were calculated from the weighted linear combination of the various elements present and used in the dual-energy calculation of Zeff. Results show a 2 to 5% difference in predicted Zeff values, which may affect the corresponding log calibrations. The effect that these techniques have on improving material-identification data is discussed and analyzed. The workflow developed in this paper will translate into more accurate material-identification estimates for unknown samples and improved calibration of well-logging tools.
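
    A sketch of the Zeff computation referred to above, using the common power-law mixture rule Zeff = (sum_i f_i Z_i^m)^(1/m) over electron fractions f_i; the exponent m of about 2.94 and the example compositions are assumptions for illustration, not the paper's standards.

```python
# Hedged sketch: effective atomic number from a weighted linear combination
# of the elements present; exponent and compositions are illustrative.
def z_eff(composition, m=2.94):
    """composition: list of (mass_fraction, Z, A) for each element."""
    # electron contribution of element i is proportional to w_i * Z_i / A_i
    ne = [(w * Z / A, Z) for w, Z, A in composition]
    total = sum(f for f, _ in ne)
    return (sum((f / total) * Z**m for f, Z in ne)) ** (1.0 / m)

# pure quartz (SiO2) vs. quartz with a 2% iron impurity (by mass)
sio2    = [(0.4674, 14, 28.09), (0.5326, 8, 16.00)]
sio2_fe = [(0.98 * 0.4674, 14, 28.09), (0.98 * 0.5326, 8, 16.00), (0.02, 26, 55.85)]
print(f"Zeff pure SiO2:  {z_eff(sio2):.3f}")
print(f"Zeff with 2% Fe: {z_eff(sio2_fe):.3f}")
```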

  19. Techniques for Single System Integration of Elastic Simulation Features

    NASA Astrophysics Data System (ADS)

    Mitchell, Nathan M.

    Techniques for simulating the behavior of elastic objects have matured considerably over the last several decades, tackling diverse problems from non-linear models for incompressibility to accurate self-collisions. Alongside these contributions, advances in parallel hardware design and algorithms have made simulation more efficient and affordable than ever before. However, prior research often has had to commit to design choices that compromise certain simulation features to better optimize others, resulting in a fragmented landscape of solutions. For complex, real-world tasks, such as virtual surgery, a holistic approach is desirable, where complex behavior, performance, and ease of modeling are supported equally. This dissertation caters to this goal in the form of several interconnected threads of investigation, each of which contributes a piece of a unified solution. First, it will be demonstrated how various non-linear materials can be combined with lattice deformers to yield simulations with behavioral richness and a high potential for parallelism. This potential will be exploited to show how a hybrid solver approach based on large macroblocks can accelerate the convergence of these deformers. Further extensions of the lattice concept with non-manifold topology will allow for efficient processing of self-collisions and topology change. Finally, these concepts will be explored in the context of a case study on virtual plastic surgery, demonstrating a real-world problem space where these ideas can be combined to build an expressive authoring tool, allowing surgeons to record procedures digitally for future reference or education.

  20. A comparison between index of entropy and catastrophe theory methods for mapping groundwater potential in an arid region.

    PubMed

    Al-Abadi, Alaa M; Shahid, Shamsuddin

    2015-09-01

    In this study, index-of-entropy and catastrophe-theory methods were used for demarcating groundwater potential in an arid region using weighted linear combination techniques in a geographical information system (GIS) environment. A case study from the Badra area in the eastern part of central Iraq was analyzed and discussed. Six factors believed to influence groundwater occurrence, namely elevation, slope, aquifer transmissivity and storativity, soil, and distance to faults, were prepared as raster thematic layers to facilitate integration into the GIS environment. The factors were chosen based on the availability of data and the local conditions of the study area. Both techniques were used for computing the weights and assigning the ranks required for applying the weighted linear combination approach. The results of both models indicated that the most influential groundwater occurrence factors were slope and elevation. The other factors have relatively smaller weights, implying that they play a minor role in groundwater occurrence. The groundwater potential index (GPI) values for both models were classified using the natural-breaks classification scheme into five categories: very low, low, moderate, high, and very high. For validation of the generated GPI, relative operating characteristic (ROC) curves were used. According to the obtained area under the curve, the catastrophe model, with 78% prediction accuracy, was found to perform better than the entropy model, with 77% prediction accuracy. The overall results indicated that both models have good capability for predicting groundwater potential zones.
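
    The weighted-linear-combination step itself is simple enough to sketch; the layers, weights, and quantile binning below are illustrative assumptions (the paper derives the weights from the entropy and catastrophe models and uses natural-breaks classification).

```python
# Hedged sketch of the weighted linear combination (WLC) step for a
# groundwater potential index; all rasters and weights are illustrative.
import numpy as np

rng = np.random.default_rng(1)
shape = (100, 100)
layers = {name: rng.random(shape) for name in
          ["elevation", "slope", "transmissivity", "storativity", "soil", "dist_fault"]}
weights = {"elevation": 0.30, "slope": 0.35, "transmissivity": 0.12,
           "storativity": 0.08, "soil": 0.08, "dist_fault": 0.07}

gpi = sum(weights[k] * layers[k] for k in layers)   # weighted linear combination
# five classes; quantile bins stand in for the natural-breaks scheme
edges = np.quantile(gpi, [0.2, 0.4, 0.6, 0.8])
classes = np.digitize(gpi, edges)                   # 0 = very low ... 4 = very high
print(np.bincount(classes.ravel()))
```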

  1. Combining Biomarkers Linearly and Nonlinearly for Classification Using the Area Under the ROC Curve

    PubMed Central

    Fong, Youyi; Yin, Shuxin; Huang, Ying

    2016-01-01

    In biomedical studies, it is often of interest to classify/predict a subject’s disease status based on a variety of biomarker measurements. A commonly used classification criterion is based on the AUC, the area under the receiver operating characteristic curve. Many methods have been proposed to optimize approximated empirical AUC criteria, but there are two limitations to the existing methods. First, most methods are only designed to find the best linear combination of biomarkers, which may not perform well when there is strong nonlinearity in the data. Second, many existing linear combination methods use gradient-based algorithms to find the best marker combination, which often result in sub-optimal local solutions. In this paper, we address these two problems by proposing a new kernel-based AUC optimization method called Ramp AUC (RAUC). This method approximates the empirical AUC loss function with a ramp function, and finds the best combination by a difference-of-convex-functions algorithm. We show that as a linear combination method, RAUC leads to a consistent and asymptotically normal estimator of the linear marker combination when the data is generated from a semiparametric generalized linear model, just as the Smoothed AUC method (SAUC). Through simulation studies and real data examples, we demonstrate that RAUC outperforms SAUC in finding the best linear marker combinations, and can successfully capture nonlinear patterns in the data to achieve better classification performance. We illustrate our method with a dataset from a recent HIV vaccine trial. PMID:27058981
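
    The ramp idea can be sketched compactly. Below, the pairwise 0-1 AUC indicator is replaced by a ramp function of the score difference, and a two-marker linear combination is tuned to maximize the smoothed empirical AUC. Nelder-Mead stands in for the paper's difference-of-convex algorithm, and the ramp width is an assumption.

```python
# Hedged sketch of a ramp-smoothed empirical AUC for a linear combination.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
X1 = rng.normal(1.0, 1.0, (200, 2))   # biomarkers, diseased subjects
X0 = rng.normal(0.0, 1.0, (300, 2))   # biomarkers, non-diseased subjects

def ramp(t, s=0.5):                   # piecewise-linear surrogate of 1{t > 0}
    return np.clip(t / s + 0.5, 0.0, 1.0)

def neg_smoothed_auc(w):
    d = X1 @ w                        # scores for cases
    h = X0 @ w                        # scores for controls
    return -ramp(d[:, None] - h[None, :]).mean()

res = minimize(neg_smoothed_auc, x0=[1.0, 0.0], method="Nelder-Mead")
w = res.x / np.linalg.norm(res.x)     # combination identifiable only up to scale
emp_auc = ((X1 @ w)[:, None] > (X0 @ w)[None, :]).mean()
print("weights:", w, "empirical AUC:", emp_auc)
```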

  2. Detailed gravity anomalies from GEOS-3 satellite altimetry data

    NASA Technical Reports Server (NTRS)

    Gopalapillai, G. S.; Mourad, A. G.

    1978-01-01

    A technique for deriving mean gravity anomalies from dense altimetry data was developed. A combination of both deterministic and statistical techniques was used. The basic mathematical model was based on the Stokes equation, which describes the analytical relationship between mean gravity anomalies and the geoid undulation at a point; this undulation is a linear function of the altimetry data at that point. The overdetermined problem resulting from the abundant altimetry data available was solved using least-squares principles. These principles enable the simultaneous estimation of the associated standard deviations, reflecting the internal consistency based on the accuracy estimates provided for the altimetry data as well as for the terrestrial anomaly data. Several test computations were made of the anomalies and their accuracy estimates using GEOS-3 data.

  3. New Approach To Hour-By-Hour Weather Forecast

    NASA Astrophysics Data System (ADS)

    Liao, Q. Q.; Wang, B.

    2017-12-01

    Fine hourly forecasts at single weather stations are required in many production and everyday applications. Most previous MOS (Model Output Statistics) approaches, based on linear regression, struggle with the nonlinear nature of weather prediction, and forecast accuracy has been insufficient at high temporal resolution. This study predicts future meteorological elements, including temperature, precipitation, relative humidity and wind speed, in a local region over a relatively short period at the hourly level. Using hour-by-hour NWP (Numerical Weather Prediction) meteorological fields from Forcastio (https://darksky.net/dev/docs/forecast) and real-time instrumental observations from 29 stations in Yunnan and 3 stations in Tianjin, China, from June to October 2016, hour-by-hour predictions are made up to 24 hours ahead. This study presents an ensemble approach that combines the information in the instrumental observations themselves with NWP. An autoregressive-moving-average (ARMA) model is used to predict future values of the observation time series. The newest NWP products are fed into equations derived from the multiple linear regression MOS technique. The residual series of the MOS outputs is handled with an autoregressive (AR) model, exploiting the linear structure present in the time series; because of the complex, non-linear character of atmospheric flow, a support vector machine (SVM) is also introduced. Basic data quality control and cross-validation make it possible to optimize the model parameters and to perform 24-hour-ahead residual reduction with the AR/SVM models. Results show that the AR technique outperforms the corresponding multivariate MOS regression, especially in the first 4 hours when the predictand is temperature. The combined MOS-AR model, which is comparable to the MOS-SVM model, outperforms MOS alone; the root mean square error and correlation coefficient for 2 m temperature reach 1.6 degrees Celsius and 0.91, respectively. The fraction of 24-hour forecasts with a deviation of no more than 2 degrees Celsius is 78.75% for the MOS-AR model and 81.23% for the AR model.
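
    The residual-correction step lends itself to a compact illustration. The sketch below, on synthetic hourly data, fits a linear MOS regression from NWP temperature to observations and then forecasts the MOS residuals 24 hours ahead with an AR model from statsmodels; the toy data generator and all numbers are assumptions.

```python
# Hedged sketch of MOS (linear regression on NWP output) plus AR residual
# correction for the next 24 hours; data are synthetic stand-ins.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(3)
n = 24 * 120                                    # about 120 days of hourly data
nwp_T = 20 + 8 * np.sin(2 * np.pi * np.arange(n) / 24) + rng.normal(0, 1.0, n)
obs_T = 1.5 + 0.9 * nwp_T + np.convolve(rng.normal(0, 0.8, n), [0.6, 0.3, 0.1], "same")

# MOS step: ordinary least squares, obs ~ nwp
A = np.column_stack([np.ones(n), nwp_T])
b0, b1 = np.linalg.lstsq(A, obs_T, rcond=None)[0]
resid = obs_T - (b0 + b1 * nwp_T)

# AR step: model the MOS residual series and forecast 24 h ahead
ar = AutoReg(resid, lags=6).fit()
corr = ar.forecast(steps=24)

# corrected forecast; the last 24 NWP values stand in for the next day's NWP
forecast_24h = b0 + b1 * nwp_T[-24:] + corr
print(forecast_24h[:4])
```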

  4. Overcoming learning barriers through knowledge management.

    PubMed

    Dror, Itiel E; Makany, Tamas; Kemp, Jonathan

    2011-02-01

    The ability to learn depends strongly on how knowledge is managed. Specifically, different techniques for note-taking utilize different cognitive processes and strategies. In this paper, we compared dyslexic and control participants using linear and non-linear note-taking. All our participants were professionals working in the banking and financial sector. We examined comprehension, accuracy, mental imagery and complexity, metacognition, and memory. We found that participants with dyslexia, when using a non-linear note-taking technique, outperformed the control group using linear note-taking and matched the performance of the control group using non-linear note-taking. These findings emphasize how different knowledge management techniques can remove some of the barriers faced by learners. Copyright © 2010 John Wiley & Sons, Ltd.

  5. Polarized radiation diagnostics of stellar magnetic fields

    NASA Astrophysics Data System (ADS)

    Mathys, Gautier

    The main techniques used to diagnose magnetic fields in stars from polarimetric observations are presented. First, a summary of the physics of spectral line formation in the presence of a magnetic field is given. Departures from the simple case of linear Zeeman effect are briefly considered: partial Paschen-Back effect, contribution of hyperfine structure, and combined Stark and Zeeman effects. Important approximate solutions of the equation of transfer of polarized light in spectral lines are introduced. The procedure for disk-integration of emergent Stokes profiles, which is central to stellar magnetic field studies, is described, with special attention to the treatment of stellar rotation. This formalism is used to discuss the determination of the mean longitudinal magnetic field (through the photographic technique and through Balmer line photopolarimetry). This is done within the specific framework of Ap stars, which, with their unique large-scale organized magnetic fields, are an ideal laboratory for studies of stellar magnetism. Special attention is paid to those Ap stars whose magnetically split line components are resolved in high-dispersion Stokes I spectra, and to the determination of their mean magnetic field modulus. Various techniques of exploitation of the information contained in polarized spectral line profiles are reviewed: the moment technique (in particular, the determination of the crossover and of the mean quadratic field), Zeeman-Doppler imaging, and least-squares deconvolution. The prospects that these methods open for linear polarization studies are sketched. The way in which linear polarization diagnostics complement their Stokes I and V counterparts is emphasized by consideration of the results of broad band linear polarization measurements. Illustrations of the use of various diagnostics to derive properties of the magnetic fields of Ap stars are given. This is used to show the interest of deriving more physically realistic models of the geometric structure of these fields. How this can possibly be achieved is briefly discussed. An overview of the current status of polarimetric studies of magnetic fields in non-degenerate stars of other types is presented. The final section is devoted to magnetic fields of white dwarfs. Current knowledge of magnetic fields of isolated white dwarfs is briefly reviewed. Diagnostic techniques are discussed, with particular emphasis on the variety of physical processes to be considered for understanding of spectral line formation over the broad range of magnetic field strengths encountered in these stars.

  6. Distributed optical fiber-based monitoring approach of spatial seepage behavior in dike engineering

    NASA Astrophysics Data System (ADS)

    Su, Huaizhi; Ou, Bin; Yang, Lifu; Wen, Zhiping

    2018-07-01

    The failure caused by seepage is the most common failure mode in dike engineering. Seepage in a dike, a typical longitudinally extended structure, is characterized by randomness, strong concealment, and a small initial magnitude. Using a distributed fiber temperature sensor system (DTS) with an improved optical fiber layout scheme, the location of the initial interpolation point of the saturation line is obtained. With the barycentric Lagrange interpolation collocation method (BLICM), the infiltration surface of the full dike cross-section is generated. Combined with a linear optical fiber seepage-monitoring method, BLICM is applied to an engineering case, demonstrating a real-time seepage monitoring technique for the full section of a dike based on the combined method.
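
    The interpolation step can be sketched directly: given a few fiber-located points on the saturation line, barycentric Lagrange interpolation reconstructs the line across the full section. The points below are illustrative, and scipy's BarycentricInterpolator implements the barycentric formula that BLICM builds on.

```python
# Hedged sketch: reconstructing a saturation (phreatic) line from a few
# fiber-detected points by barycentric Lagrange interpolation.
import numpy as np
from scipy.interpolate import BarycentricInterpolator

x_fiber = np.array([0.0, 8.0, 16.0, 24.0, 32.0])   # horizontal positions (m), illustrative
h_fiber = np.array([12.0, 10.1, 8.0, 5.4, 2.2])    # detected phreatic heads (m)

phreatic = BarycentricInterpolator(x_fiber, h_fiber)
x_dense = np.linspace(0.0, 32.0, 200)
print(phreatic(x_dense)[:5])                        # interpolated saturation line
```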

  7. Imparting Motion to a Test Object Such as a Motor Vehicle in a Controlled Fashion

    NASA Technical Reports Server (NTRS)

    Southward, Stephen C. (Inventor); Reubush, Chandler (Inventor); Pittman, Bryan (Inventor); Roehrig, Kurt (Inventor); Gerard, Doug (Inventor)

    2014-01-01

    An apparatus imparts motion to a test object such as a motor vehicle in a controlled fashion. A base has mounted on it a linear electromagnetic motor having a first end and a second end, the first end being connected to the base. A pneumatic cylinder and piston combination has a first end and a second end, the first end connected to the base so that the pneumatic cylinder and piston combination is generally parallel with the linear electromagnetic motor. The second ends of the linear electromagnetic motor and the pneumatic cylinder and piston combination are commonly linked to a mount for the test object. A control system for the linear electromagnetic motor and pneumatic cylinder and piston combination drives the pneumatic cylinder and piston combination to support a substantial static load of the test object and the linear electromagnetic motor to impart controlled motion to the test object.

  8. Homomorphic filtering textural analysis technique to reduce multiplicative noise in the 11Oba nano-doped liquid crystalline compounds

    NASA Astrophysics Data System (ADS)

    Madhav, B. T. P.; Pardhasaradhi, P.; Manepalli, R. K. N. R.; Pisipati, V. G. K. M.

    2015-07-01

    The compound undecyloxy benzoic acid (11Oba) exhibits nematic and smectic-C phases, while 11Oba doped with ZnO nanoparticles exhibits the same nematic and smectic-C phases with, as expected, reduced clearing temperatures. The doping is done with 0.5% and 1% ZnO. The clearing temperatures are reduced by approximately 4° and 6°, respectively (differential scanning calorimetry data). When images are collected from a polarizing microscope fitted with a hot stage and camera, the illumination and reflectance combine multiplicatively, and the degraded image quality makes it difficult to identify the exact phase of the compound. A homomorphic filtering technique is used in this manuscript, through which the multiplicative noise components of the image are separated linearly in the frequency domain. This technique provides a frequency-domain procedure to improve the appearance of an image by gray-level range compression and contrast enhancement.
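
    A minimal sketch of homomorphic filtering as described above: the logarithm converts the multiplicative illumination-reflectance model into a sum, a Gaussian high-emphasis filter in the frequency domain suppresses the slowly varying illumination while boosting reflectance detail, and exponentiation maps back. The filter gains and cutoff are assumptions.

```python
# Hedged sketch of homomorphic filtering for multiplicative noise.
import numpy as np

def homomorphic(img, gamma_low=0.5, gamma_high=1.8, d0=30.0):
    rows, cols = img.shape
    z = np.log1p(img.astype(float))                 # multiplicative -> additive
    Z = np.fft.fftshift(np.fft.fft2(z))
    u = np.arange(rows)[:, None] - rows / 2
    v = np.arange(cols)[None, :] - cols / 2
    D2 = u**2 + v**2
    # high-emphasis transfer function: attenuate low, amplify high frequencies
    H = (gamma_high - gamma_low) * (1 - np.exp(-D2 / (2 * d0**2))) + gamma_low
    out = np.real(np.fft.ifft2(np.fft.ifftshift(H * Z)))
    return np.expm1(out)

img = np.random.rand(128, 128) * np.linspace(0.2, 1.0, 128)  # shaded test image
print(homomorphic(img).shape)
```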

  9. Novel Hybrid Scheduling Technique for Sensor Nodes with Mixed Criticality Tasks.

    PubMed

    Micea, Mihai-Victor; Stangaciu, Cristina-Sorina; Stangaciu, Valentin; Curiac, Daniel-Ioan

    2017-06-26

    Sensor networks are increasingly becoming a key technology for complex control applications. Their potential use in safety- and time-critical domains has raised the need for task scheduling mechanisms specially adapted to sensor-node-specific requirements, often materialized as predictable, jitter-less execution of tasks characterized by different criticality levels. This paper offers an efficient scheduling solution, named Hybrid Hard Real-Time Scheduling (H²RTS), which combines a static, clock-driven method with a dynamic, event-driven scheduling technique, in order to provide high execution predictability while keeping a high node Central Processing Unit (CPU) utilization factor. From the detailed, integrated schedulability analysis of the H²RTS, a set of sufficiency tests are introduced and demonstrated based on the processor demand and linear upper bound metrics. The performance and correct behavior of the proposed hybrid scheduling technique have been extensively evaluated and validated both on a simulator and on a sensor mote equipped with an ARM7 microcontroller.

  10. The dynamics and control of large flexible space structures, 6

    NASA Technical Reports Server (NTRS)

    Bainum, P. M.

    1983-01-01

    The controls analysis based on a truncated finite element model of the 122m. Hoop/Column Antenna System focuses on an analysis of the controllability as well as the synthesis of control laws. Graph theoretic techniques are employed to consider controllability for different combinations of number and locations of actuators. Control law synthesis is based on an application of the linear regulator theory as well as pole placement techniques. Placement of an actuator on the hoop can result in a noticeable improvement in the transient characteristics. The problem of orientation and shape control of an orbiting flexible beam, previously examined, is now extended to include the influence of solar radiation environmental forces. For extremely flexible thin structures modification of control laws may be required and techniques for accomplishing this are explained. Effects of environmental torques are also included in previously developed models of orbiting flexible thin platforms.

  11. Technical note: Combining quantile forecasts and predictive distributions of streamflows

    NASA Astrophysics Data System (ADS)

    Bogner, Konrad; Liechti, Katharina; Zappa, Massimiliano

    2017-11-01

    The enhanced availability of many different hydro-meteorological modelling and forecasting systems raises the issue of how to optimally combine this wealth of information. In particular, the use of deterministic and probabilistic forecasts with sometimes widely divergent predicted future streamflow values makes it even more complicated for decision makers to sift out the relevant information. In this study, multiple streamflow forecasts will be aggregated based on several different predictive distributions and quantile forecasts. For this combination, the Bayesian model averaging (BMA) approach, non-homogeneous Gaussian regression (NGR), also known as the ensemble model output statistics (EMOS) technique, and a novel method called Beta-transformed linear pooling (BLP) will be applied. With the help of the quantile score (QS) and the continuous ranked probability score (CRPS), the combination results for the Sihl River in Switzerland, with about 5 years of forecast data, will be compared, and the differences between the raw and optimally combined forecasts will be highlighted. The results demonstrate the importance of applying proper forecast combination methods for decision makers in the field of flood and water resource management.
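
    The two pooling rules named above admit a compact sketch. Below, member forecasts are combined as a weighted linear pool of predictive CDFs, and the Beta-transformed linear pool recalibrates the pooled CDF through a Beta CDF; the weights, Beta parameters, and member distributions are illustrative assumptions.

```python
# Hedged sketch: linear pool vs. Beta-transformed linear pool (BLP) of
# predictive streamflow CDFs; all numbers are illustrative.
import numpy as np
from scipy import stats

members = [stats.norm(100, 15), stats.norm(120, 25), stats.norm(90, 10)]
w = np.array([0.5, 0.3, 0.2])          # pooling weights (sum to 1)
a, b = 1.4, 0.9                        # BLP recalibration parameters

def linear_pool_cdf(q):
    return sum(wi * m.cdf(q) for wi, m in zip(w, members))

def blp_cdf(q):
    # BLP composes the linear pool with a Beta CDF to improve calibration
    return stats.beta(a, b).cdf(linear_pool_cdf(q))

q = np.linspace(40, 200, 5)
print("LP :", np.round(linear_pool_cdf(q), 3))
print("BLP:", np.round(blp_cdf(q), 3))
```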

  12. Inferring the most probable maps of underground utilities using Bayesian mapping model

    NASA Astrophysics Data System (ADS)

    Bilal, Muhammad; Khan, Wasiq; Muggleton, Jennifer; Rustighi, Emiliano; Jenks, Hugo; Pennock, Steve R.; Atkins, Phil R.; Cohn, Anthony

    2018-03-01

    Mapping the Underworld (MTU), a major initiative in the UK, is focused on addressing the social, environmental and economic consequences arising from the inability to locate buried underground utilities (such as pipes and cables), by developing a multi-sensor mobile device. The aim of the MTU device is to locate different types of buried assets in real time with the use of automated data processing techniques and statutory records. The statutory records, even though typically inaccurate and incomplete, provide useful prior information on what is buried under the ground and where. However, the integration of information from multiple sensors (raw data) with these qualitative maps, and its visualization, is challenging and requires the implementation of robust machine learning/data fusion approaches. In this paper, an approach for the automated creation of revised maps was developed as a Bayesian mapping model, integrating the knowledge extracted from the sensors' raw data with the available statutory records. The combination of statutory records with the hypotheses from the sensors provides an initial estimate of what might be found underground and roughly where. The maps were (re)constructed using automated image segmentation techniques for hypothesis extraction and Bayesian classification techniques for segment-manhole connections. The model, consisting of an image segmentation algorithm and various Bayesian classification techniques (segment recognition and the expectation maximization (EM) algorithm), provided robust performance on various simulated as well as real sites in terms of predicting linear/non-linear segments and constructing refined 2D/3D maps.

  13. Multidimensional radiative transfer with multilevel atoms. II. The non-linear multigrid method.

    NASA Astrophysics Data System (ADS)

    Fabiani Bendicho, P.; Trujillo Bueno, J.; Auer, L.

    1997-08-01

    A new iterative method for solving non-LTE multilevel radiative transfer (RT) problems in 1D, 2D or 3D geometries is presented. The scheme obtains the self-consistent solution of the kinetic and RT equations at the cost of only a few (<10) formal solutions of the RT equation. It combines, for the first time, non-linear multigrid iteration (Brandt, 1977, Math. Comp. 31, 333; Hackbusch, 1985, Multi-Grid Methods and Applications, Springer-Verlag, Berlin), an efficient multilevel RT scheme based on Gauss-Seidel iterations (cf. Trujillo Bueno & Fabiani Bendicho, 1995ApJ...455..646T), and accurate short-characteristics formal solution techniques. By combining a valid stopping criterion with a nested-grid strategy, a converged solution with the desired true error is automatically guaranteed. Contrary to current operator-splitting methods, the very high convergence speed of the new RT method does not deteriorate when the grid spatial resolution is increased. With this non-linear multigrid method, non-LTE problems discretized on N grid points are solved in O(N) operations. The nested multigrid RT method presented here is thus particularly attractive in complicated multilevel transfer problems where small grid sizes are required. The properties of the method are analyzed both analytically and with illustrative multilevel calculations for Ca II in 1D and 2D schematic model atmospheres.

  14. Linear and nonlinear subspace analysis of hand movements during grasping.

    PubMed

    Cui, Phil Hengjun; Visell, Yon

    2014-01-01

    This study investigated nonlinear patterns of coordination, or synergies, underlying whole-hand grasping kinematics. Prior research has shed considerable light on roles played by such coordinated degrees-of-freedom (DOF), illuminating how motor control is facilitated by structural and functional specializations in the brain, peripheral nervous system, and musculoskeletal system. However, existing analyses suppose that the patterns of coordination can be captured by means of linear analyses, as linear combinations of nominally independent DOF. In contrast, hand kinematics is itself highly nonlinear in nature. To address this discrepancy, we sought to determine whether nonlinear synergies might serve to more accurately and efficiently explain human grasping kinematics than is possible with linear analyses. We analyzed motion capture data acquired from the hands of individuals as they grasped an array of common objects, using four of the most widely used linear and nonlinear dimensionality reduction algorithms. We compared the results using a recently developed algorithm-agnostic quality measure, which enabled us to assess the quality of the dimensional reductions by assessing the extent to which local neighborhood information in the data was preserved. Although qualitative inspection of the data suggested that nonlinear correlations between kinematic variables were present, we found that linear modeling, in the form of Principal Component Analysis, could perform better than any of the nonlinear techniques we applied.
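
    The linear baseline of the study can be sketched in a few lines: stacking grasp postures as rows and extracting principal components gives the linear synergies against which the nonlinear methods are compared. The synthetic joint-angle data below are a stand-in for the motion-capture recordings.

```python
# Hedged sketch: linear postural synergies via PCA on synthetic joint angles.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
n_postures, n_dof = 500, 20                   # grasp postures x joint angles
latent = rng.normal(size=(n_postures, 3))     # three underlying synergies
mixing = rng.normal(size=(3, n_dof))
angles = latent @ mixing + 0.1 * rng.normal(size=(n_postures, n_dof))

pca = PCA(n_components=5).fit(angles)
print("variance explained:", np.round(pca.explained_variance_ratio_, 3))
# a grasp posture is approximated as a linear combination of the components
recon = pca.inverse_transform(pca.transform(angles[:1]))
print("reconstruction error:", np.linalg.norm(recon - angles[:1]))
```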

  15. Linear Self-Referencing Techniques for Short-Optical-Pulse Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorrer, C.; Kang, I.

    2008-04-04

    Linear self-referencing techniques for the characterization of the electric field of short optical pulses are presented. The theoretical and practical advantages of these techniques are developed. Experimental implementations are described, and their performance is compared to the performance of their nonlinear counterparts. Linear techniques demonstrate unprecedented sensitivity and are a perfect fit in many domains where the precise, accurate measurement of the electric field of an optical pulse is required.

  16. Accuracy of 1H magnetic resonance spectroscopy for quantification of 2-hydroxyglutarate using linear combination and J-difference editing at 9.4T.

    PubMed

    Neuberger, Ulf; Kickingereder, Philipp; Helluy, Xavier; Fischer, Manuel; Bendszus, Martin; Heiland, Sabine

    2017-12-01

    Non-invasive detection of 2-hydroxyglutarate (2HG) by magnetic resonance spectroscopy is attractive since it is related to tumor metabolism. Here, we compare the detection accuracy of 2HG in a controlled phantom setting via widely used localized spectroscopy sequences quantified by linear combination of metabolite signals vs. a more complex approach applying a J-difference editing technique at 9.4T. Different phantoms, comprising a concentration series of 2HG and overlapping brain metabolites, were measured with an optimized point-resolved spectroscopy (PRESS) sequence and an in-house developed J-difference editing sequence. The acquired spectra were post-processed with LCModel and a simulated metabolite set (PRESS) or with a quantification formula for J-difference editing. Linear regression analysis demonstrated a high correlation of the true 2HG values with those measured with the PRESS method (adjusted R-squared: 0.700, p<0.001) as well as with those measured with the J-difference editing method (adjusted R-squared: 0.908, p<0.001). The regression model with the J-difference editing method, however, had a significantly higher explanatory value than the regression model with the PRESS method (p<0.0001). Moreover, with J-difference editing 2HG was discernible down to 1 mM, whereas with the PRESS method 2HG values were not discernible below 2 mM and showed higher systematic errors, particularly in phantoms with high concentrations of N-acetyl-aspartate (NAA) and glutamate (Glu). In summary, quantification of 2HG by linear combination of metabolite signals shows high systematic errors, particularly at low 2HG concentrations and high concentrations of confounding metabolites such as NAA and Glu. In contrast, J-difference editing offers a more accurate quantification even at low 2HG concentrations, which outweighs the downsides of longer measurement time and more complex postprocessing. Copyright © 2017. Published by Elsevier GmbH.
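
    The linear-combination side of the comparison can be sketched as follows, with toy Lorentzian lines standing in for simulated metabolite basis spectra (LCModel's actual basis handling is far more elaborate): the measured spectrum is decomposed by non-negative least squares.

```python
# Hedged sketch: spectrum modeled as a non-negative linear combination of
# metabolite basis spectra; line positions and concentrations are toy values.
import numpy as np
from scipy.optimize import nnls

ppm = np.linspace(1.0, 4.5, 700)
def line(center, width=0.03):
    return 1.0 / (1.0 + ((ppm - center) / width) ** 2)

basis = np.column_stack([
    line(2.25) + line(4.02),   # toy "2HG" resonance positions
    line(2.01),                # toy "NAA"
    line(2.35) + line(3.75),   # toy "Glu"
])
true_conc = np.array([2.0, 10.0, 8.0])   # mM, illustrative
spectrum = basis @ true_conc + 0.05 * np.random.default_rng(5).normal(size=ppm.size)

conc, resid = nnls(basis, spectrum)      # non-negative least squares fit
print("estimated concentrations:", np.round(conc, 2))
```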

  17. A modified anomaly detection method for capsule endoscopy images using non-linear color conversion and Higher-order Local Auto-Correlation (HLAC).

    PubMed

    Hu, Erzhong; Nosato, Hirokazu; Sakanashi, Hidenori; Murakawa, Masahiro

    2013-01-01

    Capsule endoscopy is a patient-friendly endoscopy broadly utilized in gastrointestinal examination. However, the efficacy of diagnosis is restricted by the large quantity of images. This paper presents a modified anomaly detection method by which both known and unknown anomalies in capsule endoscopy images of the small intestine are expected to be detected. To achieve this goal, the paper introduces feature extraction using a non-linear color conversion and Higher-order Local Auto-Correlation (HLAC) features, and makes use of image partitioning and a subspace method for anomaly detection. Experiments were carried out on several major anomalies with combinations of the proposed techniques. As a result, the proposed method achieved 91.7% and 100% detection accuracy for swelling and bleeding, respectively, demonstrating its effectiveness.

  18. Fundamental Analysis of the Linear Multiple Regression Technique for Quantification of Water Quality Parameters from Remote Sensing Data. Ph.D. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Whitlock, C. H., III

    1977-01-01

    Constituents whose radiance varies linearly with concentration may be quantified from signals that contain nonlinear atmospheric and surface reflection effects, for both homogeneous and non-homogeneous water bodies, provided accurate data can be obtained and the nonlinearities are constant with wavelength. Statistical parameters must be used which give an indication of bias as well as total squared error, to ensure that an equation with an optimum combination of bands is selected. It is concluded that the effect of error in upwelled radiance measurements is to reduce the accuracy of the least-squares fitting process and to increase the number of points required to obtain a satisfactory fit. The problem of obtaining a multiple regression equation that is extremely sensitive to error is discussed.

  19. Plunge waveforms from inspiralling binary black holes.

    PubMed

    Baker, J; Brügmann, B; Campanelli, M; Lousto, C O; Takahashi, R

    2001-09-17

    We study the coalescence of nonspinning binary black holes from near the innermost stable circular orbit down to the final single rotating black hole. We use a technique that combines the full numerical approach to solve the Einstein equations, applied in the truly nonlinear regime, and linearized perturbation theory around the final distorted single black hole at later times. We compute the plunge waveforms, which present a non-negligible signal lasting for t approximately 100M showing early nonlinear ringing, and we obtain estimates for the total gravitational energy and angular momentum radiated.

  20. Stress-intensity factors of r-cracks in fiber-reinforced composites under thermal and mechanical loading

    NASA Astrophysics Data System (ADS)

    Mueller, W. H.; Schmauder, S.

    1993-02-01

    The plane stress/plane strain problem of radial matrix cracking in fiber-reinforced composites, due to thermal mismatch and externally applied stress is solved numerically in the framework of linear elasticity, using Erdogan's integral equation technique. It is shown that, in order to obtain the results of the combined loading case, the solutions of purely thermal and purely mechanical loading can simply be superimposed. Stress-intensity factors are calculated for various lengths and distances of the crack from the interface for each of these loading conditions.

  1. Design of integrated pitch axis for autopilot/autothrottle and integrated lateral axis for autopilot/yaw damper for NASA TSRV airplane using integral LQG methodology

    NASA Technical Reports Server (NTRS)

    Kaminer, Isaac; Benson, Russell A.; Coleman, Edward E.; Ebrahimi, Yaghoob S.

    1990-01-01

    Two designs are presented for control systems for the NASA Transport System Research Vehicle (TSRV) using integral Linear Quadratic Gaussian (LQG) methodology. The first is an integrated longitudinal autopilot/autothrottle design and the second design is an integrated lateral autopilot/yaw damper/sideslip controller design. It is shown that a systematic top-down approach to a complex design problem combined with proper application of modern control synthesis techniques yields a satisfactory solution in a reasonable period of time.

  2. Coherent detection of frequency-hopped quadrature modulations in the presence of jamming. I - QPSK and QASK modulations

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Polydoros, A.

    1981-01-01

    This paper examines the performance of coherent QPSK and QASK systems combined with FH or FH/PN spread spectrum techniques in the presence of partial-band multitone or noise jamming. The worst-case jammer and worst-case performance are determined as functions of the signal-to-background noise ratio (SNR) and signal-to-jammer power ratio (SJR). Asymptotic results for high SNR are shown to have a linear dependence between the jammer's optimal power allocation and the system error probability performance.

  3. Field by field hybrid upwind splitting methods

    NASA Technical Reports Server (NTRS)

    Coquel, Frederic; Liou, Meng-Sing

    1993-01-01

    A new and general approach to upwind splitting is presented. The design principle combines the robustness of flux vector splitting schemes in the capture of nonlinear waves and the accuracy of some flux difference splitting schemes in the resolution of linear waves. The new schemes are derived following a general hybridization technique performed directly at the basic level of the field by field decomposition involved in FDS methods. The scheme does not use a spatial switch to be tuned up according to the local smoothness of the approximate solution.

  4. Finite Differences and Collocation Methods for the Solution of the Two Dimensional Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules

    1999-01-01

    In this paper we combine finite difference approximations (for the spatial derivatives) and collocation techniques (for the time component) to numerically solve the two-dimensional heat equation. We employ second-order and fourth-order schemes for the spatial derivatives, and the discretization method gives rise to a linear system of equations. We show that the matrix of the system is non-singular. Numerical experiments, carried out on serial computers, show the unconditional stability of the proposed method and the high accuracy achieved by the fourth-order scheme.
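
    As a sketch of the spatial discretization described above, the snippet below builds the second-order five-point Laplacian for the 2-D heat equation; time is advanced with plain implicit Euler as a simple stand-in for the paper's collocation-in-time treatment, and all sizes are illustrative.

```python
# Hedged sketch: second-order finite differences in space for the 2-D heat
# equation, implicit Euler in time (collocation in the paper).
import numpy as np
from scipy.sparse import identity, kron, diags
from scipy.sparse.linalg import spsolve

n, alpha, dt, steps = 32, 1.0, 1e-3, 50
h = 1.0 / (n + 1)
T1 = diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / h**2   # 1-D second-order stencil
I = identity(n)
L = kron(I, T1) + kron(T1, I)                             # 2-D Laplacian, Dirichlet BCs

x = np.linspace(h, 1 - h, n)
u = (np.sin(np.pi * x)[:, None] * np.sin(np.pi * x)[None, :]).ravel()
A = (identity(n * n) - dt * alpha * L).tocsc()            # implicit Euler system
for _ in range(steps):
    u = spsolve(A, u)
print("max u:", u.max())   # decays like exp(-2*pi^2*alpha*t) for this mode
```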

  5. Frequency division multiplex technique

    NASA Technical Reports Server (NTRS)

    Brey, H. (Inventor)

    1973-01-01

    A system for monitoring a plurality of condition-responsive devices is described. It consists of a master control station and a remote station. The master control station is capable of transmitting command signals, including a parity signal, to a remote station, which transmits the signals back to the command station so that they can be compared with the original signals to determine whether there are any transmission errors. The system utilizes frequency sources that are 1.21 multiples of each other so that no linear combination of any harmonics will interfere with another frequency.
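
    The 1.21 ratio claim is easy to check numerically; the sketch below (the base frequency and harmonic order are arbitrary choices) finds the closest approach between low-order harmonics of two such sources. Since 1.21 = 121/100 and 121 shares no factors with 100, m*f1 = n*f2 has no small-integer solutions.

```python
# Quick illustrative check: harmonics of tones in ratio 1.21 do not coincide.
f1 = 1000.0
f2 = 1.21 * f1
min_gap = min(abs(m * f1 - n * f2)
              for m in range(1, 20) for n in range(1, 20))
print("closest approach of harmonics up to order 19: %.1f Hz" % min_gap)
```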

  6. Anthraquinones quinizarin and danthron unwind negatively supercoiled DNA and lengthen linear DNA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verebová, Valéria; Adamcik, Jozef; Danko, Patrik

    2014-01-31

    Highlights: • Anthraquinones quinizarin and danthron unwind negatively supercoiled DNA. • Anthraquinones quinizarin and danthron lengthen linear DNA. • Anthraquinones quinizarin and danthron possess moderate binding affinity to DNA. • Anthraquinones quinizarin and danthron interact with DNA by an intercalating mode. - Abstract: Intercalating drugs possess a planar aromatic chromophore unit by which they insert between DNA bases, distorting the classical B-DNA form. The planar tricyclic structure of the anthraquinones belongs to this group of chromophore units and enables anthraquinones to bind to DNA by an intercalating mode. The interactions of two simple anthraquinone derivatives, quinizarin (1,4-dihydroxyanthraquinone) and danthron (1,8-dihydroxyanthraquinone), with negatively supercoiled and linear DNA were investigated using a combination of electrophoretic methods, fluorescence spectrophotometry, and a single-molecule technique, atomic force microscopy. The observed topological change of negatively supercoiled plasmid DNA (unwinding, seen as the appearance of DNA topoisomers with low superhelicity) and the increase in the contour length of linear DNA in the presence of quinizarin and danthron indicate that both anthraquinones bind to DNA by an intercalating mode.

  7. A mechanical comparison of linear and double-looped hung supplemental heavy chain resistance to the back squat: a case study.

    PubMed

    Neelly, Kurt R; Terry, Joseph G; Morris, Martin J

    2010-01-01

    A relatively new and scarcely researched technique to increase strength is the use of supplemental heavy chain resistance (SHCR) in conjunction with plate weights to provide variable resistance to free weight exercises. The purpose of this case study was to determine the actual resistance being provided by a double-looped versus a linear hung SHCR to the back squat exercise. The linear technique simply hangs the chain directly from the bar, whereas the double-looped technique uses a smaller chain to adjust the height of the looped chain. In both techniques, as the squat descends, chain weight is unloaded onto the floor, and as the squat ascends, chain weight is progressively loaded back as resistance. One experienced and trained male weight lifter (age = 33 yr; height = 1.83 m; weight = 111.4 kg) served as the subject. Plate weight was set at 84.1 kg, approximately 50% of the subject's 1 repetition maximum. The SHCR was affixed to load cells, sampling at a frequency of 500 Hz, which were affixed to the Olympic bar. Data were collected as the subject completed the back squat under the following conditions: double-looped 1 chain (9.6 kg), double-looped 2 chains (19.2 kg), linear 1 chain, and linear 2 chains. The double-looped SHCR resulted in a 78-89% unloading of the chain weight at the bottom of the squat, whereas the linear hanging SHCR resulted in only a 36-42% unloading. The double-looped technique provided nearly 2 times the variable resistance at the top of the squat compared with the linear hanging technique, showing that attention must be given to the technique used to hang SHCR.

  8. Model, analysis, and evaluation of the effects of analog VLSI arithmetic on linear subspace-based image recognition.

    PubMed

    Carvajal, Gonzalo; Figueroa, Miguel

    2014-07-01

    Typical image recognition systems operate in two stages: feature extraction to reduce the dimensionality of the input space, and classification based on the extracted features. Analog Very Large Scale Integration (VLSI) is an attractive technology to achieve compact and low-power implementations of these computationally intensive tasks for portable embedded devices. However, device mismatch limits the resolution of the circuits fabricated with this technology. Traditional layout techniques to reduce the mismatch aim to increase the resolution at the transistor level, without considering the intended application. Relating mismatch parameters to specific effects at the application level would allow designers to apply focalized mismatch compensation techniques according to predefined performance/cost tradeoffs. This paper models, analyzes, and evaluates the effects of mismatched analog arithmetic in both feature extraction and classification circuits. For the feature extraction, we propose analog adaptive linear combiners with on-chip learning for both the Least Mean Square (LMS) and Generalized Hebbian Algorithm (GHA). Using mathematical abstractions of analog circuits, we identify mismatch parameters that are naturally compensated during the learning process, and propose cost-effective guidelines to reduce the effect of the rest. For the classification, we derive analog models for the circuits necessary to implement the Nearest Neighbor (NN) approach and Radial Basis Function (RBF) networks, and use them to emulate analog classifiers with standard databases of faces and handwritten digits. Formal analysis and experiments show how we can exploit adaptive structures and properties of the input space to compensate the effects of device mismatch at the application level, thus reducing the design overhead of traditional layout techniques. Results are also directly extensible to multiple application domains using linear subspace methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
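
    The feature-extraction stage discussed above centers on the LMS adaptive linear combiner; a minimal ideal-arithmetic version is sketched below (the paper's subject is how analog device mismatch perturbs exactly this update). Data, dimensions, and step size are illustrative.

```python
# Hedged sketch of the LMS adaptive linear combiner in floating point.
import numpy as np

rng = np.random.default_rng(6)
n, d = 2000, 8
X = rng.normal(size=(n, d))              # input vectors
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

w = np.zeros(d)
mu = 0.01                                # LMS step size
for x_k, y_k in zip(X, y):
    e = y_k - w @ x_k                    # instantaneous error
    w += mu * e * x_k                    # LMS weight update
print("weight error norm:", np.linalg.norm(w - w_true))
```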

  9. Model-based damage evaluation of layered CFRP structures

    NASA Astrophysics Data System (ADS)

    Munoz, Rafael; Bochud, Nicolas; Rus, Guillermo; Peralta, Laura; Melchor, Juan; Chiachío, Juan; Chiachío, Manuel; Bond, Leonard J.

    2015-03-01

    An ultrasonic evaluation technique for damage identification of layered CFRP structures is presented. This approach relies on a model-based estimation procedure that combines experimental data and simulation of ultrasonic damage-propagation interactions. The CFRP structure, a [0/90]4s lay-up, was tested in an immersion through-transmission experiment, in which a damaged specimen was scanned. Most ultrasonic techniques in industrial practice consider only a few features of the received signals, namely time of flight, amplitude, attenuation, frequency content, and so forth. Here, once the signals are captured, an algorithm reconstructs the complete signal waveform and extracts the unknown damage parameters by means of modeling procedures. A linear version of the data processing was performed, in which only the Young modulus was monitored; in a second, nonlinear version, the first-order nonlinear coefficient β was incorporated to test the possibility of detecting early damage. The aforementioned physical simulation models are solved by the transfer-matrix formalism, which has been extended from the linear case to the nonlinear harmonic generation technique. The damage-parameter search strategy is based on minimizing the mismatch between the captured and simulated signals in the time domain in an automated way using genetic algorithms. By processing all scanned locations, a C-scan of the parameter of each layer can be reconstructed, yielding information describing the state of each layer and each interface. Damage can be located and quantified in terms of changes in the selected parameter, with a measurable extent. For the first-order nonlinear coefficient, evidence of higher sensitivity to damage than imaging the linearly estimated Young modulus is provided.

  10. Derivative information recovery by a selective integration technique

    NASA Technical Reports Server (NTRS)

    Johnson, M. A.

    1974-01-01

    A nonlinear stationary homogeneous digital filter DIRSIT (derivative information recovery by a selective integration technique) is investigated. The spectrum of a quasi-linear discrete describing function (DDF) to DIRSIT is obtained by a digital measuring scheme. A finite impulse response (FIR) approximation to the quasi-linearization is then obtained. Finally, DIRSIT is compared with its quasi-linear approximation and with a standard digital differentiating technique. Results indicate the effects of DIRSIT on a wide variety of practical signals.

  11. Flexible multibody simulation of automotive systems with non-modal model reduction techniques

    NASA Astrophysics Data System (ADS)

    Shiiba, Taichi; Fehr, Jörg; Eberhard, Peter

    2012-12-01

    The stiffness of the body structure of an automobile has a strong relationship with its noise, vibration, and harshness (NVH) characteristics. In this paper, the effect of the stiffness of the body structure upon ride quality is discussed with flexible multibody dynamics. In flexible multibody simulation, the local elastic deformation of the vehicle has been described traditionally with modal shape functions. Recently, linear model reduction techniques from system dynamics and mathematics came into the focus to find more sophisticated elastic shape functions. In this work, the NVH-relevant states of a racing kart are simulated, whereas the elastic shape functions are calculated with modern model reduction techniques like moment matching by projection on Krylov-subspaces, singular value decomposition-based reduction techniques, and combinations of those. The whole elastic multibody vehicle model consisting of tyres, steering, axle, etc. is considered, and an excitation with a vibration characteristics in a wide frequency range is evaluated in this paper. The accuracy and the calculation performance of those modern model reduction techniques is investigated including a comparison of the modal reduction approach.

  12. Optimization techniques for integrating spatial data

    USGS Publications Warehouse

    Herzfeld, U.C.; Merriam, D.F.

    1995-01-01

    Two optimization techniques to predict a spatial variable from any number of related spatial variables are presented. The applicability of the two different methods for petroleum-resource assessment is tested in a mature oil province of the Midcontinent (USA). The information on petroleum productivity, usually not directly accessible, is related indirectly to geological, geophysical, petrographical, and other observable data. This paper presents two approaches based on construction of a multivariate spatial model from the available data to determine a relationship for prediction. In the first approach, the variables are combined into a spatial model by an algebraic map-comparison/integration technique. Optimal weights for the map comparison function are determined by the Nelder-Mead downhill simplex algorithm in multidimensions. Geologic knowledge is necessary to provide a first guess of weights to start the automatization, because the solution is not unique. In the second approach, active set optimization for linear prediction of the target under positivity constraints is applied. Here, the procedure seems to select one variable from each data type (structure, isopachous, and petrophysical), eliminating data redundancy. Automating the determination of optimum combinations of different variables by applying optimization techniques is a valuable extension of the algebraic map-comparison/integration approach to analyzing spatial data. Because of the capability of handling multivariate data sets and partial retention of geographical information, the approaches can be useful in mineral-resource exploration. © 1995 International Association for Mathematical Geology.
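
    A minimal sketch of the first approach, with illustrative layers and objective: the weights of an algebraic map combination are tuned by the Nelder-Mead downhill simplex via scipy so that the weighted combination best matches a target map.

```python
# Hedged sketch: Nelder-Mead tuning of map-integration weights.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
layers = rng.random((3, 50, 50))   # e.g. structure, isopach, petrophysics (toy)
target = 0.6 * layers[0] + 0.1 * layers[1] + 0.3 * layers[2] \
         + 0.02 * rng.random((50, 50))

def misfit(w):
    combined = np.tensordot(w, layers, axes=1)   # weighted map combination
    return np.mean((combined - target) ** 2)

res = minimize(misfit, x0=[0.33, 0.33, 0.33], method="Nelder-Mead")
print("optimal weights:", np.round(res.x, 3))
```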

  13. Linearization of digital derived rate algorithm for use in linear stability analysis

    NASA Technical Reports Server (NTRS)

    Graham, R. E.; Porada, T. W.

    1985-01-01

    The digital derived rate (DDR) algorithm is used to calculate the rate of rotation of the Centaur upper-stage rocket. The DDR is a highly nonlinear algorithm, and classical linear stability analysis of the spacecraft cannot be performed without linearization. The performance of this rate algorithm is characterized by gain and phase curves that drop off at the same frequency. This characteristic is desirable for many applications. A linearization technique for the DDR algorithm is investigated. The linearization method is described. Examples of the results of the linearization technique are illustrated, and the effects of linearization are described. A linear digital filter may be used as a substitute for performing classical linear stability analyses, while the DDR itself may be used in time response analysis.

  14. Description of a computer program and numerical techniques for developing linear perturbation models from nonlinear systems simulations

    NASA Technical Reports Server (NTRS)

    Dieudonne, J. E.

    1978-01-01

    A numerical technique was developed which generates linear perturbation models from nonlinear aircraft vehicle simulations. The technique is very general and can be applied to simulations of any system that is described by nonlinear differential equations. The computer program used to generate these models is discussed, with emphasis placed on generation of the Jacobian matrices, calculation of the coefficients needed for solving the perturbation model, and generation of the solution of the linear differential equations. An example application of the technique to a nonlinear model of the NASA terminal configured vehicle is included.
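
    The report's central object, the Jacobian matrices of a nonlinear simulation, can be generated numerically as below: central finite differences around a chosen operating point yield the A and B matrices of the linear perturbation model dx' = A dx + B du. The toy dynamics and the operating point are illustrative, not the NASA terminal configured vehicle model.

```python
# Hedged sketch: linear perturbation model via central-difference Jacobians.
import numpy as np

def f(x, u):                          # toy nonlinear flight-like dynamics (assumption)
    v, gamma = x
    thrust, = u
    return np.array([thrust - 0.02 * v**2 - 9.81 * np.sin(gamma),
                     (9.81 / max(v, 1e-6)) * (v**2 / 50.0 - np.cos(gamma))])

def jacobians(f, x0, u0, eps=1e-6):
    n, m = len(x0), len(u0)
    A = np.zeros((n, n)); B = np.zeros((n, m))
    for j in range(n):                # A = df/dx by central differences
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):                # B = df/du by central differences
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

A, B = jacobians(f, np.array([60.0, 0.0]), np.array([72.0]))
print(A, B, sep="\n")
```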

  15. Using crosscorrelation techniques to determine the impulse response of linear systems

    NASA Technical Reports Server (NTRS)

    Dallabetta, Michael J.; Li, Harry W.; Demuth, Howard B.

    1993-01-01

    A crosscorrelation method of measuring the impulse response of linear systems is presented. The technique, implementation, and limitations of this method are discussed. A simple system is designed and built using discrete components and the impulse response of a linear circuit is measured. Theoretical and software simulation results are presented.
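
    A compact illustration of the method, assuming a white-noise probe and an FIR test system (both choices are illustrative, not the paper's hardware): for white input, the input-output crosscorrelation normalized by the input energy recovers the impulse response.

```python
# Hedged sketch: impulse response identification by crosscorrelation.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(8)
h_true = np.array([0.0, 0.5, 0.8, 0.4, 0.15, 0.05])   # "unknown" impulse response
x = rng.normal(0, 1, 100_000)                          # white-noise probe signal
y = lfilter(h_true, [1.0], x)                          # system output

L = len(h_true)
# R_xy(k) = E[x(n) y(n+k)] = sigma_x^2 * h(k) for white input (Wiener-Hopf)
h_est = np.array([np.dot(x[:len(x) - k], y[k:]) for k in range(L)]) / (x @ x)
print(np.round(h_est, 3))
```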

  16. Combination of dynamic Bayesian network classifiers for the recognition of degraded characters

    NASA Astrophysics Data System (ADS)

    Likforman-Sulem, Laurence; Sigelle, Marc

    2009-01-01

    We investigate in this paper the combination of DBN (Dynamic Bayesian Network) classifiers, either independent or coupled, for the recognition of degraded characters. The independent classifiers are a vertical HMM and a horizontal HMM whose observable outputs are the image columns and the image rows, respectively. The coupled classifiers, presented in a previous study, associate the vertical and horizontal observation streams into single DBNs. The scores of the independent and coupled classifiers are then combined linearly at the decision level. We compare the different classifiers (independent, coupled, or linearly combined) on two tasks: the recognition of artificially degraded handwritten digits and the recognition of real degraded old printed characters. Our results show that coupled DBNs perform better on degraded characters than the linear combination of independent HMM scores. Our results also show that the best classifier is obtained by linearly combining the scores of the best coupled DBN and the best independent HMM.

  17. Combined linear theory/impact theory method for analysis and design of high speed configurations

    NASA Technical Reports Server (NTRS)

    Brooke, D.; Vondrasek, D. V.

    1980-01-01

    Pressure distributions on a wing body at Mach 4.63 are calculated. The combined theory is shown to give improved predictions over either linear theory or impact theory alone. The combined theory is also applied in the inverse design mode to calculate optimum camber slopes at Mach 4.63. Comparisons with optimum camber slopes obtained from unmodified linear theory show large differences. Analysis of the results indicates that the combined theory correctly predicts the effect of thickness on the loading distributions at high Mach numbers, and that finite thickness wings optimized at high Mach numbers using unmodified linear theory will not achieve the minimum drag characteristics for which they are designed.

  18. Broadband linearisation of high-efficiency power amplifiers

    NASA Technical Reports Server (NTRS)

    Kenington, Peter B.; Parsons, Kieran J.; Bennett, David W.

    1993-01-01

    A feedforward-based amplifier linearization technique is presented which is capable of yielding significant improvements in both linearity and power efficiency over conventional amplifier classes (e.g. class-A or class-AB). Theoretical and practical results are presented showing that class-C stages may be used for both the main and error amplifiers, yielding practical efficiencies well in excess of 30 percent, with theoretical efficiencies of much greater than 40 percent being possible. The levels of linearity which may be achieved meet the requirements of most satellite systems; however, if greater linearity is required, the technique may be used in addition to conventional pre-distortion techniques.
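
    A toy model of the feedforward principle, assuming an idealized error amplifier (the soft-clipping main amplifier and the gain value are illustrative, not the class-C hardware described above):

        import numpy as np

        def main_amp(x, g=10.0):
            # illustrative nonlinear (compressing) main amplifier
            return np.tanh(g * x)

        def feedforward(x, g=10.0):
            y_main = main_amp(x, g)
            err = x - y_main / g      # residual distortion referred to the input
            return y_main + g * err   # ideal error amplifier cancels the distortion

        # feedforward(x) == g*x here; a real error amplifier is itself slightly
        # nonlinear, so cancellation is only approximate in practice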

  19. Rumen microbial protein synthesis and nitrogen efficiency as affected by tanniferous and non-tanniferous forage legumes incubated individually or together in Rumen Simulation Technique.

    PubMed

    Grosse Brinkhaus, Anja; Bee, Giuseppe; Schwarm, Angela; Kreuzer, Michael; Dohme-Meier, Frigga; Zeitz, Johanna O

    2018-03-01

    A limited availability of microbial protein can impair productivity in ruminants. Ruminal nitrogen efficiency might be optimised by combining high-quality forage legumes such as red clover (RC), which has unfavourably high ruminal protein degradability, with tanniferous legumes like sainfoin (SF) and birdsfoot trefoil (BT). Silages from SF and from BT cultivars [Bull (BB) and Polom (BP)] were incubated singly or in combination with RC using the Rumen Simulation Technique (n = 6). The tanniferous legumes, when compared to RC, changed the total short-chain fatty acid profile by increasing propionate proportions at the expense of butyrate. Silage from SF contained the most condensed tannins (CTs) (136 g CT per kg dry matter) and clearly differed in various traits from the BT and RC silages. The apparent nutrient degradability (small with SF), microbial protein synthesis, and calculated content of potentially utilisable crude protein (large with SF) indicated that SF had the greatest efficiency in ruminal protein synthesis. The effects of combining SF with RC were mostly linear. The potential of sainfoin to improve protein supply, demonstrated either individually or in combination with a high-performance forage legume, indicates its potential usefulness in complementing protein-deficient ruminant diets and high-quality forages rich in rumen-degradable protein. © 2017 Society of Chemical Industry.

  20. The pre-image problem for Laplacian Eigenmaps utilizing L 1 regularization with applications to data fusion

    NASA Astrophysics Data System (ADS)

    Cloninger, Alexander; Czaja, Wojciech; Doster, Timothy

    2017-07-01

    As the popularity of non-linear manifold learning techniques such as kernel PCA and Laplacian Eigenmaps grows, vast improvements have been seen in many areas of data processing, including heterogeneous data fusion and integration. One problem with the non-linear techniques, however, is the lack of an easily calculable pre-image. Existence of such a pre-image would allow visualization of the fused data not only in the embedded space, but also in the original data space. The ability to make such comparisons can be crucial for data analysts and other subject matter experts who are the end users of novel mathematical algorithms. In this paper, we propose a pre-image algorithm for Laplacian Eigenmaps. Our method offers major improvements over existing techniques, which allow us to address the problem of noisy inputs and the issue of how to calculate the pre-image of a point outside the convex hull of training samples; both of these have been overlooked in previous studies in this field. We conclude by showing that our pre-image algorithm, combined with feature space rotations, allows us to recover occluded pixels of an imaging modality based on knowledge of that image as measured by heterogeneous modalities. We demonstrate this data recovery on heterogeneous hyperspectral (HS) cameras, as well as by recovering LIDAR measurements from HS data.

  1. Data Transfer for Multiple Sensor Networks Over a Broad Temperature Range

    NASA Technical Reports Server (NTRS)

    Krasowski, Michael

    2013-01-01

    At extreme temperatures, cryogenic and above 300 °C, few electronic components are available to support intelligent data transfer over a common, linearly combining medium. This innovation allows many sensors to operate on the same wire bus (or on the same airwaves or optical channel: any linearly combining medium), transmitting simultaneously but individually recoverable at a node in a cooler part of the test area. The innovation has been demonstrated using room-temperature silicon microcircuits as a proxy; the microcircuits have analog functionality comparable to componentry designed in silicon carbide. Given a common, linearly combining medium, multiple sending units may transmit information simultaneously. A listening node, using various techniques, can pick out the signal from a single sender if it has unique qualities, e.g. a voice. The problem being solved is commonly referred to as the cocktail party problem: the human brain uses the cocktail party effect when it recognizes and follows a single conversation in a party full of talkers and other noise sources. High-temperature sensors have been used in silicon carbide electronic oscillator circuits. The frequency of the oscillator changes as a function of changes in the sensed parameter, such as pressure; this change is analogous to changes in the pitch of a person's voice. The output of this oscillator and many others may be superimposed onto a single medium. This medium may be the power lines supplying current to the sensors, a third wire dedicated to data transmission, the airwaves through radio transmission, an optical medium, etc. However, with nothing to distinguish the identity of each source (that is, without source separation), this system is useless. Using digital electronic functions, unique codes or patterns are created and used to modulate the output of each sensor.
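
    A rough sketch of the code-division idea on a linearly combining medium, with illustrative spreading codes and sensor values (not the actual silicon carbide implementation):

        import numpy as np

        rng = np.random.default_rng(1)
        n_sensors, L = 4, 1024
        codes = rng.choice([-1.0, 1.0], size=(n_sensors, L))   # unique spreading codes
        values = np.array([0.7, -1.2, 0.3, 2.0])               # sensor readings

        bus = (values[:, None] * codes).sum(axis=0)            # linear combining medium
        bus += 0.1 * rng.standard_normal(L)                    # additive noise

        recovered = codes @ bus / L   # correlate the bus against each code
        print(recovered)              # ~ values: random codes are nearly orthogonal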

  2. An intuitionistic fuzzy multi-objective non-linear programming model for sustainable irrigation water allocation under the combination of dry and wet conditions

    NASA Astrophysics Data System (ADS)

    Li, Mo; Fu, Qiang; Singh, Vijay P.; Ma, Mingwei; Liu, Xiao

    2017-12-01

    Water scarcity causes conflicts among natural resources, society and economy and reinforces the need for optimal allocation of irrigation water resources in a sustainable way. Uncertainties caused by natural conditions and human activities make optimal allocation more complex. An intuitionistic fuzzy multi-objective non-linear programming (IFMONLP) model for irrigation water allocation under the combination of dry and wet conditions is developed to help decision makers mitigate water scarcity. The model is capable of quantitatively solving multiple problems including crop yield increase, blue water saving, and water supply cost reduction to obtain a balanced water allocation scheme using a multi-objective non-linear programming technique. Moreover, it can deal with uncertainty as well as hesitation based on the introduction of intuitionistic fuzzy numbers. Consideration of the combination of dry and wet conditions for water availability and precipitation makes it possible to gain insights into the various irrigation water allocations, and joint probabilities based on copula functions provide decision makers an average standard for irrigation. A case study on optimally allocating both surface water and groundwater to different growth periods of rice in different subareas in Heping irrigation area, Qing'an County, northeast China shows the potential and applicability of the developed model. Results show that the crop yield increase target especially in tillering and elongation stages is a prevailing concern when more water is available, and trading schemes can mitigate water supply cost and save water with an increased grain output. Results also reveal that the water allocation schemes are sensitive to the variation of water availability and precipitation with uncertain characteristics. The IFMONLP model is applicable for most irrigation areas with limited water supplies to determine irrigation water strategies under a fuzzy environment.

  3. DUAL STATE-PARAMETER UPDATING SCHEME ON A CONCEPTUAL HYDROLOGIC MODEL USING SEQUENTIAL MONTE CARLO FILTERS

    NASA Astrophysics Data System (ADS)

    Noh, Seong Jin; Tachikawa, Yasuto; Shiiba, Michiharu; Kim, Sunmin

    Data assimilation techniques have been widely applied to improve the predictability of hydrologic modeling. Among various data assimilation techniques, sequential Monte Carlo (SMC) filters, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. This paper proposes a dual state-parameter updating scheme (DUS) based on SMC methods to estimate both state and parameter variables of a hydrologic model. We introduce a kernel smoothing method for the robust estimation of uncertain model parameters in the DUS. The applicability of the dual updating scheme is illustrated using the implementation of the storage function model on a middle-sized Japanese catchment. We also compare performance results of DUS combined with various SMC methods, such as SIR, ASIR and RPF.
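
    A minimal sketch of one SIR (sampling-importance-resampling) particle filter step of the kind underlying such schemes; all function and parameter names are illustrative:

        import numpy as np

        def sir_step(particles, weights, z, f, h, q_std, r_std, rng):
            """One SIR step for x_t = f(x_{t-1}) + q,  z_t = h(x_t) + r."""
            # propagate through the (possibly non-linear) state model
            particles = f(particles) + q_std * rng.standard_normal(particles.shape)
            # re-weight by the Gaussian likelihood of the new observation
            weights = weights * np.exp(-0.5 * ((z - h(particles)) / r_std) ** 2)
            weights = weights / weights.sum()
            # resample when the effective sample size degenerates
            if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
                idx = rng.choice(len(weights), size=len(weights), p=weights)
                particles = particles[idx]
                weights = np.full(len(weights), 1.0 / len(weights))
            return particles, weights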

  4. Experimental Study on Rebar Corrosion Using the Galvanic Sensor Combined with the Electronic Resistance Technique

    PubMed Central

    Xu, Yunze; Li, Kaiqiang; Liu, Liang; Yang, Lujia; Wang, Xiaona; Huang, Yi

    2016-01-01

    In this paper, a new kind of carbon steel (CS) and stainless steel (SS) galvanic sensor system was developed for the study of rebar corrosion in different pore solution conditions. Through the special design of the CS and SS electronic coupons, the electronic resistance (ER) method and zero resistance ammeter (ZRA) technique were used simultaneously for the measurement of both the galvanic current and the corrosion depth. The corrosion processes in different solution conditions were also studied by linear polarization resistance (LPR) and the measurements of polarization curves. The test result shows that the galvanic current noise can provide detailed information of the corrosion processes. When localized corrosion occurs, the corrosion rate measured by the ER method is lower than the real corrosion rate. However, the value measured by the LPR method is higher than the real corrosion rate. The galvanic current and the corrosion current measured by the LPR method show a linear correlation in chloride-containing saturated Ca(OH)2 solution. The relationship between the corrosion current differences measured by the CS electronic coupons and the galvanic current between the CS and SS electronic coupons can also be used to evaluate the localized corrosion in reinforced concrete. PMID:27618054

  5. Experimental Study on Rebar Corrosion Using the Galvanic Sensor Combined with the Electronic Resistance Technique.

    PubMed

    Xu, Yunze; Li, Kaiqiang; Liu, Liang; Yang, Lujia; Wang, Xiaona; Huang, Yi

    2016-09-08

    In this paper, a new kind of carbon steel (CS) and stainless steel (SS) galvanic sensor system was developed for the study of rebar corrosion in different pore solution conditions. Through the special design of the CS and SS electronic coupons, the electronic resistance (ER) method and zero resistance ammeter (ZRA) technique were used simultaneously for the measurement of both the galvanic current and the corrosion depth. The corrosion processes in different solution conditions were also studied by linear polarization resistance (LPR) and the measurements of polarization curves. The test result shows that the galvanic current noise can provide detailed information of the corrosion processes. When localized corrosion occurs, the corrosion rate measured by the ER method is lower than the real corrosion rate. However, the value measured by the LPR method is higher than the real corrosion rate. The galvanic current and the corrosion current measured by the LPR method show a linear correlation in chloride-containing saturated Ca(OH)₂ solution. The relationship between the corrosion current differences measured by the CS electronic coupons and the galvanic current between the CS and SS electronic coupons can also be used to evaluate the localized corrosion in reinforced concrete.

  6. Heat and mass transfer in MHD free convection from a moving permeable vertical surface by a perturbation technique

    NASA Astrophysics Data System (ADS)

    Abdelkhalek, M. M.

    2009-05-01

    Numerical results are presented for heat and mass transfer effects on hydromagnetic flow over a moving permeable vertical surface. An analysis is performed to study the momentum, heat and mass transfer characteristics of MHD natural convection flow over a moving permeable surface. The surface is maintained at linear temperature and concentration variations. The non-linear coupled boundary layer equations were transformed and the resulting ordinary differential equations were solved by a perturbation technique [Aziz A, Na TY. Perturbation methods in heat transfer. Berlin: Springer-Verlag; 1984. p. 1-184; Kennet Cramer R, Shih-I Pai. Magneto fluid dynamics for engineers and applied physicists 1973;166-7]. The solution is found to be dependent on several governing parameters, including the magnetic field strength parameter, Prandtl number, Schmidt number, buoyancy ratio and suction/blowing parameter. A parametric study of all the governing parameters is carried out, and representative results are illustrated to reveal a typical tendency of the solutions. Numerical results for the dimensionless velocity profiles, the temperature profiles, the concentration profiles, the local friction coefficient and the local Nusselt number are presented for various combinations of parameters.

  7. The influence of and the identification of nonlinearity in flexible structures

    NASA Technical Reports Server (NTRS)

    Zavodney, Lawrence D.

    1988-01-01

    Several models were built at NASA Langley and used to demonstrate the following nonlinear behavior: internal resonance in a free response, principal parametric resonance and subcritical instability in a cantilever beam-lumped mass structure, combination resonance in a parametrically excited flexible beam, autoparametric interaction in a two-degree-of-freedom system, instability of the linear solution, saturation of the excited mode, subharmonic bifurcation, and chaotic responses. A video tape documenting these phenomena was made. An attempt to identify a simple structure consisting of two light-weight beams and two lumped masses using the Eigensystem Realization Algorithm showed the inherent difficulty of using a linear-based theory to identify a particular nonlinearity. Preliminary results show the technique requires novel interpretation, and hence may not be useful for structural modes that are coupled by a quadratic nonlinearity. A literature survey was also completed on recent work in parametrically excited nonlinear systems. In summary, nonlinear systems may possess unique behaviors that require nonlinear identification techniques based on an understanding of how nonlinearity affects the dynamic response of structures. In this way, the unique behaviors of nonlinear systems may be properly identified. Moreover, more accurate quantifiable estimates can be made once the qualitative model has been determined.

  8. Rapid Analysis of Carbohydrates in Bioprocess Samples: An Evaluation of the CarboPac SA10 for HPAE-PAD Analysis by Interlaboratory Comparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sevcik, R. S.; Hyman, D. A.; Basumallich, L.

    2013-01-01

    A technique for carbohydrate analysis of bioprocess samples has been developed, providing reduced analysis time compared to current practice in the biofuels R&D community. The Thermofisher CarboPac SA10 anion-exchange column enables isocratic separation of monosaccharides, sucrose and cellobiose in approximately 7 minutes. Additionally, use of a low-volume (0.2 mL) injection valve in combination with a high-volume detection cell minimizes the extent of sample dilution required to bring sugar concentrations into the linear range of the pulsed amperometric detector (PAD). Three laboratories, representing academia, industry, and government, participated in an interlaboratory study which analyzed twenty-one opportunistic samples representing biomass pretreatment, enzymatic saccharification, and fermentation samples. The technique's robustness, linearity, and interlaboratory reproducibility were evaluated and showed excellent-to-acceptable characteristics. Additionally, quantitation by the CarboPac SA10/PAD was compared with the current practice method utilizing a HPX-87P/RID. While these two methods showed good agreement, a statistical comparison found significant quantitation differences between them, highlighting the difference between selective and universal detection modes.

  9. Complex Archaeological Prospection Using Combination of Non-destructive Techniques

    NASA Astrophysics Data System (ADS)

    Faltýnová, M.; Pavelka, K.; Nový, P.; Šedina, J.

    2015-08-01

    This article describes the use of a combination of non-destructive techniques for the complex documentation of a fabled historical site called Devil's Furrow, an unusual linear formation lying in the landscape of central Bohemia. In spite of many efforts towards interpretation of the formation, its original form and purpose have not yet been explained in a satisfactory manner. The study focuses on the northern part of the furrow, which appears to be a dissimilar element within the scope of the whole Devil's Furrow. This article presents a detailed description of relics of the formation based on historical map searches and modern investigation methods including airborne laser scanning, aerial photogrammetry (based on airplane and RPAS) and ground-penetrating radar. Airborne laser scanning data and aerial orthoimages acquired by the Czech Office for Surveying, Mapping and Cadastre were used; other measurements were conducted by our laboratory. Data acquired by the various methods provide sufficient information to determine the probable original shape of the formation and explicitly prove the anthropogenic origin of the northern part of the formation (around the village of Lipany).

  10. Dust remobilization in fusion plasmas under steady state conditions

    NASA Astrophysics Data System (ADS)

    Tolias, P.; Ratynskaia, S.; De Angeli, M.; De Temmerman, G.; Ripamonti, D.; Riva, G.; Bykov, I.; Shalpegin, A.; Vignitchouk, L.; Brochard, F.; Bystrov, K.; Bardin, S.; Litnovsky, A.

    2016-02-01

    The first combined experimental and theoretical studies of dust remobilization by plasma forces are reported. The main theoretical aspects of remobilization in fusion devices under steady state conditions are analyzed. In particular, the dominant role of adhesive forces is highlighted and generic remobilization conditions—direct lift-up, sliding, rolling—are formulated. A novel experimental technique is proposed, based on controlled adhesion of dust grains on tungsten samples combined with detailed mapping of the dust deposition profile prior to and after plasma exposure. Proof-of-principle experiments in the TEXTOR tokamak and the EXTRAP-T2R reversed-field pinch are presented. The versatile environment of the linear device Pilot-PSI allowed for experiments with different magnetic field topologies and varying plasma conditions that were complemented with camera observations.

  11. Lie integrable cases of the simplified multistrain/two-stream model for tuberculosis and dengue fever

    NASA Astrophysics Data System (ADS)

    Nucci, M. C.; Leach, P. G. L.

    2007-09-01

    We apply the techniques of Lie's symmetry analysis to a caricature of the simplified multistrain model of Castillo-Chavez and Feng [C. Castillo-Chavez, Z. Feng, To treat or not to treat: The case of tuberculosis, J. Math. Biol. 35 (1997) 629-656] for the transmission of tuberculosis and the coupled two-stream vector-based model of Feng and Velasco-Hernandez [Z. Feng, J.X. Velasco-Hernandez, Competitive exclusion in a vector-host model for the dengue fever, J. Math. Biol. 35 (1997) 523-544] to identify the combinations of parameters which lead to the existence of nontrivial symmetries. In particular we identify those combinations which lead to the possibility of the linearization of the system and provide the corresponding solutions. Many instances of additional symmetry are analyzed.

  12. Combined mine tremors source location and error evaluation in the Lubin Copper Mine (Poland)

    NASA Astrophysics Data System (ADS)

    Leśniak, Andrzej; Pszczoła, Grzegorz

    2008-08-01

    A modified method of mine tremor location used in the Lubin Copper Mine is presented in the paper. In mines where intensive exploitation is carried out, a high-accuracy source location technique is usually required. The flatness of the geophone array, the complex geological structure of the rock mass, and intense exploitation make the location results in such mines ambiguous. In the present paper an effective method of source location and location error evaluation is presented, combining data from two different arrays of geophones. The first consists of uniaxial geophones spaced over the whole mine area; the second is installed in one of the mining panels and consists of triaxial geophones. The use of the data obtained from the triaxial geophones increases the precision of the hypocenter's vertical coordinate. The presented two-step location procedure combines standard location methods: P-wave directions and P-wave arrival times. The efficiency of the created algorithm was tested using computer simulations. The designed algorithm is fully non-linear and was tested on a multilayered rock mass model of the Lubin Copper Mine, showing better computational efficiency than the traditional P-wave arrival-time location algorithm. In this paper we present the complete procedure that effectively solves the non-linear location problems, i.e. mine tremor location and evaluation of the error propagation.

  13. TIME CALIBRATED OSCILLOSCOPE SWEEP

    DOEpatents

    Owren, H.M.; Johnson, B.M.; Smith, V.L.

    1958-04-22

    A time calibrator for an electric signal displayed on an oscilloscope is described. In contrast to the conventional technique of using time-calibrated divisions on the face of the oscilloscope, this invention provides means for directly superimposing equally time-spaced markers upon a signal displayed on an oscilloscope. More explicitly, the present invention includes generally a generator for developing a linear saw-tooth voltage and a circuit for combining a high-frequency sinusoidal voltage of suitable amplitude and frequency with the saw-tooth voltage to produce a resultant sweep deflection voltage having a wave shape which is substantially linear with respect to time between equally time-spaced incremental plateau regions occurring once each cycle of the sinusoidal voltage. The foregoing sweep voltage, when applied to the horizontal deflection plates in combination with a signal to be observed applied to the vertical deflection plates of a cathode ray oscilloscope, produces an image on the viewing screen which is essentially a display of the signal to be observed with respect to time. Intensified spots, or certain other conspicuous indications corresponding to the equally time-spaced plateau regions of said sweep voltage, appear superimposed upon the displayed signal, and these indications are therefore suitable for direct time calibration purposes.
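
    A short numerical sketch of the composite sweep voltage, with illustrative period and marker frequency; the sinusoid amplitude is chosen so that its peak negative slope just cancels the ramp, producing one zero-slope plateau per cycle:

        import numpy as np

        T, f_mark = 1e-3, 50e3          # sweep period and marker frequency (illustrative)
        t = np.linspace(0.0, T, 10000)
        ramp_rate = 1.0 / T             # sawtooth slope (normalized volts per second)
        # amplitude such that the sinusoid's peak slope equals the ramp slope
        A = ramp_rate / (2 * np.pi * f_mark)
        sweep = ramp_rate * t + A * np.sin(2 * np.pi * f_mark * t)
        # d(sweep)/dt = ramp_rate * (1 + cos(2*pi*f_mark*t)) >= 0, with a
        # zero-slope plateau once per sinusoid cycle -> equally spaced markers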

  14. A non-linear data mining parameter selection algorithm for continuous variables

    PubMed Central

    Razavi, Marianne; Brady, Sean

    2017-01-01

    In this article, we propose a new data mining algorithm by which one can both capture the non-linearity in data and also find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that would capture complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method that we present here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. The algorithm introduces interpretable parameters by transforming the original inputs, and also provides a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least squares regression framework. This new automatic variable transformation and model selection method could offer an optimal and stable model that minimizes the mean square error and variability, while combining all-possible-subsets selection methodology with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829

  15. Super-resolution fluorescence microscopy by stepwise optical saturation

    PubMed Central

    Zhang, Yide; Nallathamby, Prakash D.; Vigil, Genevieve D.; Khan, Aamir A.; Mason, Devon E.; Boerckel, Joel D.; Roeder, Ryan K.; Howard, Scott S.

    2018-01-01

    Super-resolution fluorescence microscopy is an important tool in biomedical research for its ability to discern features smaller than the diffraction limit. However, due to its difficult implementation and high cost, super-resolution microscopy is not feasible in many applications. In this paper, we propose and demonstrate a saturation-based super-resolution fluorescence microscopy technique that can be easily implemented and requires neither additional hardware nor complex post-processing. The method is based on the principle of stepwise optical saturation (SOS), where M steps of raw fluorescence images are linearly combined to generate an image with a √M-fold increase in resolution compared with conventional diffraction-limited images. For example, linearly combining (scaling and subtracting) two images obtained at regular powers extends the resolution by a factor of 1.4 beyond the diffraction limit. The resolution improvement in SOS microscopy is theoretically infinite but practically is limited by the signal-to-noise ratio. We perform simulations and experimentally demonstrate super-resolution microscopy with both one-photon (confocal) and multiphoton excitation fluorescence. We show that with the multiphoton modality, SOS microscopy can provide super-resolution imaging deep in scattering samples. PMID:29675306
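
    The two-step combination can be sketched directly, assuming the second image is acquired at twice the excitation power; the coefficients below follow from cancelling the linear term of a fluorescence-versus-power expansion, and are an illustration rather than the authors' exact processing chain:

        import numpy as np

        def sos_two_step(img_p, img_2p):
            """Two-step SOS sketch: images acquired at powers P and 2P (assumed).
            With F(P) ~ c1*P + c2*P**2, the combination 2*F(P) - F(2P) cancels
            the linear term, leaving a signal ~ P**2 whose effective PSF is the
            square of the original: sqrt(2) narrower for a Gaussian, i.e. the
            ~1.4x resolution gain quoted above."""
            return 2.0 * np.asarray(img_p) - np.asarray(img_2p)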

  16. MEG and fMRI Fusion for Non-Linear Estimation of Neural and BOLD Signal Changes

    PubMed Central

    Plis, Sergey M.; Calhoun, Vince D.; Weisend, Michael P.; Eichele, Tom; Lane, Terran

    2010-01-01

    The combined analysis of magnetoencephalography (MEG)/electroencephalography and functional magnetic resonance imaging (fMRI) measurements can lead to improvement in the description of the dynamical and spatial properties of brain activity. In this paper we empirically demonstrate this improvement using simulated and recorded task related MEG and fMRI activity. Neural activity estimates were derived using a dynamic Bayesian network with continuous real valued parameters by means of a sequential Monte Carlo technique. In synthetic data, we show that MEG and fMRI fusion improves estimation of the indirectly observed neural activity and smooths tracking of the blood oxygenation level dependent (BOLD) response. In recordings of task related neural activity the combination of MEG and fMRI produces a result with greater signal-to-noise ratio, which confirms the expectation arising from the nature of the experiment. The highly non-linear model of the BOLD response poses a difficult inference problem for neural activity estimation; computational requirements are also high due to the time and space complexity. We show that joint analysis of the data improves the system's behavior by stabilizing the differential equations system and by requiring fewer computational resources. PMID:21120141

  17. An approach to localize the retinal blood vessels using bit planes and centerline detection.

    PubMed

    Fraz, M M; Barman, S A; Remagnino, P; Hoppe, A; Basit, A; Uyyanonvara, B; Rudnicka, A R; Owen, C G

    2012-11-01

    The change in morphology, diameter, branching pattern or tortuosity of retinal blood vessels is an important indicator of various clinical disorders of the eye and the body. This paper reports an automated method for segmentation of blood vessels in retinal images. A unique combination of techniques for vessel centerline detection and morphological bit plane slicing is presented to extract the blood vessel tree from retinal images. The centerlines are extracted by using the first order derivative of a Gaussian filter in four orientations, after which evaluation of derivative signs and average derivative values is performed. Mathematical morphology has emerged as a proficient technique for quantifying the blood vessels in the retina. The shape and orientation map of blood vessels is obtained by applying a multidirectional morphological top-hat operator with a linear structuring element, followed by bit plane slicing of the vessel-enhanced grayscale image. The centerlines are combined with these maps to obtain the segmented vessel tree. The methodology is tested on three publicly available databases: DRIVE, STARE and MESSIDOR. The results demonstrate that the performance of the proposed algorithm is comparable with state-of-the-art techniques in terms of accuracy, sensitivity and specificity. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
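
    A rough sketch of the multidirectional top-hat stage only, assuming vessels appear bright (e.g. on an inverted green channel); the element length and the bit planes kept are illustrative choices, not the paper's tuned parameters:

        import numpy as np
        from scipy import ndimage

        def line_footprints(length=15):
            # linear structuring elements at 0, 90, 45 and 135 degrees
            horiz = np.ones((1, length), bool)
            vert = np.ones((length, 1), bool)
            diag = np.eye(length, dtype=bool)
            return [horiz, vert, diag, diag[::-1]]

        def vessel_enhance(gray):
            """Keep the maximum top-hat response over the oriented elements,
            then slice off the most significant bit planes as a rough mask."""
            responses = [ndimage.white_tophat(gray, footprint=fp)
                         for fp in line_footprints()]
            enhanced = np.maximum.reduce(responses)
            img8 = np.clip(enhanced, 0, 255).astype(np.uint8)
            return (img8 >> 6) > 0   # top two bit planes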

  18. In vivo measurements of cutaneous melanin across spatial scales: using multiphoton microscopy and spatial frequency domain spectroscopy.

    PubMed

    Saager, Rolf B; Balu, Mihaela; Crosignani, Viera; Sharif, Ata; Durkin, Anthony J; Kelly, Kristen M; Tromberg, Bruce J

    2015-06-01

    The combined use of nonlinear optical microscopy and broadband reflectance techniques to assess melanin concentration and distribution thickness in vivo over the full range of Fitzpatrick skin types is presented. Twelve patients were measured using multiphoton microscopy (MPM) and spatial frequency domain spectroscopy (SFDS) on both the dorsal forearm and the volar arm, which are generally sun-exposed and non-sun-exposed areas, respectively. Both MPM and SFDS measured melanin volume fractions ranging from the lowest values (skin type I, non-sun-exposed) up to 20% (skin type VI, sun-exposed). MPM measured epidermal (anatomical) thickness values of ~30-65 μm, while SFDS measured melanin distribution thickness based on diffuse optical path length. There was a strong correlation between melanin concentration and melanin distribution (epidermal) thickness measurements obtained using the two techniques. While SFDS cannot match the spatial resolution of MPM, this study demonstrates that melanin content as quantified using SFDS is linearly correlated with epidermal melanin as measured using MPM (R² = 0.8895). SFDS melanin distribution thickness is correlated with MPM values (R² = 0.8131). These techniques can be used individually and/or in combination to advance our understanding and guide therapies for pigmentation-related conditions as well as light-based treatments across a full range of skin types.

  19. Frequency domain system identification of helicopter rotor dynamics incorporating models with time periodic coefficients

    NASA Astrophysics Data System (ADS)

    Hwang, Sunghwan

    1997-08-01

    One of the most prominent features of helicopter rotor dynamics in forward flight is the periodic coefficients in the equations of motion introduced by the rotor rotation. The frequency response of such a linear time periodic system exhibits sideband behavior, which is not the case for linear time invariant systems. Therefore, a frequency domain identification methodology for linear systems with time periodic coefficients was developed, because linear time invariant theory cannot account for sideband behavior. The modulated complex Fourier series was introduced to eliminate the smearing effect of Fourier series expansions of exponentially modulated periodic signals. A system identification theory was then developed using the modulated complex Fourier series expansion. Correlation and spectral density functions were derived using the modulated complex Fourier series expansion for linear time periodic systems. Expressions for the identified harmonic transfer function were then formulated using the spectral density functions, both with and without additive noise processes at the input and/or output. A procedure was developed to identify parameters of a model that match the frequency response characteristics of measured and estimated harmonic transfer functions by minimizing an objective function defined in terms of the trace of the squared frequency response error matrix. Feasibility was demonstrated by identifying the harmonic transfer function and parameters for helicopter rigid blade flapping dynamics in forward flight. This technique is envisioned to satisfy the needs of system identification in the rotating frame, especially in the context of individual blade control. The technique was applied to the coupled flap-lag-inflow dynamics of a rigid blade excited by an active pitch link, and the linear time periodic results were compared with linear time invariant results. The effects of noise processes and of the initial parameter guess on the identification procedure were also investigated. To study the effect of elastic modes, a rigid blade with a trailing edge flap excited by a smart actuator was selected, and the system parameters were successfully identified, though at some expense of computational storage and time. In conclusion, the linear time periodic technique substantially improved the accuracy of the identified parameters compared to the linear time invariant technique, and it was robust to noise and to the initial parameter guess. However, an elastic mode of frequency high relative to the system pumping frequency tends to increase the computer storage requirement and computing time.

  20. Brain plasticity and functionality explored by nonlinear optical microscopy

    NASA Astrophysics Data System (ADS)

    Sacconi, L.; Allegra, L.; Buffelli, M.; Cesare, P.; D'Angelo, E.; Gandolfi, D.; Grasselli, G.; Lotti, J.; Mapelli, J.; Strata, P.; Pavone, F. S.

    2010-02-01

    In combination with fluorescent protein (XFP) expression techniques, two-photon microscopy has become an indispensable tool to image cortical plasticity in living mice. In parallel to its application in imaging, multi-photon absorption has also been used as a tool for the dissection of single neurites with submicrometric precision, without causing any visible collateral damage to the surrounding neuronal structures. In this work, multi-photon nanosurgery is applied to dissect single climbing fibers expressing GFP in the cerebellar cortex. The morphological consequences are then characterized with time-lapse 3-dimensional two-photon imaging over a period of minutes to days after the procedure. Preliminary investigations show that the laser-induced fiber dissection elicits a regenerative process in the fiber itself over a period of days. These results demonstrate the potential of this innovative technique for investigating regenerative processes in the adult brain. In parallel with imaging and manipulation techniques, non-linear microscopy offers the opportunity to optically record electrical activity in intact neuronal networks. In this work, we combined the advantages of second-harmonic generation (SHG) with a random access (RA) excitation scheme to realize a new microscope (RASH) capable of optically recording fast membrane potential events occurring in a wide field of view. The RASH microscope, in combination with bulk loading of tissue with FM4-64 dye, was used to simultaneously record electrical activity from clusters of Purkinje cells in acute cerebellar slices. Complex spikes, both synchronous and asynchronous, were optically recorded simultaneously across a given population of neurons. Spontaneous electrical activity was also monitored simultaneously in pairs of neurons, where action potentials were recorded without averaging across trials. These results show the strength of this technique in describing the temporal dynamics of neuronal assemblies, opening promising perspectives in understanding the computations of neuronal networks.

  1. [A novel quantitative approach to study dynamic anaerobic process at micro scale].

    PubMed

    Zhang, Zhong-Liang; Wu, Jing; Jiang, Jian-Kai; Jiang, Jie; Li, Huai-Zhi

    2012-11-01

    Anaerobic digestion is attracting more and more interest because of advantages such as low cost and the recovery of clean energy. In order to overcome the drawbacks of existing methods for studying the dynamic anaerobic process, a novel microscale quantitative approach at the granule level was developed, combining a microdevice with quantitative image analysis techniques. The experiment displayed the process and characteristics of gas production at static state for the first time, and the results indicated that the method had satisfactory repeatability. The gas production process at static state could be divided into three stages: a rapid linear increasing stage, a decelerated increasing stage and a slow linear increasing stage. The rapid linear increasing stage was long and the biogas rate was high under a high initial organic loading rate. The results showed that it was feasible to carry out the anaerobic process in the microdevice; furthermore, this novel method was reliable and could clearly display the dynamic process of the anaerobic reaction at the micro scale. The results are helpful for understanding the anaerobic process.

  2. Multidimensional custom-made non-linear microscope: from ex-vivo to in-vivo imaging

    NASA Astrophysics Data System (ADS)

    Cicchi, R.; Sacconi, L.; Jasaitis, A.; O'Connor, R. P.; Massi, D.; Sestini, S.; de Giorgi, V.; Lotti, T.; Pavone, F. S.

    2008-09-01

    We have built a custom-made multidimensional non-linear microscope equipped with a combination of several non-linear laser imaging techniques involving fluorescence lifetime, multispectral two-photon and second-harmonic generation imaging. The optical system was mounted on a vertical honeycomb breadboard in an upright configuration, using two galvo-mirrors relayed by two spherical mirrors as scanners. A double detection system working in non-descanning mode has allowed both photon counting and a proportional regime. This experimental setup, offering high spatial (micrometric) and temporal (sub-nanosecond) resolution, has been used to image both ex-vivo and in-vivo biological samples, including cells, tissues, and living animals. Multidimensional imaging was used to spectroscopically characterize human skin lesions, such as malignant melanoma and naevi. Moreover, two-color detection of two-photon excited fluorescence was applied to in-vivo imaging of the intact neocortex of living mice, as well as to induce neuronal microlesions by femtosecond laser burning. The presented applications demonstrate the capability of the instrument to be used in a wide range of biological and biomedical studies.

  3. On the Detectability of Acoustic Waves Induced Following Irradiation by a Radiotherapy Linear Accelerator.

    PubMed

    Hickling, Susannah; Leger, Pierre; El Naqa, Issam

    2016-02-11

    Irradiating an object with a megavoltage photon beam generated by a clinical radiotherapy linear accelerator (linac) induces acoustic waves through the photoacoustic effect. The detection and characterization of such acoustic waves have potential applications in radiation therapy dosimetry. The purpose of this work was to gain insight into the properties of such acoustic waves by simulating and experimentally detecting them in a well-defined system consisting of a metal block suspended in a water tank. A novel simulation workflow was developed by combining radiotherapy Monte Carlo and acoustic wave transport simulation techniques. Different set-up parameters such as photon beam energy, metal block depth, metal block width, and metal block material were varied, and the simulated and experimental acoustic waveforms showed the same relative amplitude trends and frequency variations for such setup changes. The simulation platform developed in this work can easily be extended to other irradiation situations, and will be an invaluable tool for developing a radiotherapy dosimetry system based on the detection of the acoustic waves induced following linear accelerator irradiation.

  4. Practical somewhat-secure quantum somewhat-homomorphic encryption with coherent states

    NASA Astrophysics Data System (ADS)

    Tan, Si-Hui; Ouyang, Yingkai; Rohde, Peter P.

    2018-04-01

    We present a scheme for implementing homomorphic encryption on coherent states encoded using phase-shift keys. The encryption operations require only rotations in phase space, which commute with computations in the code space performed via passive linear optics, and with generalized nonlinear phase operations that are polynomials of the photon-number operator in the code space. This encoding scheme can thus be applied to any computation with coherent-state inputs, and the computation proceeds via a combination of passive linear optics and generalized nonlinear phase operations. An example of such a computation is matrix multiplication, whereby a vector representing coherent-state amplitudes is multiplied by a matrix representing a linear optics network, yielding a new vector of coherent-state amplitudes. By finding an orthogonal partitioning of the support of our encoded states, we quantify the security of our scheme via the indistinguishability of the encrypted code words. While we focus on coherent-state encodings, we expect that this phase-key encoding technique could apply to any continuous-variable computation scheme where the phase-shift operator commutes with the computation.
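
    The key commutation property is easy to sketch numerically: a phase-space rotation (standing in for the encryption key) commutes with a unitary matrix standing in for a passive linear-optics network. This is an illustration of the principle, not the authors' protocol:

        import numpy as np

        rng = np.random.default_rng(7)
        alpha = rng.standard_normal(3) + 1j * rng.standard_normal(3)  # coherent amplitudes

        # random unitary as a stand-in for a passive linear-optics network
        q, _ = np.linalg.qr(rng.standard_normal((3, 3))
                            + 1j * rng.standard_normal((3, 3)))

        theta = rng.uniform(0, 2 * np.pi)      # secret phase-shift key
        enc = np.exp(1j * theta) * alpha       # encrypt: rotate in phase space

        # computing on the ciphertext equals encrypting the plaintext result
        print(np.allclose(q @ enc, np.exp(1j * theta) * (q @ alpha)))  # True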

  5. The paddle move commonly used in magic tricks as a means for analysing the perceptual limits of combined motion trajectories.

    PubMed

    Hergovich, Andreas; Gröbl, Kristian; Carbon, Claus-Christian

    2011-01-01

    Following Gustav Kuhn's inspiring technique of using magicians' acts as a source of insight into the cognitive sciences, we used the 'paddle move' for testing the psychophysics of combined movement trajectories. The paddle move is a standard technique in magic consisting of a combined rotating and tilting movement. Careful control of the mutual speed parameters of the two movements makes it possible to inhibit the perception of the rotation, letting the 'magic' effect emerge: a sudden change of the tilted object. By using 3-D animated computer graphics we analysed the interaction of different angular speeds and the object shape/size parameters in evoking this motion disappearance effect. An angular speed of 540 degrees s(-1) (1.5 rev. s(-1)) sufficed to inhibit the perception of the rotary movement, with the smallest object showing the strongest effect. 90.7% of the 172 participants were not able to perceive the rotary movement at an angular speed of 1125 degrees s(-1) (3.125 rev. s(-1)). Further analysis by multiple linear regression revealed major influences of object height and object area on the effectiveness of the magic trick, demonstrating the applicability of analysing key factors of magic tricks to reveal limits of the perceptual system.

  6. Aerosol Size Distributions During ACE-Asia: Retrievals From Optical Thickness and Comparisons With In-situ Measurements

    NASA Astrophysics Data System (ADS)

    Kuzmanoski, M.; Box, M.; Box, G. P.; Schmidt, B.; Russell, P. B.; Redemann, J.; Livingston, J. M.; Wang, J.; Flagan, R. C.; Seinfeld, J. H.

    2002-12-01

    As part of the ACE-Asia experiment, conducted off the coasts of China, Korea and Japan in spring 2001, measurements of aerosol physical, chemical and radiative characteristics were performed aboard the Twin Otter aircraft. Of particular importance for this paper were spectral measurements of aerosol optical thickness obtained at 13 discrete wavelengths, within the 354-1558 nm wavelength range, using the AATS-14 sunphotometer. Spectral aerosol optical thickness can be used to obtain information about particle size distribution. In this paper, we use sunphotometer measurements to retrieve the size distribution of aerosols during ACE-Asia. We focus on four cases in which layers influenced by different air masses were identified. The aerosol optical thickness of each layer was inverted using two different techniques: constrained linear inversion and multimodal. In the constrained linear inversion algorithm, no assumption about the mathematical form of the distribution to be retrieved is made. Conversely, the multimodal technique assumes that the aerosol size distribution is represented as a linear combination of a few lognormal modes with predefined values of mode radii and geometric standard deviations. The amplitudes of the modes are varied to obtain the best fit of the summed optical thicknesses of the individual modes to the sunphotometer measurements. In this paper we compare the results of these two retrieval methods. In addition, we present comparisons of retrieved size distributions with in situ measurements taken using an aerodynamic particle sizer and differential mobility analyzer system aboard the Twin Otter aircraft.
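
    A minimal sketch of the multimodal retrieval step, assuming a precomputed kernel of per-mode optical thicknesses (e.g. from Mie theory for the fixed mode radii and geometric standard deviations); non-negative amplitudes are found by least squares:

        import numpy as np
        from scipy.optimize import nnls

        def fit_modes(K, tau_measured):
            """K[i, j]: optical thickness at wavelength i contributed by
            lognormal mode j at unit amplitude (assumed precomputed).
            Returns non-negative mode amplitudes and the fit residual."""
            amplitudes, residual = nnls(K, tau_measured)
            return amplitudes, residual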

  7. A study of data analysis techniques for the multi-needle Langmuir probe

    NASA Astrophysics Data System (ADS)

    Hoang, H.; Røed, K.; Bekkeng, T. A.; Moen, J. I.; Spicher, A.; Clausen, L. B. N.; Miloch, W. J.; Trondsen, E.; Pedersen, A.

    2018-06-01

    In this paper we evaluate two data analysis techniques for the multi-needle Langmuir probe (m-NLP). The instrument uses several cylindrical Langmuir probes, which are positively biased with respect to the plasma potential in order to operate in the electron saturation region. Since the currents collected by these probes can be sampled at kilohertz rates, the instrument is capable of resolving the ionospheric plasma structure down to the meter scale. The two data analysis techniques, a linear fit and a non-linear least squares fit, are discussed in detail using data from the Investigation of Cusp Irregularities 2 sounding rocket. It is shown that each technique has pros and cons with respect to the m-NLP implementation. Even though the linear fitting technique compares favorably with measurements from incoherent scatter radar and in situ instruments, instrument performance can be further improved by using longer probes that can be cleaned during operation. The non-linear least squares fitting technique would be more reliable provided that a higher number of probes is deployed.
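
    A sketch of the linear-fit analysis under the usual orbital-motion-limited assumption that the squared electron-saturation current is linear in bias voltage; the geometry-dependent calibration constant is assumed known, and the names are illustrative:

        import numpy as np

        def ne_from_mnlp(currents, biases, C):
            """m-NLP linear fit (a sketch): in the OML electron-saturation
            regime, I**2 is linear in bias voltage V, and the slope d(I^2)/dV
            is proportional to n_e**2 independently of electron temperature.
            C is a probe-geometry calibration constant (assumed known)."""
            slope, _intercept = np.polyfit(biases, np.asarray(currents) ** 2, 1)
            return C * np.sqrt(slope)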

  8. MIBPB: a software package for electrostatic analysis.

    PubMed

    Chen, Duan; Chen, Zhan; Chen, Changjun; Geng, Weihua; Wei, Guo-Wei

    2011-03-01

    The Poisson-Boltzmann equation (PBE) is an established model for the electrostatic analysis of biomolecules. The development of advanced computational techniques for the solution of the PBE has been an important topic in the past two decades. This article presents a matched interface and boundary (MIB)-based PBE software package, the MIBPB solver, for electrostatic analysis. The MIBPB has a unique feature in that it is the first interface technique-based PBE solver that rigorously enforces the solution and flux continuity conditions at the dielectric interface between the biomolecule and the solvent. For protein molecular surfaces, which may possess troublesome geometrical singularities, the MIB scheme makes the MIBPB by far the only existing PBE solver that is able to deliver second-order convergence, that is, the accuracy increases four times when the mesh size is halved. The MIBPB method is also equipped with a Dirichlet-to-Neumann mapping technique that builds a Green's function approach to analytically resolve the singular charge distribution in biomolecules in order to obtain reliable solutions at meshes as coarse as 1 Å, whereas it usually takes other traditional PB solvers 0.25 Å to reach a similar level of reliability. This work further accelerates the rate of convergence of linear equation systems resulting from the MIBPB by using the Krylov subspace (KS) techniques. Condition numbers of the MIBPB matrices are significantly reduced by using appropriate KS solver and preconditioner combinations. Both linear and nonlinear PBE solvers in the MIBPB package are tested by protein-solvent solvation energy calculations and analysis of salt effects on protein-protein binding energies, respectively. Copyright © 2010 Wiley Periodicals, Inc.
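
    The Krylov-subspace acceleration can be illustrated generically with SciPy; the random sparse system below merely stands in for a discretized PBE matrix and is not the MIBPB code:

        import numpy as np
        from scipy.sparse import random as sprandom, identity
        from scipy.sparse.linalg import gmres, spilu, LinearOperator

        n = 2000   # illustrative sparse, diagonally dominant system
        A = (sprandom(n, n, density=1e-3, random_state=0)
             + 10 * identity(n)).tocsc()
        b = np.ones(n)

        ilu = spilu(A)                              # incomplete-LU preconditioner
        M = LinearOperator((n, n), matvec=ilu.solve)
        x, info = gmres(A, b, M=M)                  # preconditioned Krylov solve
        print(info, np.linalg.norm(A @ x - b))      # info == 0 -> converged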

  9. Compressed air injection technique to standardize block injection pressures.

    PubMed

    Tsui, Ban C H; Li, Lisa X Y; Pillay, Jennifer J

    2006-11-01

    Presently, no standardized technique exists to monitor injection pressures during peripheral nerve blocks. Our objective was to determine if a compressed air injection technique, using an in vitro model based on Boyle's law and typical regional anesthesia equipment, could consistently maintain injection pressures below a 1293 mmHg level associated with clinically significant nerve injury. Injection pressures for 20 and 30 mL syringes with various needle sizes (18G, 20G, 21G, 22G, and 24G) were measured in a closed system. A set volume of air was aspirated into a saline-filled syringe and then compressed and maintained at various percentages while pressure was measured. The needle was inserted into the injection port of a pressure sensor, which had attached extension tubing with an injection plug clamped "off". Using linear regression with all data points, the pressure value and 99% confidence interval (CI) at 50% air compression was estimated. The linearity of Boyle's law was demonstrated with a high correlation, r = 0.99, and a slope of 0.984 (99% CI: 0.967-1.001). The net pressure generated at 50% compression was estimated as 744.8 mmHg, with the 99% CI between 729.6 and 760.0 mmHg. The various syringe/needle combinations had similar results. By creating and maintaining syringe air compression at 50% or less, injection pressures will be substantially below the 1293 mmHg threshold considered to be an associated risk factor for clinically significant nerve injury. This technique may allow simple, real-time and objective monitoring during local anesthetic injections while inherently reducing injection speed.
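
    The Boyle's-law arithmetic behind the 50%-compression estimate, as a one-line sketch:

        def injection_gauge_pressure(compression, p_atm=760.0):
            """Boyle's law: compressing an air bubble from volume V to
            (1 - compression)*V at constant temperature multiplies absolute
            pressure by 1/(1 - compression); the gauge (net) pressure is the
            excess over atmospheric, in mmHg."""
            return p_atm * compression / (1.0 - compression)

        print(injection_gauge_pressure(0.5))   # 760 mmHg, near the reported 744.8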

  10. MIBPB: A software package for electrostatic analysis

    PubMed Central

    Chen, Duan; Chen, Zhan; Chen, Changjun; Geng, Weihua; Wei, Guo-Wei

    2010-01-01

    The Poisson-Boltzmann equation (PBE) is an established model for the electrostatic analysis of biomolecules. The development of advanced computational techniques for the solution of the PBE has been an important topic in the past two decades. This paper presents a matched interface and boundary (MIB) based PBE software package, the MIBPB solver, for electrostatic analysis. The MIBPB has a unique feature in that it is the first interface technique based PBE solver that rigorously enforces the solution and flux continuity conditions at the dielectric interface between the biomolecule and the solvent. For protein molecular surfaces which may possess troublesome geometrical singularities, the MIB scheme makes the MIBPB by far the only existing PBE solver that is able to deliver second order convergence, i.e., the accuracy increases four times when the mesh size is halved. The MIBPB method is also equipped with a Dirichlet-to-Neumann mapping (DNM) technique that builds a Green's function approach to analytically resolve the singular charge distribution in biomolecules in order to obtain reliable solutions at meshes as coarse as 1 Å, while it usually takes other traditional PB solvers 0.25 Å to reach a similar level of reliability. The present work further accelerates the rate of convergence of linear equation systems resulting from the MIBPB by utilizing the Krylov subspace (KS) techniques. Condition numbers of the MIBPB matrices are significantly reduced by using appropriate Krylov subspace solver and preconditioner combinations. Both linear and nonlinear PBE solvers in the MIBPB package are tested by protein-solvent solvation energy calculations and analysis of salt effects on protein-protein binding energies, respectively. PMID:20845420

  11. Comparative Performance Evaluation of Rainfall-runoff Models, Six of Black-box Type and One of Conceptual Type, From The Galway Flow Forecasting System (gffs) Package, Applied On Two Irish Catchments

    NASA Astrophysics Data System (ADS)

    Goswami, M.; O'Connor, K. M.; Shamseldin, A. Y.

    The "Galway Real-Time River Flow Forecasting System" (GFFS) is a software pack- age developed at the Department of Engineering Hydrology, of the National University of Ireland, Galway, Ireland. It is based on a selection of lumped black-box and con- ceptual rainfall-runoff models, all developed in Galway, consisting primarily of both the non-parametric (NP) and parametric (P) forms of two black-box-type rainfall- runoff models, namely, the Simple Linear Model (SLM-NP and SLM-P) and the seasonally-based Linear Perturbation Model (LPM-NP and LPM-P), together with the non-parametric wetness-index-based Linearly Varying Gain Factor Model (LVGFM), the black-box Artificial Neural Network (ANN) Model, and the conceptual Soil Mois- ture Accounting and Routing (SMAR) Model. Comprised of the above suite of mod- els, the system enables the user to calibrate each model individually, initially without updating, and it is capable also of producing combined (i.e. consensus) forecasts us- ing the Simple Average Method (SAM), the Weighted Average Method (WAM), or the Artificial Neural Network Method (NNM). The updating of each model output is achieved using one of four different techniques, namely, simple Auto-Regressive (AR) updating, Linear Transfer Function (LTF) updating, Artificial Neural Network updating (NNU), and updating by the Non-linear Auto-Regressive Exogenous-input method (NARXM). The models exhibit a considerable range of variation in degree of complexity of structure, with corresponding degrees of complication in objective func- tion evaluation. Operating in continuous river-flow simulation and updating modes, these models and techniques have been applied to two Irish catchments, namely, the Fergus and the Brosna. A number of performance evaluation criteria have been used to comparatively assess the model discharge forecast efficiency.

  12. Topology Counts: Force Distributions in Circular Spring Networks.

    PubMed

    Heidemann, Knut M; Sageman-Furnas, Andrew O; Sharma, Abhinav; Rehfeldt, Florian; Schmidt, Christoph F; Wardetzky, Max

    2018-02-09

    Filamentous polymer networks govern the mechanical properties of many biological materials. Force distributions within these networks are typically highly inhomogeneous, and, although the importance of force distributions for structural properties is well recognized, they are far from being understood quantitatively. Using a combination of probabilistic and graph-theoretical techniques, we derive force distributions in a model system consisting of ensembles of random linear spring networks on a circle. We show that characteristic quantities, such as the mean and variance of the force supported by individual springs, can be derived explicitly in terms of only two parameters: (i) average connectivity and (ii) number of nodes. Our analysis shows that a classical mean-field approach fails to capture these characteristic quantities correctly. In contrast, we demonstrate that network topology is a crucial determinant of force distributions in an elastic spring network. Our results for 1D linear spring networks readily generalize to arbitrary dimensions.

  13. Analysis and application of ERTS-1 data for regional geological mapping

    NASA Technical Reports Server (NTRS)

    Gold, D. P.; Parizek, R. R.; Alexander, S. A.

    1973-01-01

    Combined visual and digital techniques of analysing ERTS-1 data for geologic information have been tried on selected areas in Pennsylvania. The major physiographic and structural provinces show up well. Supervised mapping, following the imaged expression of known geologic features on ERTS band 5 enlargements (1:250,000) of parts of eastern Pennsylvania, delimited the Diabase Sills and the Precambrian rocks of the Reading Prong with remarkable accuracy. From unsupervised mapping, transgressive linear features are apparent in unexpected density and exhibit strong control over river valley and stream channel directions. They are unaffected by bedrock type, age, or primary structural boundaries, which suggests that they are either rejuvenated basement joint directions on different scales or a recently impressed structure possibly associated with a drifting North American plate. With ground mapping and underflight data, six scales of linear features have been recognized.

  14. Landmark matching based retinal image alignment by enforcing sparsity in correspondence matrix.

    PubMed

    Zheng, Yuanjie; Daniel, Ebenezer; Hunter, Allan A; Xiao, Rui; Gao, Jianbin; Li, Hongsheng; Maguire, Maureen G; Brainard, David H; Gee, James C

    2014-08-01

    Retinal image alignment is fundamental to many applications in the diagnosis of eye diseases. In this paper, we address the problem of landmark matching based retinal image alignment. We propose a novel landmark matching formulation that enforces sparsity in the correspondence matrix and offer solutions based on linear programming. The proposed formulation not only enables a joint estimation of the landmark correspondences and a predefined transformation model but also combines the benefits of the softassign strategy (Chui and Rangarajan, 2003) and the combinatorial optimization of linear programming. We also introduce a set of reinforced self-similarity descriptors which can better characterize local photometric and geometric properties of the retinal image. Theoretical analysis and experimental results with both fundus color images and angiogram images show the superior performance of our algorithms relative to several state-of-the-art techniques. Copyright © 2013 Elsevier B.V. All rights reserved.
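    The flavor of a linear-programming correspondence formulation can be conveyed with a toy example. The sketch below solves an LP relaxation of landmark matching with scipy.optimize.linprog; the cost function, the reward parameter lam, and the constraint structure are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy landmark sets from two retinal images (2-D coordinates).
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
Q = np.array([[0.1, 0.1], [1.1, -0.1], [0.0, 0.9]])

m, n = len(P), len(Q)
# Matching cost = pairwise distance; the LP finds a sparse, doubly
# substochastic correspondence matrix X minimizing the total cost.
C = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2).ravel()

A_rows = np.kron(np.eye(m), np.ones(n))   # each P-landmark used at most once
A_cols = np.kron(np.ones(m), np.eye(n))   # each Q-landmark used at most once

# Reward matched mass so the LP prefers matching everything it can:
# minimize (C - lam) . x, with lam larger than any single cost.
lam = 10.0
res = linprog(C - lam, A_ub=np.vstack([A_rows, A_cols]),
              b_ub=np.ones(m + n), bounds=(0, 1))
X = res.x.reshape(m, n)
print(np.round(X, 2))  # LP vertices are near-binary, hence a sparse matrix
```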

  15. Evolution from MEMS-based Linear Drives to Bio-based Nano Drives

    NASA Astrophysics Data System (ADS)

    Fujita, Hiroyuki

    The successful extension of semiconductor technology to fabricate mechanical parts of sizes from 10 to 100 micrometers opened a wide range of possibilities for micromechanical devices and systems. The fabrication technique is called micromachining. Micromachining processes are based on silicon integrated circuit (IC) technology and are used to build three-dimensional structures and movable parts by a combination of lithography, etching, film deposition, and wafer bonding. Microactuators are the key devices allowing MEMS to perform physical functions. Some of them are driven by electric, magnetic, and fluidic forces; others utilize actuator materials including piezoelectrics (PZT, ZnO, quartz), magnetostrictive materials (TbFe), shape memory alloy (TiNi), and biomolecular motors. This paper deals with the development of MEMS-based microactuators, especially linear drives, following my own research experience. They include an electrostatic actuator, a superconductively levitated actuator, arrayed actuators, and a bio-motor-driven actuator.

  16. A rapid method for optimization of the rocket propulsion system for single-stage-to-orbit vehicles

    NASA Technical Reports Server (NTRS)

    Eldred, C. H.; Gordon, S. V.

    1976-01-01

    A rapid analytical method for the optimization of rocket propulsion systems is presented for a vertical take-off, horizontal landing, single-stage-to-orbit launch vehicle. This method utilizes trade-offs between propulsion characteristics affecting flight performance and engine system mass. The performance results from a point-mass trajectory optimization program are combined with a linearized sizing program to establish vehicle sizing trends caused by propulsion system variations. The linearized sizing technique was developed for the class of vehicle systems studied herein. The specific examples treated are the optimization of nozzle expansion ratio and lift-off thrust-to-weight ratio to achieve either minimum gross mass or minimum dry mass. Assumed propulsion system characteristics are high chamber pressure, liquid oxygen and liquid hydrogen propellants, conventional bell nozzles, and the same fixed nozzle expansion ratio for all engines on a vehicle.

  17. Exponential convergence through linear finite element discretization of stratified subdomains

    NASA Astrophysics Data System (ADS)

    Guddati, Murthy N.; Druskin, Vladimir; Vaziri Astaneh, Ali

    2016-10-01

    Motivated by problems where the response is needed at select localized regions in a large computational domain, we devise a novel finite element discretization that results in exponential convergence at pre-selected points. The key features of the discretization are (a) use of midpoint integration to evaluate the contribution matrices, and (b) an unconventional mapping of the mesh into complex space. Named complex-length finite element method (CFEM), the technique is linked to Padé approximants that provide exponential convergence of the Dirichlet-to-Neumann maps and thus the solution at specified points in the domain. Exponential convergence facilitates drastic reduction in the number of elements. This, combined with sparse computation associated with linear finite elements, results in significant reduction in the computational cost. The paper presents the basic ideas of the method as well as illustration of its effectiveness for a variety of problems involving Laplace, Helmholtz and elastodynamics equations.

  18. Controllability of Free-piston Stirling Engine/linear Alternator Driving a Dynamic Load

    NASA Technical Reports Server (NTRS)

    Kankam, M. David; Rauch, Jeffrey S.

    1994-01-01

    This paper presents the dynamic behavior of a free-piston Stirling engine/linear alternator (FPSE/LA) driving a single-phase fractional-horsepower induction motor. The controllability and dynamic stability of the system are discussed by means of the sensitivity effects of variations in system parameters, engine controller, operating conditions, and mechanical loading on the induction motor. The approach used expands on a combined mechanical and thermodynamic formulation employed in a previous paper. The application of state-space techniques and frequency-domain analysis enhances understanding of the dynamic interactions. Engine-alternator parametric sensitivity studies, similar to those of the previous paper, are summarized. Detailed discussions are provided for parametric variations which relate to the engine controller and system operating conditions. The results suggest that the controllability of a FPSE-based power system is enhanced by proper operating conditions and built-in controls.

  19. Principal component analysis and linear discriminant analysis of multi-spectral autofluorescence imaging data for differentiating basal cell carcinoma and healthy skin

    NASA Astrophysics Data System (ADS)

    Chernomyrdin, Nikita V.; Zaytsev, Kirill I.; Lesnichaya, Anastasiya D.; Kudrin, Konstantin G.; Cherkasova, Olga P.; Kurlov, Vladimir N.; Shikunova, Irina A.; Perchik, Alexei V.; Yurchenko, Stanislav O.; Reshetov, Igor V.

    2016-09-01

    In the present paper, the ability to differentiate basal cell carcinoma (BCC) and healthy skin by combining multi-spectral autofluorescence imaging, principal component analysis (PCA), and linear discriminant analysis (LDA) has been demonstrated. For this purpose, an experimental setup, which includes excitation and detection branches, has been assembled. The excitation branch utilizes a mercury arc lamp equipped with a 365-nm narrow-linewidth excitation filter, a beam homogenizer, and a mechanical chopper. The detection branch employs a set of bandpass filters with central wavelengths of spectral transparency of λ = 400, 450, 500, and 550 nm, and a digital camera. The setup has been used to study three samples of freshly excised BCC. PCA and LDA have been implemented to analyze the multi-spectral fluorescence imaging data. The observed results of this pilot study highlight the advantages of the proposed imaging technique for skin cancer diagnosis.
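    A PCA-then-LDA pipeline of the kind used in this study is straightforward to prototype with scikit-learn. In the sketch below, synthetic feature vectors stand in for the four-channel autofluorescence intensities; the data and the choice of three principal components are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in: 60 pixels x 4 spectral channels (400/450/500/550 nm),
# with "BCC" pixels (label 1) shifted in two channels.
X_healthy = rng.normal(0.0, 1.0, size=(30, 4))
X_bcc = rng.normal(0.0, 1.0, size=(30, 4)) + np.array([0.0, 1.5, 1.0, 0.0])
X = np.vstack([X_healthy, X_bcc])
y = np.array([0] * 30 + [1] * 30)

# PCA compresses the spectral features; LDA finds the discriminant axis.
clf = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis())
print(cross_val_score(clf, X, y, cv=5).mean())
```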

  20. Integral Sliding Mode Fault-Tolerant Control for Uncertain Linear Systems Over Networks With Signals Quantization.

    PubMed

    Hao, Li-Ying; Park, Ju H; Ye, Dan

    2017-09-01

    In this paper, a new robust fault-tolerant compensation control method for uncertain linear systems over networks is proposed, where only quantized signals are assumed to be available. This approach is based on the integral sliding mode (ISM) method, in which two kinds of integral sliding surfaces are constructed. One is the continuous-state-dependent surface, used for sliding-mode stability analysis; the other is the quantization-state-dependent surface, used for ISM controller design. A scheme that combines the adaptive ISM controller with a quantization parameter adjustment strategy is then proposed. By utilizing the H∞ control analytical technique, it is shown that once the system is in the sliding mode, disturbance attenuation and fault tolerance are achieved from the initial time without requiring any fault information. Finally, the effectiveness of the proposed ISM fault-tolerant control schemes against quantization errors is demonstrated in simulation.

  1. Aerodynamic coefficient identification package dynamic data accuracy determinations: Lessons learned

    NASA Technical Reports Server (NTRS)

    Heck, M. L.; Findlay, J. T.; Compton, H. R.

    1983-01-01

    The errors in the dynamic data output from the Aerodynamic Coefficient Identification Packages (ACIP) flown on Shuttle flights 1, 3, 4, and 5 were determined using the output from the Inertial Measurement Units (IMU). A weighted least-squares batch algorithm was employed. Using an averaging technique, signal detection was enhanced; this allowed improved calibration solutions. Global errors as large as 0.04 deg/sec for the ACIP gyros, 30 mg for the linear accelerometers, and 0.5 deg/sec squared in the angular accelerometer channels were detected and removed with a combination of bias, scale factor, misalignment, and g-sensitive calibration constants. No attempt was made to minimize local ACIP dynamic data deviations representing sensed high-frequency vibration or instrument noise. The resulting 1-sigma calibrated ACIP global accuracies were within 0.003 deg/sec, 1.0 mg, and 0.05 deg/sec squared for the gyros, linear accelerometers, and angular accelerometers, respectively.
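    The core of such a batch calibration, fitting a bias and scale factor for one sensor channel against a reference with per-sample weights, reduces to weighted least squares. The following sketch uses an assumed error model and synthetic data; it is not the actual ACIP/IMU processing.

```python
import numpy as np

rng = np.random.default_rng(1)
# Reference rates (deg/s) and a raw sensor channel modeled as
# y = (1 + scale) * x + bias + noise  (illustrative error model).
x = rng.uniform(-5.0, 5.0, 200)
y = 1.002 * x + 0.04 + rng.normal(0.0, 0.01, 200)

# Weighted least squares: scale the rows by the square root of the weights.
w = np.full(200, 1.0 / 0.01**2)              # inverse-variance weights
A = np.column_stack([x, np.ones_like(x)])
Aw = A * np.sqrt(w)[:, None]
coef, *_ = np.linalg.lstsq(Aw, y * np.sqrt(w), rcond=None)
scale, bias = coef[0] - 1.0, coef[1]
print(f"scale factor error ~ {scale:.4f}, bias ~ {bias:.3f} deg/s")
```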

  2. Deceleration, precooling, and multi-pass stopping of highly charged ions in Be+ Coulomb crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmöger, L., E-mail: lisa.schmoeger@mpi-hd.mpg.de; Schwarz, M.; Versolato, O. O.

    2015-10-15

    Preparing highly charged ions (HCIs) in a cold and strongly localized state is of particular interest for frequency metrology and tests of possible spatial and temporal variations of the fine structure constant. Our versatile preparation technique is based on the generic modular combination of a pulsed ion source with a cryogenic linear Paul trap. Both instruments are connected by a compact beamline with deceleration and precooling properties. We present its design and commissioning experiments regarding these two functionalities. A pulsed buncher tube allows for the deceleration and longitudinal phase-space compression of the ion pulses. External injection of slow HCIs, specifically Ar13+, into the linear Paul trap and their subsequent retrapping in the absence of sympathetic cooling is demonstrated. The latter proved to be a necessary prerequisite for the multi-pass stopping of HCIs in continuously laser-cooled Be+ Coulomb crystals.

  3. A novel method for producing low cost dynamometric wheels based on harmonic elimination techniques

    NASA Astrophysics Data System (ADS)

    Gutiérrez-López, María D.; García de Jalón, Javier; Cubillo, Adrián

    2015-02-01

    A method for producing low-cost dynamometric wheels is presented in this paper. To carry out this method, the metallic part of a commercial wheel is instrumented with strain gauges, which must be grouped in at least three circumferences and in equidistant radial lines. The strain signals of the same circumference are linearly combined to obtain at least two new signals that depend only on the tyre/road contact forces and moments. The influence of factors like the angle rotated by the wheel, the temperature, or the centrifugal forces is eliminated by removing the constant component and the largest possible number of harmonics, except the first or the second one, from the strain signals. The contact forces and moments are obtained from these new signals by solving two systems of linear equations with three unknowns each. This method is validated with theoretical and experimental examples.

  4. Temporal and spectral imaging with micro-CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, Samuel M.; Johnson, G. Allan; Badea, Cristian T.

    2012-08-15

    Purpose: Micro-CT is widely used for small animal imaging in preclinical studies of cardiopulmonary disease, but further development is needed to improve spatial resolution, temporal resolution, and material contrast. We present a technique for visualizing the changing distribution of iodine in the cardiac cycle with dual source micro-CT. Methods: The approach entails a retrospectively gated dual energy scan with optimized filters and voltages, and a series of computational operations to reconstruct the data. Projection interpolation and five-dimensional bilateral filtration (three spatial dimensions + time + energy) are used to reduce noise and artifacts associated with retrospective gating. We reconstruct separate volumes corresponding to different cardiac phases and apply a linear transformation to decompose these volumes into components representing concentrations of water and iodine. Since the resulting material images are still compromised by noise, we improve their quality in an iterative process that minimizes the discrepancy between the original acquired projections and the projections predicted by the reconstructed volumes. The values in the voxels of each of the reconstructed volumes represent the coefficients of linear combinations of basis functions over time and energy. We have implemented the reconstruction algorithm on a graphics processing unit (GPU) with CUDA. We tested the utility of the technique in simulations and applied the technique in an in vivo scan of a C57BL/6 mouse injected with blood pool contrast agent at a dose of 0.01 ml/g body weight. Postreconstruction, at each cardiac phase in the iodine images, we segmented the left ventricle and computed its volume. Using the maximum and minimum volumes of the left ventricle, we calculated the stroke volume, the ejection fraction, and the cardiac output. Results: Our proposed method produces five-dimensional volumetric images that distinguish different materials at different points in time, and can be used to segment regions containing iodinated blood and to compute measures of cardiac function. Conclusions: We believe this combined spectral and temporal imaging technique will be useful for future studies of cardiopulmonary disease in small animals.
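    The linear decomposition step, solving for water and iodine concentrations from attenuation values at two energies, can be sketched per voxel as a small linear system. The mass-attenuation matrix and toy volumes below are made-up numbers for illustration only.

```python
import numpy as np

# Illustrative attenuation matrix: rows = energy bins (low, high),
# columns = basis materials (water, iodine). The values are made up.
M = np.array([[0.20, 4.00],
              [0.18, 1.50]])

# Reconstructed attenuation volumes at the two energies (toy 2x2x1 volume).
mu_low = np.array([[[0.24], [0.60]], [[0.20], [0.22]]])
mu_high = np.array([[[0.195], [0.33]], [[0.18], [0.1875]]])

# Solve M @ [c_water, c_iodine] = [mu_low, mu_high] for every voxel at once.
mu = np.stack([mu_low, mu_high], axis=-1)         # (..., 2)
conc = np.linalg.solve(M, mu[..., None])[..., 0]  # broadcast 2x2 solves
water, iodine = conc[..., 0], conc[..., 1]
print(np.round(iodine, 3))  # voxels with contrast agent show iodine > 0
```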

  5. Observational calibration of the projection factor of Cepheids. I. The type II Cepheid κ Pavonis

    NASA Astrophysics Data System (ADS)

    Breitfelder, J.; Kervella, P.; Mérand, A.; Gallenne, A.; Szabados, L.; Anderson, R. I.; Willson, M.; Le Bouquin, J.-B.

    2015-04-01

    Context. The distances of pulsating stars, in particular Cepheids, are commonly measured using the parallax of pulsation technique. The different versions of this technique combine measurements of the linear diameter variation (from spectroscopy) and the angular diameter variation (from photometry or interferometry) amplitudes to retrieve the distance in a quasi-geometrical way. However, the linear diameter amplitude is directly proportional to the projection factor (hereafter p-factor), which is used to convert spectroscopic radial velocities (i.e., disk integrated) into pulsating (i.e., photospheric) velocities. The value of the p-factor and its possible dependence on the pulsation period are still widely debated. Aims: Our goal is to measure an observational value of the p-factor of the type-II Cepheid κ Pavonis. Methods: The parallax of the type-II Cepheid κ Pav was measured with an accuracy of 5% using HST/FGS. We used this parallax as a starting point to derive the p-factor of κ Pav using the SPIPS technique (Spectro-Photo-Interferometry of Pulsating Stars), which is a robust version of the parallax-of-pulsation method that employs radial velocity, interferometric, and photometric data. We applied this technique to a combination of new VLTI/PIONIER optical interferometric angular diameters, new CORALIE and HARPS radial velocities, as well as multi-colour photometry and radial velocities from the literature. Results: We obtain a value of p = 1.26 ± 0.07 for the p-factor of κ Pav. This result agrees with several of the recently derived period-p-factor relationships from the literature, as well as with previous observational determinations for Cepheids. Conclusions: Individual estimates of the p-factor are fundamental to calibrating the parallax of pulsation distances of Cepheids. Together with previous observational estimates, the projection factor we obtain points to a weak dependence of the p-factor on period. Based on observations realized with ESO facilities at Paranal Observatory under program IDs 091.D-0020 and 093.D-0316. Based on observations collected at ESO La Silla Observatory using the Coralie spectrograph mounted on the Swiss 1.2 m Euler telescope, under program CNTAC2014A-5.

  6. Problem Based Learning Technique and Its Effect on Acquisition of Linear Programming Skills by Secondary School Students in Kenya

    ERIC Educational Resources Information Center

    Nakhanu, Shikuku Beatrice; Musasia, Amadalo Maurice

    2015-01-01

    The topic Linear Programming is included in the compulsory Kenyan secondary school mathematics curriculum at form four. The topic provides skills for determining best outcomes in a given mathematical model involving some linear relationship. This technique has found application in business, economics as well as various engineering fields. Yet many…

  7. Optimization of a constrained linear monochromator design for neutral atom beams.

    PubMed

    Kaltenbacher, Thomas

    2016-04-01

    A focused ground-state, neutral atom beam, exploiting its de Broglie wavelength by means of atom optics, is used for neutral atom microscopy imaging. Employing Fresnel zone plates as a lens for these beams is a well established microscopy technique. To date, even for favorable beam source conditions, a minimum focus spot size of slightly below 1 μm has been reached. This limitation is essentially set by the intrinsic spectral purity of the beam in combination with the chromatic aberration of the diffraction-based zone plate. It is therefore important to enhance the monochromaticity of the beam, enabling a higher spatial resolution, preferably below 100 nm. We propose to increase the monochromaticity of a neutral atom beam by means of a so-called linear monochromator set-up, a Fresnel zone plate in combination with a pinhole aperture, in order to gain more than one order of magnitude in spatial resolution. This configuration is known in X-ray microscopy and has proven to be useful, but it has not been applied to neutral atom beams. The main result of this work is a set of optimal design parameters, based on models of this linear monochromator set-up followed by a second zone plate for focusing. The optimization simultaneously minimizes the focal spot size and maximizes the centre-line intensity at the detector position. The results presented in this work are for, but not limited to, a neutral helium atom beam. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Non-linear analysis of wave propagation using transform methods and plates and shells using integral equations

    NASA Astrophysics Data System (ADS)

    Pipkins, Daniel Scott

    Two diverse topics of relevance in modern computational mechanics are treated. The first involves the modeling of linear and non-linear wave propagation in flexible lattice structures. The technique used combines the Laplace transform with the finite element method (FEM). The procedure is to transform the governing differential equations and boundary conditions into the transform domain, where the FEM formulation is carried out. For linear problems, the transformed differential equations can be solved exactly, hence the method is exact; as a result, each member of the lattice structure is modeled using only one element. In the non-linear problem, the method is no longer exact. The approximation introduced is a spatial discretization of the transformed non-linear terms, which are represented in the transform domain by making use of the complex convolution theorem. A weak formulation of the resulting transformed non-linear equations yields a set of element-level matrix equations. The trial and test functions used in the weak formulation correspond to the exact solution of the linear part of the transformed governing differential equation. Numerical results are presented for both linear and non-linear systems. The linear systems modeled are longitudinal and torsional rods and Bernoulli-Euler and Timoshenko beams. For non-linear systems, a viscoelastic rod and a von Kármán type beam are modeled. The second topic is the analysis of plates and shallow shells undergoing finite deflections by the field/boundary element method. Numerical results are presented for two plate problems. The first is the bifurcation problem associated with a square plate having free boundaries, loaded by four self-equilibrating corner forces. The results are compared to two existing numerical solutions of the problem, which differ substantially.

  9. Channeled polarimetric technique for the measurement of the spectral dependence of linear Stokes parameters

    NASA Astrophysics Data System (ADS)

    Quan, Naicheng; Zhang, Chunmin; Mu, Tingkui; Li, Qiwei

    2018-05-01

    The principle and an experimental demonstration of a method based on the channeled polarimetric technique (CPT) to measure spectrally resolved linear Stokes parameters (SRLS) are presented. By replacing the front retarder of the CPT with an achromatic quarter-wave plate, the linear SRLS can be measured simultaneously, while retaining the static and compact advantages of the CPT. Compared with the conventional CPT, the method reduces the RMS error by nearly a factor of 2-5 for the individual linear Stokes parameters.

  10. Quantifying and visualizing variations in sets of images using continuous linear optimal transport

    NASA Astrophysics Data System (ADS)

    Kolouri, Soheil; Rohde, Gustavo K.

    2014-03-01

    Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information mined from the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which a linear combination of images leads to a visually meaningful image. This enables us to apply linear geometric data analysis techniques, such as principal component analysis and linear discriminant analysis, in the linearly embedded space and to visualize the most prominent modes, as well as the most discriminant modes, of variation in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images utilizing each image at full resolution, which cannot be done by existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen-stained liver tissues in order to investigate the major visual differences in chromatin distribution of fetal-type hepatoblastoma (FHB) cells compared to normal cells.

  11. Fast radiative transfer models for retrieval of cloud properties in the back-scattering region: application to DSCOVR-EPIC sensor

    NASA Astrophysics Data System (ADS)

    Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego

    2017-04-01

    In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.

  12. Using Strassen's algorithm to accelerate the solution of linear systems

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Lee, King; Simon, Horst D.

    1990-01-01

    Strassen's algorithm for fast matrix-matrix multiplication has been implemented for matrices of arbitrary shapes on the CRAY-2 and CRAY Y-MP supercomputers. Several techniques have been used to reduce the scratch space requirement for this algorithm while simultaneously preserving a high level of performance. When the resulting Strassen-based matrix multiply routine is combined with some routines from the new LAPACK library, LU decomposition can be performed with rates significantly higher than those achieved by conventional means. We succeeded in factoring a 2048 x 2048 matrix on the CRAY Y-MP at a rate equivalent to 325 MFLOPS.
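    For reference, the classical Strassen recursion can be written compactly in numpy. This is a textbook sketch for square power-of-two matrices with a cutoff to ordinary multiplication, not the CRAY implementation described in the paper.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen matrix multiply for square power-of-two matrices."""
    n = A.shape[0]
    if n <= cutoff:                      # fall back to BLAS for small blocks
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven half-size products instead of eight.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(256, 256); B = np.random.rand(256, 256)
print(np.allclose(strassen(A, B), A @ B))  # True up to round-off
```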

  13. Efficient Computation Of Behavior Of Aircraft Tires

    NASA Technical Reports Server (NTRS)

    Tanner, John A.; Noor, Ahmed K.; Andersen, Carl M.

    1989-01-01

    NASA technical paper discusses challenging application of computational structural mechanics to numerical simulation of responses of aircraft tires during taxiing, takeoff, and landing. Presents details of three main elements of computational strategy: use of special three-field, mixed-finite-element models; use of operator splitting; and application of technique reducing substantially number of degrees of freedom. Proposed computational strategy applied to two quasi-symmetric problems: linear analysis of anisotropic tires through use of two-dimensional-shell finite elements and nonlinear analysis of orthotropic tires subjected to unsymmetric loading. Three basic types of symmetry, and their combinations, exhibited by response of tire identified.

  14. How to make Raman-inactive helium visible in Raman spectra of tritium-helium gas mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schloesser, M.; Pakari, O.; Rupp, S.

    2015-03-15

    Raman spectroscopy, a powerful method for the quantitative compositional analysis of molecular gases, e.g. mixtures of hydrogen isotopologues, is not able to detect monoatomic species like helium. This deficit can be overcome by using radioluminescence emission from helium atoms induced by β-electrons from tritium decay. We present theoretical considerations and combined Raman/radioluminescence spectra. Furthermore, we discuss the linearity of the method together with validation measurements for determining the pressure dependence. Finally, we conclude how this technique can be used for samples of helium with traces of tritium, and vice versa. (authors)

  15. Stochastic series expansion simulation of the t -V model

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Liu, Ye-Hua; Troyer, Matthias

    2016-04-01

    We present an algorithm for the efficient simulation of the half-filled spinless t -V model on bipartite lattices, which combines the stochastic series expansion method with determinantal quantum Monte Carlo techniques widely used in fermionic simulations. The algorithm scales linearly in the inverse temperature, cubically with the system size, and is free from the time-discretization error. We use it to map out the finite-temperature phase diagram of the spinless t -V model on the honeycomb lattice and observe a suppression of the critical temperature of the charge-density-wave phase in the vicinity of a fermionic quantum critical point.

  16. Multipath calibration in GPS pseudorange measurements

    NASA Technical Reports Server (NTRS)

    Kee, Changdon (Inventor); Parkinson, Bradford W. (Inventor)

    1998-01-01

    Novel techniques are disclosed for eliminating multipath errors, including mean bias errors, in pseudorange measurements made by conventional global positioning system receivers. By correlating the multipath signals of different satellites at their cross-over points in the sky, multipath mean bias errors are effectively eliminated. By then taking advantage of the geometrical dependence of multipath, a linear combination of spherical harmonics is fit to the satellite multipath data to create a hemispherical model of the multipath. This calibration model can then be used to compensate for multipath in subsequent measurements and thereby obtain GPS positioning to centimeter accuracy.
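    The fitting step can be sketched as an ordinary least-squares problem with a spherical-harmonic design matrix. The sketch below uses scipy.special.sph_harm (the older scipy API), a synthetic residual field, and an assumed degree cutoff; none of these choices come from the patent itself.

```python
import numpy as np
from scipy.special import sph_harm

rng = np.random.default_rng(2)
# Satellite directions: azimuth (0..2pi) and zenith angle (0..pi/2).
az = rng.uniform(0.0, 2 * np.pi, 500)
zen = rng.uniform(0.0, np.pi / 2, 500)
# Synthetic multipath residuals (m): smooth in direction, plus noise.
r = (0.02 * np.cos(zen) + 0.01 * np.sin(az) * np.sin(zen)
     + rng.normal(0, 0.003, 500))

# Real-valued spherical-harmonic design matrix up to degree L.
L = 4
cols = []
for n in range(L + 1):
    for m in range(n + 1):
        Y = sph_harm(m, n, az, zen)  # scipy order: (m, n, azimuth, polar)
        cols.append(Y.real)
        if m > 0:
            cols.append(Y.imag)
A = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(A, r, rcond=None)
model = A @ coef                     # hemispherical multipath calibration map
print("rms before/after:", r.std().round(4), (r - model).std().round(4))
```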

  17. Restoring Low Sidelobe Antenna Patterns with Failed Elements in a Phased Array Antenna

    DTIC Science & Technology

    2016-02-01

    Optimum low sidelobes are demonstrated in several examples. Index Terms: array signal processing, beams, linear algebra, phased arrays, shaped beams. ... represented by a linear combination of low sidelobe beamformers with no failed elements ... in a neighborhood around ... under the constraint that the linear ... one would expect that linear combinations of them in a neighborhood around ... would also have low sidelobes. The algorithms in this paper exploit this ...

  18. Multi-image encryption based on synchronization of chaotic lasers and iris authentication

    NASA Astrophysics Data System (ADS)

    Banerjee, Santo; Mukhopadhyay, Sumona; Rondoni, Lamberto

    2012-07-01

    A new technique for transmitting encrypted combinations of gray-scale and chromatic images using chaotic lasers derived from the Maxwell-Bloch equations is proposed. This novel scheme utilizes the general method of solving a set of linear equations to transmit similarly sized heterogeneous images which are a combination of monochrome and chromatic images. The chaos-encrypted gray-scale images are concatenated along the three color planes, resulting in color images. These are then transmitted over a secure channel along with a cover image, which is an iris scan. The entire cryptology is augmented with an iris-based authentication scheme; the secret messages are retrieved once authentication is successful. The objectives of our work are briefly outlined as follows: (a) the biometric information is the iris, which is encrypted before transmission; (b) the iris is used for personal identification and for verifying message integrity; (c) the information transmitted securely consists of colored images resulting from a combination of gray images; (d) each of the transmitted images is encrypted through chaos-based cryptography; (e) these encrypted multiple images are then coupled with the iris through a linear combination of images before being communicated over the network. The several layers of encryption, together with the ergodicity and randomness of chaos, provide the confusion and diffusion properties needed to achieve secure communication, as demonstrated by exhaustive statistical tests. The result is important from the perspective of opening a fundamentally new dimension in multiplexing and the simultaneous transmission of several monochromatic and chromatic images along with biometry-based authentication and cryptography.
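    The linear-algebra core of the scheme, transmitting invertible linear combinations of equally sized images and recovering them by solving the linear system, can be sketched as follows. The key matrix here is a fixed toy example; in the proposed scheme its entries would be derived from the synchronized chaotic laser states, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(3)
# Three equally sized "images" flattened to vectors (toy 8x8 gray images).
imgs = rng.integers(0, 256, size=(3, 64)).astype(float)

# Invertible mixing matrix standing in for the chaos-derived key
# (assumption: any invertible key works for the linear-combination step).
K = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 4.0]])

mixed = K @ imgs                        # transmitted linear combinations
recovered = np.linalg.solve(K, mixed)   # receiver solves the linear system
print(np.allclose(recovered, imgs))     # True
```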

  19. In Situ Detection of Anaplasma spp. by DNA Target-Primed Rolling-Circle Amplification of a Padlock Probe and Intracellular Colocalization with Immunofluorescently Labeled Host Cell von Willebrand Factor

    PubMed Central

    Wamsley, Heather L.; Barbet, Anthony F.

    2008-01-01

    Endothelial cell culture and preliminary immunofluorescent staining of Anaplasma-infected tissues suggest that endothelial cells may be an in vivo nidus of mammalian infection. To investigate endothelial cells and other potentially cryptic sites of Anaplasma sp. infection in mammalian tissues, a sensitive and specific isothermal in situ technique to detect localized Anaplasma gene sequences by using rolling-circle amplification of circularizable, linear, oligonucleotide probes (padlock probes) was developed. Cytospin preparations of uninfected or Anaplasma-infected cell cultures were examined using this technique. Via fluorescence microscopy, the technique described here, and a combination of differential interference contrast microscopy and von Willebrand factor immunofluorescence, Anaplasma phagocytophilum and Anaplasma marginale were successfully localized in situ within intact cultured mammalian cells. This work represents the first application of this in situ method for the detection of a microorganism and forms the foundation for future applications of this technique to detect, localize, and analyze Anaplasma nucleotide sequences in the tissues of infected mammalian and arthropod hosts and in cell cultures. PMID:18495855

  20. Application of multivariable search techniques to structural design optimization

    NASA Technical Reports Server (NTRS)

    Jones, R. T.; Hague, D. S.

    1972-01-01

    Multivariable optimization techniques are applied to a particular class of minimum weight structural design problems: the design of an axially loaded, pressurized, stiffened cylinder. Minimum weight designs are obtained by a variety of search algorithms: first- and second-order, elemental perturbation, and randomized techniques. An exterior penalty function approach to constrained minimization is employed. Some comparisons are made with solutions obtained by an interior penalty function procedure. In general, it would appear that an interior penalty function approach may not be as well suited to the class of design problems considered as the exterior penalty function approach. It is also shown that a combination of search algorithms will tend to arrive at an extremal design in a more reliable manner than a single algorithm. The effect of incorporating realistic geometrical constraints on stiffener cross-sections is investigated. A limited comparison is made between minimum weight cylinders designed on the basis of a linear stability analysis and cylinders designed on the basis of empirical buckling data. Finally, a technique for locating more than one extremal is demonstrated.
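    The exterior penalty function approach can be illustrated on a toy constrained minimization: the objective plays the role of structural weight, and a single inequality stands in for a stress constraint. The problem, the penalty schedule, and the Nelder-Mead search below are illustrative assumptions, not the paper's cylinder design code.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize "weight" w(x) = x1 + x2 subject to a stress-type
# constraint g(x) = 1 - x1*x2 <= 0 (i.e., x1*x2 >= 1).
def weight(x):
    return x[0] + x[1]

def g(x):
    return 1.0 - x[0] * x[1]

def penalized(x, r):
    # Exterior penalty: zero inside the feasible region, quadratic outside.
    return weight(x) + r * max(0.0, g(x)) ** 2

x = np.array([0.2, 0.2])                 # infeasible start (exterior method)
for r in [1.0, 10.0, 100.0, 1000.0]:     # progressively stiffer penalty
    res = minimize(penalized, x, args=(r,), method="Nelder-Mead")
    x = res.x
print(x, weight(x))                      # -> approximately [1, 1], weight 2
```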

  1. Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Boyle, Richard D.

    2014-01-01

    Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and a nonlinear contribution. A technique to identify parameters of this model in discrete time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of continuous-time parameters of ankle dynamics. We therefore conclude that alternative modeling strategies and more advanced estimation techniques should be considered in future work.

  2. Linear combination methods to improve diagnostic/prognostic accuracy on future observations

    PubMed Central

    Kang, Le; Liu, Aiyi; Tian, Lili

    2014-01-01

    Multiple diagnostic tests or biomarkers can be combined to improve diagnostic accuracy. The problem of finding the optimal linear combination of biomarkers to maximise the area under the receiver operating characteristic (ROC) curve has been extensively addressed in the literature. The purpose of this article is threefold: (1) to provide an extensive review of the existing methods for biomarker combination; (2) to propose a new combination method, namely, the nonparametric stepwise approach; and (3) to use the leave-one-pair-out cross-validation method, instead of the re-substitution method, which is overoptimistic and hence might lead to wrong conclusions, to empirically evaluate and compare the performance of different linear combination methods in yielding the largest area under the ROC curve. A data set on Duchenne muscular dystrophy was analysed to illustrate the applications of the discussed combination methods. PMID:23592714
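    For two biomarkers the search for an AUC-maximizing linear combination reduces to a one-dimensional search over the direction of the coefficient vector, which makes for a compact illustration. The sketch below uses synthetic data and a coarse grid over the angle; it is not the nonparametric stepwise algorithm proposed in the article.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
# Two correlated biomarkers for 100 controls (y=0) and 100 cases (y=1).
n = 100
X0 = rng.multivariate_normal([0, 0], [[1, 0.4], [0.4, 1]], n)
X1 = rng.multivariate_normal([0.8, 0.5], [[1, 0.4], [0.4, 1]], n)
X = np.vstack([X0, X1]); y = np.r_[np.zeros(n), np.ones(n)]

# The combination a1*x1 + a2*x2 depends only on the direction of (a1, a2),
# so a coarse grid over the angle finds the empirical-AUC maximizer.
best = max(
    (roc_auc_score(y, X @ np.array([np.cos(t), np.sin(t)])), t)
    for t in np.linspace(0, np.pi, 181)
)
print(f"best AUC {best[0]:.3f} at angle {np.degrees(best[1]):.0f} deg")
```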

  3. Classification of sodium MRI data of cartilage using machine learning.

    PubMed

    Madelin, Guillaume; Poidevin, Frederick; Makrymallis, Antonios; Regatte, Ravinder R

    2015-11-01

    To assess the possible utility of machine learning for classifying subjects with and without osteoarthritis using sodium magnetic resonance imaging data. Theory: Support vector machine, k-nearest neighbors, naïve Bayes, discriminant analysis, linear regression, logistic regression, neural networks, decision tree, and tree bagging were tested. Sodium magnetic resonance imaging with and without fluid suppression by inversion recovery was acquired on the knee cartilage of 19 controls and 28 osteoarthritis patients. Sodium concentrations were measured in regions of interest in the knee for both acquisitions. The mean (MEAN) and standard deviation (STD) of these concentrations were measured in each region of interest, and the minimum, maximum, and mean of these two measurements were calculated over all regions of interest for each subject. The resulting 12 variables per subject were used as predictors for classification. Either Min [STD] alone, or in combination with Mean [MEAN] or Min [MEAN], all from fluid-suppressed data, were the best predictors, with an accuracy >74%, mainly with linear logistic regression and linear support vector machine. Other good classifiers include discriminant analysis, linear regression, and naïve Bayes. Machine learning is a promising technique for classifying osteoarthritis patients and controls from sodium magnetic resonance imaging data. © 2014 Wiley Periodicals, Inc.

  4. Graphical and PC-software analysis of volcano eruption precursors according to the Materials Failure Forecast Method (FFM)

    NASA Astrophysics Data System (ADS)

    Cornelius, Reinold R.; Voight, Barry

    1995-03-01

    The Materials Failure Forecasting Method for volcanic eruptions (FFM) analyses the rate of precursory phenomena. The time of eruption onset is derived from the time of "failure" implied by an accelerating rate of deformation. The approach attempts to fit data, Ω, to the differential relationship Ω̈ = A·Ω̇^α, where the dot superscript represents the time derivative, and the data Ω may be any of several parameters describing the accelerating deformation or energy release of the volcanic system. The rate coefficients, A and α, may be derived from appropriate data sets to provide an estimate of the time to "failure". As the method is still an experimental technique, it should be used with appropriate judgment during times of volcanic crisis. Limitations of the approach are identified and discussed. Several kinds of eruption precursory phenomena, all simulating accelerating creep during the mechanical deformation of the system, can be used with FFM. Among these are tilt data, slope-distance measurements, crater fault movements, and seismicity. The use of seismic coda, seismic amplitude-derived energy release, and time-integrated amplitudes or coda lengths is examined. Using cumulative coda length directly has some practical advantages over more rigorously derived parameters, and RSAM and SSAM technologies appear to be well suited to real-time applications. One graphical and four numerical techniques for applying FFM are discussed. The graphical technique is based on an inverse representation of rate versus time. For α = 2, the inverse rate plot is linear; it is concave upward for α < 2 and concave downward for α > 2. The eruption time is found by simple extrapolation of the data set toward the time axis. Three numerical techniques are based on linear least-squares fits to linearized data sets. The "linearized least-squares technique" is the most robust and is expected to be the most practical numerical technique; it is based on an iterative linearization of the given rate-time series. The hindsight technique is disadvantaged by a bias favouring a too-early eruption time in foresight applications. The "log rate versus log acceleration technique", utilizing a logarithmic representation of the fundamental differential equation, is disadvantaged by large data scatter after interpolation of accelerations. One further numerical technique, a nonlinear least-squares fit to rate data, requires special and more complex software. PC-oriented computer codes were developed for data manipulation, application of the three linearizing numerical methods, and curve fitting. Separate software is required for graphing purposes. All three linearizing techniques facilitate an eruption window based on a data envelope according to the linear least-squares fit, at a specific level of confidence, and an estimated rate at the time of failure.
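    The graphical inverse-rate technique is easy to demonstrate for the α = 2 case, where the inverse rate decays linearly and crosses zero at the onset time. The synthetic precursor series and noise level below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic precursor rate (e.g., events/day) accelerating toward failure
# at t_f = 100 days, following the alpha = 2 solution: rate = 1/(A*(t_f - t)).
A_true, t_f = 0.05, 100.0
t = np.arange(0.0, 90.0, 1.0)
rate = 1.0 / (A_true * (t_f - t)) * (1 + rng.normal(0, 0.05, t.size))

# The inverse-rate plot is linear for alpha = 2; extrapolate to the time axis.
inv = 1.0 / rate
slope, intercept = np.polyfit(t, inv, 1)
t_pred = -intercept / slope              # zero crossing = forecast onset
print(f"forecast eruption time: {t_pred:.1f} days (true {t_f})")
```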

  5. Novel Hybrid Scheduling Technique for Sensor Nodes with Mixed Criticality Tasks

    PubMed Central

    Micea, Mihai-Victor; Stangaciu, Cristina-Sorina; Stangaciu, Valentin; Curiac, Daniel-Ioan

    2017-01-01

    Sensor networks are increasingly becoming a key technology for complex control applications. Their potential use in safety- and time-critical domains has raised the need for task scheduling mechanisms specially adapted to sensor node specific requirements, often materialized in predictable, jitter-less execution of tasks characterized by different criticality levels. This paper offers an efficient scheduling solution, named Hybrid Hard Real-Time Scheduling (H2RTS), which combines a static, clock-driven method with a dynamic, event-driven scheduling technique in order to provide high execution predictability while keeping a high node Central Processing Unit (CPU) utilization factor. From the detailed, integrated schedulability analysis of the H2RTS, a set of sufficiency tests is introduced and demonstrated based on the processor demand and linear upper bound metrics. The performance and correct behavior of the proposed hybrid scheduling technique have been extensively evaluated and validated both on a simulator and on a sensor mote equipped with an ARM7 microcontroller. PMID:28672856

  6. Using absolute gravimeter data to determine vertical gravity gradients

    USGS Publications Warehouse

    Robertson, D.S.

    2001-01-01

    The position versus time data from a free-fall absolute gravimeter can be used to estimate the vertical gravity gradient in addition to the gravity value itself. Hipkin has reported success in estimating the vertical gradient value using a data set of unusually good quality. This paper explores techniques that may be applicable to a broader class of data that may be contaminated with "system response" errors of larger magnitude than were evident in the data used by Hipkin. This system response function is usually modelled as a sum of exponentially decaying sinusoidal components. The technique employed here involves combining the x0, v0 and g parameters from all the drops made during a site occupation into a single least-squares solution, and including the value of the vertical gradient and the coefficients of system response function in the same solution. The resulting non-linear equations must be solved iteratively and convergence presents some difficulties. Sparse matrix techniques are used to make the least-squares problem computationally tractable.

  7. Chemometric techniques in distribution, characterisation and source apportionment of polycyclic aromatic hydrocarbons (PAHs) in aquaculture sediments in Malaysia.

    PubMed

    Retnam, Ananthy; Zakaria, Mohamad Pauzi; Juahir, Hafizan; Aris, Ahmad Zaharin; Zali, Munirah Abdul; Kasim, Mohd Fadhil

    2013-04-15

    This study investigated polycyclic aromatic hydrocarbon (PAH) pollution in surface sediments within aquaculture areas in Peninsular Malaysia using chemometric techniques, forensics, and univariate methods. The samples were analysed using Soxhlet extraction, silica gel column clean-up, and gas chromatography-mass spectrometry. The total PAH concentrations ranged from 20 to 1841 ng/g with a mean of 363 ng/g dw. The application of chemometric techniques enabled clustering and discrimination of the aquaculture sediments into four groups according to contamination levels. A combination of chemometric and molecular indices was used to identify the sources of PAHs, which could be attributed to vehicle emissions, oil combustion, and biomass combustion. Source apportionment using absolute principal component scores-multiple linear regression (APCS-MLR) showed that the main sources of PAHs are vehicle emissions (54%), oil (37%), and biomass combustion (9%). Land-based pollution from vehicle emissions is the predominant contributor of PAHs in the aquaculture sediments of Peninsular Malaysia. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Spectral solver for multi-scale plasma physics simulations with dynamically adaptive number of moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vencels, Juris; Delzanno, Gian Luca; Johnson, Alec

    2015-06-01

    A spectral method for kinetic plasma simulations based on the expansion of the velocity distribution function in a variable number of Hermite polynomials is presented. The method is based on a set of non-linear equations that is solved to determine the coefficients of the Hermite expansion satisfying the Vlasov and Poisson equations. In this paper, we first show that this technique combines the fluid and kinetic approaches into one framework. Second, we present an adaptive strategy to increase and decrease the number of Hermite functions dynamically during the simulation. The technique is applied to the Landau damping and two-stream instability test problems. Performance results show 21% and 47% savings of total simulation time in the Landau and two-stream instability test cases, respectively.

  9. On adaptive weighted polynomial preconditioning for Hermitian positive definite matrices

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Freund, Roland W.

    1992-01-01

    The conjugate gradient algorithm for solving Hermitian positive definite linear systems is usually combined with preconditioning in order to speed up convergence. In recent years, there has been a revival of polynomial preconditioning, motivated by the attractive features of the method on modern architectures. Standard techniques for choosing the preconditioning polynomial are based only on bounds for the extreme eigenvalues. Here a different approach is proposed, which aims at adapting the preconditioner to the eigenvalue distribution of the coefficient matrix. The technique is based on the observation that good estimates for the eigenvalue distribution can be derived after only a few steps of the Lanczos process. This information is then used to construct a weight function for a suitable Chebyshev approximation problem. The solution of this problem yields the polynomial preconditioner. In particular, we investigate the use of Bernstein-Szego weights.

  10. The 3D modeling of high numerical aperture imaging in thin films

    NASA Technical Reports Server (NTRS)

    Flagello, D. G.; Milster, Tom

    1992-01-01

    A modelling technique is described which is used to explore three-dimensional (3D) image irradiance distributions formed by high numerical aperture (NA > 0.5) lenses in homogeneous, linear films. This work uses a 3D modelling approach based on a plane-wave decomposition in the exit pupil. Each plane-wave component is weighted by factors due to polarization, aberration, and input amplitude and phase terms. This is combined with a modified thin-film matrix technique to derive the total field amplitude at each point in a film by a coherent vector sum over all plane waves; the total irradiance is then calculated. The model is used to show how asymmetries present in the polarized image change under the influence of a thin film through varying degrees of focus.

  11. Electrochemical impedimetric sensor based on molecularly imprinted polymers/sol-gel chemistry for methidathion organophosphorous insecticide recognition.

    PubMed

    Bakas, Idriss; Hayat, Akhtar; Piletsky, Sergey; Piletska, Elena; Chehimi, Mohamed M; Noguer, Thierry; Rouillon, Régis

    2014-12-01

    We report here a novel method to detect the organophosphorus insecticide methidathion. The sensing platform was constructed by combining molecularly imprinted polymers and the sol-gel technique on inexpensive, portable, and disposable screen-printed carbon electrodes. An electrochemical impedimetric detection technique was employed to perform label-free detection of the target analyte on the designed MIP/sol-gel integrated platform. The selection of the target-specific monomer by electrochemical impedimetric methods was consistent with the results obtained by the computational modelling method. The prepared electrochemical MIP/sol-gel based sensor exhibited a high recognition capability toward methidathion, as well as a broad linear range and a low detection limit under the optimized conditions. Satisfactory results were also obtained for methidathion determination in waste water samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Analysis of calibration data for the uranium active neutron coincidence counting collar with attention to errors in the measured neutron coincidence rate

    DOE PAGES

    Croft, Stephen; Burr, Thomas Lee; Favalli, Andrea; ...

    2015-12-10

    We report that the declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar - Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares the performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative (linear) approaches to the same experimental and corresponding simulated representative datasets. We find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
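    The contrast between the two fitting routes can be sketched with an assumed one-parameter-per-coefficient rational form, ρ = aR/(1 + bR); the abstract does not give the actual Padé equation, so this form, the data, and the noise model are illustrative assumptions. Since 1/ρ = 1/(aR) + b/a, the transformed model is linear in 1/R, but the transformation also reshapes the error structure of the predictor, which is the effect the paper analyzes.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)

# Assumed Pade [1/1] calibration form (illustrative, not from the paper):
# rho = a*R / (1 + b*R), with R the measured coincidence rate.
def pade(R, a, b):
    return a * R / (1.0 + b * R)

a_true, b_true = 0.5, 0.02
R = np.linspace(5.0, 60.0, 12)
R_meas = R * (1 + rng.normal(0, 0.02, R.size))   # errors in the predictor
rho = pade(R, a_true, b_true)

# Nonlinear fit on the raw data.
(a_nl, b_nl), _ = curve_fit(pade, R_meas, rho, p0=(1.0, 0.01))

# Linearized fit: 1/rho = 1/(a*R) + b/a, i.e., linear in 1/R.
s, i = np.polyfit(1.0 / R_meas, 1.0 / rho, 1)
a_lin, b_lin = 1.0 / s, i / s

print("nonlinear: ", a_nl, b_nl)
print("linearized:", a_lin, b_lin)
```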

  13. Improved characterization of scenes with a combination of MMW radar and radiometer information

    NASA Astrophysics Data System (ADS)

    Dill, Stephan; Peichl, Markus; Schreiber, Eric; Anglberger, Harald

    2017-05-01

    For security-related applications, MMW radar and radiometer systems in remote sensing or stand-off configurations are well established techniques. The range of development stages extends from experimental to commercial systems on the civil and military markets. Typical examples are systems for personnel screening at airports for the detection of objects concealed under clothing, enhanced vision or landing aids for helicopters, and vehicle-based systems for the detection of suspicious objects or IEDs along roads. Due to the physical principles of active (radar) and passive (radiometer) MMW measurement techniques, the appearance of single objects, and thus of the complete scenario, is rather different in radar and radiometer images. A reasonable combination of both measurement techniques could lead to enhanced object information, but some technical requirements should be taken into account: the imaging geometry for both sensors should be nearly identical, the geometrical resolution and the wavelength should be similar, and ideally the imaging should be carried out simultaneously. Therefore, theoretical and experimental investigations on a suitable combination of MMW radar and radiometer information have been conducted. First experiments in 2016 were done with an imaging linescanner based on a cylindrical imaging geometry [1], which combines a horizontal line scan in azimuth with a linear motion in the vertical direction for the second image dimension. The main drawback of the system is the limited number of pixels in the vertical dimension at a given distance; nevertheless, the near-range imaging results were promising. The combination of radar and radiometer sensors was therefore assembled on the DLR wide-field-of-view linescanner ABOSCA, which is based on a spherical imaging geometry [2]. A comparison of both imaging systems is discussed. The investigations concentrate on rather basic scenarios with canonical targets like flat plates, spheres, corner reflectors, and cylinders. First experimental measurement results with the ABOSCA linescanner are shown.

  14. Advanced statistics: linear regression, part I: simple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
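
    A minimal sketch of the least-squares mechanics reviewed above, on an invented toy dataset (not from the article):

        import numpy as np

        x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # single predictor variable
        y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # single outcome variable

        # method of least squares: slope = cov(x, y) / var(x)
        b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
        b0 = y.mean() - b1 * x.mean()             # line passes through the means
        print(b0, b1)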

  15. Improved quantification of important beer quality parameters based on nonlinear calibration methods applied to FT-MIR spectra.

    PubMed

    Cernuda, Carlos; Lughofer, Edwin; Klein, Helmut; Forster, Clemens; Pawliczek, Marcin; Brandstetter, Markus

    2017-01-01

    During the production process of beer, it is of utmost importance to guarantee a high consistency of the beer quality. For instance, the bitterness is an essential quality parameter which has to be controlled within the specifications at the beginning of the production process in the unfermented beer (wort) as well as in final products such as beer and beer mix beverages. Nowadays, analytical techniques for quality control in beer production are mainly based on manual supervision, i.e., samples are taken from the process and analyzed in the laboratory. This typically requires significant lab technician effort for only a small fraction of samples to be analyzed, which leads to significant costs for beer breweries and companies. Fourier transform mid-infrared (FT-MIR) spectroscopy was used in combination with nonlinear multivariate calibration techniques to overcome (i) the time-consuming off-line analyses in beer production and (ii) the known limitations of standard linear chemometric methods, such as partial least squares (PLS), for important quality parameters [Speers et al., J. Inst. Brewing 2003;109(3):229-235; Zhang et al., J. Inst. Brewing 2012;118(4):361-367] such as bitterness, citric acid, total acids, free amino nitrogen, final attenuation, or foam stability. The calibration models are established with enhanced nonlinear techniques based (i) on a new piece-wise linear version of PLS that employs fuzzy rules for locally partitioning the latent variable space and (ii) on extensions of support vector regression variants (ε-PLSSVR and ν-PLSSVR) that overcome high computation times in high-dimensional problems and time-intensive, inappropriate settings of the kernel parameters. Furthermore, we introduce a new model selection scheme based on bagged ensembles in order to improve robustness and thus the predictive quality of the final models. The approaches are tested on real-world calibration data sets for wort and beer mix beverages, and successfully compared to linear methods, showing a clear out-performance in most cases and being able to meet the model quality requirements defined by the experts at the beer company. [Figure: workflow for calibration of nonlinear model ensembles from FT-MIR spectra in beer production.]
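
    The following is a hedged sketch of the bagged-ensemble idea only: several PLS models trained on bootstrap resamples of the calibration set, with averaged predictions. The fuzzy partitioning and the ε-/ν-PLSSVR variants of the paper are not reproduced, and the random spectra below stand in for real FT-MIR calibration data.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(10)
        X = rng.normal(size=(200, 500))                        # 500 "wavenumbers" per spectrum
        y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=200)  # stand-in quality parameter

        preds = []
        for _ in range(25):                       # bagged ensemble of 25 PLS models
            idx = rng.choice(200, 200, replace=True)
            model = PLSRegression(n_components=5).fit(X[idx], y[idx])
            preds.append(model.predict(X).ravel())

        ensemble = np.mean(preds, axis=0)         # averaged ensemble prediction
        print(np.corrcoef(ensemble, y)[0, 1])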

  16. [Recent advances of anastomosis techniques of esophagojejunostomy after laparoscopic total gastrectomy in gastric tumor].

    PubMed

    Li, Xi; Ke, Chongwei

    2015-05-01

    The esophagojejunal anastomosis techniques for digestive tract reconstruction in laparoscopic total gastrectomy fall into two categories: circular stapler anastomosis techniques and linear stapler anastomosis techniques. Circular stapler anastomosis techniques include the manual anastomosis method, the purse-string instrument method, the Hiki improved special anvil anastomosis technique, the transorally inserted anvil (OrVil(TM)), and the reverse puncture device technique. Linear stapler anastomosis techniques include the side-to-side anastomosis technique and the Overlap side-to-side anastomosis technique. A wide selection of esophagojejunal anastomosis technologies is available, each with its own strengths and corresponding limitations. This article reviews research progress in esophagojejunal anastomosis after laparoscopic total gastrectomy from two perspectives: the development of anastomosis technology and the selection among techniques.

  17. Linear and Order Statistics Combiners for Pattern Classification

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep; Lau, Sonie (Technical Monitor)

    2001-01-01

    Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that, to a first-order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the 'added' error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum and, in general, the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
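
    A toy numerical check of the 1/N variance reduction claimed above, under the assumption of unbiased, uncorrelated Gaussian errors around a true posterior (the noise model is illustrative):

        import numpy as np

        rng = np.random.default_rng(0)
        true_posterior, N, trials = 0.7, 10, 100_000

        # one classifier's output vs the average of N independent ones
        single = true_posterior + rng.normal(0.0, 0.1, trials)
        ensemble = true_posterior + rng.normal(0.0, 0.1, (trials, N)).mean(axis=1)

        print(single.var())    # ~0.0100
        print(ensemble.var())  # ~0.0010, i.e. reduced by the factor N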

  18. Application of a local linearization technique for the solution of a system of stiff differential equations associated with the simulation of a magnetic bearing assembly

    NASA Technical Reports Server (NTRS)

    Kibler, K. S.; Mcdaniel, G. A.

    1981-01-01

    A digital local linearization technique was used to solve a system of stiff differential equations which simulate a magnetic bearing assembly. The results prove the technique to be accurate, stable, and efficient when compared to a general purpose variable order Adams method with a stiff option.
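
    A minimal sketch of one common form of local linearization for dx/dt = f(x) is given below: linearize about the current state with the Jacobian J and advance exactly on the linearized dynamics, x_{k+1} = x_k + J^{-1}(e^{hJ} - I) f(x_k). The two-state stiff test problem is an assumption for illustration, not the magnetic bearing model from the report.

        import numpy as np
        from scipy.linalg import expm, solve

        A = np.array([[-1000.0, 0.0],
                      [1.0, -0.1]])      # widely separated time scales (stiff)

        def f(x):
            # stiff linear part plus a mild nonlinearity
            return A @ x + np.array([0.0, 0.01 * x[0] ** 2])

        def jacobian(x):
            J = A.copy()
            J[1, 0] += 0.02 * x[0]       # derivative of the nonlinear term
            return J

        x, h = np.array([1.0, 1.0]), 0.01
        for _ in range(100):             # advance to t = 1
            J = jacobian(x)
            # exact step on the locally linearized dynamics
            x = x + solve(J, (expm(h * J) - np.eye(2)) @ f(x))
        print(x)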

  19. Comparison of lossless compression techniques for prepress color images

    NASA Astrophysics Data System (ADS)

    Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.

    1998-12-01

    In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter- color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.
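
    A minimal sketch of the KLT-style color decorrelation mentioned above: rotate the color samples onto their principal axes so that inter-color redundancy is concentrated in few components. The reversible-integer details a lossless codec needs are omitted, and the correlated random data stand in for a prepress image.

        import numpy as np

        rng = np.random.default_rng(1)
        mixing = np.array([[1.0, 0.9, 0.8],
                           [0.0, 0.4, 0.3],
                           [0.0, 0.0, 0.2]])
        rgb = rng.normal(size=(10000, 3)) @ mixing     # correlated "color" samples

        centered = rgb - rgb.mean(axis=0)
        _, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
        klt = centered @ eigvecs                       # rotate onto principal axes

        print(np.cov(klt, rowvar=False).round(3))      # near-diagonal covariance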

  20. The ILRS Reanalysis 1983 - 2009 Contributed To ITRF2008

    NASA Astrophysics Data System (ADS)

    Pavlis, E. C.; Luceri, V.; Sciarretta, C.; Kelm, R.

    2009-12-01

    For over two decades, Satellite Laser Ranging (SLR) data contribute to the definition of the Terrestrial Reference Frame (TRF). Until the development of ITRF2000, the contributions were submitted in the form of a set of normal equations or a covariance matrix of station coordinates and their linear rates at a standard epoch. The development of ITRF2005 ushered a new era with the use of weekly or session contributions, allowing greater flexibility in the relative weighting and the combination of information from various techniques. Moreover, the need of a unique, official, representative solution for each Technique Service, based on the rigorous combination of the various Analysis Centers’ contributions, gave the opportunity to all techniques to verify, as a first step, the intra-technique solution consistency and, immediately after, to engage in discussions and comparison of the internal procedures, leading to a harmonization and validation of these procedures and the adopted models in the inter-technique context. In many occasions, the time series approach joint with the intra- and inter-technique comparison steps also highlighted differences that previously went unnoticed, and corrected incompatibilities. During the past year we have been preparing the ILRS contribution to a second TRF developed in the same way, the ITRF2008. The ILRS approach is based strictly on the current IERS Conventions 2003 and our internal standards. The Unified Analysis Workshop in 2007 stressed a number of areas where each technique needed to focus more attention in future analyses. In the case of SLR, the primary areas of concern were tracking station biases, extending the data span used in the analysis, and target characteristics. The present re-analysis extends from 1983 to 2009, covering a 25-year period, the longest for any of the contributing techniques; although the network and data quality for the 1983-1993 period are significantly poorer than for the latter years, the overall SLR contribution will reinforce the stability of the datum definition, especially in terms of origin and scale. Engineers and analysts have also worked closely over the past two years to determine station biases, rationalize them through correlation with engineering events at the stations, and validate them through analysis. A separate effort focused on developing accurate satellite target signatures for the primary targets contributing to the ITRF product (primarily LAGEOS 1 & 2). A detailed discussion of these works will be presented along with a description of the individual series contributing to the combination, examining their relative quality and temporal coverage, and the statistics of the combined products.

  1. The ILRS contribution to ITRF2008

    NASA Astrophysics Data System (ADS)

    Pavlis, E. C.; Luceri, V.; Sciarretta, C.; Kelm, R.

    2009-04-01

    Since over two decades, Satellite Laser Ranging (SLR) data contribute to the definition of the Terrestrial Reference Frame (TRF). Until the development of ITRF2000, the contributions were submitted in the form of a set of normal equations or a covariance matrix of station coordinates and their linear rates at a standard epoch. The development of ITRF2005 ushered a new era with the use of weekly or session contributions, allowing greater flexibility in the relative weighting and the combination of information from various techniques. Moreover, the need of a unique, official, representative solution for each Technique Service, based on the rigorous combination of the various Analysis Centers' contributions, gave the opportunity to all techniques to verify, as a first step, the intra-technique solution consistency and, immediately after, to engage in discussions and comparison of the internal procedures, leading to a harmonization and validation of these procedures and the adopted models in the inter-technique context. In many occasions, the time series approach joint with the intra- and inter-technique comparison steps also highlighted differences that previously went unnoticed, and corrected incompatibilities. During the past year we have been preparing the ILRS contribution to a second TRF developed in the same way, the ITRF2008. The ILRS approach is based strictly on the current IERS Conventions 2003 and our internal standards. The Unified Analysis Workshop in 2007 stressed a number of areas where each technique needed to focus more attention in future analyses. In the case of SLR, the primary areas of concern were tracking station biases, extending the data span used in the analysis, and target characteristics. The present re-analysis extends from 1983 to 2008, covering a 25-year period, the longest for any of the contributing techniques; although the network and data quality for the 1983-1993 period are significantly poorer than for the latter years, the overall SLR contribution will reinforce the stability of the datum definition, especially in terms of origin and scale. Engineers and analysts have also worked closely over the past two years to determine station biases, rationalize them through correlation with engineering events at the stations, and validate them through analysis. A separate effort focused on developing accurate satellite target signatures for the primary targets contributing to the ITRF product (primarily LAGEOS 1 & 2). A detailed discussion of these works will be presented in a separate presentation. Here, we will restrict our presentation to the description of the individual series contributing to the combination, examine their relative quality and temporal coverage, and statistics of the initial, preliminary combined products.

  2. A study of the use of linear programming techniques to improve the performance in design optimization problems

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    This project has two objectives. The first is to determine whether linear programming techniques can improve performance, relative to the feasible directions algorithm, when handling design optimization problems with a large number of design variables and constraints. The second is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with a single constraint will reduce the cost of the total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving in using the linear method or in using the KS function to replace constraints.
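
    For reference, the Kreisselmeier-Steinhauser function referred to above folds many constraints g_j(x) <= 0 into the single smooth envelope KS(g) = (1/rho) ln sum_j exp(rho g_j); a small numerically stable sketch (with invented constraint values) follows.

        import numpy as np

        def ks(g, rho=50.0):
            # numerically stable log-sum-exp form of the KS envelope
            g = np.asarray(g)
            m = g.max()
            return m + np.log(np.exp(rho * (g - m)).sum()) / rho

        g = [-0.5, -0.1, 0.02, -0.3]   # invented constraint values
        print(max(g), ks(g))           # KS smoothly over-approximates the max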

  3. Gradient stationary phase optimized selectivity liquid chromatography with conventional columns.

    PubMed

    Chen, Kai; Lynen, Frédéric; Szucs, Roman; Hanna-Brown, Melissa; Sandra, Pat

    2013-05-21

    Stationary phase optimized selectivity liquid chromatography (SOSLC) is a promising technique to optimize the selectivity of a given separation. By combining different stationary phases, SOSLC offers excellent possibilities for method development under both isocratic and gradient conditions. The commercial SOSLC protocol available so far utilizes dedicated column cartridges and corresponding cartridge holders to build up the combined column of different stationary phases. The present work is aimed at developing and extending the gradient SOSLC approach towards coupling conventional columns. Generic tubing was used to connect short, commercially available LC columns. Fast, baseline separation of a mixture of 12 compounds containing phenones, benzoic acids and hydroxybenzoates under both isocratic and linear gradient conditions was selected to demonstrate the potential of SOSLC. The influence of the connecting tubing on the deviation of predictions is also discussed.

  4. Reduced order modeling of fluid/structure interaction.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barone, Matthew Franklin; Kalashnikova, Irina; Segalman, Daniel Joseph

    2009-11-01

    This report describes work performed from October 2007 through September 2009 under the Sandia Laboratory Directed Research and Development project titled 'Reduced Order Modeling of Fluid/Structure Interaction.' This project addresses fundamental aspects of techniques for construction of predictive Reduced Order Models (ROMs). A ROM is defined as a model, derived from a sequence of high-fidelity simulations, that preserves the essential physics and predictive capability of the original simulations but at a much lower computational cost. Techniques are developed for construction of provably stable linear Galerkin projection ROMs for compressible fluid flow, including a method for enforcing boundary conditions that preserves numerical stability. A convergence proof and error estimates are given for this class of ROM, and the method is demonstrated on a series of model problems. A reduced order method, based on the method of quadratic components, for solving the von Karman nonlinear plate equations is developed and tested. This method is applied to the problem of nonlinear limit cycle oscillations encountered when the plate interacts with an adjacent supersonic flow. A stability-preserving method for coupling the linear fluid ROM with the structural dynamics model for the elastic plate is constructed and tested. Methods for constructing efficient ROMs for nonlinear fluid equations are developed and tested on a one-dimensional convection-diffusion-reaction equation. These methods are combined with a symmetrization approach to construct a ROM technique for application to the compressible Navier-Stokes equations.
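
    A minimal sketch of the Galerkin-projection idea behind such ROMs: build a reduced basis from simulation snapshots via the SVD (POD) and project the full-order operator onto it. The random stable operator stands in for a discretized fluid model, and the stability-enforcement machinery of the report is omitted.

        import numpy as np

        rng = np.random.default_rng(2)
        n, k, steps, dt = 200, 5, 50, 0.01
        A = -np.eye(n) + 0.01 * rng.normal(size=(n, n))   # stand-in full-order operator

        # collect snapshots of the high-fidelity simulation x_{t+1} = x_t + dt*A*x_t
        x, snaps = rng.normal(size=n), []
        for _ in range(steps):
            x = x + dt * (A @ x)
            snaps.append(x.copy())

        U, _, _ = np.linalg.svd(np.array(snaps).T, full_matrices=False)
        Phi = U[:, :k]               # POD basis: k dominant snapshot modes
        A_r = Phi.T @ A @ Phi        # Galerkin-projected reduced operator (k x k)
        print(A_r.shape)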

  5. Intracorporeal reconstruction after laparoscopic pylorus-preserving gastrectomy for middle-third early gastric cancer: a hybrid technique using linear stapler and manual suturing.

    PubMed

    Koeda, Keisuke; Chiba, Takehiro; Noda, Hironobu; Nishinari, Yutaka; Segawa, Takenori; Akiyama, Yuji; Iwaya, Takeshi; Nishizuka, Satoshi; Nitta, Hiroyuki; Otsuka, Koki; Sasaki, Akira

    2016-05-01

    Laparoscopy-assisted pylorus-preserving gastrectomy has been increasingly reported as a treatment for early gastric cancer located in the middle third of the stomach because of its low invasiveness and preservation of pyloric function. Advantages of a totally laparoscopic approach to distal gastrectomy, including small wound size, minimal invasiveness, and safe anastomosis, have been recently reported. Here, we introduce a new procedure for intracorporeal gastro-gastrostomy combined with totally laparoscopic pylorus-preserving gastrectomy (TLPPG). The stomach is transected after sufficient lymphadenectomy with preservation of infrapyloric vessels and vagal nerves. The proximal stomach is first transected near the Demel line, and the distal side is transected 4 to 5 cm from the pyloric ring. To create end-to-end gastro-gastrostomy, the posterior wall of the anastomosis is stapled with a linear stapler and the anterior wall is made by manual suturing intracorporeally. We retrospectively assessed the postoperative surgical outcomes via medical records. The primary endpoint in the present study is safety. Sixteen patients underwent TLPPG with intracorporeal reconstruction. All procedures were successfully performed without any intraoperative complications. The mean operative time was 275 min, with mean blood loss of 21 g. With the exception of one patient who had gastric stasis, 15 patients were discharged uneventfully between postoperative days 8 and 11. Our novel hybrid technique for totally intracorporeal end-to-end anastomosis was performed safely without mini-laparotomy. This technique requires prospective validation.

  6. Quantitative determination of copper in a glass matrix using double pulse laser induced breakdown and electron paramagnetic resonance spectroscopic techniques.

    PubMed

    Khalil, Ahmed A I; Morsy, Mohamed A

    2016-07-01

    A series of lithium-lead-borate glasses with variable copper oxide loading were quantitatively analyzed in this work using two distinct spectroscopic techniques, namely double pulse laser induced breakdown spectroscopy (DP-LIBS) and electron paramagnetic resonance (EPR). DP-LIBS spectra were measured using combined nanosecond laser irradiation, with 266 nm and 1064 nm pulses in a collinear configuration directed at the surface of borate glass samples of known composition. This arrangement was employed to estimate the electron temperature (Te) and density (Ne) of the excited plasma from the recorded spectra. The intensity of the elemental responses in this scheme is higher than that of a single-pulse laser induced breakdown spectroscopy (SP-LIBS) setup under the same experimental conditions. The EPR data, on the other hand, show typical Cu(II) EPR signals for copper coordinated in a distorted tetragonal arrangement within the borate glass network. The signal intensity of the Cu(II) peak at g⊥ = 2.0596 has been used to quantify the Cu content in the glass matrix accurately. Both techniques produced linear calibration curves for copper in glass with excellent linear regression coefficient (R(2)) values. This study establishes a good correlation between DP-LIBS analysis of glass and the results obtained using EPR spectroscopy. The proposed protocols demonstrate the great advantage of a DP-LIBS system for the detection of trace copper on the surface of glasses. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Swept Impact Seismic Technique (SIST)

    USGS Publications Warehouse

    Park, C.B.; Miller, R.D.; Steeples, D.W.; Black, R.A.

    1996-01-01

    A coded seismic technique is developed that can result in a higher signal-to-noise ratio than a conventional single-pulse method does. The technique is cost-effective and time-efficient and therefore well suited for shallow-reflection surveys where high resolution and cost-effectiveness are critical. A low-power impact source transmits a few to several hundred high-frequency broad-band seismic pulses during several seconds of recording time according to a deterministic coding scheme. The coding scheme consists of a time-encoded impact sequence in which the rate of impact (cycles/s) changes linearly with time providing a broad range of impact rates. Impact times used during the decoding process are recorded on one channel of the seismograph. The coding concept combines the vibroseis swept-frequency and the Mini-Sosie random impact concepts. The swept-frequency concept greatly improves the suppression of correlation noise with much fewer impacts than normally used in the Mini-Sosie technique. The impact concept makes the technique simple and efficient in generating high-resolution seismic data especially in the presence of noise. The transfer function of the impact sequence simulates a low-cut filter with the cutoff frequency the same as the lowest impact rate. This property can be used to attenuate low-frequency ground-roll noise without using an analog low-cut filter or a spatial source (or receiver) array as is necessary with a conventional single-pulse method. Because of the discontinuous coding scheme, the decoding process is accomplished by a "shift-and-stacking" method that is much simpler and quicker than cross-correlation. The simplicity of the coding allows the mechanical design of the source to remain simple. Several different types of mechanical systems could be adapted to generate a linear impact sweep. In addition, the simplicity of the coding also allows the technique to be used with conventional acquisition systems, with only minor modifications.
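
    A toy sketch of the shift-and-stack decoding described above: the trace is shifted back to each recorded impact time and summed, so the repeated impulse responses add coherently while noise averages down. The synthetic trace, reflector, and impact times are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(3)
        trace = rng.normal(0.0, 1.0, 5000)            # recorded trace (background noise)
        impacts = [100, 400, 850, 1500, 2300]         # impact times from the pilot channel

        response = np.zeros(200)
        response[50] = 5.0                            # reflector arrival at sample 50
        for t0 in impacts:                            # each impact excites the same response
            trace[t0:t0 + 200] += response

        stacked = np.zeros(200)
        for t0 in impacts:                            # shift back to each impact and stack
            stacked += trace[t0:t0 + 200]
        stacked /= len(impacts)
        print(np.argmax(stacked))                     # recovers the arrival at sample 50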

  8. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each one involves a combination of several elementary or intermediate kernels, and results into a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semisupervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show clear gain, compared to several shallow kernels for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.

  9. Computation of nonlinear least squares estimator and maximum likelihood using principles in matrix calculus

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.

    2017-11-01

    This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE), and a linear pseudo model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE. The present research paper, however, introduces an innovative method to compute the NLSE using principles of multivariate calculus. This study is concerned with new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo model for a nonlinear regression model. In this research article a new technique is developed to obtain the linear pseudo model for the nonlinear regression model using multivariate calculus. The linear pseudo model of Edmond Malinvaud [4] has been explained in a very different way in this paper. David Pollard et al. used empirical process techniques to study the asymptotics of the LSE (least-squares estimator) for fitting nonlinear regression functions in 2006. Jae Myung [13] provided a good conceptual introduction to maximum likelihood estimation in his work “Tutorial on maximum likelihood estimation”.
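
    As a hedged illustration of the matrix-calculus route to an NLSE, the sketch below runs plain Gauss-Newton, beta <- beta + (J^T J)^{-1} J^T r, on an invented exponential model (not a model from the paper), starting near the solution:

        import numpy as np

        def model(beta, x):
            return beta[0] * np.exp(beta[1] * x)

        rng = np.random.default_rng(12)
        x = np.linspace(0.0, 2.0, 50)
        y = model([2.0, 1.5], x) + 0.1 * rng.normal(size=50)

        beta = np.array([1.5, 1.2])                     # starting guess near the solution
        for _ in range(15):
            r = y - model(beta, x)                      # residuals
            J = np.column_stack([np.exp(beta[1] * x),                  # d model / d beta0
                                 beta[0] * x * np.exp(beta[1] * x)])   # d model / d beta1
            beta = beta + np.linalg.solve(J.T @ J, J.T @ r)
        print(beta)                                     # close to (2.0, 1.5)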

  10. Linear time relational prototype based learning.

    PubMed

    Gisbrecht, Andrej; Mokbel, Bassam; Schleif, Frank-Michael; Zhu, Xibin; Hammer, Barbara

    2012-10-01

    Prototype based learning offers an intuitive interface to inspect large quantities of electronic data in supervised or unsupervised settings. Recently, many techniques have been extended to data described by general dissimilarities rather than Euclidean vectors, so-called relational data settings. Unlike the Euclidean counterparts, the techniques have quadratic time complexity due to the underlying quadratic dissimilarity matrix. Thus, they are infeasible already for medium sized data sets. The contribution of this article is twofold: On the one hand we propose a novel supervised prototype based classification technique for dissimilarity data based on popular learning vector quantization (LVQ), on the other hand we transfer a linear time approximation technique, the Nyström approximation, to this algorithm and an unsupervised counterpart, the relational generative topographic mapping (GTM). This way, linear time and space methods result. We evaluate the techniques on three examples from the biomedical domain.
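
    A minimal sketch of the Nyström approximation used above to reach linear time: approximate the full N x N similarity matrix from an N x m block against m << N landmark points. The RBF similarity and random data are illustrative assumptions, not the relational LVQ/GTM pipeline itself.

        import numpy as np
        from scipy.spatial.distance import cdist

        def rbf(A, B, gamma=0.1):
            # Gaussian similarity as a stand-in kernel
            return np.exp(-gamma * cdist(A, B, "sqeuclidean"))

        rng = np.random.default_rng(4)
        X = rng.normal(size=(1000, 10))          # N data points
        m = 50                                   # number of landmark points
        idx = rng.choice(len(X), m, replace=False)

        C = rbf(X, X[idx])                       # N x m block: cost linear in N
        W = C[idx]                               # m x m landmark block
        K_approx = C @ np.linalg.pinv(W) @ C.T   # rank-m Nystrom surrogate for K
        print(K_approx.shape)

    In practice the full surrogate is never materialized; algorithms work with the N x m factor directly, which is what yields linear time and space.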

  11. Solving deterministic non-linear programming problem using Hopfield artificial neural network and genetic programming techniques

    NASA Astrophysics Data System (ADS)

    Vasant, P.; Ganesan, T.; Elamvazuthi, I.

    2012-11-01

    Fairly reasonable results were obtained for non-linear engineering problems using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently in the past. Increasingly, hybrid techniques are being used to solve non-linear problems to obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, known as seismic survey. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by a hybrid neuro-genetic programming approach. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results than the stand-alone genetic programming method.

  12. Mathematical Techniques for Nonlinear System Theory.

    DTIC Science & Technology

    1981-09-01

    This report deals with research results obtained in the following areas: (1) Finite-dimensional linear system theory by algebraic methods--linear...Infinite-dimensional linear systems--realization theory of infinite-dimensional linear systems; (3) Nonlinear system theory --basic properties of

  13. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    USGS Publications Warehouse

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
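
    A hedged sketch of the hybrid idea: a global evolutionary stage seeds a truncated-Newton refinement. SciPy's differential evolution stands in for the GA, the 'TNC' method provides the truncated-Newton step, and the two-parameter objective is an invented stand-in for the groundwater model's performance function.

        import numpy as np
        from scipy.optimize import differential_evolution, minimize

        def objective(p):
            # invented multi-modal stand-in for the performance function
            return (p[0] - 3.0) ** 2 + 10.0 * np.sin(p[0]) ** 2 + (p[1] + 1.0) ** 2

        bounds = [(-10, 10), (-10, 10)]
        coarse = differential_evolution(objective, bounds, seed=0)         # global stage
        fine = minimize(objective, coarse.x, method="TNC", bounds=bounds)  # truncated Newton
        print(coarse.x, fine.x, fine.fun)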

  14. Quantiles for Finite Mixtures of Normal Distributions

    ERIC Educational Resources Information Center

    Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.

    2006-01-01

    Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)
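
    Since a finite mixture CDF has no closed-form inverse, quantiles can be computed by root-finding, as in the sketch below; the weights, means, and standard deviations are invented. Note that the mixture density is a linear combination of densities, not the density of a linear combination of normal random variables.

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import brentq

        w = [0.3, 0.7]; mu = [0.0, 5.0]; sd = [1.0, 2.0]   # invented mixture components

        def mixture_cdf(x):
            return sum(wi * norm.cdf(x, mi, si) for wi, mi, si in zip(w, mu, sd))

        def mixture_quantile(p):
            # invert the CDF numerically on a bracket wide enough to contain the root
            return brentq(lambda x: mixture_cdf(x) - p, -50.0, 50.0)

        print(mixture_quantile(0.5))   # median of the mixture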

  15. Quantitative shear wave optical coherence elastography (SW-OCE) with acoustic radiation force impulses (ARFI) induced by phase array transducer

    NASA Astrophysics Data System (ADS)

    Song, Shaozhen; Le, Nhan Minh; Wang, Ruikang K.; Huang, Zhihong

    2015-03-01

    Shear Wave Optical Coherence Elastography (SW-OCE) uses the speed of propagating shear waves to provide a quantitative measurement of localized shear modulus, making it a valuable technique for the elasticity characterization of tissues such as skin and ocular tissue. One of the main challenges in shear wave elastography is to induce a reliable source of shear waves; most current techniques use external vibrators, which have several drawbacks such as limited wave propagation range and difficulty achieving the precision and accuracy required for non-invasive scans. We therefore propose a linear phased-array ultrasound transducer as a remote wave source, combined with the high-speed, 47,000-frame-per-second shear-wave visualization provided by phase-sensitive OCT. In this study, we observed for the first time shear waves induced by a 128-element linear-array ultrasound imaging transducer, while the ultrasound and OCT images (within the OCE detection range) were triggered simultaneously. Acoustic radiation force impulses are induced by emitting 10 MHz tone-bursts of sub-millisecond duration (between 50 μs and 100 μs). Ultrasound beam steering is achieved by programming appropriate phase delays, covering a lateral range of 10 mm and the full OCT axial (depth) range in the imaging sample. Tissue-mimicking phantoms with agarose concentrations of 0.5% and 1% were used as the imaging samples in the SW-OCE measurements. The results show extensive improvements in the range of the SW-OCE elasticity map; such improvements can also be seen in shear wave velocities in softer and stiffer phantoms, as well as in determining the boundaries of multiple inclusions with different stiffness. This approach opens up the feasibility of combining medical ultrasound imaging and SW-OCE for high-resolution localized quantitative measurement of tissue biomechanical properties.

  16. Classification of arterial and venous cerebral vasculature based on wavelet postprocessing of CT perfusion data.

    PubMed

    Havla, Lukas; Schneider, Moritz J; Thierfelder, Kolja M; Beyer, Sebastian E; Ertl-Wagner, Birgit; Reiser, Maximilian F; Sommer, Wieland H; Dietrich, Olaf

    2016-02-01

    The purpose of this study was to propose and evaluate a new wavelet-based technique for classification of arterial and venous vessels using time-resolved cerebral CT perfusion data sets. Fourteen consecutive patients (mean age 73 yr, range 17-97) with suspected stroke but no pathology in follow-up MRI were included. A CT perfusion scan with 32 dynamic phases was performed during intravenous bolus contrast-agent application. After rigid-body motion correction, a Paul wavelet (order 1) was used to calculate voxelwise the wavelet power spectrum (WPS) of each attenuation-time course. The angiographic intensity A was defined as the maximum of the WPS, located at the coordinates T (time axis) and W (scale/width axis) within the WPS. Using these three parameters (A, T, W) separately as well as combined by (1) Fisher's linear discriminant analysis (FLDA), (2) logistic regression (LogR) analysis, or (3) support vector machine (SVM) analysis, their potential to classify 18 different arterial and venous vessel segments per subject was evaluated. The best vessel classification was obtained using all three parameters (A, T, and W) [area under the curve (AUC): 0.953 with FLDA and 0.957 with LogR or SVM]. In direct comparison, the wavelet-derived parameters provided performance at least equal to conventional attenuation-time-course parameters. The maximum AUC obtained from the proposed wavelet parameters was slightly (although not statistically significantly) higher than the maximum AUC (0.945) obtained from the conventional parameters. A new method to classify arterial and venous cerebral vessels with high statistical accuracy was introduced, based on the time-domain wavelet transform of dynamic CT perfusion data in combination with linear or nonlinear multidimensional classification techniques.
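
    A hedged sketch of the classification step only: Fisher's linear discriminant applied to the three wavelet parameters (A, T, W), with simulated arterial/venous feature values rather than real CT perfusion data.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(13)

        def segments(a_mu, t_mu, w_mu, n=100):
            return np.column_stack([rng.normal(a_mu, 2.0, n),   # A: angiographic intensity
                                    rng.normal(t_mu, 2.0, n),   # T: time coordinate
                                    rng.normal(w_mu, 1.0, n)])  # W: scale/width coordinate

        X = np.vstack([segments(10, 12, 6), segments(8, 18, 9)])  # arteries, then veins
        y = np.repeat([0, 1], 100)

        lda = LinearDiscriminantAnalysis().fit(X, y)
        print(roc_auc_score(y, lda.decision_function(X)))          # in-sample AUC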

  17. Linear and nonlinear regression techniques for simultaneous and proportional myoelectric control.

    PubMed

    Hahne, J M; Biessmann, F; Jiang, N; Rehbaum, H; Farina, D; Meinecke, F C; Muller, K-R; Parra, L C

    2014-03-01

    In recent years the number of active controllable joints in electrically powered hand prostheses has increased significantly. However, the control strategies for these devices in current clinical use are inadequate, as they require separate and sequential control of each degree of freedom (DoF). In this study we systematically compare linear and nonlinear regression techniques for independent, simultaneous and proportional myoelectric control of wrist movements with two DoFs. These techniques include linear regression, mixture of linear experts (ME), the multilayer perceptron, and kernel ridge regression (KRR). They are investigated offline with electromyographic signals acquired from ten able-bodied subjects and one person with congenital upper limb deficiency. The control accuracy is reported as a function of the number of electrodes and the amount and diversity of training data, providing guidance for the requirements in clinical practice. The results showed that KRR, a nonparametric statistical learning method, outperformed the other methods. However, simple transformations in the feature space could linearize the problem, so that linear models could achieve similar performance to KRR at much lower computational cost. Especially ME, a physiologically inspired extension of linear regression, represents a promising candidate for the next generation of prosthetic devices.
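
    A minimal sketch of kernel ridge regression, the best-performing method above, using synthetic stand-ins for the EMG features and wrist targets:

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        rng = np.random.default_rng(5)
        X = rng.normal(size=(500, 8))                            # stand-in EMG features
        y = np.tanh(X @ rng.normal(size=8)) + 0.1 * rng.normal(size=500)

        model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1)  # nonparametric regressor
        model.fit(X[:400], y[:400])
        print(model.score(X[400:], y[400:]))                     # held-out R^2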

  18. Sparse 4D TomoSAR imaging in the presence of non-linear deformation

    NASA Astrophysics Data System (ADS)

    Khwaja, Ahmed Shaharyar; Çetin, Müjdat

    2018-04-01

    In this paper, we present a sparse four-dimensional tomographic synthetic aperture radar (4D TomoSAR) imaging scheme that can estimate elevation and linear as well as non-linear seasonal deformation rates of scatterers using the interferometric phase. Unlike existing sparse processing techniques that use fixed dictionaries based on a linear deformation model, we use a variable dictionary for the non-linear deformation in the form of seasonal sinusoidal deformation, in addition to the fixed dictionary for the linear deformation. We estimate the amplitude of the sinusoidal deformation using an optimization method and create the variable dictionary using the estimated amplitude. We show preliminary results using simulated data that demonstrate the soundness of our proposed technique for sparse 4D TomoSAR imaging in the presence of non-linear deformation.

  19. Derived Optimal Linear Combination Evapotranspiration (DOLCE): a global gridded synthesis ET estimate

    NASA Astrophysics Data System (ADS)

    Hobeichi, Sanaa; Abramowitz, Gab; Evans, Jason; Ukkola, Anna

    2018-02-01

    Accurate global gridded estimates of evapotranspiration (ET) are key to understanding water and energy budgets, in addition to being required for model evaluation. Several gridded ET products have already been developed which differ in their data requirements, the approaches used to derive them and their estimates, yet it is not clear which provides the most reliable estimates. This paper presents a new global ET dataset and associated uncertainty with monthly temporal resolution for 2000-2009. Six existing gridded ET products are combined using a weighting approach trained by observational datasets from 159 FLUXNET sites. The weighting method is based on a technique that provides an analytically optimal linear combination of ET products compared to site data and accounts for both the performance differences and error covariance between the participating ET products. We examine the performance of the weighting approach in several in-sample and out-of-sample tests that confirm that point-based estimates of flux towers provide information on the grid scale of these products. We also provide evidence that the weighted product performs better than its six constituent ET product members in four common metrics. Uncertainty in the ET estimate is derived by rescaling the spread of participating ET products so that their spread reflects the ability of the weighted mean estimate to match flux tower data. While issues in observational data and any common biases in participating ET datasets are limitations to the success of this approach, future datasets can easily be incorporated and enhance the derived product.
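
    A minimal sketch of the analytically optimal weighting idea: with C the error covariance of the participating products against site observations, the MSE-minimizing weights that sum to one are w = C^{-1} 1 / (1^T C^{-1} 1). The bias-handling details of the actual DOLCE method are omitted and the data are synthetic.

        import numpy as np

        rng = np.random.default_rng(6)
        truth = rng.normal(10.0, 2.0, 1000)              # flux-tower "observations"
        err_cov = [[1.0, 0.6, 0.2],
                   [0.6, 1.5, 0.4],
                   [0.2, 0.4, 0.8]]
        products = truth[:, None] + rng.multivariate_normal([0, 0, 0], err_cov, 1000)

        C = np.cov((products - truth[:, None]).T)        # estimated error covariance
        ones = np.ones(3)
        w = np.linalg.solve(C, ones)
        w /= ones @ w                                    # weights sum to one
        combined = products @ w

        mse = lambda e: np.mean(e ** 2)
        print(w, mse(combined - truth),                  # weighted mean beats...
              [mse(products[:, i] - truth) for i in range(3)])  # ...each member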

  20. Background correction in forensic photography. I. Photography of blood under conditions of non-uniform illumination or variable substrate color--theoretical aspects and proof of concept.

    PubMed

    Wagner, John H; Miskelly, Gordon M

    2003-05-01

    The combination of photographs taken at two or three wavelengths at and bracketing an absorbance peak indicative of a particular compound can lead to an image with enhanced visualization of the compound. This procedure works best for compounds with absorbance bands that are narrow compared with "average" chromophores. If necessary, the photographs can be taken with different exposure times to ensure that sufficient light from the substrate is detected at all three wavelengths. The combination of images is readily performed if the images are obtained with a digital camera and are then processed using an image processing program. Best results are obtained if linear images at the peak maximum, at a slightly shorter wavelength, and at a slightly longer wavelength are used. However, acceptable results can also be obtained under many conditions if non-linear photographs are used or if only two wavelengths (one of which is at the peak maximum) are combined. These latter conditions are more achievable by many "mid-range" digital cameras. Wavelength selection can either be by controlling the illumination (e.g., by using an alternate light source) or by use of narrow bandpass filters. The technique is illustrated using blood as the target analyte, using bands of light centered at 395, 415, and 435 nm. The extension of the method to detection of blood by fluorescence quenching is also described.
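
    A hedged sketch of the three-wavelength combination: dividing the on-peak image by the geometric mean of the two bracketing images cancels smooth illumination and substrate variation while the narrow on-peak absorbance survives. The exact combination and the synthetic images below are illustrative, not the paper's processing chain.

        import numpy as np

        rng = np.random.default_rng(9)
        illumination = 1.0 + 0.5 * rng.random((64, 64))   # non-uniform lighting
        stain = np.zeros((64, 64))
        stain[20:30, 20:30] = 0.6                         # bloodstain region

        I395 = illumination * (1.0 - 0.1 * stain)         # weak off-peak absorbance
        I415 = illumination * (1.0 - 0.9 * stain)         # strong on-peak (Soret) absorbance
        I435 = illumination * (1.0 - 0.1 * stain)         # weak off-peak absorbance

        enhanced = I415 / np.sqrt(I395 * I435)            # illumination cancels out
        print(enhanced[25, 25], enhanced[5, 5])           # stain ~0.49 vs background ~1.0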

  1. Pharmaceutical Raw Material Identification Using Miniature Near-Infrared (MicroNIR) Spectroscopy and Supervised Pattern Recognition Using Support Vector Machine

    PubMed Central

    Hsiung, Chang; Pederson, Christopher G.; Zou, Peng; Smith, Valton; von Gunten, Marc; O’Brien, Nada A.

    2016-01-01

    Near-infrared spectroscopy as a rapid and non-destructive analytical technique offers great advantages for pharmaceutical raw material identification (RMID) to fulfill the quality and safety requirements in pharmaceutical industry. In this study, we demonstrated the use of portable miniature near-infrared (MicroNIR) spectrometers for NIR-based pharmaceutical RMID and solved two challenges in this area, model transferability and large-scale classification, with the aid of support vector machine (SVM) modeling. We used a set of 19 pharmaceutical compounds including various active pharmaceutical ingredients (APIs) and excipients and six MicroNIR spectrometers to test model transferability. For the test of large-scale classification, we used another set of 253 pharmaceutical compounds comprised of both chemically and physically different APIs and excipients. We compared SVM with conventional chemometric modeling techniques, including soft independent modeling of class analogy, partial least squares discriminant analysis, linear discriminant analysis, and quadratic discriminant analysis. Support vector machine modeling using a linear kernel, especially when combined with a hierarchical scheme, exhibited excellent performance in both model transferability and large-scale classification. Hence, ultra-compact, portable and robust MicroNIR spectrometers coupled with SVM modeling can make on-site and in situ pharmaceutical RMID for large-volume applications highly achievable. PMID:27029624

  2. Correntropy-based partial directed coherence for testing multivariate Granger causality in nonlinear processes

    NASA Astrophysics Data System (ADS)

    Kannan, Rohit; Tangirala, Arun K.

    2014-06-01

    Identification of directional influences in multivariate systems is of prime importance in several applications of engineering and sciences such as plant topology reconstruction, fault detection and diagnosis, and neurosciences. A spectrum of related directionality measures, ranging from linear measures such as partial directed coherence (PDC) to nonlinear measures such as transfer entropy, have emerged over the past two decades. The PDC-based technique is simple and effective, but being a linear directionality measure has limited applicability. On the other hand, transfer entropy, despite being a robust nonlinear measure, is computationally intensive and practically implementable only for bivariate processes. The objective of this work is to develop a nonlinear directionality measure, termed as KPDC, that possesses the simplicity of PDC but is still applicable to nonlinear processes. The technique is founded on a nonlinear measure called correntropy, a recently proposed generalized correlation measure. The proposed method is equivalent to constructing PDC in a kernel space where the PDC is estimated using a vector autoregressive model built on correntropy. A consistent estimator of the KPDC is developed and important theoretical results are established. A permutation scheme combined with the sequential Bonferroni procedure is proposed for testing hypothesis on absence of causality. It is demonstrated through several case studies that the proposed methodology effectively detects Granger causality in nonlinear processes.
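
    For reference, correntropy, the generalized correlation measure underlying KPDC, is the expected kernel similarity V(X, Y) = E[kappa(X - Y)]; a minimal sketch with a Gaussian kernel follows (the full kernel-space PDC construction is not reproduced).

        import numpy as np

        def correntropy(x, y, sigma=1.0):
            # sample estimate of V(X, Y) = E[kappa(X - Y)] with a Gaussian kernel
            return np.mean(np.exp(-(x - y) ** 2 / (2.0 * sigma ** 2)))

        rng = np.random.default_rng(11)
        x = rng.normal(size=1000)
        print(correntropy(x, x))                        # 1.0 for identical series
        print(correntropy(x, rng.normal(size=1000)))    # lower for unrelated series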

  3. A distributed lag approach to fitting non-linear dose-response models in particulate matter air pollution time series investigations.

    PubMed

    Roberts, Steven; Martin, Michael A

    2007-06-01

    The majority of studies that have investigated the relationship between particulate matter (PM) air pollution and mortality have assumed a linear dose-response relationship and have used either a single day's PM or a 2- or 3-day moving average of PM as the measure of PM exposure. Both of these modeling choices have come under scrutiny in the literature: the linear assumption because it does not allow for non-linearities in the dose-response relationship, and the single- or multi-day moving average PM measure because it does not allow for differential PM-mortality effects spread over time. These two problems have been dealt with on a piecemeal basis, with non-linear dose-response models used in some studies and distributed lag models (DLMs) used in others. In this paper, we propose a method for investigating the shape of the PM-mortality dose-response relationship that combines a non-linear dose-response model with a DLM. This combined model is shown to produce satisfactory estimates of the PM-mortality dose-response relationship in situations where non-linear dose-response models and DLMs alone do not; that is, the combined model did not systematically underestimate or overestimate the effect of PM on mortality. The combined model is applied to ten cities in the US and a pooled dose-response model is formed. When fitted with a change-point value of 60 microg/m(3), the pooled model provides evidence for a positive association between PM and mortality. The combined model produced larger estimates of the effect of PM on mortality than a non-linear dose-response model or a DLM used in isolation. For the combined model, the estimated percentage increases in mortality at PM concentrations of 25 and 75 microg/m(3) were 3.3% and 5.4%, respectively. In contrast, the corresponding values from a DLM used in isolation were 1.2% and 3.5%, respectively.

  4. A novel method for determining calibration and behavior of PVDF ultrasonic hydrophone probes in the frequency range up to 100 MHz.

    PubMed

    Bleeker, H J; Lewin, P A

    2000-01-01

    A new calibration technique for PVDF ultrasonic hydrophone probes is described. The current implementation of the technique allows determination of hydrophone frequency response between 2 and 100 MHz and is based on the comparison of theoretically predicted and experimentally determined pressure-time waveforms produced by a focused, circular source. The simulation model was derived from the time domain algorithm that solves the nonlinear KZK (Khokhlov-Zabolotskaya-Kuznetsov) equation describing acoustic wave propagation. The calibration results were experimentally verified using independent calibration procedures in the frequency range from 2 to 40 MHz, using either a combined time delay spectrometry and reciprocity approach or calibration data provided by the National Physical Laboratory (NPL), UK. The verification indicated good agreement between the results obtained using the KZK-based technique and the above-mentioned independent calibration techniques from 2 to 40 MHz, with a maximum discrepancy of 18% at 30 MHz. The frequency responses obtained using different hydrophone designs, including several membrane and needle probes, are presented, and it is shown that the technique developed provides a desirable tool for independent verification of primary calibration techniques such as those based on optical interferometry. Fundamental limitations of the presented calibration method are also examined.

  5. Improvement in QEPAS system utilizing a second harmonic based wavelength calibration technique

    NASA Astrophysics Data System (ADS)

    Zhang, Qinduan; Chang, Jun; Wang, Fupeng; Wang, Zongliang; Xie, Yulei; Gong, Weihua

    2018-05-01

    A simple laser wavelength calibration technique, based on the second harmonic signal, is demonstrated in this paper to improve the performance of a quartz enhanced photoacoustic spectroscopy (QEPAS) gas sensing system, e.g. its signal-to-noise ratio (SNR), detection limit, and long-term stability. A constant current corresponding to the gas absorption line, combined with a sinusoidal signal at frequency f/2, is used to drive the laser (constant driving mode), and a software-based real-time wavelength calibration technique is developed to eliminate wavelength drift due to ambient fluctuations. Compared to conventional wavelength modulation spectroscopy (WMS), this method allows a lower filtering bandwidth and an averaging algorithm to be applied to the QEPAS system, improving the SNR and detection limit. In addition, the real-time wavelength calibration technique guarantees that the laser output stays modulated at the gas absorption line. Water vapor is chosen as the target gas to evaluate the system's performance against the constant driving mode and a conventional WMS system. The water vapor sensor was made insensitive to incoherent external acoustic noise by the numerical averaging technique. As a result, the SNR of the wavelength-calibration-based system is 12.87 times that of the conventional WMS system. The new system also achieved a better linear response (R2 = 0.9995) over the concentration range from 300 to 2000 ppmv, and a minimum detection limit (MDL) of 630 ppbv.

  6. Spacecraft nonlinear control

    NASA Technical Reports Server (NTRS)

    Sheen, Jyh-Jong; Bishop, Robert H.

    1992-01-01

    The feedback linearization technique is applied to the problem of spacecraft attitude control and momentum management with control moment gyros (CMGs). The feedback linearization consists of a coordinate transformation, which transforms the system to a companion form, and a nonlinear feedback control law to cancel the nonlinear dynamics resulting in a linear equivalent model. Pole placement techniques are then used to place the closed-loop poles. The coordinate transformation proposed here evolves from three output functions of relative degree four, three, and two, respectively. The nonlinear feedback control law is presented. Stability in a neighborhood of a controllable torque equilibrium attitude (TEA) is guaranteed and this fact is demonstrated by the simulation results. An investigation of the nonlinear control law shows that singularities exist in the state space outside the neighborhood of the controllable TEA. The nonlinear control law is simplified by a standard linearization technique and it is shown that the linearized nonlinear controller provides a natural way to select control gains for the multiple-input, multiple-output system. Simulation results using the linearized nonlinear controller show good performance relative to the nonlinear controller in the neighborhood of the TEA.

  7. Source finding in linear polarization for LOFAR, and SKA predecessor surveys, using Faraday moments

    NASA Astrophysics Data System (ADS)

    Farnes, J. S.; Heald, G.; Junklewitz, H.; Mulcahy, D. D.; Haverkorn, M.; Van Eck, C. L.; Riseley, C. J.; Brentjens, M.; Horellou, C.; Vacca, V.; Jones, D. I.; Horneffer, A.; Paladino, R.

    2018-03-01

    The optimal source-finding strategy for linear polarization data is an unsolved problem, with many inhibitive factors imposed by the technically challenging nature of polarization observations. Such an algorithm is essential for Square Kilometre Array (SKA) pathfinder surveys, such as the Multifrequency Snapshot Sky Survey with the LOw Frequency ARray (LOFAR), as data volumes are significant enough to prohibit manual inspection. We present a new strategy of 'Faraday Moments' for source-finding in linear polarization with LOFAR, using the moments of the frequency-dependent full-Stokes data (i.e. the mean, standard deviation, skewness, and excess kurtosis). Through simulations of the sky, we find that moments can identify polarized sources with a high completeness: 98.5 per cent at a signal to noise of 5. While the method has low reliability, rotation measure (RM) synthesis can be applied per candidate source to filter out instrumental and spurious detections. This combined strategy will result in a complete and reliable catalogue of polarized sources that includes the full sensitivity of the observational bandwidth. We find that the technique can reduce the number of pixels on which RM Synthesis needs to be performed by a factor of ≈1 × 10^5 for source distributions anticipated with modern radio telescopes. Through tests on LOFAR data, we find that the technique works effectively in the presence of diffuse emission. Extensions of this method are directly applicable to other upcoming radio surveys such as the POlarization Sky Survey of the Universe's Magnetism with the Australia Square Kilometre Array Pathfinder, and the SKA itself.
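
    A minimal sketch of computing the four Faraday moments per pixel from a frequency-dependent Stokes Q spectrum; the band, rotation measure, and spectra are simulated stand-ins for LOFAR data.

        import numpy as np
        from scipy.stats import skew, kurtosis

        rng = np.random.default_rng(7)
        nchan = 512
        freq = np.linspace(120e6, 168e6, nchan)            # illustrative low-frequency band (Hz)
        lam2 = (2.998e8 / freq) ** 2                       # wavelength squared (m^2)

        noise_q = rng.normal(0, 1, nchan)                  # noise-only pixel
        source_q = 5 * np.cos(2 * 10.0 * lam2) + rng.normal(0, 1, nchan)  # RM = 10 rad/m^2

        for name, q in [("noise", noise_q), ("source", source_q)]:
            # mean, standard deviation, skewness, excess kurtosis of the spectrum
            print(name, q.mean(), q.std(), skew(q), kurtosis(q))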

  8. Quadrature, Interpolation and Observability

    NASA Technical Reports Server (NTRS)

    Hodges, Lucille McDaniel

    1997-01-01

    Methods of interpolation and quadrature have been used for over 300 years. Improvements in the techniques have been made by many, most notably by Gauss, whose technique applied to polynomials is referred to as Gaussian Quadrature. Stieltjes extended Gauss's method to certain non-polynomial functions as early as 1884. Conditions that guarantee the existence of quadrature formulas for certain collections of functions were studied by Tchebycheff, and his work was extended by others. Today, a class of functions which satisfies these conditions is called a Tchebycheff System. This thesis contains the definition of a Tchebycheff System, along with the theorems, proofs, and definitions necessary to guarantee the existence of quadrature formulas for such systems. Solutions of discretely observable linear control systems are of particular interest, and observability with respect to a given output function is defined. The output function is written as a linear combination of a collection of orthonormal functions. Orthonormal functions are defined, and their properties are discussed. The technique for evaluating the coefficients in the output function involves evaluating the definite integral of functions which can be shown to form a Tchebycheff system. Therefore, quadrature formulas for these integrals exist, and in many cases are known. The technique given is useful in cases where the method of direct calculation is unstable. The condition number of a matrix is defined and shown to be an indication of the degree to which perturbations in data affect the accuracy of the solution. In special cases, the number of data points required for direct calculation is the same as the number required by the method presented in this thesis. But the method is shown to require more data points in other cases. A lower bound for the number of data points required is given.
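
    Two computable pieces of the discussion above, as a hedged sketch: a Gauss-Legendre rule (exact for polynomials of degree up to 2n - 1) and the condition number as a sensitivity indicator, here on the standard ill-conditioned Hilbert matrix. Both are textbook examples, not taken from the thesis.

        import numpy as np

        nodes, weights = np.polynomial.legendre.leggauss(3)  # 3-point Gauss-Legendre rule
        integral = np.sum(weights * nodes ** 4)              # integrates x^4 on [-1, 1]
        print(integral, 2.0 / 5.0)                           # exact: degree 4 <= 2*3 - 1

        # condition number as a sensitivity indicator (Hilbert matrix example)
        hilbert = np.array([[1.0 / (i + j + 1) for j in range(5)] for i in range(5)])
        print(np.linalg.cond(hilbert))                       # large => ill-conditioned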

  9. A gated deep inspiration breath‐hold radiation therapy technique using a linear position transducer

    PubMed Central

    Denissova, Svetlana I.; Yewondwossen, Mammo H.; Andrew, John W.; Hale, Michael E.; Murphy, Carl H.; Purcell, Scott R.

    2005-01-01

    For patients with thoracic and abdominal lesions, respiration-induced internal organ motion and deformations during radiation therapy are limiting factors for the administration of a high radiation dose. To increase the dose to the tumor and to reduce margins, tumor movement during treatment must be minimized. Currently, several types of breath-synchronized systems are in use. These systems include respiratory gating, deep inspiration breath-hold, active breathing control, and voluntary breath-hold. We used a linear position transducer (LPT) to monitor changes in a patient's abdominal cross-sectional area. The LPT tracks changes in body circumference during the respiratory cycle using a strap connected to the LPT and wrapped around the patient's torso. The LPT signal is monitored by a computer that provides a real-time plot of the patient's breathing pattern. In our technique, we use a CT study with multiple gated acquisitions. The Philips Medical Systems Q series CT imaging system is capable of operating in conjunction with a contrast injector, which allows a patient performing the deep inspiration breath-hold maneuver to send a signal to trigger the CT scanner acquisitions. The LPT system, when interfaced to a LINAC, allows treatment to be delivered only during deep inspiration breath-hold periods. Treatment stops automatically if the lung volume drops below a preset value. The whole treatment can be accomplished with 1 to 3 breath-holds. This technique has been used successfully to combine automatically gated radiation delivery with the deep inspiration breath-hold technique. This improves the accuracy of treatment for moving tumors, providing better target coverage, sparing more healthy tissue, and saving machine time. PACS numbers: 87.53.2j, 87.57.‐s PMID:15770197

  10. Identifying intervals of temporally invariant field-aligned currents from Swarm: Assessing the validity of single-spacecraft methods

    NASA Astrophysics Data System (ADS)

    Forsyth, C.; Rae, I. J.; Mann, I. R.; Pakhotin, I. P.

    2017-03-01

    Field-aligned currents (FACs) are a fundamental component of the coupled solar wind-magnetosphere-ionosphere system. By assuming that FACs can be approximated by stationary infinite current sheets that do not change over the spacecraft crossing time, single-spacecraft magnetic field measurements can be used to estimate the currents flowing in space. By combining data from multiple spacecraft on similar orbits, these stationarity assumptions can be tested. In this technical report, we present a new technique that combines cross correlation and linear fitting of multiple spacecraft measurements to determine the reliability of the FAC estimates. We show that this technique can identify those intervals in which the currents estimated from single-spacecraft techniques are both well correlated and have similar amplitudes, thus meeting the spatial and temporal stationarity requirements. Using data from the European Space Agency's Swarm mission from 2014 to 2015, we show that larger-scale currents (>450 km) are well correlated and have a one-to-one fit up to 50% of the time, whereas small-scale (<50 km) currents show similar amplitudes only 1% of the time despite there being a good correlation 18% of the time. It is thus imperative to examine both the correlation and the amplitude of the calculated FACs in order to assess the validity of the underlying assumptions and hence, ultimately, the reliability of such single-spacecraft FAC estimates.
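    The two-part test described above reduces, per interval, to a correlation check plus a linear fit; a hedged sketch with synthetic series follows, where the series and the acceptance thresholds are illustrative rather than the Swarm values.

      import numpy as np

      rng = np.random.default_rng(1)
      fac_a = rng.normal(size=600)                     # FAC estimate, spacecraft A
      fac_b = fac_a + rng.normal(scale=0.3, size=600)  # spacecraft B on a similar orbit

      correlation = np.corrcoef(fac_a, fac_b)[0, 1]    # temporal stationarity check
      slope, intercept = np.polyfit(fac_a, fac_b, 1)   # amplitude (one-to-one) check

      # Accept the interval only if both criteria hold (illustrative thresholds).
      reliable = correlation > 0.8 and abs(slope - 1.0) < 0.2
      print(round(correlation, 3), round(slope, 3), reliable)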

  11. In vivo measurements of cutaneous melanin across spatial scales: using multiphoton microscopy and spatial frequency domain spectroscopy

    PubMed Central

    Saager, Rolf B.; Balu, Mihaela; Crosignani, Viera; Sharif, Ata; Durkin, Anthony J.; Kelly, Kristen M.; Tromberg, Bruce J.

    2015-01-01

    The combined use of nonlinear optical microscopy and broadband reflectance techniques to assess melanin concentration and distribution thickness in vivo over the full range of Fitzpatrick skin types is presented. Twelve patients were measured using multiphoton microscopy (MPM) and spatial frequency domain spectroscopy (SFDS) on both dorsal forearm and volar arm, which are generally sun-exposed and non-sun-exposed areas, respectively. Both MPM and SFDS measured melanin volume fractions between ∼5% (skin type I non-sun-exposed) and 20% (skin type VI sun exposed). MPM measured epidermal (anatomical) thickness values ∼30–65 μm, while SFDS measured melanin distribution thickness based on diffuse optical path length. There was a strong correlation between melanin concentration and melanin distribution (epidermal) thickness measurements obtained using the two techniques. While SFDS does not have the ability to match the spatial resolution of MPM, this study demonstrates that melanin content as quantified using SFDS is linearly correlated with epidermal melanin as measured using MPM (R² = 0.8895). SFDS melanin distribution thickness is correlated to MPM values (R² = 0.8131). These techniques can be used individually and/or in combination to advance our understanding and guide therapies for pigmentation-related conditions as well as light-based treatments across a full range of skin types. PMID:26065839

  12. The Recommendations for Linear Measurement Techniques on the Measurements of Nonlinear System Parameters of a Joint.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Scott A; Catalfamo, Simone; Brake, Matthew R. W.

    2017-01-01

    In the study of the dynamics of nonlinear systems, experimental measurements often convolute the response of the nonlinearity of interest and the effects of the experimental setup. To reduce the influence of the experimental setup on the deduction of the parameters of the nonlinearity, the response of a mechanical joint is investigated under various experimental setups. These experiments first focus on quantifying how support structures and measurement techniques affect the natural frequency and damping of a linear system. The results indicate that support structures created from bungees have a negligible influence on the system in terms of frequency and damping ratio variations. The study then focuses on the effects of the excitation technique on the response of a linear system. The findings suggest that thinner stingers should not be used, because under the high force requirements the stinger bending modes are excited, adding unwanted torsional coupling. The optimal configuration for testing the linear system is then applied to a nonlinear system in order to assess the robustness of the test configuration. Finally, recommendations are made for conducting experiments on nonlinear systems using conventional/linear testing techniques.

  13. Dual energy CT: How well can pseudo-monochromatic imaging reduce metal artifacts?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuchenbecker, Stefan, E-mail: stefan.kuchenbecker@dkfz.de; Faby, Sebastian; Sawall, Stefan

    2015-02-15

    Purpose: Dual Energy CT (DECT) provides so-called monoenergetic images based on a linear combination of the original polychromatic images. At certain patient-specific energy levels, corresponding to certain patient- and slice-dependent linear combination weights, e.g., E = 160 keV corresponds to α = 1.57, a significant reduction of metal artifacts may be observed. The authors aimed at analyzing the method for its artifact reduction capabilities to identify its limitations. The results are compared with raw data-based processing. Methods: Clinical DECT uses a simplified version of monochromatic imaging by linearly combining the low and the high kV images and by assigning an energy to that linear combination. Those pseudo-monochromatic images can be used by radiologists to obtain images with reduced metal artifacts. The authors analyzed the underlying physics and carried out a series expansion of the polychromatic attenuation equations. The resulting nonlinear terms are responsible for the artifacts, but they are not linearly related between the low and the high kV scan: a linear combination of both images cannot eliminate the nonlinearities, it can only reduce their impact. Scattered radiation yields additional noncanceling nonlinearities. This method is compared to raw data-based artifact correction methods. To quantify the artifact reduction potential of pseudo-monochromatic images, the authors simulated the FORBILD abdomen phantom with metal implants, and they assessed patient data sets of a clinical dual source CT system (100, 140 kV Sn) containing artifacts induced by a highly concentrated contrast agent bolus and by metal. In each case, they manually selected an optimal α and compared it to a raw data-based material decomposition in the case of the simulation, to a raw data-based material decomposition of inconsistent rays in the case of the patient data set containing contrast agent, and to the frequency split normalized metal artifact reduction in the case of the metal implant. For each case, the contrast-to-noise ratio (CNR) was assessed. Results: In the simulation, the pseudo-monochromatic images yielded acceptable artifact reduction results. However, the CNR in the artifact-reduced images was more than 60% lower than in the original polychromatic images. In contrast, the raw data-based material decomposition did not significantly reduce the CNR in the virtual monochromatic images. Regarding the patient data with beam hardening artifacts and with metal artifacts from small implants, the pseudo-monochromatic method was able to reduce the artifacts, again with the downside of a significant CNR reduction. More intense metal artifacts, e.g., those caused by an artificial hip joint, could not be suppressed. Conclusions: Pseudo-monochromatic imaging is able to reduce beam hardening, scatter, and metal artifacts in some cases but it cannot remove them. In all cases, the CNR is significantly reduced, thereby rendering the method questionable, unless special post-processing algorithms are implemented to restore the high CNR of the original images (e.g., by using a frequency split technique). Raw data-based dual energy decomposition methods should be preferred, in particular because the CNR penalty is almost negligible.
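    The pseudo-monochromatic combination itself is a one-line pixelwise blend. A sketch with synthetic arrays follows, assuming the convention that the weight α multiplies the high-kV image; that convention, like the arrays, is an assumption made for illustration.

      import numpy as np

      rng = np.random.default_rng(2)
      low_kv = rng.random((128, 128))    # stand-in for the 100 kV image
      high_kv = rng.random((128, 128))   # stand-in for the 140 kV Sn image

      alpha = 1.57                       # the text's example weight for ~160 keV
      pseudo_mono = alpha * high_kv + (1.0 - alpha) * low_kv

      # |alpha| > 1 amplifies uncorrelated noise, which is one source of the
      # CNR penalty discussed in the abstract.
      print(pseudo_mono.std(), low_kv.std(), high_kv.std())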

  14. Linear and nonlinear stability of the Blasius boundary layer

    NASA Technical Reports Server (NTRS)

    Bertolotti, F. P.; Herbert, TH.; Spalart, P. R.

    1992-01-01

    Two new techniques for the study of linear and nonlinear instability in growing boundary layers are presented. The first technique employs partial differential equations of parabolic type, exploiting the slow change of the mean flow, disturbance velocity profiles, wavelengths, and growth rates in the streamwise direction. The second technique solves the Navier-Stokes equations for spatially evolving disturbances using buffer zones adjacent to the inflow and outflow boundaries. Results of both techniques are in excellent agreement. The linear and nonlinear development of Tollmien-Schlichting (TS) waves in the Blasius boundary layer is investigated with both techniques and with a local procedure based on a system of ordinary differential equations. The results are compared with previous work, and the effects of nonparallelism and nonlinearity are clarified. The effect of nonparallelism is confirmed to be weak and, consequently, not responsible for the discrepancies between measurements and theoretical results for parallel flow.

  15. A data-driven approach for evaluating multi-modal therapy in traumatic brain injury

    PubMed Central

    Haefeli, Jenny; Ferguson, Adam R.; Bingham, Deborah; Orr, Adrienne; Won, Seok Joon; Lam, Tina I.; Shi, Jian; Hawley, Sarah; Liu, Jialing; Swanson, Raymond A.; Massa, Stephen M.

    2017-01-01

    Combination therapies targeting multiple recovery mechanisms have the potential for additive or synergistic effects, but experimental design and analyses of multimodal therapeutic trials are challenging. To address this problem, we developed a data-driven approach to integrate and analyze raw source data from separate pre-clinical studies and evaluated interactions between four treatments following traumatic brain injury. Histologic and behavioral outcomes were measured in 202 rats treated with combinations of an anti-inflammatory agent (minocycline), a neurotrophic agent (LM11A-31), and physical therapy consisting of assisted exercise with or without botulinum toxin-induced limb constraint. Data was curated and analyzed in a linked workflow involving non-linear principal component analysis followed by hypothesis testing with a linear mixed model. Results revealed significant benefits of the neurotrophic agent LM11A-31 on learning and memory outcomes after traumatic brain injury. In addition, modulations of LM11A-31 effects by co-administration of minocycline and by the type of physical therapy applied reached statistical significance. These results suggest a combinatorial effect of drug and physical therapy interventions that was not evident by univariate analysis. The study designs and analytic techniques applied here form a structured, unbiased, internally validated workflow that may be applied to other combinatorial studies, both in animals and humans. PMID:28205533

  16. A data-driven approach for evaluating multi-modal therapy in traumatic brain injury.

    PubMed

    Haefeli, Jenny; Ferguson, Adam R; Bingham, Deborah; Orr, Adrienne; Won, Seok Joon; Lam, Tina I; Shi, Jian; Hawley, Sarah; Liu, Jialing; Swanson, Raymond A; Massa, Stephen M

    2017-02-16

    Combination therapies targeting multiple recovery mechanisms have the potential for additive or synergistic effects, but experimental design and analyses of multimodal therapeutic trials are challenging. To address this problem, we developed a data-driven approach to integrate and analyze raw source data from separate pre-clinical studies and evaluated interactions between four treatments following traumatic brain injury. Histologic and behavioral outcomes were measured in 202 rats treated with combinations of an anti-inflammatory agent (minocycline), a neurotrophic agent (LM11A-31), and physical therapy consisting of assisted exercise with or without botulinum toxin-induced limb constraint. Data was curated and analyzed in a linked workflow involving non-linear principal component analysis followed by hypothesis testing with a linear mixed model. Results revealed significant benefits of the neurotrophic agent LM11A-31 on learning and memory outcomes after traumatic brain injury. In addition, modulations of LM11A-31 effects by co-administration of minocycline and by the type of physical therapy applied reached statistical significance. These results suggest a combinatorial effect of drug and physical therapy interventions that was not evident by univariate analysis. The study designs and analytic techniques applied here form a structured, unbiased, internally validated workflow that may be applied to other combinatorial studies, both in animals and humans.

  17. Experiments on nonlinear acoustic landmine detection: Tuning curve studies of soil-mine and soil-mass oscillators

    NASA Astrophysics Data System (ADS)

    Korman, Murray S.; Witten, Thomas R.; Fenneman, Douglas J.

    2004-10-01

    Donskoy [SPIE Proc. 3392, 211-217 (1998); 3710, 239-246 (1999)] has suggested a nonlinear technique that can detect an acoustically compliant buried mine while remaining insensitive to relatively noncompliant targets. Airborne sound at two primary frequencies eventually causes interactions with the soil and mine, generating combination frequencies that can affect the vibration velocity at the surface. In current experiments, f1 and f2 are closely spaced near a mine resonance and a laser Doppler vibrometer profiles the surface. In profiling, certain combination frequencies have a much greater contrast ratio than the linear profiles at f1 and f2, but off the mine some nonlinearity exists. Near resonance, the bending (a softening) of a family of tuning curves (over the mine) exhibits a linear relationship between peak velocity and corresponding frequency, which is characteristic of the nonlinear mesoscopic elasticity effects that are observed in geomaterials like rocks or granular media. Results are presented for inert plastic VS 1.6, VS 2.2 and M14 mines buried 3.6 cm in loose soil. Tuning curves for a rigid mass plate resting on a soil layer exhibit similar results, suggesting that nonresonant conditions off the mine are desirable. [Work supported by U.S. Army RDECOM, CERDEC, NVESD, Fort Belvoir, VA.]

  18. On Some Separated Algorithms for Separable Nonlinear Least Squares Problems.

    PubMed

    Gan, Min; Chen, C L Philip; Chen, Guang-Yong; Chen, Long

    2017-10-03

    For a class of nonlinear least squares problems, it is usually very beneficial to separate the variables into a linear and a nonlinear part and take full advantage of reliable linear least squares techniques. Consequently, the original problem is turned into a reduced problem which involves only the nonlinear parameters. We consider in this paper four separated algorithms for such problems. The first one is the variable projection (VP) algorithm with the full Jacobian matrix of Golub and Pereyra. The second and third ones are VP algorithms with the simplified Jacobian matrices proposed by Kaufman and by Ruano et al., respectively. The fourth one only uses the gradient of the reduced problem. Monte Carlo experiments are conducted to compare the performance of these four algorithms. From the results of the experiments, we find that: 1) the simplified Jacobian proposed by Ruano et al. is not a good choice for the VP algorithm; moreover, it may render the algorithm hard to converge; 2) the fourth algorithm performs moderately among these four algorithms; 3) the VP algorithm with the full Jacobian matrix performs more stably than the VP algorithm with Kaufman's simplified one; and 4) the combination of the VP algorithm and the Levenberg-Marquardt method is more effective than the combination of the VP algorithm and the Gauss-Newton method.
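    The core idea of variable projection can be shown in a short sketch: for a separable model y ≈ c1·exp(-l1·t) + c2·exp(-l2·t), the linear coefficients are eliminated by least squares at each trial of the nonlinear parameters, leaving a reduced problem in the nonlinear parameters alone. This illustrates only the reduced problem, not the specific Jacobian variants compared in the paper; the data and starting point are synthetic.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(3)
      t = np.linspace(0.0, 4.0, 100)
      y = 2.0 * np.exp(-0.5 * t) + 1.0 * np.exp(-2.0 * t)
      y += rng.normal(scale=0.01, size=t.size)

      def reduced_residual(l):
          basis = np.exp(-np.outer(t, l))                # columns exp(-l_j * t)
          c, *_ = np.linalg.lstsq(basis, y, rcond=None)  # linear part solved exactly
          return np.sum((y - basis @ c) ** 2)

      result = minimize(reduced_residual, x0=[0.3, 3.0], method="Nelder-Mead")
      print(result.x)  # close to the true rates (0.5, 2.0)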

  19. A sensitive turn on fluorescent probe for detection of biothiols using MnO2@carbon dots nanocomposites

    NASA Astrophysics Data System (ADS)

    Garg, Dimple; Mehta, Akansha; Mishra, Amit; Basu, Soumen

    2018-03-01

    Presently, the combination of carbon quantum dots (CQDs) and metal oxide nanostructures in one frame is being considered for the sensing of purine compounds. In this work, a combined system of CQDs and MnO2 nanostructures was used for the detection of the anticancer drugs 6-Thioguanine (6-TG) and 6-Mercaptopurine (6-MP). The CQDs were synthesized in a microwave synthesizer and the MnO2 nanostructures (nanoflowers and nanosheets) were synthesized using a facile hydrothermal technique. The CQDs exhibited excellent fluorescence emission at 420 nm when excited at a 320 nm wavelength. On combining CQDs and MnO2 nanostructures, quenching of fluorescence was observed, which was attributed to a fluorescence resonance energy transfer (FRET) mechanism in which the CQDs act as electron donor and the MnO2 acts as acceptor. This fluorescence quenching behaviour disappeared on the addition of 6-TG and 6-MP due to the formation of a Mn-S bond. Detection limits for 6-TG (0.015 μM) and 6-MP (0.014 μM) were achieved over a linear concentration range of 0-50 μM using both MnO2 nanoflowers and nanosheets. Moreover, the as-prepared fluorescence-sensing technique was successfully employed for the detection of the biothiol group in the enapril drug. Thus a facile, cost-effective and benign chemistry approach for biomolecule detection was designed.

  20. A methodology for uncertainty quantification in quantitative technology valuation based on expert elicitation

    NASA Astrophysics Data System (ADS)

    Akram, Muhammad Farooq Bin

    The management of technology portfolios is an important element of aerospace system design. New technologies are often applied to new product designs to ensure their competitiveness at the time they are introduced to market. The future performance of yet-to-be-designed components is inherently uncertain, necessitating subject matter expert knowledge, statistical methods and financial forecasting. Estimates of the appropriate parameter settings often come from disciplinary experts, who may disagree with each other because of varying experience and background. Due to the inherently uncertain nature of expert elicitation in the technology valuation process, appropriate uncertainty quantification and propagation are critical. The uncertainty in defining the impact of an input on the performance parameters of a system makes it difficult to use traditional probability theory. Often the available information is not enough to assign appropriate probability distributions to uncertain inputs. Another problem faced during technology elicitation pertains to technology interactions in a portfolio. When multiple technologies are applied simultaneously to a system, their cumulative impact is often non-linear. Current methods assume that technologies are either incompatible or linearly independent. It is observed that in case of lack of knowledge about the problem, epistemic uncertainty is the most suitable representation of the process. It reduces the number of assumptions during the elicitation process, where experts would otherwise be forced to assign probability distributions to their opinions without sufficient knowledge. Epistemic uncertainty can be quantified by many techniques. In the present research it is proposed that interval analysis and the Dempster-Shafer theory of evidence are better suited for the quantification of epistemic uncertainty in the technology valuation process. The proposed technique seeks to offset some of the problems faced by using deterministic or traditional probabilistic approaches for uncertainty propagation. Non-linear behavior in technology interactions is captured through expert-elicitation-based technology synergy matrices (TSMs). The proposed TSMs increase the fidelity of current technology forecasting methods by including higher-order technology interactions. A test case for the quantification of epistemic uncertainty on a large-scale problem, a combined cycle power generation system, was selected. A detailed multidisciplinary modeling and simulation environment was adopted for this problem. Results have shown that the evidence-theory-based technique provides more insight into the uncertainties arising from incomplete information or lack of knowledge as compared to deterministic or probability theory methods. Margin analysis was also carried out for both techniques. A detailed description of TSMs and their usage in conjunction with technology impact matrices and technology compatibility matrices is discussed. Various combination methods are also proposed for higher-order interactions, which can be applied according to expert opinion or historical data. The introduction of the technology synergy matrix enabled capturing higher-order technology interactions and improved the predicted system performance.

  1. Empirical Mode Decomposition and Neural Networks on FPGA for Fault Diagnosis in Induction Motors

    PubMed Central

    Garcia-Perez, Arturo; Osornio-Rios, Roque Alfredo; Romero-Troncoso, Rene de Jesus

    2014-01-01

    Nowadays, many industrial applications require online systems that combine several processing techniques in order to offer solutions to complex problems, such as the detection and classification of multiple faults in induction motors. In this work, a novel digital structure to implement the empirical mode decomposition (EMD) for processing nonstationary and nonlinear signals using the full spline-cubic function is presented; besides, it is combined with an adaptive linear network (ADALINE)-based frequency estimator and a feed-forward neural network (FFNN)-based classifier to provide an intelligent methodology for the automatic diagnosis, during the startup transient, of motor faults such as one and two broken rotor bars, bearing defects, and unbalance. Moreover, the implementation of the overall methodology into a field-programmable gate array (FPGA) allows an online and real-time operation, thanks to its parallelism and high-performance capabilities as a system-on-a-chip (SoC) solution. The detection and classification results show the effectiveness of the proposed fused techniques; besides, the high precision and minimum resource usage of the developed digital structures make them a suitable and low-cost solution for this and many other industrial applications. PMID:24678281

  2. Image processing pipeline for segmentation and material classification based on multispectral high dynamic range polarimetric images.

    PubMed

    Martínez-Domingo, Miguel Ángel; Valero, Eva M; Hernández-Andrés, Javier; Tominaga, Shoji; Horiuchi, Takahiko; Hirai, Keita

    2017-11-27

    We propose a method for the capture of high dynamic range (HDR), multispectral (MS), polarimetric (Pol) images of indoor scenes using a liquid crystal tunable filter (LCTF). We have included the adaptive exposure estimation (AEE) method to fully automatize the capturing process. We also propose a pre-processing method which can be applied for the registration of HDR images after they are already built as the result of combining different low dynamic range (LDR) images. This method is applied to ensure a correct alignment of the different polarization HDR images for each spectral band. We have focused our efforts on two main applications: object segmentation and classification into metal and dielectric classes. We have simplified the segmentation using mean shift combined with cluster averaging and region merging techniques. We compare the performance of our segmentation with that of the Ncut and Watershed methods. For the classification task, we propose to use information not only in the highlight regions but also in their surrounding area, extracted from the degree of linear polarization (DoLP) maps. We present experimental results which prove that the proposed image processing pipeline outperforms previous techniques developed specifically for MSHDRPol image cubes.

  3. Empirical mode decomposition and neural networks on FPGA for fault diagnosis in induction motors.

    PubMed

    Camarena-Martinez, David; Valtierra-Rodriguez, Martin; Garcia-Perez, Arturo; Osornio-Rios, Roque Alfredo; Romero-Troncoso, Rene de Jesus

    2014-01-01

    Nowadays, many industrial applications require online systems that combine several processing techniques in order to offer solutions to complex problems, such as the detection and classification of multiple faults in induction motors. In this work, a novel digital structure to implement the empirical mode decomposition (EMD) for processing nonstationary and nonlinear signals using the full spline-cubic function is presented; besides, it is combined with an adaptive linear network (ADALINE)-based frequency estimator and a feed-forward neural network (FFNN)-based classifier to provide an intelligent methodology for the automatic diagnosis, during the startup transient, of motor faults such as one and two broken rotor bars, bearing defects, and unbalance. Moreover, the implementation of the overall methodology into a field-programmable gate array (FPGA) allows an online and real-time operation, thanks to its parallelism and high-performance capabilities as a system-on-a-chip (SoC) solution. The detection and classification results show the effectiveness of the proposed fused techniques; besides, the high precision and minimum resource usage of the developed digital structures make them a suitable and low-cost solution for this and many other industrial applications.

  4. Accelerating Electrostatic Surface Potential Calculation with Multiscale Approximation on Graphics Processing Units

    PubMed Central

    Anandakrishnan, Ramu; Scogland, Tom R. W.; Fenley, Andrew T.; Gordon, John C.; Feng, Wu-chun; Onufriev, Alexey V.

    2010-01-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is not always possible in general. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multiscale method, and parallelized on an ATI Radeon 4870 graphics processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040-atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. PMID:20452792

  5. Comparative Performance of Linear Multielectrode Probes and Single-Tip Electrodes for Intracortical Microstimulation and Single-Neuron Recording in Macaque Monkey

    PubMed Central

    Ferroni, Carolina G.; Maranesi, Monica; Livi, Alessandro; Lanzilotto, Marco; Bonini, Luca

    2017-01-01

    Intracortical microstimulation (ICMS) is one of the most widely employed techniques for providing causal evidence of the relationship between neuronal activity and specific motor, perceptual, or even cognitive functions. In recent years, several new types of linear multielectrode silicon probes have been developed, allowing researchers to sample neuronal activity at different depths along the same cortical site simultaneously and with high spatial precision. Nevertheless, silicon multielectrode probes have been rarely employed for ICMS studies and, more importantly, it is unknown whether and to what extent they can be used for combined recording and stimulation experiments. Here, we addressed these issues during both acute and chronic conditions. First, we compared the behavioral outcomes of ICMS delivered to the hand region of a monkey's motor cortex with multielectrode silicon probes, commercially available multisite stainless-steel probes and single-tip glass-coated tungsten microelectrodes. The results for all three of the probes were reliable and similar. Furthermore, we tested the impact of long-train ICMS delivered through chronically implanted silicon probes at different time intervals, from 1 to 198 days after ICMS sessions, showing that although the number of recorded neurons decreased over time, in line with previous studies, ICMS did not alter silicon probes' recording capabilities. These findings indicate that in ICMS experiments, the performance of linear multielectrode silicon probes is comparable to that of both single-tip and multielectrode stainless-steel probes, suggesting that the silicon probes can be successfully used for combined recording and stimulation studies in chronic conditions. PMID:29187815

  6. Comparative Performance of Linear Multielectrode Probes and Single-Tip Electrodes for Intracortical Microstimulation and Single-Neuron Recording in Macaque Monkey.

    PubMed

    Ferroni, Carolina G; Maranesi, Monica; Livi, Alessandro; Lanzilotto, Marco; Bonini, Luca

    2017-01-01

    Intracortical microstimulation (ICMS) is one of the most widely employed techniques for providing causal evidence of the relationship between neuronal activity and specific motor, perceptual, or even cognitive functions. In recent years, several new types of linear multielectrode silicon probes have been developed, allowing researchers to sample neuronal activity at different depths along the same cortical site simultaneously and with high spatial precision. Nevertheless, silicon multielectrode probes have been rarely employed for ICMS studies and, more importantly, it is unknown whether and to what extent they can be used for combined recording and stimulation experiments. Here, we addressed these issues during both acute and chronic conditions. First, we compared the behavioral outcomes of ICMS delivered to the hand region of a monkey's motor cortex with multielectrode silicon probes, commercially available multisite stainless-steel probes and single-tip glass-coated tungsten microelectrodes. The results for all three of the probes were reliable and similar. Furthermore, we tested the impact of long-train ICMS delivered through chronically implanted silicon probes at different time intervals, from 1 to 198 days after ICMS sessions, showing that although the number of recorded neurons decreased over time, in line with previous studies, ICMS did not alter silicon probes' recording capabilities. These findings indicate that in ICMS experiments, the performance of linear multielectrode silicon probes is comparable to that of both single-tip and multielectrode stainless-steel probes, suggesting that the silicon probes can be successfully used for combined recording and stimulation studies in chronic conditions.

  7. Smoothing-based compressed state Kalman filter for joint state-parameter estimation: Applications in reservoir characterization and CO2 storage monitoring

    NASA Astrophysics Data System (ADS)

    Li, Y. J.; Kokkinaki, Amalia; Darve, Eric F.; Kitanidis, Peter K.

    2017-08-01

    The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step-ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one-step-ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a nonensemble covariance compression scheme that reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step-ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared sCSKF with commonly used data assimilation methods and showed that for the same computational cost, combining one-step-ahead smoothing and nonensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.
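    For readers unfamiliar with the underlying recursion, a bare-bones linear Kalman filter predict/correct cycle is sketched below; sCSKF adds one-step-ahead smoothing and covariance compression on top of this, neither of which is reproduced here, and all matrices are toy values.

      import numpy as np

      A = np.array([[1.0, 1.0], [0.0, 1.0]])  # state transition (position, velocity)
      H = np.array([[1.0, 0.0]])              # observe position only
      Q = 0.01 * np.eye(2)                    # process noise covariance
      R = np.array([[0.25]])                  # measurement noise covariance

      def kalman_step(x, P, z):
          x_pred = A @ x                       # predict
          P_pred = A @ P @ A.T + Q
          S = H @ P_pred @ H.T + R             # correct: this linearized update is
          K = P_pred @ H.T @ np.linalg.inv(S)  # what can misbehave in nonlinear problems
          x_new = x_pred + K @ (z - H @ x_pred)
          P_new = (np.eye(2) - K @ H) @ P_pred
          return x_new, P_new

      x, P = np.zeros(2), np.eye(2)
      for z in (0.9, 2.1, 2.9, 4.2):           # sequential assimilation of measurements
          x, P = kalman_step(x, P, np.array([z]))
      print(x)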

  8. KALREF—A Kalman filter and time series approach to the International Terrestrial Reference Frame realization

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoping; Abbondanza, Claudio; Altamimi, Zuheir; Chin, T. Mike; Collilieux, Xavier; Gross, Richard S.; Heflin, Michael B.; Jiang, Yan; Parker, Jay W.

    2015-05-01

    The current International Terrestrial Reference Frame is based on a piecewise linear site motion model and realized by reference epoch coordinates and velocities for a global set of stations. Although linear motions due to tectonic plates and glacial isostatic adjustment dominate geodetic signals, at today's millimeter precisions, nonlinear motions due to earthquakes, volcanic activities, ice mass losses, sea level rise, hydrological changes, and other processes become significant. Monitoring these (sometimes rapid) changes calls for consistent and precise realization of the terrestrial reference frame (TRF) quasi-instantaneously. Here, we use a Kalman filter and smoother approach to combine time series from four space geodetic techniques to realize an experimental TRF through weekly time series of geocentric coordinates. In addition to secular, periodic, and stochastic components for station coordinates, the Kalman filter state variables also include daily Earth orientation parameters and transformation parameters from the input data frames to the combined TRF. Local tie measurements among colocated stations are used at their known or nominal epochs of observation, with comotion constraints applied to almost all colocated stations. The filter/smoother approach unifies different geodetic time series in a single geocentric frame. Fragmented and multitechnique tracking records at colocation sites are bridged together to form longer and coherent motion time series. While the time series approach to the TRF reflects the reality of a changing Earth more closely than the linear approximation model, the filter/smoother is computationally powerful and flexible enough to facilitate the incorporation of other data types and more advanced characterization of the stochastic behavior of geodetic time series.

  9. Accounting for large deformations in real-time simulations of soft tissues based on reduced-order models.

    PubMed

    Niroomandi, S; Alfaro, I; Cueto, E; Chinesta, F

    2012-01-01

    Model reduction techniques have been shown to constitute a valuable tool for real-time simulation in surgical environments and other fields. However, some limitations imposed by real-time constraints have not yet been overcome. One such limitation is the severe time restriction (a resolution frequency of 500 Hz) that precludes the use of Newton-like schemes for solving non-linear models such as the ones usually employed for modeling biological tissues. In this work we present a technique able to deal with geometrically non-linear models, based on the use of model reduction techniques together with an efficient non-linear solver. Examples of the performance of the technique are given. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  10. Non-Linearity in Wide Dynamic Range CMOS Image Sensors Utilizing a Partial Charge Transfer Technique.

    PubMed

    Shafie, Suhaidi; Kawahito, Shoji; Halin, Izhal Abdul; Hasan, Wan Zuha Wan

    2009-01-01

    The partial charge transfer technique can expand the dynamic range of a CMOS image sensor by synthesizing two types of signal, namely the long and short accumulation time signals. However, the short accumulation time signal obtained from the partial transfer operation suffers from non-linearity with respect to the incident light. In this paper, an analysis of the non-linearity in the partial charge transfer technique is carried out, and the relationship between dynamic range and the non-linearity is studied. The results show that the non-linearity is caused by two factors: the current diffusion, which has an exponential relation with the potential barrier, and the initial condition of the photodiodes, for which the error in the high-illumination region increases as the ratio of the long to the short accumulation time rises. Moreover, an increase in the saturation level of the photodiodes also increases the error in the high-illumination region.

  11. Analysis of Learning Curve Fitting Techniques.

    DTIC Science & Technology

    1987-09-01

    1986. 15. Neter, John and others. Applied Linear Regression Models. Homewood IL: Irwin, 19-33. 16. SAS User's Guide: Basics, Version 5 Edition. SAS... Linear Regression Techniques (15:23-52). Random errors are assumed to be normally distributed when using ordinary least-squares, according to Johnston... lot estimated by the improvement curve formula. For a more detailed explanation of the ordinary least-squares technique, see Neter et al., Applied

  12. Modulation/demodulation techniques for satellite communications. Part 2: Advanced techniques. The linear channel

    NASA Technical Reports Server (NTRS)

    Omura, J. K.; Simon, M. K.

    1982-01-01

    A theory is presented for deducing and predicting the performance of transmitter/receivers for bandwidth-efficient modulations suitable for use on the linear satellite channel. The underlying principle used is the development of receiver structures based on the maximum-likelihood decision rule. The performance prediction tools, e.g., channel cutoff rate and bit error probability transfer function bounds, are applied to these modulation/demodulation techniques.

  13. Dynamics of shaping ultrashort optical dissipative solitary pulses in the actively mode-locked semiconductor laser with an external long-haul single-mode fiber cavity

    NASA Astrophysics Data System (ADS)

    Shcherbakov, Alexandre S.; Moreno Zarate, Pedro

    2010-02-01

    We describe the conditions for shaping regular trains of optical dissipative solitary pulses, excited by multi-pulse sequences of periodic modulating signals, in the actively mode-locked semiconductor laser heterostructure with an external long-haul single-mode silica fiber exhibiting square-law dispersion, cubic Kerr nonlinearity, and linear optical losses. The presented model for the analysis includes three principal contributions associated with the modulated gain, the optical losses, and the linear and nonlinear phase shifts. In fact, trains of optical dissipative solitary pulses appear when mutually compensating interactions, between the second-order dispersion and the cubic-law Kerr nonlinearity as well as between the active medium gain and the linear optical losses in the combined cavity, are simultaneously present and in balance. Within such a model, the contribution of the nonlinear Ginzburg-Landau operator to shaping the parameters of optical dissipative solitary pulses is described via an approximate variational procedure involving the technique of trial functions. Finally, the results of illustrating proof-of-principle experiments are briefly presented and discussed in terms of optical dissipative solitary pulses.

  14. New styryl phenanthroline derivatives as model D-π-A-π-D materials for non-linear optics.

    PubMed

    Bonaccorso, Carmela; Cesaretti, Alessio; Elisei, Fausto; Mencaroni, Letizia; Spalletti, Anna; Fortuna, Cosimo Gianluca

    2018-04-27

    Four novel push-pull systems combining a central phenanthroline acceptor moiety and two substituted benzene rings, as part of the conjugated π-system between the donor and the acceptor moieties, have been synthesized through a straightforward and efficient one-step synthetic procedure. The chromophores display high fluorescence and a peculiar fluorosolvatochromic behavior. Ultrafast investigation by means of state-of-the-art femtosecond-resolved transient absorption and fluorescence up-conversion spectroscopies allowed the role of intramolecular charge transfer (ICT) states to be evidenced, also revealing the crucial role played by both the polarity and proticity of the medium on the excited-state dynamics of the chromophores. The ICT processes, responsible for the solvatochromism, also lead to interesting non-linear optical (NLO) properties: namely, large two-photon absorption cross-sections (hundreds of GM), investigated by the Two Photon Excited Fluorescence (TPEF) technique, and large second-order hyperpolarizability coefficients, estimated through a convenient solvatochromic method. These features thus make the investigated styryl phenanthroline molecules model D-π-A-π-D compounds for non-linear optical applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Simulations and measurements of hot-electron generation driven by the multibeam two-plasmon-decay instability

    NASA Astrophysics Data System (ADS)

    Follett, R. K.; Myatt, J. F.; Shaw, J. G.; Michel, D. T.; Solodov, A. A.; Edgell, D. H.; Yaakobi, B.; Froula, D. H.

    2017-10-01

    Multibeam experiments relevant to direct-drive inertial confinement fusion show the importance of nonlinear saturation mechanisms in the common-wave two-plasmon-decay (TPD) instability. Planar-target experiments on the OMEGA laser used hard x-ray measurements to study the influence of the linear common-wave growth rate on TPD-driven hot-electron production in two drive-beam configurations and over a range of overlapped laser intensities from 3.6 to 15.2 × 10¹⁴ W/cm². The beam configuration with the larger linear common-wave growth rate had a lower intensity threshold for the onset of hot-electron production, but the linear growth rate made no significant impact on hot-electron production at high intensities. The experiments were modeled in 3-D using a hybrid code LPSE (laser plasma simulation environment) that combines a wave solver with a particle tracker to self-consistently calculate the electron velocity distribution and evolve electron Landau damping. Good quantitative agreement was obtained between the simulated and measured hot-electron distributions using a novel technique to account for macroscopic spatial and temporal variations that were present in the experiments.

  16. Simulations and measurements of hot-electron generation driven by the multibeam two-plasmon-decay instability

    DOE PAGES

    Follett, R. K.; Myatt, J. F.; Shaw, J. G.; ...

    2017-10-30

    We report that multiple-beam experiments relevant to direct-drive inertial confinement fusion show the importance of nonlinear saturation mechanisms in the common-wave two-plasmon-decay (TPD) instability. Planar target experiments on the OMEGA laser used hard-x-ray measurements to study the influence of the linear common-wave growth rate on TPD-driven hot-electron production in two drive beam configurations and over a range of overlapped laser intensities from 3.6 to 15.2 × 10¹⁴ W/cm². The beam configuration with the larger linear common-wave growth rate had a lower intensity threshold for the onset of hot-electron production, but the linear growth rate did not have a significant impact on hot-electron production at high intensities. The experiments were modeled in 3-D using a hybrid code (LPSE) that combines a wave solver with a particle tracker to self-consistently calculate the electron velocity distribution and evolve electron Landau damping. Finally, good quantitative agreement was obtained between the simulated and measured hot-electron distributions using a novel technique to account for macroscopic spatial and temporal variations that are present in the experiments.

  17. Predictive models reduce talent development costs in female gymnastics.

    PubMed

    Pion, Johan; Hohmann, Andreas; Liu, Tianbiao; Lenoir, Matthieu; Segers, Veerle

    2017-04-01

    This retrospective study focuses on the comparison of different predictive models based on the results of a talent identification test battery for female gymnasts. We studied to what extent these models have the potential to optimise selection procedures and at the same time reduce talent development costs in female artistic gymnastics. The dropout rate of 243 female elite gymnasts was investigated, 5 years after talent selection, using linear (discriminant analysis) and non-linear predictive models (Kohonen feature maps and multilayer perceptron). The coaches classified 51.9% of the participants correctly. Discriminant analysis improved the correct classification to 71.6%, while the non-linear technique of Kohonen feature maps reached 73.7% correctness. Application of the multilayer perceptron even classified 79.8% of the gymnasts correctly. The combination of different predictive models for talent selection can avoid deselection of high-potential female gymnasts. The selection procedure based upon the different statistical analyses results in a 33.3% decrease in cost, because the pool of selected athletes can be reduced to 92 instead of 138 gymnasts (as selected by the coaches). Reduction of the costs allows the limited resources to be fully invested in the high-potential athletes.

  18. Simulations and measurements of hot-electron generation driven by the multibeam two-plasmon-decay instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Follett, R. K.; Myatt, J. F.; Shaw, J. G.

    We report that multiple-beam experiments relevant to direct-drive inertial confinement fusion show the importance of nonlinear saturation mechanisms in the common-wave two-plasmon-decay (TPD) instability. Planar target experiments on the OMEGA laser used hard-x-ray measurements to study the influence of the linear common-wave growth rate on TPD-driven hot-electron production in two drive beam configurations and over a range of overlapped laser intensities from 3.6 to 15.2 × 10¹⁴ W/cm². The beam configuration with the larger linear common-wave growth rate had a lower intensity threshold for the onset of hot-electron production, but the linear growth rate did not have a significant impact on hot-electron production at high intensities. The experiments were modeled in 3-D using a hybrid code (LPSE) that combines a wave solver with a particle tracker to self-consistently calculate the electron velocity distribution and evolve electron Landau damping. Finally, good quantitative agreement was obtained between the simulated and measured hot-electron distributions using a novel technique to account for macroscopic spatial and temporal variations that are present in the experiments.

  19. Computer Program For Linear Algebra

    NASA Technical Reports Server (NTRS)

    Krogh, F. T.; Hanson, R. J.

    1987-01-01

    A collection of routines is provided for basic vector operations. The Basic Linear Algebra Subprograms (BLAS) library is a collection of FORTRAN-callable routines for employing standard techniques to perform the basic operations of numerical linear algebra.
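    The same BLAS routines remain the standard today and are reachable from high-level languages; for instance, a daxpy operation (y := a·x + y) can be called through SciPy's wrappers of the FORTRAN-level BLAS:

      import numpy as np
      from scipy.linalg.blas import daxpy

      x = np.array([1.0, 2.0, 3.0])
      y = np.array([10.0, 10.0, 10.0])
      result = daxpy(x, y, a=2.0)   # computes 2*x + y
      print(result)                 # [12. 14. 16.]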

  20. Active distribution network planning considering linearized system loss

    NASA Astrophysics Data System (ADS)

    Li, Xiao; Wang, Mingqiang; Xu, Hao

    2018-02-01

    In this paper, various distribution network planning techniques with DGs are reviewed, and a new distribution network planning method is proposed. It assumes that the locations of the DGs and the topology of the network are fixed. The proposed model optimizes the capacities of the DGs and the distribution line capacities simultaneously by a cost/benefit analysis, where the benefit is quantified by the reduction of the expected interruption cost. In addition, the network loss is explicitly analyzed in the paper. For simplicity, the network loss is approximated as a quadratic function of the voltage phase-angle difference, which is then piecewise linearized; a piecewise linearization technique with different segment lengths is proposed. To validate its effectiveness and superiority, the proposed distribution network planning model with the elaborated linearization technique is tested on the IEEE 33-bus distribution network system.
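    A minimal sketch of the piecewise linearization step: a quadratic loss term k·d² in the phase-angle difference d is replaced by chords over segments of unequal length (finer near zero, where accuracy matters most). The coefficient and breakpoints are illustrative, not values from the paper.

      import numpy as np

      k = 0.8                                               # hypothetical loss coefficient
      breakpoints = np.array([0.0, 0.05, 0.15, 0.35, 0.6])  # unequal segment lengths

      def linearized_loss(d):
          # Piecewise-linear (chord) approximation of k*d**2 at |d|.
          d = abs(d)
          for lo, hi in zip(breakpoints[:-1], breakpoints[1:]):
              if d <= hi:
                  slope = k * (lo + hi)            # chord slope on segment [lo, hi]
                  return k * lo**2 + slope * (d - lo)
          raise ValueError("angle difference outside the linearized range")

      print(linearized_loss(0.10), k * 0.10**2)    # chord value vs exact quadratic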

  1. Chiropractic biophysics technique: a linear algebra approach to posture in chiropractic.

    PubMed

    Harrison, D D; Janik, T J; Harrison, G R; Troyanovich, S; Harrison, D E; Harrison, S O

    1996-10-01

    This paper discusses linear algebra as applied to human posture in chiropractic, specifically chiropractic biophysics technique (CBP). Rotations, reflections and translations are geometric functions studied in vector spaces in linear algebra. These mathematical functions are termed rigid body transformations and are applied to segmental spinal movement in the literature. Review of the literature indicates that these linear algebra concepts have been used to describe vertebral motion. However, these rigid body movers are presented here as applying to the global postural movements of the head, thoracic cage and pelvis. The unique inverse functions of rotations, reflections and translations provide a theoretical basis for making postural corrections in neutral static resting posture. Chiropractic biophysics technique (CBP) uses these concepts in examination procedures, manual spinal manipulation, instrument assisted spinal manipulation, postural exercises, extension traction and clinical outcome measures.

  2. SUBOPT: A CAD program for suboptimal linear regulators

    NASA Technical Reports Server (NTRS)

    Fleming, P. J.

    1985-01-01

    An interactive software package which provides design solutions for both standard linear quadratic regulator (LQR) and suboptimal linear regulator problems is described. Intended for time-invariant continuous systems, the package is easily modified to include sampled-data systems. LQR designs are obtained by established techniques while the large class of suboptimal problems containing controller and/or performance index options is solved using a robust gradient minimization technique. Numerical examples demonstrate features of the package and recent developments are described.

  3. Enhanced performance for the analysis of prostaglandins and thromboxanes by liquid chromatography-tandem mass spectrometry using a new atmospheric pressure ionization source.

    PubMed

    Lubin, Arnaud; Geerinckx, Suzy; Bajic, Steve; Cabooter, Deirdre; Augustijns, Patrick; Cuyckens, Filip; Vreeken, Rob J

    2016-04-01

    Eicosanoids, including prostaglandins and thromboxanes, are lipid mediators synthesized from polyunsaturated fatty acids. They play an important role in cell signaling and are often reported as inflammatory markers. LC-MS/MS is the technique of choice for the analysis of these compounds, often in combination with advanced sample preparation techniques. Here we report a head-to-head comparison between an electrospray ionization source (ESI) and a new atmospheric pressure ionization source (UniSpray). The performance of both interfaces was evaluated in various matrices such as human plasma, pig colon and mouse colon. The UniSpray source shows an increase in method sensitivity of up to a factor of 5. Equivalent or better linearity and repeatability across the various matrices, as well as an increase in signal intensity, were observed in comparison to ESI. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. A novel method for qualitative analysis of edible oil oxidation using an electronic nose.

    PubMed

    Xu, Lirong; Yu, Xiuzhu; Liu, Lei; Zhang, Rui

    2016-07-01

    An electronic nose (E-nose) was used for rapid assessment of the degree of oxidation in edible oils. Peroxide and acid values of edible oil samples were analyzed using data obtained by the American Oil Chemists' Society (AOCS) Official Method for reference. Qualitative discrimination between non-oxidized and oxidized oils was conducted using the E-nose technique developed in combination with cluster analysis (CA), principal component analysis (PCA), and linear discriminant analysis (LDA). The results from CA, PCA and LDA indicated that the E-nose technique could be used for differentiation of non-oxidized and oxidized oils. LDA produced slightly better results than CA and PCA. The proposed approach can be used as an alternative to AOCS Official Method as an innovative tool for rapid detection of edible oil oxidation. Copyright © 2016 Elsevier Ltd. All rights reserved.
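    The discrimination chain used here (dimensionality reduction followed by a linear discriminant) can be prototyped in a few lines with scikit-learn; the sensor matrix below is a synthetic stand-in for the E-nose array readings, not the paper's data.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(4)
      fresh = rng.normal(0.0, 1.0, size=(40, 10))      # 10 hypothetical sensor channels
      oxidized = rng.normal(1.5, 1.0, size=(40, 10))
      X = np.vstack([fresh, oxidized])
      y = np.array([0] * 40 + [1] * 40)                # 0 = non-oxidized, 1 = oxidized

      model = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis())
      model.fit(X, y)
      print(model.score(X, y))                         # in-sample discrimination accuracy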

  5. Charge transport mechanism in lead oxide revealed by CELIV technique

    PubMed Central

    Semeniuk, O.; Juska, G.; Oelerich, J.-O.; Wiemer, M.; Baranovskii, S. D.; Reznik, A.

    2016-01-01

    Although polycrystalline lead oxide (PbO) is among the most promising photoconductors for optoelectronic and large-area detector applications, the charge transport mechanism in this material still remains unclear. Combining the conventional time-of-flight and the photo-generated charge extraction by linearly increasing voltage (photo-CELIV) techniques, we investigate the transport of holes, which are shown to be the faster carriers in poly-PbO. Experimentally measured temperature and electric field dependences of the hole mobility suggest a highly dispersive transport. In order to analyze the transport features quantitatively, the theory of photo-CELIV is extended to account for the dispersive nature of charge transport. While in other materials with dispersive transport the amount of dispersion usually depends on temperature, this is not the case in poly-PbO, which evidences that the dispersive transport is caused by the spatial inhomogeneity of the material and not by energy disorder. PMID:27628537

  6. All-in-one model for designing optimal water distribution pipe networks

    NASA Astrophysics Data System (ADS)

    Aklog, Dagnachew; Hosoi, Yoshihiko

    2017-05-01

    This paper discusses the development of an easy-to-use, all-in-one model for designing optimal water distribution networks. The model combines different optimization techniques into a single package in which a user can easily choose which optimizer to use and compare the results of different optimizers to gain confidence in the performance of the models. At present, three optimization techniques are included in the model: linear programming (LP), genetic algorithm (GA) and a heuristic one-by-one reduction method (OBORM) that was previously developed by the authors. The optimizers were tested on a number of benchmark problems and performed very well in terms of finding optimal or near-optimal solutions with reasonable computational effort. The results indicate that the model effectively addresses the issues of complexity and limited performance trust associated with previous models and can thus be used for practical purposes.
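
    For the LP component, a classic formulation (the split-pipe linear program; our sketch with made-up numbers, not the model's actual code) chooses how much of each link to build at each candidate diameter, since both cost and head loss are linear in segment length for a fixed flow:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    L = 1000.0                              # link length to cover (m)
    cost = np.array([80.0, 120.0, 170.0])   # unit cost per m, 3 diameters
    hgrad = np.array([0.02, 0.008, 0.003])  # head loss per m at design flow
    H_avail = 12.0                          # available head (m)

    # minimize cost.x  s.t.  sum(x) = L,  hgrad.x <= H_avail,  x >= 0
    res = linprog(c=cost,
                  A_ub=[hgrad], b_ub=[H_avail],
                  A_eq=[np.ones(3)], b_eq=[L],
                  bounds=[(0, None)] * 3)
    print("segment lengths per diameter (m):", res.x.round(1))
    print("minimum cost:", round(res.fun, 1))
    ```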

  7. Evaluation of trade-offs in costs and environmental impacts for returnable packaging implementation

    NASA Astrophysics Data System (ADS)

    Jarupan, Lerpong; Kamarthi, Sagar V.; Gupta, Surendra M.

    2004-02-01

    The main thrust of returnable packaging is to provide logistical services through the transportation and distribution of products while remaining environmentally friendly. Returnable packaging and reverse logistics concepts have converged to mitigate the adverse effect of packaging materials entering the solid waste stream. Returnable packaging must be designed by considering the trade-offs between costs and environmental impact to satisfy manufacturers and environmentalists alike. The cost of returnable packaging entails such items as materials, manufacturing, collection, storage and disposal. Environmental impacts are explicitly linked with solid waste, air pollution, and water pollution. This paper presents a multi-criteria evaluation technique to assist decision-makers in evaluating the trade-offs in costs and environmental impact during the returnable packaging design process. The proposed evaluation technique involves a combination of multiple objective integer linear programming and the analytic hierarchy process. A numerical example is used to illustrate the methodology.
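
    The AHP half of such a combination is easy to make concrete: criteria weights come from the principal eigenvector of a pairwise comparison matrix. A minimal sketch, with an invented comparison matrix over three criteria (cost, solid waste, air/water pollution):

    ```python
    import numpy as np

    # A[i, j] = judged importance of criterion i relative to criterion j
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)           # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                          # normalized priority weights
    print("criteria weights:", w.round(3))

    CI = (eigvals[k].real - 3) / (3 - 1)  # consistency index
    print("consistency ratio:", round(CI / 0.58, 3))  # RI = 0.58 for n = 3
    ```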

  8. Reference-free fatigue crack detection using nonlinear ultrasonic modulation under various temperature and loading conditions

    NASA Astrophysics Data System (ADS)

    Lim, Hyung Jin; Sohn, Hoon; DeSimio, Martin P.; Brown, Kevin

    2014-04-01

    This study presents a reference-free fatigue crack detection technique using nonlinear ultrasonic modulation. When low frequency (LF) and high frequency (HF) inputs generated by two surface-mounted lead zirconate titanate (PZT) transducers are applied to a structure, the presence of a fatigue crack can provide a mechanism for nonlinear ultrasonic modulation and create spectral sidebands around the frequency of the HF signal. The crack-induced spectral sidebands are isolated using a combination of linear response subtraction (LRS), synchronous demodulation (SD) and continuous wavelet transform (CWT) filtering. Then, a sequential outlier analysis is performed on the extracted sidebands to identify the crack presence without reference to any baseline data obtained from the intact condition of the structure. Finally, the robustness of the proposed technique is demonstrated using actual test data obtained from simple aluminum plate and complex aircraft fitting-lug specimens under varying temperature and loading conditions.
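
    The sideband mechanism is easy to reproduce numerically. In this toy simulation (ours, not the authors' processing chain), a quadratic crack-like nonlinearity mixes the LF and HF inputs and puts energy at f_HF ± f_LF, which an intact (linear) response would not contain:

    ```python
    import numpy as np

    fs, T = 1.0e6, 0.05                   # sample rate (Hz), duration (s)
    t = np.arange(0, T, 1 / fs)
    f_lf, f_hf = 1.0e3, 100.0e3           # LF and HF excitation frequencies
    x = np.sin(2 * np.pi * f_lf * t) + 0.5 * np.sin(2 * np.pi * f_hf * t)

    y = x + 0.05 * x**2                   # crack adds quadratic mixing

    freq = np.fft.rfftfreq(t.size, 1 / fs)
    S = np.abs(np.fft.rfft(y))
    for f in (f_hf - f_lf, f_hf + f_lf):  # crack-induced sideband bins
        k = np.argmin(np.abs(freq - f))
        print(f"sideband at {freq[k] / 1e3:.1f} kHz, magnitude {S[k]:.0f}")
    ```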

  9. A New Stochastic Technique for Painlevé Equation-I Using Neural Network Optimized with Swarm Intelligence

    PubMed Central

    Raja, Muhammad Asif Zahoor; Khan, Junaid Ali; Ahmad, Siraj-ul-Islam; Qureshi, Ijaz Mansoor

    2012-01-01

    A methodology for the solution of Painlevé equation-I is presented using a computational intelligence technique based on neural networks and particle swarm optimization hybridized with an active set algorithm. The mathematical model of the equation is developed with the help of a linear combination of feed-forward artificial neural networks that defines the unsupervised error of the model. This error is minimized subject to the availability of appropriate weights of the networks. The learning of the weights is carried out using the particle swarm optimization algorithm as a viable global search method, hybridized with the active set algorithm for rapid local convergence. The accuracy, convergence rate, and computational complexity of the scheme are analyzed on the basis of a large number of independent runs and their comprehensive statistical analysis. The results obtained are compared with MATHEMATICA solutions, as well as with the variational iteration method and the homotopy perturbation method. PMID:22919371
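
    The core construction can be sketched as follows (a simplified stand-in: we pose the same unsupervised residual error for Painlevé-I, y'' = 6y² + x, with assumed initial conditions y(0) = y'(0) = 0, but minimize it with a generic optimizer rather than the paper's PSO/active-set hybrid):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    xs = np.linspace(0.0, 1.0, 20)            # collocation points
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))  # neuron activation

    def unsupervised_error(p, n=5):
        # trial solution y(x) = sum_i a_i * sigma(w_i x + b_i)
        a, w, b = p[:n], p[n:2 * n], p[2 * n:]
        s = sig(np.outer(xs, w) + b)
        y = s @ a
        ypp = (s * (1 - s) * (1 - 2 * s)) @ (a * w**2)  # exact y''
        ode_res = ypp - 6.0 * y**2 - xs                 # Painleve-I residual
        s0 = sig(b)
        y0 = s0 @ a                                     # y(0)
        yp0 = (s0 * (1 - s0)) @ (a * w)                 # y'(0)
        return np.sum(ode_res**2) + y0**2 + yp0**2

    p0 = 0.1 * np.random.default_rng(1).standard_normal(15)
    fit = minimize(unsupervised_error, p0, method="BFGS")
    print("final unsupervised error:", fit.fun)
    ```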

  10. Linear Programming for Vocational Education Planning. Interim Report.

    ERIC Educational Resources Information Center

    Young, Robert C.; And Others

    The purpose of the paper is to define for potential users of vocational education management information systems a quantitative analysis technique and its utilization to facilitate more effective planning of vocational education programs. Defining linear programming (LP) as a management technique used to solve complex resource allocation problems…

  11. FH/MFSK performance in multitone jamming

    NASA Technical Reports Server (NTRS)

    Levitt, B. K.

    1985-01-01

    The performance of frequency-hopped (FH) M-ary frequency-shift keyed (MFSK) signals in partial-band noise was analyzed in the open literature. The previous research is extended to the usually more effective class of multitone jamming. The objectives are: (1) to categorize several different multitone jamming strategies; (2) to analyze the performance of FH/MFSK signaling, both uncoded and with diversity, assuming a noncoherent energy detection metric with linear combining and perfect jamming state side information, in the presence of worst case interference for each of these multitone categories; and (3) to compare the effectiveness of the various multitone jamming techniques, and contrast the results with the partial band noise jamming case.

  12. Optimization model of vaccination strategy for dengue transmission

    NASA Astrophysics Data System (ADS)

    Widayani, H.; Kallista, M.; Nuraini, N.; Sari, M. Y.

    2014-02-01

    Dengue fever is an emerging tropical and subtropical disease caused by dengue virus infection. Vaccination should be carried out to prevent an epidemic in a population. The host-vector model is modified to include a vaccination factor to prevent the occurrence of epidemic dengue in a population. An optimal vaccination strategy using a non-linear objective function is proposed. Genetic algorithm programming techniques are combined with the fourth-order Runge-Kutta method to construct the optimal vaccination. In this paper, the appropriate vaccination strategy, using the optimal minimum cost function that can reduce the scale of the epidemic, is analyzed. A numerical simulation for some specific cases of the vaccination strategy is shown.

  13. Generation of helical Ince-Gaussian beams: beam-shaping with a liquid crystal display

    NASA Astrophysics Data System (ADS)

    Davis, Jeffrey A.; Bentley, Joel B.; Bandres, Miguel A.; Gutiérrez-Vega, Julio C.

    2006-08-01

    We review the three types of laser beams - Hermite-Gaussian (HG), Laguerre-Gaussian (LG) and the newly discovered Ince-Gaussian (IG) beams. We discuss the helical forms of the LG and IG beams that consist of linear combinations of the even and odd solutions and form a number of vortices that are useful for optical trapping applications. We discuss how to generate these beams by encoding the desired amplitude and phase onto a single parallel-aligned liquid crystal display (LCD). We introduce a novel interference technique where we generate both the object and reference beams using a single LCD and show the vortex interference patterns.
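
    The helical construction is, in the familiar LG case, just the complex linear combination of the degenerate even (cosine) and odd (sine) angular solutions, and the IG case is built the same way from its even and odd modes; in standard notation (ours, not quoted from the paper):

    \[
    u^{\pm}_{p,l}(r,\varphi) \;\propto\; R^{|l|}_{p}(r)\left[\cos(l\varphi) \pm i\,\sin(l\varphi)\right] = R^{|l|}_{p}(r)\,e^{\pm i l\varphi},
    \qquad
    \mathrm{IG}^{h,\pm}_{p,m} = \mathrm{IG}^{e}_{p,m} \pm i\,\mathrm{IG}^{o}_{p,m},
    \]

    where the e^{±ilφ} phase factor carries the optical vortex (orbital angular momentum) exploited for trapping.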

  14. Statistical mechanics of broadcast channels using low-density parity-check codes.

    PubMed

    Nakamura, Kazutaka; Kabashima, Yoshiyuki; Morelos-Zaragoza, Robert; Saad, David

    2003-03-01

    We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.

  15. Electrical birefringence tuning of VCSELs

    NASA Astrophysics Data System (ADS)

    Pusch, Tobias; Lindemann, Markus; Gerhardt, Nils C.; Hofmann, Martin R.; Michalzik, Rainer

    2018-02-01

    The birefringence splitting B, which is the frequency difference between the two fundamental linear polarization modes in vertical-cavity surface-emitting lasers (VCSELs), is the key parameter determining the polarization dynamics of spin-VCSELs that can be much faster than the intensity dynamics. For easy handling and control, electrical tuning of B is favored. This was realized in an integrated chip by thermally induced strain via asymmetric heating with a birefringence tuning range of 45 GHz. In this paper we present our work on VCSEL structures mounted on piezoelectric transducers for strain generation. Furthermore we show a combination of both techniques, namely VCSELs with piezo-thermal birefringence tunability.

  16. On stochastic control and optimal measurement strategies. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Kramer, L. C.

    1971-01-01

    The control of stochastic dynamic systems is studied with particular emphasis on those which influence the quality or nature of the measurements which are made to effect control. Four main areas are discussed: (1) the meaning of stochastic optimality and the means by which dynamic programming may be applied to solve a combined control/measurement problem; (2) a technique by which it is possible to apply deterministic methods, specifically the minimum principle, to the study of stochastic problems; (3) the methods described are applied to linear systems with Gaussian disturbances to study the structure of the resulting control system; and (4) several applications are considered.

  17. Efficient techniques for forced response involving linear modal components interconnected by discrete nonlinear connection elements

    NASA Astrophysics Data System (ADS)

    Avitabile, Peter; O'Callahan, John

    2009-01-01

    Generally, response analysis of systems containing discrete nonlinear connection elements, such as typical mounting connections, requires the physical finite element system matrices to be used in a direct integration algorithm to compute the nonlinear response analysis solution. Due to the large size of these physical matrices, forced nonlinear response analysis requires significant computational resources. Usually, the individual components of the system are analyzed and tested as separate components, and their individual behavior may essentially be linear when compared to the total assembled system. However, the joining of these linear subsystems using highly nonlinear connection elements causes the entire system to become nonlinear. It would be advantageous if these linear modal subsystems could be utilized in the forced nonlinear response analysis, since much effort has usually been expended in fine tuning and adjusting the analytical models to reflect the tested subsystem configuration. Several more efficient techniques have been developed to address this class of problem. Three of these techniques, the equivalent reduced model technique (ERMT), the modal modification response technique (MMRT), and the component element method (CEM), are presented in this paper and compared to traditional methods.

  18. Feasibility of combining linear theory and impact theory methods for the analysis and design of high speed configurations

    NASA Technical Reports Server (NTRS)

    Brooke, D.; Vondrasek, D. V.

    1978-01-01

    The aerodynamic influence coefficients calculated using an existing linear theory program were used to modify the pressures calculated using impact theory. Application of the combined approach to several wing-alone configurations shows that the combined approach gives improved predictions of the local pressure and loadings over either linear theory alone or impact theory alone. The approach not only removes most of the shortcomings of the individual methods, as applied in the Mach 4 to 8 range, but also provides the basis for an inverse design procedure applicable to high speed configurations.

  19. Analysis of periodically excited non-linear systems by a parametric continuation technique

    NASA Astrophysics Data System (ADS)

    Padmanabhan, C.; Singh, R.

    1995-07-01

    The dynamic behavior and frequency response of harmonically excited piecewise linear and/or non-linear systems have been the subject of several recent investigations. Most of the prior studies employed harmonic balance or Galerkin schemes, piecewise linear techniques, analog simulation and/or direct numerical integration (digital simulation). Such techniques are somewhat limited in their ability to predict all of the dynamic characteristics, including bifurcations leading to the occurrence of unstable, subharmonic, quasi-periodic and/or chaotic solutions. To overcome this problem, a parametric continuation scheme, based on the shooting method, is applied specifically to a periodically excited piecewise linear/non-linear system, in order to improve understanding as well as to obtain the complete dynamic response. Parameter regions exhibiting bifurcations to harmonic, subharmonic or quasi-periodic solutions are obtained quite efficiently and systematically. Unlike other techniques, the proposed scheme can follow period-doubling bifurcations, and with some modifications obtain stable quasi-periodic solutions and their bifurcations. This knowledge is essential in establishing conditions for the occurrence of chaotic oscillations in any non-linear system. The method is first validated through the Duffing oscillator example, the solutions to which are also obtained by conventional one-term harmonic balance and perturbation methods. The second example deals with a clearance non-linearity problem for both harmonic and periodic excitations. Predictions from the proposed scheme match well with available analog simulation data as well as with multi-term harmonic balance results. Potential savings in computational time over direct numerical integration are demonstrated for some of the example cases. Also, this work has filled in some of the solution regimes for an impact pair, which were missed previously in the literature. Finally, one main limitation associated with the proposed procedure is discussed.

  20. A three-wavelength multi-channel brain functional imager based on digital lock-in photon-counting technique

    NASA Astrophysics Data System (ADS)

    Ding, Xuemei; Wang, Bingyuan; Liu, Dongyuan; Zhang, Yao; He, Jie; Zhao, Huijuan; Gao, Feng

    2018-02-01

    During the past two decades there has been a dramatic rise in the use of functional near-infrared spectroscopy (fNIRS) as a neuroimaging technique in cognitive neuroscience research. Diffuse optical tomography (DOT) and optical topography (OT) can be employed as the optical imaging techniques for brain activity investigation. However, most current imagers with analogue detection are limited by sensitivity and dynamic range. Although photon-counting detection can significantly improve detection sensitivity, the intrinsic nature of sequential excitations reduces temporal resolution. To improve temporal resolution, sensitivity and dynamic range, we develop a multi-channel continuous-wave (CW) system for brain functional imaging based on a novel lock-in photon-counting technique. The system consists of 60 light-emitting diode (LED) sources at three wavelengths of 660 nm, 780 nm and 830 nm, which are modulated by current-stabilized square-wave signals at different frequencies, and 12 photomultiplier tubes (PMTs) using the lock-in photon-counting technique. This design combines the ultra-high sensitivity of the photon-counting technique with the parallelism of the digital lock-in technique. We can therefore acquire the diffused light intensity for all the source-detector pairs (SD-pairs) in parallel. The performance assessments of the system are conducted using phantom experiments, and demonstrate its excellent measurement linearity, negligible inter-channel crosstalk, strong noise robustness and high temporal resolution.
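
    The digital lock-in idea can be illustrated with a toy photon-count simulation (ours, not the instrument's firmware): each source is square-wave modulated at its own frequency, so multiplying the count stream by the matching ±1 reference and averaging recovers every source's intensity from the same detector in parallel:

    ```python
    import numpy as np

    fs, T = 10_000, 1.0                   # count-bin rate (Hz), duration (s)
    t = np.arange(0, T, 1 / fs)
    f1, f2 = 317.0, 593.0                 # modulation frequencies (Hz)
    ref1 = np.sign(np.sin(2 * np.pi * f1 * t))
    ref2 = np.sign(np.sin(2 * np.pi * f2 * t))

    rate = 200 + 50 * (ref1 > 0) + 20 * (ref2 > 0)  # photons/s per source
    counts = np.random.default_rng(5).poisson(rate / fs)

    # lock-in: multiply by reference, average; factor 2 for square waves
    I1 = 2 * np.mean(counts * ref1) * fs
    I2 = 2 * np.mean(counts * ref2) * fs
    print(f"recovered: {I1:.1f} (true 50), {I2:.1f} (true 20)")
    ```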

  1. Explaining Match Outcome During The Men’s Basketball Tournament at The Olympic Games

    PubMed Central

    Leicht, Anthony S.; Gómez, Miguel A.; Woods, Carl T.

    2017-01-01

    In preparation for the Olympics, coaches and athletes have only limited opportunities to interact regularly, so team performance indicators can provide important guidance to coaches for enhancing match success at the elite level. This study examined the relationship between match outcome and team performance indicators during men’s basketball tournaments at the Olympic Games. Twelve team performance indicators were collated from all men’s teams and matches during the basketball tournament of the 2004-2016 Olympic Games (n = 156). Linear and non-linear analyses examined the relationship between match outcome and team performance indicator characteristics; namely, binary logistic regression and a conditional inference (CI) classification tree. The most parsimonious logistic regression model retained ‘assists’, ‘defensive rebounds’, ‘field-goal percentage’, ‘fouls’, ‘fouls against’, ‘steals’ and ‘turnovers’ (delta AIC <0.01; Akaike weight = 0.28) with a classification accuracy of 85.5%. Conversely, four performance indicators were retained with the CI classification tree, with an average classification accuracy of 81.4%. However, it was the combination of ‘field-goal percentage’ and ‘defensive rebounds’ that provided the greatest probability of winning (93.2%). Match outcome during the men’s basketball tournaments at the Olympic Games was identified by a unique combination of performance indicators. Despite the average model accuracy being marginally higher for the logistic regression analysis, the CI classification tree offered greater practical utility for coaches through its resolution of non-linear phenomena to guide team success. Key points: A unique combination of team performance indicators explained 93.2% of winning observations in men’s basketball at the Olympics. Monitoring of these team performance indicators may provide coaches with the capability to devise multiple game plans or strategies to enhance their likelihood of winning. Incorporation of machine learning techniques with team performance indicators may provide a valuable and strategic approach to explaining patterns within multivariate datasets in sport science. PMID:29238245
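
    A minimal sketch of the logistic-regression half of the analysis (fabricated records standing in for the 156 Olympic matches; the data-generating coefficients are illustrative, not the study's):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 156
    # columns: assists, defensive rebounds, field-goal %, fouls,
    # fouls against, steals, turnovers
    X = np.column_stack([
        rng.poisson(18, n), rng.poisson(25, n), rng.normal(45, 6, n),
        rng.poisson(20, n), rng.poisson(20, n), rng.poisson(7, n),
        rng.poisson(14, n),
    ])
    # outcome loosely tied to shooting and rebounding, for illustration
    logit = 0.15 * (X[:, 2] - 45) + 0.1 * (X[:, 1] - 25)
    y = rng.random(n) < 1 / (1 + np.exp(-logit))

    model = LogisticRegression(max_iter=1000).fit(X, y)
    print("classification accuracy:", round(model.score(X, y), 3))
    print("coefficients:", model.coef_.round(3))
    ```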

  2. Simple pre-distortion schemes for improving the power efficiency of SOA-based IR-UWB over fiber systems

    NASA Astrophysics Data System (ADS)

    Taki, H.; Azou, S.; Hamie, A.; Al Housseini, A.; Alaeddine, A.; Sharaiha, A.

    2017-01-01

    In this paper, we investigate the use of an SOA for reach extension of an impulse radio over fiber system. Operating in the saturated regime translates into strong nonlinearities and spectral distortions, which degrade the power efficiency of the propagated pulses. After studying the SOA response versus operating conditions, we have enhanced the system performance by applying simple analog pre-distortion schemes to various derivatives of the Gaussian pulse and their combination. A novel pulse shape has also been designed by linearly combining three basic Gaussian pulses, offering a very good spectral efficiency (> 55 %) at a high power (0 dBm) at the amplifier input. Furthermore, the potential of our technique has been examined considering 1.5 Gbps OOK and 0.75 Gbps PPM modulation schemes. Pre-distortion proved advantageous for a large extension of the optical link (150 km), with inline amplification via SOA at 40 km.

  3. Propagation characteristics of two-color laser pulses in homogeneous plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemlata; Saroch, Akanksha; Jha, Pallavi

    2015-11-15

    An analytical and numerical study of the evolution of two-color, sinusoidal laser pulses in cold, underdense, and homogeneous plasma has been presented. The wave equations for the radiation fields driven by linear as well as nonlinear contributions due to the two-color laser pulses have been set up. A variational technique is used to obtain the simultaneous equations describing the evolution of the laser spot size, pulse length, and chirp parameter. Numerical methods are used to graphically analyze the simultaneous evolution of these parameters due to the combined effect of the two-color laser pulses. Further, the pulse parameters are compared with those obtained for a single laser pulse. Significant focusing, compression, and enhanced positive chirp are obtained due to the combined effect of simultaneously propagating two-color pulses as compared to a single pulse propagating in plasma.

  4. A new approach in space-time analysis of multivariate hydrological data: Application to Brazil's Nordeste region rainfall

    NASA Astrophysics Data System (ADS)

    Sicard, Emeline; Sabatier, Robert; Niel, Hélène; Cadier, Eric

    2002-12-01

    The objective of this paper is to implement an original method for spatial and multivariate data, combining a method of three-way array analysis (STATIS) with geostatistical tools. The variables of interest are the monthly amounts of rainfall in the Nordeste region of Brazil, recorded from 1937 to 1975. The principle of the technique is the calculation of a linear combination of the initial variables, containing a large part of the initial variability and taking into account the spatial dependencies. It is a promising method that is able to analyze triple variability: spatial, seasonal, and interannual. In our case, the first component obtained discriminates a group of rain gauges, corresponding approximately to the Agreste, from all the others. The monthly variables of July and August strongly influence this separation. Furthermore, an annual study brings out the stability of the spatial structure of components calculated for each year.

  5. Enhanced robust finite-time passivity for Markovian jumping discrete-time BAM neural networks with leakage delay.

    PubMed

    Sowmiya, C; Raja, R; Cao, Jinde; Rajchakit, G; Alsaedi, Ahmed

    2017-01-01

    This paper is concerned with the problem of enhanced results on robust finite-time passivity for uncertain discrete-time Markovian jumping BAM delayed neural networks with leakage delay. By implementing a proper Lyapunov-Krasovskii functional candidate and the reciprocally convex combination method together with the linear matrix inequality technique, several sufficient conditions are derived for verifying the passivity of discrete-time BAM neural networks. An important feature presented in our paper is that we utilize the reciprocally convex combination lemma in the main section; the relevance of that lemma arises from the derivation of stability using Jensen's inequality. Further, the zero inequalities help to propose the sufficient conditions for finite-time boundedness and passivity under uncertainties. Finally, the enhancement of the feasible region of the proposed criteria is shown via numerical examples with simulation to illustrate the applicability and usefulness of the proposed method.

  6. Computational inhibitor design against malaria plasmepsins.

    PubMed

    Bjelic, S; Nervall, M; Gutiérrez-de-Terán, H; Ersmark, K; Hallberg, A; Aqvist, J

    2007-09-01

    Plasmepsins are aspartic proteases involved in the degradation of the host cell hemoglobin that is used as a food source by the malaria parasite. Plasmepsins are highly promising as drug targets, especially when combined with the inhibition of falcipains that are also involved in hemoglobin catabolism. In this review, we discuss the mechanism of plasmepsins I-IV in view of the interest in transition state mimetics as potential compounds for lead development. Inhibitor development against plasmepsin II as well as relevant crystal structures are summarized in order to give an overview of the field. Application of computational techniques, especially binding affinity prediction by the linear interaction energy method, in the development of malarial plasmepsin inhibitors has been highly successful and is discussed in detail. Homology modeling and molecular docking have been useful in the current inhibitor design project, and the combination of such methods with binding free energy calculations is analyzed.
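
    The linear interaction energy (LIE) estimate mentioned here reduces to a simple linear combination of ensemble-averaged interaction energies. A sketch with placeholder energies and the commonly quoted starting coefficients (α ≈ 0.18, β ≈ 0.5; actual values are system-dependent and calibrated):

    ```python
    # dG_bind ~= alpha * d<V_vdw> + beta * d<V_el> + gamma
    alpha, beta, gamma = 0.18, 0.5, 0.0

    # ensemble-averaged ligand-surroundings energies (kcal/mol),
    # hypothetical values from bound and free-ligand simulations
    vdw_bound, vdw_free = -45.2, -38.1
    el_bound, el_free = -22.4, -18.9

    dG = alpha * (vdw_bound - vdw_free) + beta * (el_bound - el_free) + gamma
    print(f"estimated binding free energy: {dG:.2f} kcal/mol")
    ```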

  7. Automated segmentation of ventricles from serial brain MRI for the quantification of volumetric changes associated with communicating hydrocephalus in patients with brain tumor

    NASA Astrophysics Data System (ADS)

    Pura, John A.; Hamilton, Allison M.; Vargish, Geoffrey A.; Butman, John A.; Linguraru, Marius George

    2011-03-01

    Accurate ventricle volume estimates could improve the understanding and diagnosis of postoperative communicating hydrocephalus. For this category of patients, associated changes in ventricle volume can be difficult to identify, particularly over short time intervals. We present an automated segmentation algorithm that evaluates ventricle size from serial brain MRI examinations. The technique combines serial T1-weighted images to increase SNR and segments the mean image to generate a ventricle template. After pre-processing, the segmentation is initiated by a fuzzy c-means clustering algorithm to find the seeds used in a combination of fast marching methods and geodesic active contours. Finally, the ventricle template is propagated onto the serial data via non-linear registration. Serial volume estimates were obtained in an automated, robust and accurate manner from difficult data.

  8. Comparison of five modelling techniques to predict the spatial distribution and abundance of seabirds

    USGS Publications Warehouse

    O'Connell, Allan F.; Gardner, Beth; Oppel, Steffen; Meirinho, Ana; Ramírez, Iván; Miller, Peter I.; Louzao, Maite

    2012-01-01

    Knowledge about the spatial distribution of seabirds at sea is important for conservation. During marine conservation planning, logistical constraints preclude seabird surveys covering the complete area of interest, and the spatial distribution of seabirds is frequently inferred from predictive statistical models. Increasingly complex models are available to relate the distribution and abundance of pelagic seabirds to environmental variables, but a comparison of their usefulness for delineating protected areas for seabirds is lacking. Here we compare the performance of five modelling techniques (generalised linear models, generalised additive models, Random Forest, boosted regression trees, and maximum entropy) to predict the distribution of Balearic Shearwaters (Puffinus mauretanicus) along the coast of the western Iberian Peninsula. We used ship transect data from 2004 to 2009 and 13 environmental variables to predict occurrence and density, and evaluated predictive performance of all models using spatially segregated test data. Predicted distribution varied among the different models, although predictive performance varied little. An ensemble prediction that combined results from all five techniques was robust and confirmed the existence of marine important bird areas for Balearic Shearwaters in Portugal and Spain. Our predictions suggested additional areas that would be of high priority for conservation and could be proposed as protected areas. Abundance data were extremely difficult to predict, and none of the five modelling techniques provided a reliable prediction of spatial patterns. We advocate the use of ensemble modelling that combines the output of several methods to predict the spatial distribution of seabirds, and use these predictions to target separate surveys assessing the abundance of seabirds in areas of regular use.
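
    The ensemble idea itself is simple to sketch (two model families shown instead of five, on synthetic covariates): average the occurrence probabilities predicted by the individual models to get a more robust map:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(6)
    X = rng.normal(size=(300, 4))        # environmental covariates
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 300)) > 0  # presence

    glm = LogisticRegression().fit(X, y)                   # GLM analogue
    rf = RandomForestClassifier(n_estimators=200,
                                random_state=0).fit(X, y)  # Random Forest

    X_new = rng.normal(size=(5, 4))      # grid cells to predict
    p_ens = (glm.predict_proba(X_new)[:, 1]
             + rf.predict_proba(X_new)[:, 1]) / 2
    print("ensemble occurrence probabilities:", p_ens.round(2))
    ```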

  9. Evaluation of interpolation techniques for the creation of gridded daily precipitation (1 × 1 km2); Cyprus, 1980-2010

    NASA Astrophysics Data System (ADS)

    Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Pashiardis, Stelios; Lange, Manfred A.

    2014-01-01

    High-resolution gridded daily data sets are essential for natural resource management and the analyses of climate changes and their effects. This study aims to evaluate the performance of 15 simple or complex interpolation techniques in reproducing daily precipitation at a resolution of 1 km2 over topographically complex areas. Methods are tested considering two different sets of observation densities and different rainfall amounts. We used rainfall data that were recorded at 74 and 145 observational stations, respectively, spread over the 5760 km2 of the Republic of Cyprus, in the Eastern Mediterranean. Regression analyses utilizing geographical copredictors and neighboring interpolation techniques were evaluated both in isolation and combined. Linear multiple regression (LMR) and geographically weighted regression methods (GWR) were tested. These included a step-wise selection of covariables, as well as inverse distance weighting (IDW), kriging, and 3D-thin plate splines (TPS). The relative rank of the different techniques changes with different station density and rainfall amounts. Our results indicate that TPS performs well for low station density and large-scale events and also when coupled with regression models. It performs poorly for high station density. The opposite is observed when using IDW. Simple IDW performs best for local events, while a combination of step-wise GWR and IDW proves to be the best method for large-scale events and high station density. This study indicates that the use of step-wise regression with a variable set of geographic parameters can improve the interpolation of large-scale events because it facilitates the representation of local climate dynamics.
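
    Of the methods compared, IDW is the simplest to state exactly: each grid node takes a weighted mean of station values, with weights proportional to an inverse power of distance. A minimal sketch (synthetic stations; the power p = 2 is a common default, not the study's tuned value):

    ```python
    import numpy as np

    def idw(stations_xy, values, grid_xy, p=2.0, eps=1e-12):
        # pairwise distances, shape (n_grid, n_stations)
        d = np.linalg.norm(grid_xy[:, None, :] - stations_xy[None, :, :],
                           axis=2)
        w = 1.0 / (d**p + eps)            # inverse-distance weights
        return (w @ values) / w.sum(axis=1)

    stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    rain_mm = np.array([5.0, 12.0, 0.0, 8.0])
    nodes = np.array([[5.0, 5.0], [1.0, 1.0]])
    print(idw(stations, rain_mm, nodes))  # interpolated daily rainfall (mm)
    ```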

  10. Prediction of Unsteady Flows in Turbomachinery Using the Linearized Euler Equations on Deforming Grids

    NASA Technical Reports Server (NTRS)

    Clark, William S.; Hall, Kenneth C.

    1994-01-01

    A linearized Euler solver for calculating unsteady flows in turbomachinery blade rows due to both incident gusts and blade motion is presented. The model accounts for blade loading, blade geometry, shock motion, and wake motion. Assuming that the unsteadiness in the flow is small relative to the nonlinear mean solution, the unsteady Euler equations can be linearized about the mean flow. This yields a set of linear variable coefficient equations that describe the small amplitude harmonic motion of the fluid. These linear equations are then discretized on a computational grid and solved using standard numerical techniques. For transonic flows, however, one must use a linear discretization which is a conservative linearization of the non-linear discretized Euler equations to ensure that shock impulse loads are accurately captured. Other important features of this analysis include a continuously deforming grid which eliminates extrapolation errors and hence increases accuracy, and a new numerically exact, nonreflecting far-field boundary condition treatment based on an eigenanalysis of the discretized equations. Computational results are presented which demonstrate the computational accuracy and efficiency of the method and demonstrate the effectiveness of the deforming grid, far-field nonreflecting boundary conditions, and shock capturing techniques. A comparison of the present unsteady flow predictions to other numerical, semi-analytical, and experimental methods shows excellent agreement. In addition, the linearized Euler method presented requires one to two orders of magnitude less computational time than traditional time-marching techniques, making the present method a viable design tool for aeroelastic analyses.

  11. Lessons learned from the SLC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phinney, N.

    The SLAC Linear Collider (SLC) is the first example of an entirely new type of lepton collider. Many years of effort were required to develop the understanding and techniques needed to approach design luminosity. This paper discusses some of the key issues and problems encountered in producing a working linear collider. These include the polarized source, techniques for emittance preservation, extensive feedback systems, and refinements in beam optimization in the final focus. The SLC experience has been invaluable for testing concepts and developing designs for a future linear collider.

  12. An efficient finite element technique for sound propagation in axisymmetric hard wall ducts carrying high subsonic Mach number flows

    NASA Technical Reports Server (NTRS)

    Tag, I. A.; Lumsdaine, E.

    1978-01-01

    The general non-linear three-dimensional equation for acoustic potential is derived by using a perturbation technique. The linearized axisymmetric equation is then solved by using a finite element algorithm based on the Galerkin formulation for a harmonic time dependence. The solution is carried out in complex number notation for the acoustic velocity potential. Linear, isoparametric, quadrilateral elements with non-uniform distribution across the duct section are implemented. The resultant global matrix is stored in banded form and solved by using a modified Gauss elimination technique. Sound pressure levels and acoustic velocities are calculated from post element solutions. Different duct geometries are analyzed and compared with experimental results.

  13. A technique using a nonlinear helicopter model for determining trims and derivatives

    NASA Technical Reports Server (NTRS)

    Ostroff, A. J.; Downing, D. R.; Rood, W. J.

    1976-01-01

    A technique is described for determining the trims and quasi-static derivatives of a flight vehicle for use in a linear perturbation model; both the coupled and uncoupled forms of the linear perturbation model are included. Since this technique requires a nonlinear vehicle model, detailed equations with constants and nonlinear functions for the CH-47B tandem rotor helicopter are presented. Tables of trims and derivatives are included for airspeeds between -40 and 160 knots and rates of descent between + or - 1.016 m/sec (+ or - 200 ft/min). As a verification, the calculated and referenced values of comparable trims, derivatives, and linear model poles are shown to have acceptable agreement.

  14. A game theoretic approach to a finite-time disturbance attenuation problem

    NASA Technical Reports Server (NTRS)

    Rhee, Ihnseok; Speyer, Jason L.

    1991-01-01

    A disturbance attenuation problem over a finite-time interval is considered by a game theoretic approach where the control, restricted to a function of the measurement history, plays against adversaries composed of the process and measurement disturbances, and the initial state. A zero-sum game, formulated as a quadratic cost criterion subject to linear time-varying dynamics and measurements, is solved by a calculus of variation technique. By first maximizing the quadratic cost criterion with respect to the process disturbance and initial state, a full information game between the control and the measurement residual subject to the estimator dynamics results. The resulting solution produces an n-dimensional compensator which expresses the controller as a linear combination of the measurement history. A disturbance attenuation problem is solved based on the results of the game problem. For time-invariant systems it is shown that under certain conditions the time-varying controller becomes time-invariant on the infinite-time interval. The resulting controller satisfies an H(infinity) norm bound.

  15. Nonlinear Control of a Reusable Rocket Engine for Life Extension

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.; Holmes, Michael S.; Ray, Asok

    1998-01-01

    This paper presents the conceptual development of a life-extending control system where the objective is to achieve high performance and structural durability of the plant. A life-extending controller is designed for a reusable rocket engine via damage mitigation in both the fuel (H2) and oxidizer (O2) turbines while achieving high performance for transient responses of the combustion chamber pressure and the O2/H2 mixture ratio. The design procedure makes use of a combination of linear and nonlinear controller synthesis techniques and also allows adaptation of the life-extending controller module to augment a conventional performance controller of the rocket engine. The nonlinear aspect of the design is achieved using non-linear parameter optimization of a prescribed control structure. Fatigue damage in fuel and oxidizer turbine blades is primarily caused by stress cycling during start-up, shutdown, and transient operations of a rocket engine. Fatigue damage in the turbine blades is one of the most serious causes for engine failure.

  16. A 0.1-1.4 GHz inductorless low-noise amplifier with 13 dBm IIP3 and 24 dBm IIP2 in 180 nm CMOS

    NASA Astrophysics Data System (ADS)

    Guo, Benqing; Chen, Jun; Chen, Hongpeng; Wang, Xuebing

    2018-01-01

    An inductorless noise-canceling CMOS low-noise amplifier (LNA) with a wideband linearization technique is proposed. The complementary configuration of stacked NMOS/PMOS devices is employed to compensate the second-order nonlinearity of the circuit. The third-order distortion of the auxiliary stage is also mitigated by that of the weak-inversion transistors in the main path. The bias and device sizing, set in combination by digital control words, are further tuned to obtain enhanced linearity over the desired band. Implemented in a 0.18 μm CMOS process, simulation results show that the proposed LNA provides a voltage gain of 16.1 dB and an NF of 2.8-3.4 dB from 0.1 GHz to 1.4 GHz. IIP3 of 13-18.9 dBm and IIP2 of 24-40 dBm are obtained. The circuit core consumes 19 mW from a 1.8 V supply.

  17. Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations

    DOE PAGES

    Fierce, Laura; McGraw, Robert L.

    2017-07-26

    Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
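
    The moment-constrained linear program at the heart of this approach can be sketched directly (made-up moments, and a plain linear cost standing in for the entropy-inspired objective): on a fixed grid of candidate sizes, find nonnegative weights that reproduce the target moments. A basic LP solution automatically lands on a sparse set of abscissas:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    x = np.linspace(0.01, 1.0, 50)              # candidate abscissas (um)
    target = np.array([1.0, 0.3, 0.15])         # raw moments 0, 1, 2
    A_eq = np.vstack([x**k for k in range(3)])  # moment constraints
    c = x                                       # illustrative linear cost

    res = linprog(c, A_eq=A_eq, b_eq=target, bounds=[(0, None)] * x.size)
    support = res.x > 1e-9                      # basic solutions are sparse
    print("abscissas:", x[support].round(3))
    print("weights:  ", res.x[support].round(3))
    ```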

  18. The design and analysis of simple low speed flap systems with the aid of linearized theory computer programs

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.

    1985-01-01

    The purpose here is to show how two linearized theory computer programs in combination may be used for the design of low speed wing flap systems capable of high levels of aerodynamic efficiency. A fundamental premise of the study is that high levels of aerodynamic performance for flap systems can be achieved only if the flow about the wing remains predominantly attached. Based on this premise, a wing design program is used to provide idealized attached flow camber surfaces from which candidate flap systems may be derived, and, in a following step, a wing evaluation program is used to provide estimates of the aerodynamic performance of the candidate systems. Design strategies and techniques that may be employed are illustrated through a series of examples. Applicability of the numerical methods to the analysis of a representative flap system (although not a system designed by the process described here) is demonstrated in a comparison with experimental data.

  19. Reference governors for controlled belt restraint systems

    NASA Astrophysics Data System (ADS)

    van der Laan, E. P.; Heemels, W. P. M. H.; Luijten, H.; Veldpaus, F. E.; Steinbuch, M.

    2010-07-01

    Today's restraint systems typically include a number of airbags, and a three-point seat belt with load limiter and pretensioner. For the class of real-time controlled restraint systems, the restraint actuator settings are continuously manipulated during the crash. This paper presents a novel control strategy for these systems. The control strategy developed here is based on a combination of model predictive control and reference management, in which a non-linear device - a reference governor (RG) - is added to a primal closed-loop controlled system. This RG determines an optimal setpoint in terms of injury reduction and constraint satisfaction by solving a constrained optimisation problem. Prediction of the vehicle motion, required to predict future constraint violation, is included in the design and is based on past crash data, using linear regression techniques. Simulation results with MADYMO models show that, with ideal sensors and actuators, a significant reduction (45%) of the peak chest acceleration can be achieved, without prior knowledge of the crash. Furthermore, it is shown that the algorithms are sufficiently fast to be implemented online.

  1. Impulsive stabilization and impulsive synchronization of discrete-time delayed neural networks.

    PubMed

    Chen, Wu-Hua; Lu, Xiaomei; Zheng, Wei Xing

    2015-04-01

    This paper investigates the problems of impulsive stabilization and impulsive synchronization of discrete-time delayed neural networks (DDNNs). Two types of DDNNs with stabilizing impulses are studied. By introducing the time-varying Lyapunov functional to capture the dynamical characteristics of discrete-time impulsive delayed neural networks (DIDNNs) and by using a convex combination technique, new exponential stability criteria are derived in terms of linear matrix inequalities. The stability criteria for DIDNNs are independent of the size of time delay but rely on the lengths of impulsive intervals. With the newly obtained stability results, sufficient conditions on the existence of linear-state feedback impulsive controllers are derived. Moreover, a novel impulsive synchronization scheme for two identical DDNNs is proposed. The novel impulsive synchronization scheme allows synchronizing two identical DDNNs with unknown delays. Simulation results are given to validate the effectiveness of the proposed criteria of impulsive stabilization and impulsive synchronization of DDNNs. Finally, an application of the obtained impulsive synchronization result for two identical chaotic DDNNs to a secure communication scheme is presented.
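
    The LMI machinery behind such criteria can be illustrated on the delay-free core problem (a much-reduced sketch, requiring cvxpy; the paper's conditions add delays, impulses, and the convex combination terms): a discrete-time system x⁺ = Ax is stable iff some P ≻ 0 satisfies AᵀPA − P ≺ 0:

    ```python
    import numpy as np
    import cvxpy as cp

    A = np.array([[0.8, 0.3],
                  [-0.2, 0.7]])          # Schur-stable example matrix
    P = cp.Variable((2, 2), symmetric=True)
    eps = 1e-6
    cons = [P >> eps * np.eye(2),                 # P > 0
            A.T @ P @ A - P << -eps * np.eye(2)]  # A'PA - P < 0
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    print("Lyapunov LMI feasible:", prob.status == "optimal")
    ```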

  2. Streamflow record extension using power transformations and application to sediment transport

    NASA Astrophysics Data System (ADS)

    Moog, Douglas B.; Whiting, Peter J.; Thomas, Robert B.

    1999-01-01

    To obtain a representative set of flow rates for a stream, it is often desirable to fill in missing data or extend measurements to a longer time period by correlation to a nearby gage with a longer record. Linear least squares regression of the logarithms of the flows is a traditional and still common technique. However, its purpose is to generate optimal estimates of each day's discharge, rather than the population of discharges, for which it tends to underestimate variance. Maintenance-of-variance-extension (MOVE) equations [Hirsch, 1982] were developed to correct this bias. This study replaces the logarithmic transformation by the more general Box-Cox scaled power transformation, generating a more linear, constant-variance relationship for the MOVE extension. Combining the Box-Cox transformation with the MOVE extension is shown to improve accuracy in estimating order statistics of flow rate, particularly for the nonextreme discharges which generally govern cumulative transport over time. This advantage is illustrated by prediction of cumulative fractions of total bed load transport.
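
    A compact sketch of the combined procedure (synthetic flows; the MOVE.1 form and maximum-likelihood λ estimation are our assumptions, not necessarily the paper's exact choices): Box-Cox transform both records, fit the variance-maintaining line on the concurrent period, extend, then invert the transform:

    ```python
    import numpy as np
    from scipy import stats, special

    rng = np.random.default_rng(3)
    long_gage = rng.lognormal(3.0, 0.8, 400)   # long-record flows
    short_gage = 2.0 * long_gage[:150]**0.9 * rng.lognormal(0, 0.1, 150)

    x_t, lam_x = stats.boxcox(long_gage)       # Box-Cox transforms
    y_t, lam_y = stats.boxcox(short_gage)

    # MOVE.1 on the concurrent (first 150) transformed values:
    # y = my + (sy / sx) * (x - mx), which preserves mean and variance
    x_c = x_t[:150]
    mx, my = x_c.mean(), y_t.mean()
    sx, sy = x_c.std(ddof=1), y_t.std(ddof=1)
    y_ext_t = my + (sy / sx) * (x_t[150:] - mx)

    y_ext = special.inv_boxcox(y_ext_t, lam_y)  # back to flow units
    print("extended record length:", y_ext.size)
    ```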

  3. Improved application of independent component analysis to functional magnetic resonance imaging study via linear projection techniques.

    PubMed

    Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li

    2009-02-01

    Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. The widely accepted implicit assumption is the spatial statistical independence of the intrinsic sources identified by sICA, which makes sICA difficult to apply to data containing interdependent sources and confounding factors. This interdependency can arise, for instance, from fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its utilization as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulations on computer-generated data and real resting-state fMRI data. Both simulated and real two-task fMRI experiments demonstrated that sICA in combination with the projection method succeeded in separating spatially dependent components and had better detection power than a purely model-based method when estimating activation induced by each task as well as by both tasks.

  4. Design and test of three active flutter suppression controllers

    NASA Technical Reports Server (NTRS)

    Christhilf, David M.; Waszak, Martin R.; Adams, William M.; Srinathkumar, S.; Mukhopadhyay, Vivek

    1991-01-01

    Three flutter suppression control law design techniques are presented. Each uses multiple control surfaces and/or sensors. The first uses linear combinations of several accelerometer signals together with dynamic compensation to synthesize the modal rate of the critical mode for feedback to distributed control surfaces. The second uses traditional tools (pole/zero loci and Nyquist diagrams) to develop a good understanding of the flutter mechanism and produce a controller with minimal complexity and good robustness to plant uncertainty. The third starts with a minimum energy Linear Quadratic Gaussian controller, applies controller order reduction, and then modifies weight and noise covariance matrices to improve multi-variable robustness. The resulting designs were implemented digitally and tested subsonically on the Active Flexible Wing (AFW) wind tunnel model. Test results presented here include plant characteristics, maximum attained closed-loop dynamic pressure, and Root Mean Square control surface activity. A key result is that simultaneous symmetric and antisymmetric flutter suppression was achieved by the second control law, with a 24 percent increase in attainable dynamic pressure.

  5. Robust Head-Pose Estimation Based on Partially-Latent Mixture of Linear Regressions.

    PubMed

    Drouard, Vincent; Horaud, Radu; Deleforge, Antoine; Ba, Sileye; Evangelidis, Georgios

    2017-03-01

    Head-pose estimation has many applications, such as social event analysis, human-robot and human-computer interaction, driving assistance, and so forth. Head-pose estimation is challenging, because it must cope with changing illumination conditions, variabilities in face orientation and in appearance, partial occlusions of facial landmarks, as well as bounding-box-to-face alignment errors. We propose to use a mixture of linear regressions with partially-latent output. This regression method learns to map high-dimensional feature vectors (extracted from bounding boxes of faces) onto the joint space of head-pose angles and bounding-box shifts, such that they are robustly predicted in the presence of unobservable phenomena. We describe in detail the mapping method that combines the merits of unsupervised manifold learning techniques and of mixtures of regressions. We validate our method with three publicly available data sets and we thoroughly benchmark four variants of the proposed algorithm with several state-of-the-art head-pose estimation methods.

  6. Fault-tolerant optimised tracking control for unknown discrete-time linear systems using a combined reinforcement learning and residual compensation methodology

    NASA Astrophysics Data System (ADS)

    Han, Ke-Zhen; Feng, Jian; Cui, Xiaohong

    2017-10-01

    This paper considers the fault-tolerant optimised tracking control (FTOTC) problem for unknown discrete-time linear systems. A research scheme is proposed on the basis of data-based parity space identification, reinforcement learning and residual compensation techniques. The main characteristic of this research scheme lies in the parity-space-identification-based simultaneous tracking control and residual compensation. The specific technical line consists of four main components: a subspace-aided method to design an observer-based residual generator; a reinforcement Q-learning approach to solve the optimised tracking control policy; robust H∞ theory to achieve noise attenuation; and fault estimation triggered by the residual generator to perform fault compensation. To clarify the design and implementation procedures, an integrated algorithm is further constructed to link up these four functional units. Detailed analysis and proof are subsequently given to explain the guaranteed FTOTC performance of the proposed conclusions. Finally, a case simulation is provided to verify its effectiveness.

  7. Second Law of Thermodynamics Applied to Metabolic Networks

    NASA Technical Reports Server (NTRS)

    Nigam, R.; Liang, S.

    2003-01-01

    We present a simple algorithm, based on linear programming, that combines Kirchhoff's flux and potential laws and applies them to metabolic networks to predict thermodynamically feasible reaction fluxes. These laws represent mass conservation and energy feasibility, and are widely used in electrical circuit analysis. Formulating Kirchhoff's potential law around a reaction loop in terms of the null space of the stoichiometric matrix leads to a simple representation of the law of entropy that can be readily incorporated into traditional flux balance analysis without resorting to non-linear optimization. Our technique is new in that it can easily check the fluxes obtained by applying flux balance analysis for thermodynamic feasibility and modify them if they are infeasible so that they satisfy the law of entropy. We illustrate our method by applying it to the network dealing with the central metabolism of Escherichia coli. Due to its simplicity, this algorithm will be useful in studying large-scale complex metabolic networks in the cells of different organisms.
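
    A toy flux balance LP shows the non-entropy part of the pipeline (three reactions, two metabolites; the loop-law screening described above would add linear checks on the null space of S, omitted here for brevity):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # metabolites x reactions: uptake of A, conversion A -> B, secretion of B
    S = np.array([[1, -1, 0],    # metabolite A balance
                  [0, 1, -1]])   # metabolite B balance
    bounds = [(0, 10), (0, 8), (0, None)]   # flux capacity bounds

    # maximize secretion flux v3 subject to steady state S v = 0
    res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    print("optimal fluxes:", res.x)   # expected: [8, 8, 8]
    ```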

  8. The Analysis and Construction of Perfectly Matched Layers for the Linearized Euler Equations

    NASA Technical Reports Server (NTRS)

    Hesthaven, J. S.

    1997-01-01

    We present a detailed analysis of a recently proposed perfectly matched layer (PML) method for the absorption of acoustic waves. The split set of equations is shown to be only weakly well-posed, and ill-posed under small low order perturbations. This analysis provides the explanation for the stability problems associated with the split field formulation and illustrates why applying a filter has a stabilizing effect. Utilizing recent results obtained within the context of electromagnetics, we develop strongly well-posed absorbing layers for the linearized Euler equations. The schemes are shown to be perfectly absorbing independent of frequency and angle of incidence of the wave in the case of a non-convecting mean flow. In the general case of a convecting mean flow, a number of techniques are combined to obtain absorbing layers exhibiting PML-like behavior. The efficacy of the proposed absorbing layers is illustrated through computation of benchmark problems in aero-acoustics.

  9. Novel and general approach to linear filter design for contrast-to-noise ratio enhancement of magnetic resonance images with multiple interfering features in the scene

    NASA Astrophysics Data System (ADS)

    Soltanian-Zadeh, Hamid; Windham, Joe P.

    1992-04-01

    Maximizing the minimum absolute contrast-to-noise ratio (CNR) between a desired feature and multiple interfering processes, by linear combination of images in a magnetic resonance imaging (MRI) scene sequence, is attractive for MRI analysis and interpretation. A general formulation of the problem is presented, along with a novel solution utilizing the simple and numerically stable method of Gram-Schmidt orthogonalization. We derive explicit solutions for the case of two interfering features first, then for three interfering features, and, finally, using a typical example, for an arbitrary number of interfering features. For the case of two interfering features, we also provide simplified analytical expressions for the signal-to-noise ratios (SNRs) and CNRs of the filtered images. The technique is demonstrated through its applications to simulated and acquired MRI scene sequences of a human brain with a cerebral infarction. For these applications, a 50 to 100% improvement in the smallest absolute CNR is obtained.
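
    A numerical sketch of the idea (hypothetical signature vectors, not the paper's derivation): orthonormalize the interferers' signatures by Gram-Schmidt, project them out of the desired feature's signature, and use the unit-norm residual as the combination weights. The interferers then contribute zero contrast while white-noise amplification stays fixed:

    ```python
    import numpy as np

    def cnr_filter(desired, interferers):
        """Weights w with w @ i = 0 for every interferer signature i."""
        d = np.asarray(desired, dtype=float).copy()
        basis = []
        for v in interferers:                 # orthonormalize interferers
            u = np.asarray(v, dtype=float).copy()
            for b in basis:
                u -= (u @ b) * b
            basis.append(u / np.linalg.norm(u))
        for b in basis:                       # remove them from 'desired'
            d -= (d @ b) * b
        return d / np.linalg.norm(d)          # unit norm => unit noise gain

    lesion = np.array([0.9, 1.4, 0.7])  # signatures across a 3-image sequence
    white = np.array([1.0, 1.0, 0.5])   # interferer 1 (illustrative)
    gray = np.array([0.8, 1.2, 0.9])    # interferer 2 (illustrative)
    w = cnr_filter(lesion, [white, gray])
    print("weights:", w.round(3))
    print("interferer responses:", (w @ white).round(6), (w @ gray).round(6))
    ```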

  10. Bioremediation of surface water co-contaminated with zinc (II) and linear alkylbenzene sulfonates by Spirulina platensis

    NASA Astrophysics Data System (ADS)

    Meng, Huijuan; Xia, Yunfeng; Chen, Hong

    Potential remediation of surface water contaminated with linear alkylbenzene sulfonates (LAS) and zinc (Zn (II)) by sorption on Spirulina platensis was studied using batch techniques. Results show that LAS can be biodegraded by Spirulina platensis, with a biodegradation rate after 5 days of 87%, 80%, and 70.5% at initial concentrations of 0.5, 1, and 2 mg/L, respectively. The maximum Zn (II) uptake capacity of Spirulina platensis was found to be 30.96 mg/g. LAS may enhance the maximum Zn (II) uptake capacity of Spirulina platensis, which can be attributed to an increase in bioavailability due to the presence of LAS. The biodegradation rate of LAS by Spirulina platensis increased with Zn (II) and reached a maximum when Zn (II) was 4 mg/L. The joint toxicity test showed that the combined effect of LAS and Zn (II) was synergistic. LAS can enhance the biosorption of Zn (II), and reciprocally, Zn (II) can enhance LAS biodegradation.

  11. A proof-of-concept study on the combination of repetitive transcranial magnetic stimulation and relaxation techniques in chronic tinnitus.

    PubMed

    Kreuzer, Peter M; Poeppl, Timm B; Bulla, Jan; Schlee, Winfried; Lehner, Astrid; Langguth, Berthold; Schecklmann, Martin

    2016-10-01

    Interference of ongoing neuronal activity and brain stimulation motivated this study to combine repetitive transcranial magnetic stimulation (rTMS) and relaxation techniques in tinnitus patients. Forty-two patients were enrolled in this one-arm proof-of-concept study to receive ten sessions of rTMS applied to the left dorsolateral prefrontal cortex and temporo-parietal cortex. During stimulation, patients listened to five different kinds of relaxation audios. Variables of interest were tinnitus questionnaires, tinnitus numeric rating scales, depressivity, and quality of life. Results were compared to those of historical control groups that had received the same rTMS protocol (active control) or sham treatment (placebo) without relaxation techniques. Thirty-eight patients completed the treatment; drop-out rates and adverse events were low. Responder rates (reduction in tinnitus questionnaire (TQ) score ≥5 points 10 weeks after treatment) were 44.7% in the study group, 27.8% in the active control group, and 21.7% in the placebo group, differing between groups at a near-significant level. For the tinnitus handicap inventory (THI), the main effect of group was not significant. However, linear mixed model analyses showed that the relaxation/rTMS group differed significantly from the active control group, with a steeper negative THI trend for the relaxation/rTMS group indicating greater improvement over the course of the trial. Deepness of relaxation during rTMS and selection of active relaxation vs. passive listening to music predicted larger TQ improvements. All remaining secondary outcomes were non-significant. This combined treatment proved to be a safe, feasible, and promising approach to enhancing rTMS treatment effects in chronic tinnitus.
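
    For readers unfamiliar with the statistical model mentioned above, the sketch below shows the general form of such a linear mixed model: THI scores across visits with a random intercept per patient, where the visit-by-group interaction tests for a steeper negative slope in the relaxation/rTMS group. The data are synthetic and the column names are hypothetical, not the study's dataset.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)
    rows = []
    for pid in range(40):
        group = "relaxation_rtms" if pid < 20 else "active_control"
        slope = -3.0 if group == "relaxation_rtms" else -1.0  # assumed trends
        base = rng.normal(50, 8)
        for visit in range(4):
            rows.append((pid, group, visit,
                         base + slope * visit + rng.normal(0, 3)))
    df = pd.DataFrame(rows, columns=["patient", "group", "visit", "THI"])

    # Random intercept per patient; visit:group term = slope difference.
    result = smf.mixedlm("THI ~ visit * group", df, groups=df["patient"]).fit()
    print(result.summary())
    ```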

  12. The Spontaneous Ray Log: A New Aid for Constructing Pseudo-Synthetic Seismograms

    NASA Astrophysics Data System (ADS)

    Quadir, Adnan; Lewis, Charles; Rau, Ruey-Juin

    2018-02-01

    Conventional synthetic seismograms for hydrocarbon exploration combine the sonic and density logs, whereas pseudo-synthetic seismograms are constructed with a density log plus a resistivity, neutron, gamma ray, or, rarely, a spontaneous potential log. Herein, we introduce a new technique for constructing a pseudo-synthetic seismogram by combining the gamma ray (GR) and self-potential (SP) logs to produce the spontaneous ray (SR) log. Three wells, each of which penetrated more than 1000 m of carbonates, sandstones, and shales, were investigated; each well was divided into 12 groups based on formation tops, and the Pearson product-moment correlation coefficient (PCC) was calculated for each group from each of the GR, SP, and SR logs. The highest-PCC log curves for each group were then combined to produce a single log whose values were cross-plotted against the reference well's sonic interval transit time (ITT) values to determine a linear transform for producing a pseudo-sonic (PS) log and, ultimately, a pseudo-synthetic seismogram. The Nash-Sutcliffe efficiency (NSE) values for the pseudo-sonic logs of the three wells fell within the acceptable range at 78-83%. The technique was tested on three wells, one of which served as a blind test well, with satisfactory results. The PCC value between the composite PS (SR) log with low-density correction and the conventional sonic (CS) log was 86%. Because spontaneous potential and gamma ray logs are common in many of the world's hydrocarbon basins, this inexpensive and straightforward technique could hold significant promise in areas that need alternate ways to create pseudo-synthetic seismograms for seismic reflection interpretation.
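
    A compact sketch of the workflow on synthetic log curves: normalize GR and SP, form a candidate SR combination, keep the curve with the highest PCC against the reference sonic, fit a linear transform to obtain a pseudo-sonic log, and score it with the NSE. The curves below are synthetic stand-ins, and the simple averaging used for the SR log is an assumption for illustration.

    ```python
    import numpy as np

    def normalize(curve):
        return (curve - curve.min()) / (curve.max() - curve.min())

    def pcc(a, b):
        return np.corrcoef(a, b)[0, 1]

    rng = np.random.default_rng(0)
    n = 1000
    gr = rng.normal(60, 15, n)       # gamma ray (API), synthetic
    sp = rng.normal(-40, 10, n)      # self-potential (mV), synthetic
    sonic = 0.8 * normalize(gr) + 0.2 * normalize(sp) + rng.normal(0, 0.05, n)

    sr = 0.5 * (normalize(gr) + normalize(sp))   # one simple SR combination

    # Keep the curve with the highest |PCC| against the sonic in this "group".
    candidates = {"GR": normalize(gr), "SP": normalize(sp), "SR": sr}
    name, best = max(candidates.items(), key=lambda kv: abs(pcc(kv[1], sonic)))

    # Linear transform (least squares) from the winning curve to sonic ITT.
    slope, intercept = np.polyfit(best, sonic, 1)
    pseudo_sonic = slope * best + intercept

    nse = 1.0 - np.sum((sonic - pseudo_sonic) ** 2) \
              / np.sum((sonic - sonic.mean()) ** 2)
    print(name, f"NSE = {nse:.2f}")
    ```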

  13. Variable selection in near-infrared spectroscopy: benchmarking of feature selection methods on biodiesel data.

    PubMed

    Balabin, Roman M; Smirnov, Sergey V

    2011-04-29

    During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in fields ranging from the petroleum to the biomedical sector. The NIR spectrum (above 4000 cm^-1) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of applying other spectroscopic techniques, such as Raman, ultraviolet-visible (UV-vis), or nuclear magnetic resonance (NMR) spectroscopy, can also be greatly improved by an appropriate choice of feature selection method. Copyright © 2011 Elsevier B.V. All rights reserved.
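
    As one concrete example of the benchmarked methods, the sketch below implements a bare-bones interval PLS (iPLS) search on synthetic spectra: the wavelength axis is split into intervals, a PLS model is cross-validated on each, and the interval with the lowest RMSE wins. Data shapes and settings are illustrative, not those of the paper.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(1)
    X = rng.normal(size=(120, 400))            # 120 spectra, 400 wavelengths
    # Synthetic property depending on a narrow wavelength band:
    y = X[:, 150:160].mean(axis=1) + rng.normal(0, 0.05, 120)

    n_intervals, width = 20, 400 // 20
    rmse = []
    for i in range(n_intervals):
        Xi = X[:, i * width:(i + 1) * width]
        pred = cross_val_predict(PLSRegression(n_components=3), Xi, y, cv=5)
        rmse.append(np.sqrt(np.mean((y - pred.ravel()) ** 2)))

    best = int(np.argmin(rmse))
    print(f"best interval: {best} "
          f"(wavelength indices {best * width}-{(best + 1) * width - 1})")
    ```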

  14. Polynomial elimination theory and non-linear stability analysis for the Euler equations

    NASA Technical Reports Server (NTRS)

    Kennon, S. R.; Dulikravich, G. S.; Jespersen, D. C.

    1986-01-01

    Numerical methods are presented that exploit the polynomial properties of discretizations of the Euler equations. It is noted that most finite difference or finite volume discretizations of the steady-state Euler equations produce a polynomial system of equations to be solved. These equations are solved using classical polynomial elimination theory, with some innovative modifications. This paper also presents some preliminary results of a new non-linear stability analysis technique. This technique is applicable to determining the stability of polynomial iterative schemes. Results are presented for applying the elimination technique to a one-dimensional test case. For this test case, the exact solution is computed in three iterations. The non-linear stability analysis is applied to determine the optimal time step for solving Burgers' equation using the MacCormack scheme. The estimated optimal time step is very close to the time step that arises from a linear stability analysis.
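
    To make the algebraic tool concrete, the sketch below eliminates one unknown from a small polynomial system using a resultant and back-substitutes, which is the classical elimination step referred to above; the two equations form an arbitrary toy system, not an Euler discretization.

    ```python
    from sympy import symbols, resultant, solve

    x, y = symbols("x y")
    f = x**2 + y**2 - 4      # toy polynomial "discretization" equations
    g = x * y - 1

    r = resultant(f, g, y)   # univariate polynomial in x only
    print(r)                 # x**4 - 4*x**2 + 1

    # Back-substitute each root of the eliminant to recover full solutions.
    for xr in solve(r, x):
        for yr in solve(g.subs(x, xr), y):
            print(xr, yr)
    ```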

  15. Optimal GENCO bidding strategy

    NASA Astrophysics Data System (ADS)

    Gao, Feng

    Electricity industries worldwide are undergoing a period of profound upheaval. The conventional vertically integrated mechanism is being replaced by a competitive market environment. Generation companies have incentives to apply novel technologies to lower production costs, for example Combined Cycle units. Economic dispatch with Combined Cycle units becomes a non-convex optimization problem, which is difficult if not impossible to solve by conventional methods. Several techniques are proposed here: Mixed Integer Linear Programming, a hybrid method, and Evolutionary Algorithms. Evolutionary Algorithms share a common mechanism, stochastic searching per generation; this stochastic property makes them robust and adaptive enough to solve a non-convex optimization problem. This research implements Genetic Algorithm (GA), Evolutionary Programming (EP), and Particle Swarm (PS) algorithms for economic dispatch with Combined Cycle units and compares them with classical Mixed Integer Linear Programming. The electricity market equilibrium model not only helps the Independent System Operator/Regulator analyze market performance and market power, but also provides Market Participants with the ability to build optimal bidding strategies based on microeconomic analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models. This research identifies a proper SFE model that can be applied to a multiple-period situation. The equilibrium condition using discrete-time optimal control is then developed for fuel resource constraints. The research then discusses the issues of multiple equilibria and mixed strategies, which are caused by the transmission network, as well as an advantage of the proposed model for merchant transmission planning. A market simulator is a valuable training and evaluation tool to assist sellers, buyers, and regulators in understanding market performance and making better decisions, and a traditional optimization model may not be enough to capture the distributed, large-scale, and complex energy market. This research compares the performance and search paths of artificial life techniques such as GA, EP, and PS, looking for a proper method to emulate Generation Companies' (GENCOs') bidding strategies. After deregulation, GENCOs face risk and uncertainty associated with the fast-changing market environment; a profit-based bidding decision support system is critical for GENCOs to keep a competitive position in the new environment. Most past research does not pay special attention to the piecewise staircase characteristic of generator offer curves. This research proposes an optimal bidding strategy based on Parametric Linear Programming that is able to handle actual piecewise staircase energy offer curves. The proposed method is then extended to incorporate incomplete information based on Decision Analysis. Finally, the author develops an optimal bidding tool (GenBidding) and applies it to the RTS96 test system.
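
    The staircase structure the bidding method exploits is easy to see in a toy dispatch problem: each offer block has a constant price and a capacity, and a linear program fills the cheapest blocks first. The sketch below uses illustrative prices and capacities; sweeping the demand parametrically traces the marginal price, which is the core idea behind a parametric-LP bidding analysis.

    ```python
    from scipy.optimize import linprog

    prices     = [18.0, 22.0, 25.0, 30.0, 34.0]   # $/MWh per block (illustrative)
    capacities = [50.0, 40.0, 30.0, 30.0, 20.0]   # MW per block (illustrative)
    demand     = 120.0                             # MW

    # Minimize total cost subject to meeting demand and block capacities.
    res = linprog(c=prices,
                  A_eq=[[1.0] * len(prices)], b_eq=[demand],
                  bounds=[(0.0, cap) for cap in capacities])
    print("dispatch per block:", res.x)   # cheap blocks fill first
    print("total cost:", res.fun)
    ```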

  16. Damage assessment in reinforced concrete using nonlinear vibration techniques

    NASA Astrophysics Data System (ADS)

    Van Den Abeele, K.; De Visscher, J.

    2000-07-01

    Reinforced concrete (RC) structures are subject to microcrack initiation and propagation at load levels far below the actual failure load. In this paper, nonlinear vibration techniques are applied to investigate stages of progressive damage in RC beams induced by static loading tests. At different levels of damage, a modal analysis is carried out, assuming the structure to behave linearly. At the same time, measurements of resonant frequencies and damping ratios as a function of vibration amplitude are performed using both a frequency domain technique and a time domain technique. We compare the results of the linear and nonlinear techniques and evaluate them against a visual damage assessment.
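
    The amplitude-dependent resonance measurement can be sketched on a simulated softening oscillator: at higher excitation amplitudes the spectral peak of the free decay shifts downward, which is the nonlinear signature tracked above. The decay model and its parameters below are assumptions for illustration, not the paper's test data.

    ```python
    import numpy as np

    fs, T = 2000.0, 4.0
    t = np.arange(0.0, T, 1.0 / fs)

    def free_decay(a0, f0=100.0, alpha=0.08, zeta=0.01):
        """Crude softening model: instantaneous frequency drops with amplitude."""
        env = a0 * np.exp(-zeta * 2 * np.pi * f0 * t)
        f_inst = f0 * (1.0 - alpha * env)      # amplitude-dependent frequency
        phase = 2 * np.pi * np.cumsum(f_inst) / fs
        return env * np.sin(phase)

    # In a linear structure the peak stays put; microcracking shifts it down.
    for a0 in (0.1, 0.5, 1.0):
        sig = free_decay(a0)
        spec = np.abs(np.fft.rfft(sig * np.hanning(sig.size)))
        freqs = np.fft.rfftfreq(sig.size, 1.0 / fs)
        print(f"amplitude {a0}: peak at {freqs[spec.argmax()]:.2f} Hz")
    ```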

  17. Theory of chromatic noise masking applied to testing linearity of S-cone detection mechanisms.

    PubMed

    Giulianini, Franco; Eskew, Rhea T

    2007-09-01

    A method for testing the linearity of cone combination of chromatic detection mechanisms is applied to S-cone detection. This approach uses the concept of mechanism noise, the noise as seen by a postreceptoral neural mechanism, to represent the effects of superposing chromatic noise components in elevating thresholds and leads to a parameter-free prediction for a linear mechanism. The method also provides a test for the presence of multiple linear detectors and off-axis looking. No evidence for multiple linear mechanisms was found when using either S-cone increment or decrement tests. The results for both S-cone test polarities demonstrate that these mechanisms combine their cone inputs nonlinearly.
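
    The additivity prediction at the heart of the method can be stated in a few lines: a linear mechanism responds to the dot product of its cone weights with the stimulus, so the noise variance it sees from superposed chromatic noise components is the sum of the projected variances. The weights and noise directions below are illustrative values, not fitted parameters from the paper.

    ```python
    import numpy as np

    w = np.array([0.1, -0.3, 1.0])   # hypothetical L, M, S weights (S-cone mech)

    def mechanism_noise_var(noise_dirs, noise_vars):
        """Noise variance seen by the mechanism from superposed components."""
        return sum(v * (w @ d) ** 2 for d, v in zip(noise_dirs, noise_vars))

    # Two noise components presented alone and then superposed:
    n1, n2 = np.array([0.7, 0.7, 0.1]), np.array([0.0, 0.0, 1.0])
    v1, v2 = 1.0, 1.0
    alone = [mechanism_noise_var([n], [v]) for n, v in [(n1, v1), (n2, v2)]]
    combined = mechanism_noise_var([n1, n2], [v1, v2])
    print(alone, combined)   # linearity predicts combined = sum of alone values
    # Squared threshold grows with this variance, giving a parameter-free
    # test of linear cone combination.
    ```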

  18. Prediction of pork quality parameters by applying fractals and data mining on MRI.

    PubMed

    Caballero, Daniel; Pérez-Palacios, Trinidad; Caro, Andrés; Amigo, José Manuel; Dahl, Anders B; ErsbØll, Bjarne K; Antequera, Teresa

    2017-09-01

    This work is a first investigation of the use of MRI, fractal algorithms, and data mining techniques to determine pork quality parameters non-destructively. The main objective was to evaluate the capability of fractal algorithms (Classical Fractal Algorithm, CFA; Fractal Texture Algorithm, FTA; and One Point Fractal Texture Algorithm, OPFTA) to analyse MRI in order to predict quality parameters of loin. In addition, the effects of the MRI acquisition sequence (Gradient Echo, GE; Spin Echo, SE; and Turbo 3D, T3D) and of the data mining prediction technique (Isotonic Regression, IR, and Multiple Linear Regression, MLR) were analysed. Both the FTA and OPFTA fractal algorithms are appropriate for analysing MRI of loins. The acquisition sequence, the fractal algorithm, and the data mining technique all seem to influence the prediction results. For most physico-chemical parameters, prediction equations with moderate to excellent correlation coefficients were achieved by using the following combinations of MRI acquisition sequences, fractal algorithms, and data mining techniques: SE-FTA-MLR, SE-OPFTA-IR, GE-OPFTA-MLR, and SE-OPFTA-MLR, with the last one offering the best prediction results. Thus, SE-OPFTA-MLR could be proposed as an alternative technique to determine physico-chemical traits of fresh and dry-cured loins in a non-destructive way with high accuracy. Copyright © 2017. Published by Elsevier Ltd.
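
    A hedged sketch of the prediction pipeline: compute a box-counting fractal dimension per image as a texture feature, then regress a quality trait on it with MLR (and, for a single feature, IR). The box-counting variant shown is a classical one standing in for CFA/FTA/OPFTA, and the images and trait values are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.isotonic import IsotonicRegression

    def box_count_dim(img, threshold=0.5):
        """Classical box-counting fractal dimension of a binarized image."""
        binary = img > threshold
        sizes = [2, 4, 8, 16, 32]
        counts = []
        for s in sizes:
            h, w = binary.shape[0] // s * s, binary.shape[1] // s * s
            blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
            counts.append(blocks.any(axis=(1, 3)).sum())
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    rng = np.random.default_rng(2)
    ps = np.linspace(0.05, 0.9, 40)                 # varying texture density
    imgs = [(rng.random((64, 64)) < p).astype(float) for p in ps]
    X = np.array([[box_count_dim(im)] for im in imgs])  # one feature per image
    y = 2.0 * X.ravel() + rng.normal(0, 0.1, 40)        # synthetic quality trait

    print("MLR R^2:", LinearRegression().fit(X, y).score(X, y))
    ir = IsotonicRegression(out_of_bounds="clip").fit(X.ravel(), y)
    print("IR first predictions:", ir.predict(X.ravel()[:3]))
    ```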

  19. Maximally reliable spatial filtering of steady state visual evoked potentials.

    PubMed

    Dmochowski, Jacek P; Greaves, Alex S; Norcia, Anthony M

    2015-04-01

    Due to their high signal-to-noise ratio (SNR) and robustness to artifacts, steady state visual evoked potentials (SSVEPs) are a popular technique for studying neural processing in the human visual system. SSVEPs are conventionally analyzed at individual electrodes or linear combinations of electrodes which maximize some variant of the SNR. Here we exploit the fundamental assumption of evoked responses--reproducibility across trials--to develop a technique that extracts a small number of high SNR, maximally reliable SSVEP components. This novel spatial filtering method operates on an array of Fourier coefficients and projects the data into a low-dimensional space in which the trial-to-trial spectral covariance is maximized. When applied to two sample data sets, the resulting technique recovers physiologically plausible components (i.e., the recovered topographies match the lead fields of the underlying sources) while drastically reducing the dimensionality of the data (i.e., more than 90% of the trial-to-trial reliability is captured in the first four components). Moreover, the proposed technique achieves a higher SNR than that of the single-best electrode or the Principal Components. We provide a freely-available MATLAB implementation of the proposed technique, herein termed "Reliable Components Analysis". Copyright © 2015 Elsevier Inc. All rights reserved.
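
    The reliability-maximizing filter can be sketched as a generalized eigenvalue problem: maximize the covariance of the trial-averaged response relative to the pooled single-trial covariance. The sketch below works on synthetic time-domain trials rather than the arrays of Fourier coefficients used in the paper, and approximates the method; it is not the authors' released MATLAB implementation.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(3)
    n_trials, n_ch, n_samp = 20, 16, 256
    source = np.sin(2 * np.pi * 7.5 * np.arange(n_samp) / 256)  # reproducible part
    mix = rng.normal(size=n_ch)                                 # forward model
    trials = np.stack([np.outer(mix, source)
                       + rng.normal(0, 1.0, (n_ch, n_samp))
                       for _ in range(n_trials)])

    mean_trial = trials.mean(axis=0)
    R_sig = mean_trial @ mean_trial.T                   # reproducible covariance
    R_tot = sum(tr @ tr.T for tr in trials) / n_trials  # signal + noise covariance

    evals, evecs = eigh(R_sig, R_tot)      # generalized eigendecomposition
    w = evecs[:, -1]                       # most reliable spatial filter
    print("reliability (top eigenvalue):", evals[-1])
    component = np.tensordot(trials, w, axes=(1, 0))    # project each trial
    ```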

  20. Techniques of stapler-based navigational thoracoscopic segmentectomy using virtual assisted lung mapping (VAL-MAP)

    PubMed Central

    Murayama, Tomonori; Nakajima, Jun

    2016-01-01

    Anatomical segmentectomies play an important role in oncological lung resection, particularly for ground-glass types of primary lung cancers. The operation can also be applied to metastatic lung tumors deep in the lung. Virtual assisted lung mapping (VAL-MAP) is a novel technique that uses three-dimensional virtual images to place bronchoscopic multi-spot dye markings, providing “geometric information” on the lung surface. In addition to wedge resections, VAL-MAP has been found to be useful in thoracoscopic segmentectomies, particularly complex segmentectomies such as combined subsegmentectomies or extended segmentectomies. There are five steps in VAL-MAP-assisted segmentectomies: (I) “standing” stitches along the resection lines; (II) cleaning the hilar anatomy; (III) confirming the hilar anatomy; (IV) going 1 cm deeper; (V) a step-by-step stapling technique. Depending on the anatomy, segmentectomies can be classified into linear (lingular, S6, S2), V- or U-shaped (right S1, left S3, S2b + S3a), and three-dimensional (S7, S8, S9, S10) segmentectomies. Three-dimensional segmentectomies in particular are challenging because of the complexity of the stapling techniques required. This review focuses on how VAL-MAP can be utilized in segmentectomy and how it can assist the stapling process in even the most challenging cases. PMID:28066675
