Sample records for method showed linearity

  1. Pseudo-second order models for the adsorption of safranin onto activated carbon: comparison of linear and non-linear regression methods.

    PubMed

    Kumar, K Vasanth

    2007-04-02

    Kinetic experiments were carried out for the sorption of safranin onto activated carbon particles. The kinetic data were fitted to the pseudo-second-order models of Ho, of Sobkowsk and Czerwinski, of Blanchard et al. and of Ritchie by linear and non-linear regression methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second-order rate expressions. Both linear and non-linear regression showed that the Sobkowsk and Czerwinski and the Ritchie pseudo-second-order models were the same. Non-linear regression analysis showed that Blanchard et al. and Ho express similar ideas on the pseudo-second-order model but with different assumptions. The best fit of the experimental data to Ho's expression by both linear and non-linear regression showed that the Ho pseudo-second-order model was a better kinetic expression than the other pseudo-second-order expressions.
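
    To make the comparison concrete, here is a minimal sketch (synthetic data; qe_true, k_true and the noise level are illustrative assumptions, not values from the paper) of fitting Ho's pseudo-second-order model q(t) = k·qe²·t/(1 + k·qe·t) by the type-1 linearisation and by non-linear least squares:

```python
import numpy as np

# Ho's pseudo-second-order model: q(t) = k*qe^2*t / (1 + k*qe*t)
def pso(t, qe, k):
    return k * qe**2 * t / (1.0 + k * qe * t)

# Hypothetical synthetic data; qe_true and k_true are illustrative only
qe_true, k_true = 40.0, 0.01
rng = np.random.default_rng(0)
t = np.linspace(1.0, 120.0, 25)
q = pso(t, qe_true, k_true) * (1.0 + 0.02 * rng.standard_normal(t.size))

# Linear method (type-1 linearisation): t/q = 1/(k*qe^2) + t/qe
slope, intercept = np.polyfit(t, t / q, 1)
qe_lin = 1.0 / slope
k_lin = 1.0 / (intercept * qe_lin**2)

# Non-linear method: Gauss-Newton on the untransformed model,
# seeded with the linear estimates
qe_nl, k_nl = qe_lin, k_lin
for _ in range(30):
    r = pso(t, qe_nl, k_nl) - q              # residuals
    d = 1.0 + k_nl * qe_nl * t
    J = np.column_stack([k_nl * qe_nl * t * (2.0 + k_nl * qe_nl * t) / d**2,
                         qe_nl**2 * t / d**2])   # Jacobian wrt (qe, k)
    step = np.linalg.lstsq(J, -r, rcond=None)[0]
    qe_nl, k_nl = qe_nl + step[0], k_nl + step[1]
```

    On noisy data the two routes generally give different estimates because the linearisation reweights the errors; seeding the non-linear fit with the linear estimates, as above, is a common practical compromise.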

  2. Isotherm investigation for the sorption of fluoride onto Bio-F: comparison of linear and non-linear regression method

    NASA Astrophysics Data System (ADS)

    Yadav, Manish; Singh, Nitin Kumar

    2017-12-01

    A comparison of the linear and non-linear regression methods in selecting the optimum isotherm among the three most commonly used adsorption isotherms (Langmuir, Freundlich, and Redlich-Peterson) was made using the experimental data of fluoride (F) sorption onto Bio-F at a solution temperature of 30 ± 1 °C. The coefficient of correlation (r2) was used to select the best theoretical isotherm among those investigated. A total of four linear Langmuir equations were discussed, of which the two most popular forms, Langmuir-1 and Langmuir-2, showed higher coefficients of determination (0.976 and 0.989) than the other linear Langmuir equations. The Freundlich and Redlich-Peterson isotherms showed a better fit to the experimental data with the linear least-squares method, while with the non-linear method the Redlich-Peterson isotherm showed the best fit to the tested data set. The present study showed that the non-linear method can be a better way to obtain the isotherm parameters and to identify the most suitable isotherm. The Redlich-Peterson isotherm was found to be the best representative (r2 = 0.999) for this sorption system. It was also observed that the values of β are not close to unity, which means the isotherms approach the Freundlich rather than the Langmuir isotherm.
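
    The point that different linear Langmuir forms can yield different parameters from the same data can be sketched as follows (hypothetical equilibrium data; qm_true, b_true and the noise level are assumptions for illustration only):

```python
import numpy as np

def langmuir(Ce, qm, b):
    # Langmuir isotherm: qe = qm*b*Ce / (1 + b*Ce)
    return qm * b * Ce / (1.0 + b * Ce)

# Hypothetical equilibrium data (qm_true, b_true are illustrative only)
qm_true, b_true = 50.0, 0.2
rng = np.random.default_rng(1)
Ce = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
qe = langmuir(Ce, qm_true, b_true) * (1.0 + 0.03 * rng.standard_normal(Ce.size))

# Langmuir-1: Ce/qe = Ce/qm + 1/(qm*b)
s1, i1 = np.polyfit(Ce, Ce / qe, 1)
qm1, b1 = 1.0 / s1, s1 / i1

# Langmuir-2: 1/qe = (1/(qm*b))*(1/Ce) + 1/qm
s2, i2 = np.polyfit(1.0 / Ce, 1.0 / qe, 1)
qm2, b2 = 1.0 / i2, i2 / s2
```

    The two transformations weight the measurement errors differently (Langmuir-2 amplifies errors at low Ce), which is why the linearised estimates disagree while a non-linear fit of the untransformed model has no such bias.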

  3. Second-order kinetic model for the sorption of cadmium onto tree fern: a comparison of linear and non-linear methods.

    PubMed

    Ho, Yuh-Shan

    2006-01-01

    A comparison was made of the linear least-squares method and a trial-and-error non-linear method for the widely used pseudo-second-order kinetic model for the sorption of cadmium onto ground-up tree fern. Four pseudo-second-order linear kinetic equations are discussed. Kinetic parameters obtained from the four linear equations using the linear method differed, but they were the same when using the non-linear method. The type 1 pseudo-second-order linear kinetic model had the highest coefficient of determination. Results show that the non-linear method may be a better way to obtain the desired parameters.

  4. Pseudo second order kinetics and pseudo isotherms for malachite green onto activated carbon: comparison of linear and non-linear regression methods.

    PubMed

    Kumar, K Vasanth; Sivanesan, S

    2006-08-25

    The pseudo-second-order kinetic expressions of Ho, of Sobkowsk and Czerwinski, of Blanchard et al. and of Ritchie were fitted to the experimental kinetic data of malachite green onto activated carbon by non-linear and linear methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second-order rate expressions. Both linear and non-linear regression showed that the Sobkowsk and Czerwinski and the Ritchie pseudo-second-order models were the same. Non-linear regression analysis showed that Blanchard et al. and Ho express similar ideas on the pseudo-second-order model but with different assumptions. The best fit of the experimental data to Ho's expression by both linear and non-linear regression showed that the Ho pseudo-second-order model was a better kinetic expression than the other pseudo-second-order expressions. The amount of dye adsorbed at equilibrium, qe, was predicted from the Ho pseudo-second-order expression and fitted to the Langmuir, Freundlich and Redlich-Peterson expressions by both linear and non-linear methods to obtain the pseudo isotherms. The best-fitting pseudo isotherms were found to be the Langmuir and Redlich-Peterson isotherms. The Redlich-Peterson isotherm is a special case of the Langmuir isotherm when the constant g equals unity.

  5. Comparison of linear and non-linear method in estimating the sorption isotherm parameters for safranin onto activated carbon.

    PubMed

    Kumar, K Vasanth; Sivanesan, S

    2005-08-31

    A comparison of the linear least-squares method and the non-linear method for estimating isotherm parameters was made using the experimental equilibrium data of safranin onto activated carbon at two solution temperatures, 305 and 313 K. Equilibrium data were fitted to the Freundlich, Langmuir and Redlich-Peterson isotherm equations. All three isotherm equations fitted the experimental equilibrium data well. The results showed that the non-linear method can be a better way to obtain the isotherm parameters. The Redlich-Peterson isotherm is a special case of the Langmuir isotherm when the Redlich-Peterson constant g equals unity.

  6. Prediction of optimum sorption isotherm: comparison of linear and non-linear method.

    PubMed

    Kumar, K Vasanth; Sivanesan, S

    2005-11-11

    Equilibrium parameters for Bismarck brown onto rice husk were estimated by the linear least-squares method and a trial-and-error non-linear method using the Freundlich, Langmuir and Redlich-Peterson isotherms. A comparison between the linear and non-linear methods of estimating the isotherm parameters is reported. The best-fitting isotherms were the Langmuir and Redlich-Peterson equations. The results show that the non-linear method can be a better way to obtain the parameters. The Redlich-Peterson isotherm is a special case of the Langmuir isotherm when the Redlich-Peterson constant g equals unity.

  7. Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brabec, Jiri; Lin, Lin; Shao, Meiyue

    We present two iterative algorithms for approximating the absorption spectrum of molecules within the linear response time-dependent density functional theory (TDDFT) framework. These methods do not attempt to compute eigenvalues or eigenvectors of the linear response matrix. They are designed to approximate the absorption spectrum as a function directly, and they take advantage of the special structure of the linear response matrix. Neither method requires the linear response matrix to be constructed explicitly; they only require a procedure that performs the multiplication of the linear response matrix with a vector. These methods can also be easily modified to efficiently estimate the density of states (DOS) of the linear response matrix without computing its eigenvalues. We show by computational experiments that the proposed methods can be much more efficient than methods based on the exact diagonalization of the linear response matrix, and that they can also be more efficient than real-time TDDFT simulations. We compare the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost.
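
    The general idea of estimating spectral information from matrix-vector products alone can be illustrated with a generic Lanczos sketch (this is not the authors' algorithm; the symmetric test matrix and step count are assumptions for illustration):

```python
import numpy as np

def lanczos(matvec, v0, m):
    # m-step Lanczos tridiagonalisation: the matrix is touched only
    # through matrix-vector products, never formed or diagonalised
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros_like(v)
    for j in range(m):
        w = matvec(v)
        alpha[j] = v @ w
        w = w - alpha[j] * v - (beta[j - 1] * v_prev if j > 0 else 0.0)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            v_prev, v = v, w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 200))
A = (A + A.T) / 2.0                       # symmetric test matrix (illustrative)
T = lanczos(lambda x: A @ x, rng.standard_normal(200), 40)
ritz = np.linalg.eigvalsh(T)              # Ritz values approximate A's spectrum
```

    Diagonalising the small tridiagonal T (40×40) gives Ritz values whose extremes converge quickly to the edges of A's spectrum, which is the basic mechanism behind matvec-only spectrum and DOS estimators.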

  8. A feasible DY conjugate gradient method for linear equality constraints

    NASA Astrophysics Data System (ADS)

    LI, Can

    2017-09-01

    In this paper, we propose a feasible conjugate gradient method for solving linear equality constrained optimization problems. The method extends the Dai-Yuan conjugate gradient method to linear equality constrained optimization. It can be applied to large linear equality constrained problems owing to its low storage requirements. An attractive property of the method is that the generated direction is always a feasible descent direction. Under mild conditions, the global convergence of the proposed method with exact line search is established. Numerical experiments showing the efficiency of the method are also given.
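
    A minimal sketch of the underlying idea, assuming a convex quadratic objective and using a null-space projector to keep every iterate feasible, with the Dai-Yuan β formula (this is an illustrative sketch, not the authors' implementation):

```python
import numpy as np

# Minimise 0.5*x'Qx - c'x subject to Ax = b. Gradients are projected
# onto the null space of A, so every search direction is feasible.
rng = np.random.default_rng(3)
n, m = 20, 5
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)                          # SPD Hessian (illustrative)
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)    # null-space projector
x = A.T @ np.linalg.solve(A @ A.T, b)                # feasible starting point
pg = P @ (Q @ x - c)                                 # projected gradient
d = -pg
for _ in range(2 * n):
    if np.linalg.norm(pg) < 1e-10:
        break
    Qd = Q @ d
    alpha = -(pg @ d) / (d @ Qd)                     # exact line search
    x = x + alpha * d
    pg_new = P @ (Q @ x - c)
    beta = (pg_new @ pg_new) / (d @ (pg_new - pg))   # Dai-Yuan beta
    d = -pg_new + beta * d
    pg = pg_new
```

    Since A·d = 0 for every direction, feasibility is preserved exactly; only the projector (or an equivalent sparse factorization) needs to be stored, which is what makes this class of methods attractive for large problems.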

  9. New Results on the Linear Equating Methods for the Non-Equivalent-Groups Design

    ERIC Educational Resources Information Center

    von Davier, Alina A.

    2008-01-01

    The two most common observed-score equating functions are the linear and equipercentile functions. These are often seen as different methods, but von Davier, Holland, and Thayer showed that any equipercentile equating function can be decomposed into linear and nonlinear parts. They emphasized the dominant role of the linear part of the nonlinear…

  10. Evaluating convex roof entanglement measures.

    PubMed

    Tóth, Géza; Moroder, Tobias; Gühne, Otfried

    2015-04-24

    We show a powerful method to compute entanglement measures based on convex roof constructions. In particular, our method is applicable to measures that, for pure states, can be written as low order polynomials of operator expectation values. We show how to compute the linear entropy of entanglement, the linear entanglement of assistance, and a bound on the dimension of the entanglement for bipartite systems. We discuss how to obtain the convex roof of the three-tangle for three-qubit states. We also show how to calculate the linear entropy of entanglement and the quantum Fisher information based on partial information or device independent information. We demonstrate the usefulness of our method by concrete examples.

  11. Morphology filter bank for extracting nodular and linear patterns in medical images.

    PubMed

    Hashimoto, Ryutaro; Uchiyama, Yoshikazu; Uchimura, Keiichi; Koutaki, Gou; Inoue, Tomoki

    2017-04-01

    Using image processing to extract nodular or linear shadows is a key technique in computer-aided diagnosis schemes. This study proposes a new method for extracting nodular and linear patterns of various sizes in medical images. We have developed a morphology filter bank that creates multiresolution representations of an image. The analysis bank of this filter bank produces nodular and linear patterns at each resolution level, and the synthesis bank can then perfectly reconstruct the original image from these decomposed patterns. Our proposed method shows better performance, in a quantitative evaluation using a synthesized image, than a conventional method based on the Hessian matrix, which is often used to enhance nodular and linear patterns. In addition, experiments show that our method can be applied as follows: (1) microcalcifications of various sizes can be extracted from mammograms, (2) blood vessels of various sizes can be extracted from retinal fundus images, and (3) thoracic CT images can be reconstructed with normal vessels removed. Our proposed method is useful for extracting nodular and linear shadows or removing normal structures in medical images.
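
    As a rough illustration of morphology-based extraction (a classical white top-hat, not the authors' filter bank), bright nodular structures smaller than the structuring element can be isolated like this; the synthetic image and element size are assumptions:

```python
import numpy as np

def grey_erode(img, size):
    # minimum filter with a size x size square structuring element
    pad = size // 2
    p = np.pad(img, pad, mode='edge')
    stacks = [p[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(size) for j in range(size)]
    return np.min(stacks, axis=0)

def grey_dilate(img, size):
    # maximum filter with a size x size square structuring element
    pad = size // 2
    p = np.pad(img, pad, mode='edge')
    stacks = [p[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(size) for j in range(size)]
    return np.max(stacks, axis=0)

def top_hat(img, size):
    # opening (erosion then dilation) removes bright structures smaller
    # than the element; subtracting it isolates them (white top-hat)
    return img - grey_dilate(grey_erode(img, size), size)

# synthetic image: flat background with one small bright nodule
img = np.zeros((32, 32))
img[15:18, 15:18] = 1.0                # 3x3 nodule
nodules = top_hat(img, 7)              # 7x7 element, larger than the nodule
```

    Running the same operation at several element sizes gives a crude multiresolution decomposition; the paper's filter bank refines this idea so that the original image is perfectly reconstructible from the decomposed patterns.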

  12. Key-Generation Algorithms for Linear Piece In Hand Matrix Method

    NASA Astrophysics Data System (ADS)

    Tadaki, Kohtaro; Tsujii, Shigeo

    The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription applicable to any multivariate public-key cryptosystem (MPKC) for the purpose of enhancing its security. We showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE, one of the major variants of MPKCs, against the Gröbner basis attack. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which was introduced in our previous work to explain the notion of the PH matrix method in an illustrative manner, not for practical use in enhancing the security of a given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has a substantial advantage over the plus method with respect to security enhancement. In the linear PH matrix method with random variables, three matrices, including the PH matrix, play a central role in the secret key and public key. In this paper, we clarify how to generate these matrices and present two probabilistic polynomial-time algorithms for doing so. The second algorithm in particular has a concise form and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.

  13. On a new iterative method for solving linear systems and comparison results

    NASA Astrophysics Data System (ADS)

    Jing, Yan-Fei; Huang, Ting-Zhu

    2008-10-01

    In Ujevic [A new iterative method for solving linear systems, Appl. Math. Comput. 179 (2006) 725-730], the author obtained a new iterative method for solving linear systems, which can be considered a modification of the Gauss-Seidel method. In this paper, we show that this method is a special case of a projection technique, and we establish a different approach that is both theoretically and numerically proven to be at least as good as Ujevic's. As the presented numerical examples show, in most cases the convergence rate is more than one and a half times that of Ujevic's method.
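
    For reference, the basic Gauss-Seidel sweep that such methods modify can be sketched as follows (the diagonally dominant test system is illustrative, not from the paper):

```python
import numpy as np

def gauss_seidel(A, b, x0, iters):
    # sweep through the equations, using updated components immediately
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(iters):
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]    # off-diagonal contribution
            x[i] = (b[i] - s) / A[i, i]
    return x

# diagonally dominant test system (hypothetical example)
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
x = gauss_seidel(A, b, np.zeros(3), 50)
```

    Viewing each inner update as a one-dimensional projection of the residual is exactly the perspective that lets such sweeps be generalized and compared within a single projection-technique framework.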

  14. Non-linear dual-phase-lag model for analyzing heat transfer phenomena in living tissues during thermal ablation.

    PubMed

    Kumar, P; Kumar, Dinesh; Rai, K N

    2016-08-01

    In this article, a non-linear dual-phase-lag (DPL) bio-heat transfer model based on a temperature-dependent metabolic heat generation rate is derived to analyze heat transfer phenomena in living tissues during thermal ablation treatment. The numerical solution of the present non-linear problem is obtained by a finite element Runge-Kutta (4,5) method, which combines the Runge-Kutta (4,5) method with a finite difference scheme. Our study demonstrates that, at the thermal ablation position, the temperatures predicted by the non-linear and linear DPL models show significant differences. A comparison among the non-linear DPL, thermal wave and Pennes models shows that the non-linear DPL and thermal wave bio-heat models behave almost identically, whereas the non-linear Pennes model shows a significantly different temperature profile at the initial stage of thermal ablation treatment. The effects of the Fourier number and the Vernotte number (relaxation Fourier number) on the temperature profile, in the presence and absence of an externally applied heat source, are studied in detail, and it is observed that the externally applied heat source term strongly affects the efficiency of the thermal treatment method.

  15. Single Image Super-Resolution Using Global Regression Based on Multiple Local Linear Mappings.

    PubMed

    Choi, Jae-Seok; Kim, Munchurl

    2017-03-01

    Super-resolution (SR) has become more vital because of its capability to generate high-quality ultra-high-definition (UHD) high-resolution (HR) images from low-resolution (LR) input images. Conventional SR methods entail high computational complexity, which makes them difficult to implement for up-scaling full-high-definition input images to UHD resolution. Our previous super-interpolation (SI) method showed a good compromise between peak signal-to-noise ratio (PSNR) performance and computational complexity; however, since SI only utilizes simple linear mappings, it may fail to precisely reconstruct HR patches with complex texture. In this paper, we present a novel SR method which inherits the large-to-small patch conversion scheme from SI but uses global regression based on local linear mappings (GLM); we therefore call it GLM-SI. In GLM-SI, each LR input patch is divided into 25 overlapped subpatches. Next, based on the local properties of these subpatches, 25 different local linear mappings are applied to the current LR input patch to generate 25 HR patch candidates, which are then regressed into one final HR patch using a global regressor. The local linear mappings are learned cluster-wise in an off-line training phase. The main contribution of this paper is as follows: previous linear-mapping-based SR methods, including SI, applied only one coarse linear mapping to each patch to reconstruct its HR version; in contrast, for each LR input patch, GLM-SI is the first to apply a combination of multiple local linear mappings, each found according to the local properties of the current LR patch. It can therefore better approximate nonlinear LR-to-HR mappings for HR patches with complex texture. Experimental results show that the proposed GLM-SI method outperforms most state-of-the-art methods and shows comparable PSNR performance with much lower computational complexity than a super-resolution method based on convolutional neural networks (SRCNN15). Compared with the previous SI method, which is limited to a scale factor of 2, GLM-SI shows superior performance, with an average PSNR gain of 0.79 dB, and can be used for scale factors of 3 or higher.

  16. The numerical solution of linear multi-term fractional differential equations: systems of equations

    NASA Astrophysics Data System (ADS)

    Edwards, John T.; Ford, Neville J.; Simpson, A. Charles

    2002-11-01

    In this paper, we show how the numerical approximation of the solution of a linear multi-term fractional differential equation can be calculated by reducing the problem to a system of ordinary and fractional differential equations, each of order at most unity. We begin by showing how our method applies to a simple class of problems and give a convergence result. We solve the Bagley-Torvik equation as an example. We then show how the method can be applied to a general linear multi-term equation and give two further examples.

  17. Comparison results on preconditioned SOR-type iterative method for Z-matrices linear systems

    NASA Astrophysics Data System (ADS)

    Wang, Xue-Zhong; Huang, Ting-Zhu; Fu, Ying-Ding

    2007-09-01

    In this paper, we present some comparison theorems on preconditioned iterative methods for solving linear systems with Z-matrices. The comparison results show that the rate of convergence of the Gauss-Seidel-type method is faster than that of the SOR-type iterative method.

  18. Making chaotic behavior in a damped linear harmonic oscillator

    NASA Astrophysics Data System (ADS)

    Konishi, Keiji

    2001-06-01

    This Letter proposes a simple control method which induces chaotic behavior in a damped linear harmonic oscillator. The method is a modified version of the scheme proposed by Wang and Chen (IEEE CAS-I 47 (2000) 410), which presents an anti-control method for creating chaotic behavior in discrete-time linear systems. We provide a systematic procedure for designing the parameters and sampling period of a feedback controller, and we show that our method works well in numerical simulations.

  19. The Multiple Correspondence Analysis Method and Brain Functional Connectivity: Its Application to the Study of the Non-linear Relationships of Motor Cortex and Basal Ganglia.

    PubMed

    Rodriguez-Sabate, Clara; Morales, Ingrid; Sanchez, Alberto; Rodriguez, Manuel

    2017-01-01

    The complexity of basal ganglia (BG) interactions is often condensed into simple models, mainly based on animal data, that present the BG in closed-loop cortico-subcortical circuits of excitatory/inhibitory pathways which analyze incoming cortical data and return the processed information to the cortex. This study aimed to identify functional relationships in the BG motor loop of 24 healthy subjects who provided written informed consent and whose BOLD activity was recorded by MRI methods. The analysis of the functional interaction between these centers by correlation techniques and multiple linear regression revealed non-linear relationships which cannot be suitably addressed with these methods. Multiple correspondence analysis (MCA), an unsupervised multivariable procedure which can identify non-linear interactions, was therefore used to study the functional connectivity of the BG with subjects at rest. Linear methods showed the functional interactions expected according to current BG models. MCA showed additional functional interactions which were not evident with linear methods. Seven functional configurations of the BG were identified with MCA: two involving the primary motor and somatosensory cortex, one involving the deepest BG (external and internal globus pallidus, subthalamic nucleus and substantia nigra), one involving the input-output BG centers (putamen and motor thalamus), two linking the input-output centers with other BG (external pallidum and subthalamic nucleus), and one linking the external pallidum and the substantia nigra. The results provide evidence that non-linear MCA and linear methods are complementary and are best used in conjunction to more fully understand the nature of the functional connectivity of brain centers.

  20. An extended GS method for dense linear systems

    NASA Astrophysics Data System (ADS)

    Niki, Hiroshi; Kohno, Toshiyuki; Abe, Kuniyoshi

    2009-09-01

    Davey and Rosindale [K. Davey, I. Rosindale, An iterative solution scheme for systems of boundary element equations, Internat. J. Numer. Methods Engrg. 37 (1994) 1399-1411] derived the GSOR method, which uses an upper triangular matrix [Omega] in order to solve dense linear systems. By applying functional analysis, the authors presented an expression for the optimum [Omega]. Moreover, Davey and Bounds [K. Davey, S. Bounds, A generalized SOR method for dense linear systems of boundary element equations, SIAM J. Comput. 19 (1998) 953-967] also introduced further interesting results. In this note, we employ a matrix analysis approach to investigate these schemes, and derive theorems that compare these schemes with existing preconditioners for dense linear systems. We show that the convergence rate of the Gauss-Seidel method with preconditioner PG is superior to that of the GSOR method. Moreover, we define some splittings associated with the iterative schemes. Some numerical examples are reported to confirm the theoretical analysis. We show that the EGS method with preconditioner produces an extremely small spectral radius in comparison with the other schemes considered.

  1. Development and Validation of High-performance Thin Layer Chromatographic Method for Ursolic Acid in Malus domestica Peel

    PubMed Central

    Nikam, P. H.; Kareparamban, J. A.; Jadhav, A. P.; Kadam, V. J.

    2013-01-01

    Ursolic acid, a pentacyclic triterpenoid, possesses a wide range of pharmacological activities; it shows hypoglycemic, antiandrogenic, antibacterial, antiinflammatory, antioxidant, diuretic and cynogenic activity. It is commonly present in plants, especially in the coating of leaves and fruits, such as apple fruit, vinca leaves, rosemary leaves, and eucalyptus leaves. A simple high-performance thin layer chromatographic method has been developed for the quantification of ursolic acid in apple peel (Malus domestica). The samples were dissolved in methanol, and linear ascending development was carried out in a twin-trough glass chamber. The mobile phase was toluene:ethyl acetate:glacial acetic acid (70:30:2). The linear regression analysis data for the calibration plots showed a good linear relationship, with r2 = 0.9982 over the concentration range 0.2-7 μg/spot with respect to peak area. The method was validated for linearity, accuracy, precision, and robustness according to the ICH guidelines. Statistical analysis of the data showed that the method is reproducible and selective for the estimation of ursolic acid. PMID:24302805

  2. Comparison between Two Linear Supervised Learning Machines' Methods with Principle Component Based Methods for the Spectrofluorimetric Determination of Agomelatine and Its Degradants.

    PubMed

    Elkhoudary, Mahmoud M; Naguib, Ibrahim A; Abdel Salam, Randa A; Hadad, Ghada M

    2017-05-01

    Four accurate, sensitive and reliable stability-indicating chemometric methods were developed for the quantitative determination of Agomelatine (AGM), whether in pure form or in pharmaceutical formulations. Two supervised learning machine methods, linear artificial neural networks preceded by principal component analysis (PC-linANN) and linear support vector regression (linSVR), were compared with two principal-component-based methods, principal component regression (PCR) and partial least squares (PLS), for the spectrofluorimetric determination of AGM and its degradants. The results showed the benefits of using linear learning machine methods and the inherent merits of their algorithms in handling overlapped noisy spectral data, especially during the challenging determination of the AGM alkaline and acidic degradants (DG1 and DG2). The relative mean squared errors of prediction (RMSEP) for the proposed models in the determination of AGM were 1.68, 1.72, 0.68 and 0.22 for PCR, PLS, linSVR and PC-linANN, respectively. The results showed the superiority of the supervised learning machine methods over the principal-component-based methods, and suggested that linANN is the method of choice for the determination of components present in low amounts with similar overlapped spectra and a narrow linearity range. Comparison between the proposed chemometric models and a reported HPLC method revealed the comparable performance and quantification power of the proposed models.

  3. SU-C-207B-06: Comparison of Registration Methods for Modeling Pathologic Response of Esophageal Cancer to Chemoradiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riyahi, S; Choi, W; Bhooshan, N

    2016-06-15

    Purpose: To compare linear and deformable registration methods for the evaluation of tumor response to chemoradiation therapy (CRT) in patients with esophageal cancer. Methods: Linear and multi-resolution BSpline deformable registration were performed on pre- and post-CRT CT/PET images of 20 patients with esophageal cancer. For both registration methods, CT was registered using the Mean Square Error (MSE) metric; for PET, being a different modality, we used the transformation obtained with Mutual Information (MI) from the corresponding CT. The similarity of the warped CT/PET was quantitatively evaluated using Normalized Mutual Information (NMI), and the plausibility of the deformation field was assessed using the inverse consistency error. To evaluate tumor response, four groups of tumor features were examined: (1) conventional PET/CT features, e.g. SUV and diameter; (2) clinical parameters, e.g. TNM stage and histology; (3) spatial-temporal PET features that describe the intensity, texture and geometry of the tumor; and (4) all features combined. Dominant features were identified using 10-fold cross-validation, and a Support Vector Machine (SVM) was deployed for tumor response prediction, with accuracy evaluated by the ROC Area Under the Curve (AUC). Results: The mean and standard deviation of NMI for deformable registration with the MSE metric were 0.2±0.054, versus 0.1±0.026 for linear registration, showing higher NMI for deformable registration. Likewise for the MI metric, deformable registration gave 0.13±0.035 compared to 0.12±0.037 for its linear counterpart. The inverse consistency error with the MSE metric was 4.65±2.49 for deformable registration and 1.32±2.3 for linear registration, i.e. smaller for linear registration; the same conclusion was obtained for MI. The AUC for both linear and deformable registration was 1, showing no difference in terms of response evaluation. Conclusion: Deformable registration showed better NMI than linear registration, but the inverse consistency of the transformation was better for linear registration. We do not expect a significant difference when warping PET images using deformable or linear registration. This work was supported in part by National Cancer Institute Grant R01CA172638.

  4. Adjustment of Adaptive Gain with Bounded Linear Stability Analysis to Improve Time-Delay Margin for Metrics-Driven Adaptive Control

    NASA Technical Reports Server (NTRS)

    Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje Srinvas

    2009-01-01

    This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method to metrics-driven adaptive control. The BLSA method is used for analyzing the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by stability metrics to achieve robustness. By applying the BLSA method, the adaptive gain is adjusted during adaptation in order to meet certain phase margin requirements. Metrics-driven adaptive control is analyzed for a linear damaged twin-engine generic transport aircraft model. The analysis shows that the system with the adjusted adaptive gain becomes more robust to unmodeled dynamics and time delay.

  5. The method of perturbation-harmonic balance for analysing nonlinear free vibration of MDOF systems and structures

    NASA Astrophysics Data System (ADS)

    Tang, Qiangang; Sun, Shixian

    1992-03-01

    In this paper, the perturbation technique is introduced into the method of harmonic balance. A new method used for analyzing nonlinear free vibration of multidegree-of-freedom systems and structures is obtained. The form of solution is expanded into a series of small parameters and harmonics, so no term will be lost in the solution and the algebraic equations are linear. With the linear transformations, the matrices of the equations become diagonal. As soon as the modes related to linear vibration are found, the solution can be obtained. This method is superior to the method of linearized iteration. The examples show that the method has high accuracy for small-amplitude problems and the results for rather large amplitudes are satisfactory.

  6. Linear and non-linear regression analysis for the sorption kinetics of methylene blue onto activated carbon.

    PubMed

    Kumar, K Vasanth

    2006-10-11

    Batch kinetic experiments were carried out for the sorption of methylene blue onto activated carbon. The experimental kinetics were fitted to pseudo-first-order and pseudo-second-order kinetics by linear and non-linear methods. Five different linearized types of the Ho pseudo-second-order expression are discussed. A comparison of the linear least-squares method and a trial-and-error non-linear method for estimating the pseudo-second-order rate parameters was examined. The sorption process was found to follow both pseudo-first-order and pseudo-second-order kinetic models. The present investigation showed that it is inappropriate to use the type 1 pseudo-second-order expression of Ho and the expression of Blanchard et al. for predicting the kinetic rate constants and the initial sorption rate for the studied system. Three possible alternate linear expressions (type 2 to type 4) that better predict the initial sorption rate and kinetic rate constants for the studied system (methylene blue/activated carbon) were proposed. The linear method was found only to check the hypothesis rather than verify the kinetic model, and non-linear regression was found to be the more appropriate method for determining the rate parameters.

  7. Verification of spectrophotometric method for nitrate analysis in water samples

    NASA Astrophysics Data System (ADS)

    Kurniawati, Puji; Gusrianti, Reny; Dwisiwi, Bledug Bernanti; Purbaningtias, Tri Esti; Wiyantoko, Bayu

    2017-12-01

    The aim of this research was to verify a spectrophotometric method for analyzing nitrate in water samples using the APHA 2012 Section 4500 NO3-B method. The verification parameters were linearity, method detection limit, limit of quantitation, level of linearity, accuracy and precision. Linearity was assessed using 0 to 50 mg/L nitrate standard solutions, and the correlation coefficient of the linear calibration regression was 0.9981. The method detection limit (MDL) was 0.1294 mg/L and the limit of quantitation (LOQ) was 0.4117 mg/L. The level of linearity (LOL) was 50 mg/L, and nitrate concentrations from 10 to 50 mg/L were linear at a 99% level of confidence. The accuracy, determined as a recovery value, was 109.1907%. The precision, expressed as the percent relative standard deviation (%RSD) of repeatability, was 1.0886%. The tested performance criteria showed that the method was verified under the laboratory conditions.
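
    The two core verification statistics above, calibration linearity (correlation coefficient r) and repeatability (%RSD), can be sketched as follows; the calibration and replicate values here are invented, not the study's data.

```python
import numpy as np

# Invented calibration standards (mg/L) and instrument responses.
conc = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
absorb = np.array([0.002, 0.105, 0.211, 0.318, 0.420, 0.531])

slope, intercept = np.polyfit(conc, absorb, 1)
r = np.corrcoef(conc, absorb)[0, 1]     # calibration correlation coefficient

# Invented replicate measurements of one sample for repeatability.
replicates = np.array([9.95, 10.10, 10.02, 9.88, 10.05, 10.12, 9.97])
rsd = 100.0 * replicates.std(ddof=1) / replicates.mean()   # %RSD

print(round(r, 4), round(rsd, 2))
```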

  8. Testing for nonlinearity in non-stationary physiological time series.

    PubMed

    Guarín, Diego; Delgado, Edilson; Orozco, Álvaro

    2011-01-01

    Testing for nonlinearity is one of the most important preprocessing steps in nonlinear time series analysis. Typically, this is done by means of linear surrogate data methods, but it is well known that the validity of the results depends heavily on the stationarity of the time series. Since most physiological signals are non-stationary, it is easy to falsely detect nonlinearity using linear surrogate data methods. In this paper, we propose a methodology that extends the procedure for generating constrained surrogate time series in order to assess nonlinearity in non-stationary data. The method is based on band-phase-randomized surrogates, which (contrary to linear surrogate data methods) randomize only a portion of the Fourier phases, namely those in the high-frequency band. Analysis of simulated time series showed that, in comparison to the linear surrogate data method, our method is able to discriminate between linear stationary, linear non-stationary and nonlinear time series. Applying our methodology to heart rate variability (HRV) records of five healthy subjects, we found that nonlinear correlations are present in these non-stationary physiological signals.
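
    A minimal sketch of the band-phase-randomized surrogate idea described above: only the Fourier phases above an assumed cutoff frequency are randomized, which preserves the amplitude spectrum (and thus the linear correlations, including slow non-stationary trends) while destroying fast-timescale structure. The cutoff, sampling rate and test signal are invented.

```python
import numpy as np

def band_phase_surrogate(x, f_cut, fs, rng):
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    phases = np.angle(X)
    # Randomize phases only above the cutoff; keep DC and Nyquist
    # untouched so the inverse transform stays a real signal.
    hi = (freqs > f_cut) & (freqs < fs / 2.0)
    phases[hi] = rng.uniform(0.0, 2.0 * np.pi, hi.sum())
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)

rng = np.random.default_rng(1)
t = np.arange(1024) / 256.0                    # 4 s at fs = 256 Hz (assumed)
x = np.sin(2.0 * np.pi * 1.0 * t) + 0.3 * rng.standard_normal(t.size)
s = band_phase_surrogate(x, f_cut=10.0, fs=256.0, rng=rng)

# The amplitude spectrum (hence the linear autocorrelation) is unchanged.
print(np.allclose(np.abs(np.fft.rfft(s)), np.abs(np.fft.rfft(x))))
```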

  9. Novel methods for Solving Economic Dispatch of Security-Constrained Unit Commitment Based on Linear Programming

    NASA Astrophysics Data System (ADS)

    Guo, Sangang

    2017-09-01

    There are two stages in solving security-constrained unit commitment (SCUC) problems within the Lagrangian framework: one is obtaining feasible unit states (UC); the other is the economic dispatch (ED) of power for each unit. For fixed feasible unit states, an accurate solution of the ED is the more important factor in enhancing the efficiency of the SCUC solution. Two novel methods for solving the ED, named the Convex Combinatorial Coefficient Method and the Power Increment Method, are proposed; both reduce the ED to a linear programming problem via piecewise linear approximation of the nonlinear convex fuel cost functions. Numerical testing results show that the methods are effective and efficient.
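
    A hedged sketch of the power-increment idea as described above (not the authors' implementation): splitting each convex piecewise linear fuel cost into capacity-limited segments with rising incremental costs turns the dispatch into a plain linear program. The unit data and demand below are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Two units, each already at an assumed 10 MW minimum (base = 20 MW total).
# Each unit contributes two 20 MW increments; incremental costs rise with
# output, reflecting convex fuel cost curves.
inc_cost = np.array([20.0, 25.0, 22.0, 30.0])   # $/MWh per segment (invented)
seg_cap = np.array([20.0, 20.0, 20.0, 20.0])    # MW capacity per segment
base, demand = 20.0, 70.0                       # MW

res = linprog(c=inc_cost,
              A_eq=np.ones((1, inc_cost.size)), b_eq=[demand - base],
              bounds=[(0.0, cap) for cap in seg_cap])

# Convexity lets the LP fill cheap segments first:
# 20 MW @ $20 + 20 MW @ $22 + 10 MW @ $25 for the 50 MW above base.
print(res.x, res.fun)
```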

  10. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    PubMed

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to the mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors in the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.

  11. An h-p Taylor-Galerkin finite element method for compressible Euler equations

    NASA Technical Reports Server (NTRS)

    Demkowicz, L.; Oden, J. T.; Rachowicz, W.; Hardy, O.

    1991-01-01

    An extension of the familiar Taylor-Galerkin method to arbitrary h-p spatial approximations is proposed. Boundary conditions are analyzed, and a linear stability result for arbitrary meshes is given, showing unconditional stability for implicitness parameter alpha not less than 0.5. The wedge and blunt body problems are solved with linear, quadratic, and cubic elements and h-adaptivity, showing the feasibility of higher orders of approximation for problems with shocks.

  12. Linear systems with structure group and their feedback invariants

    NASA Technical Reports Server (NTRS)

    Martin, C.; Hermann, R.

    1977-01-01

    A general method described by Hermann and Martin (1976) for the study of the feedback invariants of linear systems is considered. It is shown that this method, which makes use of ideas of topology and algebraic geometry, is very useful in the investigation of feedback problems for which the classical methods are not suitable. The transfer function as a curve in the Grassmannian is examined. The general concepts studied in the context of specific systems and applications are organized in terms of the theory of Lie groups and algebraic geometry. Attention is given to linear systems which have a structure group, linear mechanical systems, and feedback invariants. The investigation shows that Lie group techniques are powerful and useful tools for analysis of the feedback structure of linear systems.

  13. Multiple imputation of rainfall missing data in the Iberian Mediterranean context

    NASA Astrophysics Data System (ADS)

    Miró, Juan Javier; Caselles, Vicente; Estrela, María José

    2017-11-01

    Given the increasing need for complete rainfall data networks, diverse methods have been proposed in recent years for filling gaps in observed precipitation series, progressively more advanced than traditional approaches. The present study validated 10 methods (6 linear, 2 non-linear and 2 hybrid) that allow multiple imputation, i.e., filling missing data simultaneously for multiple incomplete series in a dense network of neighboring stations. These were applied to daily and monthly rainfall in two sectors of the Júcar River Basin Authority (eastern Iberian Peninsula), an area characterized by high spatial irregularity and difficulty of rainfall estimation. A classification of precipitation according to its genetic origin was applied as pre-processing, and quantile-mapping adjustment as a post-processing technique. The results showed in general a better performance for the non-linear and hybrid methods; notably, the non-linear PCA (NLPCA) method considerably outperforms the Self Organizing Maps (SOM) method among the non-linear approaches. Among the linear methods, the Regularized Expectation Maximization (RegEM) method was the best, but far behind NLPCA. Applying EOF filtering as post-processing of NLPCA (the hybrid approach) yielded the best results.

  14. Application of Bounded Linear Stability Analysis Method for Metrics-Driven Adaptive Control

    NASA Technical Reports Server (NTRS)

    Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje

    2009-01-01

    This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method to metrics-driven adaptive control. The BLSA method is used to analyze the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by stability metrics to achieve robustness. By applying the BLSA method, the adaptive gain is adjusted during adaptation in order to meet certain phase margin requirements. Metrics-driven adaptive control is analyzed for a second-order system that represents the pitch attitude control of a generic transport aircraft. The analysis shows that the system with the metrics-conforming variable adaptive gain becomes more robust to unmodeled dynamics or time delay. The effect of the analysis time-window for BLSA is also evaluated in order to meet the stability margin criteria.

  15. True orbit simulation of piecewise linear and linear fractional maps of arbitrary dimension using algebraic numbers

    NASA Astrophysics Data System (ADS)

    Saito, Asaki; Yasutomi, Shin-ichi; Tamura, Jun-ichi; Ito, Shunji

    2015-06-01

    We introduce a true orbit generation method enabling exact simulations of dynamical systems defined by arbitrary-dimensional piecewise linear fractional maps, including piecewise linear maps, with rational coefficients. This method can generate sufficiently long true orbits which reproduce typical behaviors (inherent behaviors) of these systems, by properly selecting algebraic numbers in accordance with the dimension of the target system, and involving only integer arithmetic. By applying our method to three dynamical systems—that is, the baker's transformation, the map associated with a modified Jacobi-Perron algorithm, and an open flow system—we demonstrate that it can reproduce their typical behaviors that have been very difficult to reproduce with conventional simulation methods. In particular, for the first two maps, we show that we can generate true orbits displaying the same statistical properties as typical orbits, by estimating the marginal densities of their invariant measures. For the open flow system, we show that an obtained true orbit correctly converges to the stable period-1 orbit, which is inherently possessed by the system.

  16. voom: precision weights unlock linear model analysis tools for RNA-seq read counts

    PubMed Central

    2014-01-01

    New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods. PMID:24485249

  17. voom: Precision weights unlock linear model analysis tools for RNA-seq read counts.

    PubMed

    Law, Charity W; Chen, Yunshun; Shi, Wei; Smyth, Gordon K

    2014-02-03

    New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods.

  18. Non-LTE line-blanketed model atmospheres of hot stars. 1: Hybrid complete linearization/accelerated lambda iteration method

    NASA Technical Reports Server (NTRS)

    Hubeny, I.; Lanz, T.

    1995-01-01

    A new numerical method for computing non-Local Thermodynamic Equilibrium (non-LTE) model stellar atmospheres is presented. The method, called the hybrid complete linearization/accelerated lambda iteration (CL/ALI) method, combines the advantages of both of its constituents. Its rate of convergence is virtually as high as for the standard CL method, while the computer time per iteration is almost as low as for the standard ALI method. The method is formulated as the standard complete linearization, the only difference being that the radiation intensity at selected frequency points is not explicitly linearized; instead, it is treated by means of the ALI approach. The scheme offers a wide spectrum of options, ranging from the full CL to the full ALI method. We demonstrate that the method works optimally if the majority of frequency points are treated in the ALI mode, while the radiation intensity at a few (typically two to 30) frequency points is explicitly linearized. We show how this method can be applied to calculate metal line-blanketed non-LTE model atmospheres, using the idea of 'superlevels' and 'superlines' introduced originally by Anderson (1989). We calculate several illustrative models taking into account several tens of thousands of lines of Fe III and Fe IV and show that the hybrid CL/ALI method provides a robust method for calculating non-LTE line-blanketed model atmospheres for a wide range of stellar parameters. The results for individual stellar types will be presented in subsequent papers in this series.

  19. Expanding the occupational health methodology: A concatenated artificial neural network approach to model the burnout process in Chinese nurses.

    PubMed

    Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming

    2016-01-01

    Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible for modelling the burnout process, (2) sensitivity analysis is a fruitful method for studying the relative importance of predictor variables and (3) the relationships among the variables involved in the development of burnout and its consequences are non-linear to different degrees. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method for analysing non-linear relationships and, in combination with sensitivity analysis, is superior to linear methods.

  20. Universal Linear Fit Identification: A Method Independent of Data, Outliers and Noise Distribution Model and Free of Missing or Removed Data Imputation.

    PubMed

    Adikaram, K K L B; Hussein, M A; Effenberger, M; Becker, T

    2015-01-01

    Data processing requires a robust linear fit identification method. In this paper, we introduce a non-parametric robust linear fit identification method for time series. The method uses the indicator 2/n to identify a linear fit, where n is the number of terms in a series. The ratio Rmax = (amax - amin)/(Sn - amin*n) and the ratio Rmin = (amax - amin)/(amax*n - Sn) are always equal to 2/n for a linear series, where amax is the maximum element, amin is the minimum element and Sn is the sum of all elements. If a series expected to follow y = c contains data that do not agree with the form y = c, then Rmax > 2/n and Rmin > 2/n imply that the maximum and minimum elements, respectively, do not agree with the linear fit. We define threshold values for outlier and noise detection as 2/n * (1 + k1) and 2/n * (1 + k2), respectively, where k1 > k2 and 0 ≤ k1 ≤ n/2 - 1. Given this relation and a transformation technique that transforms data into the form y = c, we show that removing all data that do not agree with the linear fit is possible. Furthermore, the method is independent of the number of data points, missing data, removed data points and the nature of the distribution (Gaussian or non-Gaussian) of the outliers, noise and clean data. These are major advantages over existing linear fit methods. Since a perfect linear relation between two variables is impossible in the real world, we used artificial data sets with extreme conditions to verify the method. The method detects the correct linear fit even when the percentage of data agreeing with the linear fit is less than 50% and the deviation of the data that do not agree with the linear fit is very small, of the order of ±10^-4 %. The method results in incorrect detections only when the numerical accuracy is insufficient in the calculation process.
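
    The 2/n relation above can be checked numerically; the series used here are invented.

```python
def ratios(a):
    # Rmax = (amax - amin)/(Sn - amin*n); Rmin = (amax - amin)/(amax*n - Sn)
    n, s, amax, amin = len(a), sum(a), max(a), min(a)
    return (amax - amin) / (s - amin * n), (amax - amin) / (amax * n - s)

linear = [3.0 + 0.5 * i for i in range(10)]   # an exact linear series, n = 10
rmax, rmin = ratios(linear)                   # both equal 2/n = 0.2

outlier = linear[:]
outlier[7] += 50.0                            # inject a single outlier
rmax_o, _ = ratios(outlier)                   # now rmax exceeds 2/n

print(rmax, rmin, rmax_o)
```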

  1. Combining Biomarkers Linearly and Nonlinearly for Classification Using the Area Under the ROC Curve

    PubMed Central

    Fong, Youyi; Yin, Shuxin; Huang, Ying

    2016-01-01

    In biomedical studies, it is often of interest to classify or predict a subject's disease status based on a variety of biomarker measurements. A commonly used classification criterion is based on the AUC, the area under the receiver operating characteristic curve. Many methods have been proposed to optimize approximated empirical AUC criteria, but there are two limitations to the existing methods. First, most methods are only designed to find the best linear combination of biomarkers, which may not perform well when there is strong nonlinearity in the data. Second, many existing linear combination methods use gradient-based algorithms to find the best marker combination, which often result in sub-optimal local solutions. In this paper, we address these two problems by proposing a new kernel-based AUC optimization method called Ramp AUC (RAUC). This method approximates the empirical AUC loss function with a ramp function, and finds the best combination by a difference-of-convex-functions algorithm. We show that as a linear combination method, RAUC leads to a consistent and asymptotically normal estimator of the linear marker combination when the data are generated from a semiparametric generalized linear model, just as the Smoothed AUC method (SAUC) does. Through simulation studies and real data examples, we demonstrate that RAUC outperforms SAUC in finding the best linear marker combinations, and can successfully capture nonlinear patterns in the data to achieve better classification performance. We illustrate our method with a dataset from a recent HIV vaccine trial. PMID:27058981

  2. Learning linear transformations between counting-based and prediction-based word embeddings

    PubMed Central

    Hayashi, Kohei; Kawarabayashi, Ken-ichi

    2017-01-01

    Despite the growing interest in prediction-based word embedding learning methods, it remains unclear as to how the vector spaces learnt by the prediction-based methods differ from that of the counting-based methods, or whether one can be transformed into the other. To study the relationship between counting-based and prediction-based embeddings, we propose a method for learning a linear transformation between two given sets of word embeddings. Our proposal contributes to the word embedding learning research in three ways: (a) we propose an efficient method to learn a linear transformation between two sets of word embeddings, (b) using the transformation learnt in (a), we empirically show that it is possible to predict distributed word embeddings for novel unseen words, and (c) empirically it is possible to linearly transform counting-based embeddings to prediction-based embeddings, for frequent words, different POS categories, and varying degrees of ambiguities. PMID:28926629

  3. A Galerkin discretisation-based identification for parameters in nonlinear mechanical systems

    NASA Astrophysics Data System (ADS)

    Liu, Zuolin; Xu, Jian

    2018-04-01

    In the paper, a new parameter identification method is proposed for mechanical systems. Based on the idea of the Galerkin finite-element method, the displacement time history is approximated by piecewise linear functions, and the second-order terms in the model equation are eliminated by integration by parts. In this way, a loss function in integral form is derived. Unlike existing methods, this loss function is a quadratic sum of integrals over the whole time history. For linear or nonlinear systems, the loss function can then be minimized with the traditional least-squares algorithm or its iterative counterpart, respectively. Such a method can effectively identify parameters in linear and arbitrary nonlinear mechanical systems. Simulation results show that even with sparse data or a low sampling frequency, this method still guarantees high accuracy in identifying linear and nonlinear parameters.

  4. A new linear least squares method for T1 estimation from SPGR signals with multiple TRs

    NASA Astrophysics Data System (ADS)

    Chang, Lin-Ching; Koay, Cheng Guan; Basser, Peter J.; Pierpaoli, Carlo

    2009-02-01

    The longitudinal relaxation time, T1, can be estimated from two or more spoiled gradient recalled echo (SPGR) images acquired with two or more flip angles and one or more repetition times (TRs). The function relating signal intensity to the parameters is nonlinear, so T1 maps can be computed from SPGR signals using nonlinear least squares regression. A widely used linear method transforms the nonlinear model by assuming a fixed TR across the SPGR images. This constraint is not desirable, since multiple TRs are a clinically practical way to reduce the total acquisition time, to satisfy the required resolution, and/or to combine SPGR data acquired at different times. A new linear least squares method based on a first-order Taylor expansion is proposed. Monte Carlo simulations of SPGR experiments are used to evaluate the accuracy and precision of the T1 estimates from the proposed linear and the nonlinear methods. We show that the new linear least squares method provides T1 estimates comparable in both precision and accuracy to those from the nonlinear method, while allowing multiple TRs and reducing the computation time significantly.
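
    For context, the widely used fixed-TR linearization mentioned above can be sketched as follows (this is not the paper's multi-TR Taylor-expansion method): with S = M0*(1 - E1)*sin(a)/(1 - E1*cos(a)) and E1 = exp(-TR/T1), plotting S/sin(a) against S/tan(a) gives a line with slope E1. All parameter values are assumed and the simulated signal is noiseless.

```python
import numpy as np

TR, T1_true, M0 = 15.0, 1000.0, 100.0          # ms, ms, a.u. (all assumed)
E1 = np.exp(-TR / T1_true)
alpha = np.deg2rad(np.array([2.0, 5.0, 10.0, 15.0, 20.0]))  # flip angles

# Noiseless SPGR signal model at a single, fixed TR.
S = M0 * (1.0 - E1) * np.sin(alpha) / (1.0 - E1 * np.cos(alpha))

# Linearized form: S/sin(a) = E1 * (S/tan(a)) + M0*(1 - E1).
slope, _ = np.polyfit(S / np.tan(alpha), S / np.sin(alpha), 1)
T1_est = -TR / np.log(slope)                   # recover T1 from slope = E1

print(T1_est)
```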

  5. Analysis of separation test for automatic brake adjuster based on linear radon transformation

    NASA Astrophysics Data System (ADS)

    Luo, Zai; Jiang, Wensong; Guo, Bin; Fan, Weijun; Lu, Yi

    2015-01-01

    The linear Radon transformation is applied to extract inflection points for an online test system under noisy conditions. The linear Radon transformation is strongly resistant to noise and interference because it fits the online test curve in several parts, which also makes consecutive inflection points easy to handle. We applied the linear Radon transformation to the separation test system to determine the separating clearance of an automatic brake adjuster. The experimental results show that the feature-point extraction error of the gradient maximum optimal method is approximately ±0.100, while that of the linear Radon transformation method can reach ±0.010, a lower error than the former. In addition, the linear Radon transformation is robust.

  6. Kernel-imbedded Gaussian processes for disease classification using microarray gene expression data

    PubMed Central

    Zhao, Xin; Cheung, Leo Wang-Kit

    2007-01-01

    Background Designing appropriate machine learning methods for identifying genes that have significant discriminating power for disease outcomes has become more and more important for our understanding of diseases at the genomic level. Although many machine learning methods have been developed and applied to microarray gene expression data analysis, the majority of them are based on linear models, which are not necessarily appropriate for the underlying connection between the target disease and its associated explanatory genes. Linear-model-based methods also tend to bring in false-positive significant features. Furthermore, linear-model-based algorithms often involve calculating the inverse of a matrix that may be singular when the number of potentially important genes is relatively large, which leads to numerical instability. To overcome these limitations, a few non-linear methods have recently been introduced to the area. Many of the existing non-linear methods, however, have two critical problems, the model selection problem and the model parameter tuning problem, that remain unsolved or even untouched. In general, a unified framework that allows model parameters of both linear and non-linear models to be easily tuned is always preferred in real-world applications. Kernel-induced learning methods form a class of approaches with promising potential to achieve this goal. Results A hierarchical statistical model named the kernel-imbedded Gaussian process (KIGP) is developed under a unified Bayesian framework for binary disease classification problems using microarray gene expression data. In particular, based on a probit regression setting, an adaptive algorithm with a cascading structure is designed to find the appropriate kernel, to discover the potentially significant genes, and to make the optimal class prediction accordingly. A Gibbs sampler is built as the core of the algorithm to make Bayesian inferences.
Simulation studies showed that, even without any knowledge of the underlying generative model, the KIGP performed very close to the theoretical Bayesian bound, not only with a linear Bayesian classifier but also with a very non-linear Bayesian classifier. This sheds light on its broader applicability to microarray data analysis problems, especially those for which linear methods work poorly. The KIGP was also applied to four published microarray datasets, and the results showed that the KIGP performed better than, or at least as well as, any of the referenced state-of-the-art methods in all of these cases. Conclusion Mathematically built on the kernel-induced feature space concept under a Bayesian framework, the KIGP method presented in this paper provides a unified machine learning approach to explore both the linear and the possibly non-linear underlying relationship between the target features of a given binary disease classification problem and the related explanatory gene expression data. More importantly, it incorporates model parameter tuning into the framework. The model selection problem is addressed in the form of selecting a proper kernel type. The KIGP method also gives Bayesian probabilistic predictions for disease classification. These properties and features are beneficial to most real-world applications. The algorithm is naturally robust in numerical computation. The simulation studies and the published data studies demonstrated that the proposed KIGP performs satisfactorily and consistently. PMID:17328811

  7. Supervised linear dimensionality reduction with robust margins for object recognition

    NASA Astrophysics Data System (ADS)

    Dornaika, F.; Assoum, A.

    2013-01-01

    Linear Dimensionality Reduction (LDR) techniques have become increasingly important in computer vision and pattern recognition since they permit a relatively simple mapping of data onto a lower-dimensional subspace, leading to simple and computationally efficient classification strategies. Recently, many linear discriminant methods have been developed in order to reduce the dimensionality of visual data and to enhance the discrimination between different groups or classes. Many existing linear embedding techniques rely on the use of local margins to obtain good discrimination performance. However, dealing with outliers and within-class diversity has not been addressed by margin-based embedding methods. In this paper, we explore the use of different margin-based linear embedding methods. More precisely, we propose to use the concepts of median miss and median hit for building robust margin-based criteria. Based on such margins, we seek the projection directions (linear embedding) such that the sum of local margins is maximized. Our proposed approach has been applied to the problem of appearance-based face recognition. Experiments performed on four public face databases show that the proposed approach can give better generalization performance than the classic Average Neighborhood Margin Maximization (ANMM). Moreover, thanks to the use of robust margins, the proposed method degrades gracefully when label outliers contaminate the training data set. In particular, we show that the concept of median hit is crucial for obtaining robust performance in the presence of outliers.

  8. A new Newton-like method for solving nonlinear equations.

    PubMed

    Saheya, B; Chen, Guo-Qing; Sui, Yun-Kang; Wu, Cai-Ying

    2016-01-01

    This paper presents an iterative scheme for solving nonlinear equations. We establish a new rational approximation model with linear numerator and denominator that generalizes the local linear model. We then apply the new approximation to nonlinear equations and propose an improved Newton's method to solve them. The new method revises the Jacobian matrix by a rank-one update at each iteration and attains quadratic convergence. Numerical performance and comparisons show that the proposed method is efficient.
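
    The rank-one Jacobian revision is reminiscent of Broyden-type updates; below is a minimal Broyden iteration (a stand-in for illustration, not the authors' rational-approximation scheme) on an invented two-equation system.

```python
import numpy as np

def F(v):
    # Invented test system: x^2 + y^2 = 4 and x*y = 1.
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

v = np.array([2.0, 0.5])                     # starting guess
x, y = v
B = np.array([[2.0 * x, 2.0 * y], [y, x]])   # analytic Jacobian at the start
for _ in range(50):
    step = np.linalg.solve(B, -F(v))
    v_new = v + step
    dF = F(v_new) - F(v)
    # Broyden rank-one revision: correct B along the step direction only.
    B += np.outer(dF - B @ step, step) / (step @ step)
    v = v_new
    if np.linalg.norm(F(v)) < 1e-12:
        break

print(v)
```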

  9. A Very Fast and Angular Momentum Conserving Tree Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcello, Dominic C., E-mail: dmarce504@gmail.com

    There are many methods used to compute the classical gravitational field in astrophysical simulation codes. With the exception of the typically impractical method of direct computation, none ensure conservation of angular momentum to machine precision. Under uniform time-stepping, the Cartesian fast multipole method of Dehnen (also known as the very fast tree code) conserves linear momentum to machine precision. We show that it is possible to modify this method in a way that conserves both angular and linear momenta.

  10. Estimation of group means when adjusting for covariates in generalized linear models.

    PubMed

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased relative to the true group means. We propose a new method to estimate the group mean consistently, with the corresponding variance estimation. Simulations showed that the proposed method produces an unbiased estimator of the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
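
    The bias described above is a Jensen-type effect: under a nonlinear link, the response evaluated at the mean covariate is not the mean response. A small numeric sketch with an assumed logistic model (coefficients and covariate distribution hypothetical) shows the gap that a marginal group-mean estimator is designed to avoid:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, size=100_000)             # baseline covariate
logistic = lambda z: 1.0 / (1.0 + np.exp(-z))
beta0, beta1 = -1.0, 1.5                            # assumed true coefficients

p_at_mean_x = logistic(beta0 + beta1 * x.mean())    # response at the mean covariate
group_mean = logistic(beta0 + beta1 * x).mean()     # mean response in the population
gap = group_mean - p_at_mean_x                      # nonzero under a nonlinear link
```

    For a linear (identity) link the two quantities coincide, which is why the issue arises only in generalized linear models.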

  11. A time-domain decomposition iterative method for the solution of distributed linear quadratic optimal control problems

    NASA Astrophysics Data System (ADS)

    Heinkenschloss, Matthias

    2005-01-01

    We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
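
    A forward block Gauss-Seidel sweep on a block tridiagonal system has the following shape. The toy system below is deliberately made diagonally dominant so the sweep converges on its own, whereas for the optimal-control systems in the paper the spectral radius exceeds one and the sweep serves as a preconditioner; block sizes and data are hypothetical:

```python
import numpy as np

def block_gs_sweep(A_blocks, B_blocks, C_blocks, b_blocks, x_blocks):
    """One forward block Gauss-Seidel sweep for a block tridiagonal system
    with diagonal blocks A_i, sub-diagonal B_i, super-diagonal C_i."""
    n = len(A_blocks)
    x = [xb.copy() for xb in x_blocks]
    for i in range(n):
        rhs = b_blocks[i].copy()
        if i > 0:
            rhs -= B_blocks[i - 1] @ x[i - 1]     # already-updated left neighbor
        if i < n - 1:
            rhs -= C_blocks[i] @ x[i + 1]         # old right neighbor
        x[i] = np.linalg.solve(A_blocks[i], rhs)  # invert the diagonal block
    return x

m, n = 2, 4                                       # block size, number of blocks
rng = np.random.default_rng(1)
A_blocks = [4.0 * np.eye(m) + 0.1 * rng.standard_normal((m, m)) for _ in range(n)]
B_blocks = [0.2 * rng.standard_normal((m, m)) for _ in range(n - 1)]
C_blocks = [0.2 * rng.standard_normal((m, m)) for _ in range(n - 1)]

# assemble the full matrix only to build a consistent right-hand side
A = np.zeros((m * n, m * n))
for i in range(n):
    A[i*m:(i+1)*m, i*m:(i+1)*m] = A_blocks[i]
for i in range(n - 1):
    A[(i+1)*m:(i+2)*m, i*m:(i+1)*m] = B_blocks[i]
    A[i*m:(i+1)*m, (i+1)*m:(i+2)*m] = C_blocks[i]
x_true = rng.standard_normal(m * n)
b = A @ x_true
b_blocks = [b[i*m:(i+1)*m] for i in range(n)]

x_blocks = [np.zeros(m) for _ in range(n)]
for _ in range(100):
    x_blocks = block_gs_sweep(A_blocks, B_blocks, C_blocks, b_blocks, x_blocks)
err = np.linalg.norm(np.concatenate(x_blocks) - x_true)
```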

  12. Sustained modelling ability of artificial neural networks in the analysis of two pharmaceuticals (dextropropoxyphene and dipyrone) present in unequal concentrations.

    PubMed

    Cámara, María S; Ferroni, Félix M; De Zan, Mercedes; Goicoechea, Héctor C

    2003-07-01

    An improvement is presented on the simultaneous determination of two active ingredients present in unequal concentrations in injections. The analysis was carried out with spectrophotometric data and non-linear multivariate calibration methods, in particular artificial neural networks (ANNs). The presence of non-linearities, caused by the major analyte concentrations deviating from Beer's law, was confirmed by plotting actual vs. predicted concentrations and observing curvatures in the residuals for the concentrations estimated with linear methods. Mixtures of dextropropoxyphene and dipyrone have been analysed by using linear and non-linear partial least-squares (PLS and N-PLS) and ANNs. Notwithstanding the high degree of spectral overlap and the occurrence of non-linearities, rapid and simultaneous analysis has been achieved, with reasonably good accuracy and precision. A commercial sample was analysed by using the present methodology, and the obtained results show reasonably good agreement with those obtained by comparative high-performance liquid chromatography (HPLC) and UV-spectrophotometric methods.

  13. A new implementation of the CMRH method for solving dense linear systems

    NASA Astrophysics Data System (ADS)

    Heyouni, M.; Sadok, H.

    2008-04-01

    The CMRH method [H. Sadok, Methodes de projections pour les systemes lineaires et non lineaires, Habilitation thesis, University of Lille1, Lille, France, 1994; H. Sadok, CMRH: A new method for solving nonsymmetric linear systems based on the Hessenberg reduction algorithm, Numer. Algorithms 20 (1999) 303-321] is an algorithm for solving nonsymmetric linear systems in which the Arnoldi component of GMRES is replaced by the Hessenberg process, which generates Krylov basis vectors that are orthogonal to the standard unit basis vectors rather than mutually orthogonal. The iterate is formed from these vectors by solving a small least squares problem involving a Hessenberg matrix. Like GMRES, this method requires one matrix-vector product per iteration. However, it can be implemented to require half as much arithmetic work and less storage. Moreover, numerical experiments show that this method performs accurately and reduces the residual about as fast as GMRES. With this new implementation, we show that the CMRH method is the only long-recurrence method that does not require storing the entire Krylov basis and the original matrix at the same time, as the GMRES algorithm does. A comparison with Gaussian elimination is provided.

  14. ADM For Solving Linear Second-Order Fredholm Integro-Differential Equations

    NASA Astrophysics Data System (ADS)

    Karim, Mohd F.; Mohamad, Mahathir; Saifullah Rusiman, Mohd; Che-Him, Norziha; Roslan, Rozaini; Khalid, Kamil

    2018-04-01

    In this paper, we apply the Adomian Decomposition Method (ADM) to numerically solve linear second-order Fredholm integro-differential equations. The approximate solutions of the problems are calculated with the Maple package. Some numerical examples are considered to illustrate the ADM for solving this equation, and the results are compared with the known exact solutions. The Adomian decomposition method can thus be a good alternative method for solving linear second-order Fredholm integro-differential equations: it converges to the exact solution quickly and at the same time reduces the computational work. The results obtained by the ADM show its ability and efficiency for solving these equations.
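
    The decomposition idea, namely that each new series term is obtained by feeding the previous term through the integral operator, can be sketched numerically. Since the full integro-differential problem needs the equation's specifics, the sketch below uses a plain linear Fredholm integral equation with a known closed-form solution (the equation is a hypothetical example, not one from the paper):

```python
import numpy as np

# hypothetical linear Fredholm equation  u(x) = x + (1/2) * integral_0^1 x t u(t) dt,
# whose exact solution is u(x) = 6x/5
N = 2001
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
w = np.full(N, h); w[0] = w[-1] = h / 2.0    # trapezoid quadrature weights
kernel = 0.5 * np.outer(x, x)                # K(x, t) = (1/2) x t on the grid

term = x.copy()                              # u_0 = f(x)
u = term.copy()
for _ in range(25):                          # decomposition terms u_{n+1} = int K u_n
    term = kernel @ (w * term)
    u = u + term
err = float(np.max(np.abs(u - 1.2 * x)))     # compare with the exact solution 6x/5
```

    For this equation the terms shrink geometrically (each is 1/6 of the previous), so the truncated series converges quickly, which mirrors the fast convergence reported above.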

  15. A penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography.

    PubMed

    Shang, Shang; Bai, Jing; Song, Xiaolei; Wang, Hongkai; Lau, Jaclyn

    2007-01-01

    The conjugate gradient method is known to be efficient for nonlinear optimization problems with large-dimension data. In this paper, a penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography (FMT) is presented. The algorithm combines the linear conjugate gradient method and the nonlinear conjugate gradient method through a restart strategy, in order to take advantage of both kinds of conjugate gradient methods and compensate for their disadvantages. A quadratic penalty method is adopted to enforce a non-negativity constraint and reduce the ill-posedness of the problem. Simulation studies show that the presented algorithm is accurate, stable, and fast, and that it performs better than conventional conjugate gradient-based reconstruction algorithms. It offers an effective approach to reconstructing fluorochrome information for FMT.

  16. Linear reduction method for predictive and informative tag SNP selection.

    PubMed

    He, Jingwu; Westbrooks, Kelly; Zelikovsky, Alexander

    2005-01-01

    Constructing a complete human haplotype map is helpful when associating complex diseases with their related SNPs. Unfortunately, the number of SNPs is very large and it is costly to sequence many individuals. Therefore, it is desirable to reduce the number of SNPs that should be sequenced to a small number of informative representatives called tag SNPs. In this paper, we propose a new linear algebra-based method for selecting and using tag SNPs. We measure the quality of our tag SNP selection algorithm by comparing actual SNPs with SNPs predicted from selected linearly independent tag SNPs. Our experiments show that, for sufficiently long haplotypes, the proposed linear reduction method predicts an unknown haplotype with an error rate below 2% while knowing only 0.4% of all SNPs, using 10% of the population as the training sample.
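
    The prediction step can be sketched as follows: fit a linear map from the tag SNPs to the remaining SNPs on a training subpopulation, then predict and round to 0/1 on held-out haplotypes. In this toy construction (entirely hypothetical data, not the paper's haplotype sets) the non-tag SNPs are exact affine functions of the tags, so the linear predictor recovers them perfectly:

```python
import numpy as np

rng = np.random.default_rng(1)
# toy haplotype matrix: rows = individuals, columns = SNPs (0/1)
tags = rng.integers(0, 2, size=(220, 5)).astype(float)     # 5 tag SNPs
# remaining SNPs built as exact affine functions of the tags (toy construction):
# a duplicate of tag 0, the complement of tag 1, and a copy of tag 2
others = np.column_stack([tags[:, 0], 1.0 - tags[:, 1], tags[:, 2]])

train, test = slice(0, 200), slice(200, 220)
X_train = np.column_stack([np.ones(200), tags[train]])     # intercept + tag SNPs
coef, *_ = np.linalg.lstsq(X_train, others[train], rcond=None)

X_test = np.column_stack([np.ones(20), tags[test]])
pred = np.rint(X_test @ coef)                              # predict, round to 0/1
error_rate = float(np.mean(pred != others[test]))
```

    On real haplotypes the linear dependence is only approximate, which is where the reported 2% error rate comes from.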

  17. Estimation of reflectance from camera responses by the regularized local linear model.

    PubMed

    Zhang, Wei-Feng; Tang, Gongguo; Dai, Dao-Qing; Nehorai, Arye

    2011-10-01

    Because of the limited approximation capability of using fixed basis functions, the performance of reflectance estimation obtained by traditional linear models will not be optimal. We propose an approach based on the regularized local linear model. Our approach performs efficiently and knowledge of the spectral power distribution of the illuminant and the spectral sensitivities of the camera is not needed. Experimental results show that the proposed method performs better than some well-known methods in terms of both reflectance error and colorimetric error. © 2011 Optical Society of America

  18. A novel recurrent neural network with finite-time convergence for linear programming.

    PubMed

    Liu, Qingshan; Cao, Jinde; Chen, Guanrong

    2010-11-01

    In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.

  19. Comparison of Linear and Non-linear Regression Analysis to Determine Pulmonary Pressure in Hyperthyroidism.

    PubMed

    Scarneciu, Camelia C; Sangeorzan, Livia; Rus, Horatiu; Scarneciu, Vlad D; Varciu, Mihai S; Andreescu, Oana; Scarneciu, Ioan

    2017-01-01

    This study aimed to assess the incidence of pulmonary hypertension (PH) in newly diagnosed hyperthyroid patients and to find a simple model of the complex functional relation between pulmonary hypertension in hyperthyroidism and the factors causing it. The 53 hyperthyroid patients (H-group) were evaluated mainly by echocardiography and compared with 35 euthyroid subjects (E-group) and 25 healthy people (C-group). In order to identify the factors causing pulmonary hypertension, the statistical method of comparing arithmetic means was used. The functional relation between the two random variables (PAPs and each of the factors determined within our study) can be expressed by a linear or non-linear function. By applying the linear regression method, described by a first-degree equation, the line of regression (linear model) was determined; by applying the non-linear regression method, described by a second-degree equation, a parabola-type curve of regression (non-linear or polynomial model) was determined. We compared and validated the two models by calculating the coefficient of determination (criterion 1), comparing the residuals (criterion 2), applying the AIC criterion (criterion 3) and using the F-test (criterion 4). In the H-group, 47% had pulmonary hypertension that was completely reversible on reaching euthyroidism. The factors causing pulmonary hypertension were identified: previously known factors (level of free thyroxine, pulmonary vascular resistance, cardiac output) and new factors identified in this study (pretreatment period, age, systolic blood pressure). According to the four criteria and to clinical judgment, we consider the polynomial model (graphically, a parabola) better than the linear one. The model best showing the functional relation between pulmonary hypertension in hyperthyroidism and the factors identified in this study is thus a second-degree polynomial equation whose graphical representation is a parabola.
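
    The linear-versus-parabola comparison can be illustrated on synthetic data (the numbers below are hypothetical, not the clinical measurements): fit both models, then apply the coefficient of determination, AIC, and the extra-sum-of-squares F-test for the added quadratic term:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 50)
# synthetic data with genuine curvature (coefficients hypothetical)
y = 2.0 + 0.5 * x + 0.3 * x**2 + rng.normal(0.0, 1.0, x.size)

def fit(degree):
    coef = np.polyfit(x, y, degree)
    rss = float(((y - np.polyval(coef, x))**2).sum())
    k = degree + 1                                   # number of fitted parameters
    r2 = 1.0 - rss / float(((y - y.mean())**2).sum())
    aic = x.size * np.log(rss / x.size) + 2 * k      # Gaussian AIC up to a constant
    return rss, r2, aic, k

rss_lin, r2_lin, aic_lin, k_lin = fit(1)             # line of regression
rss_par, r2_par, aic_par, k_par = fit(2)             # parabola-type curve
# extra-sum-of-squares F-test for the quadratic term
F = ((rss_lin - rss_par) / (k_par - k_lin)) / (rss_par / (x.size - k_par))
```

    When the underlying relation is curved, as assumed here, the parabola wins on all three numeric criteria, matching the paper's conclusion for its data.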

  20. Transfer matrix method for dynamics modeling and independent modal space vibration control design of linear hybrid multibody system

    NASA Astrophysics Data System (ADS)

    Rong, Bao; Rui, Xiaoting; Lu, Kun; Tao, Ling; Wang, Guoping; Ni, Xiaojun

    2018-05-01

    In this paper, an efficient method for the dynamics modeling and vibration control design of a linear hybrid multibody system (MS) is studied based on the transfer matrix method. The natural vibration characteristics of a linear hybrid MS are solved by using low-order transfer equations. Then, by constructing a new body dynamics equation, augmented operator and augmented eigenvector, the orthogonality of the augmented eigenvectors of a linear hybrid MS is satisfied, and its state space model, expressed in each independent modal space, is obtained easily. Based on this dynamics model, a robust independent modal space fuzzy controller is designed for vibration control of a general MS, and the genetic optimization of some critical control parameters of the fuzzy tuners is also presented. Two illustrative examples are presented, whose results show that this method is computationally efficient and achieves excellent control performance.

  1. Numerical investigation of multi-beam laser heterodyne measurement with ultra-precision for linear expansion coefficient of metal based on oscillating mirror modulation

    NASA Astrophysics Data System (ADS)

    Li, Yan-Chao; Wang, Chun-Hui; Qu, Yang; Gao, Long; Cong, Hai-Fang; Yang, Yan-Ling; Gao, Jie; Wang, Ao-You

    2011-01-01

    This paper proposes a novel multi-beam laser heterodyne method for measuring the linear expansion coefficient of metal. Based on the Doppler effect and heterodyne technology, the length-variation information is loaded onto the frequency difference of the multi-beam laser heterodyne signal by the frequency modulation of an oscillating mirror; after demodulating the multi-beam laser heterodyne signal, the method simultaneously yields many values of the length variation caused by the temperature change. Processing these values by weighted averaging gives the length variation accurately, and the linear expansion coefficient of the metal is finally obtained by calculation. This novel method is used to simulate the measurement of the linear expansion coefficient of a metal rod at different temperatures in MATLAB; the obtained results show that the relative measurement error of this method is just 0.4%.

  2. Physiological processes non-linearly affect electrophysiological recordings during transcranial electric stimulation.

    PubMed

    Noury, Nima; Hipp, Joerg F; Siegel, Markus

    2016-10-15

    Transcranial electric stimulation (tES) is a promising tool to non-invasively manipulate neuronal activity in the human brain. Several studies have shown behavioral effects of tES, but stimulation artifacts complicate the simultaneous investigation of neural activity with EEG or MEG. Here, we first show for EEG and MEG, that contrary to previous assumptions, artifacts do not simply reflect stimulation currents, but that heartbeat and respiration non-linearly modulate stimulation artifacts. These modulations occur irrespective of the stimulation frequency, i.e. during both transcranial alternating and direct current stimulations (tACS and tDCS). Second, we show that, although at first sight previously employed artifact rejection methods may seem to remove artifacts, data are still contaminated by non-linear stimulation artifacts. Because of their complex nature and dependence on the subjects' physiological state, these artifacts are prone to be mistaken as neural entrainment. In sum, our results uncover non-linear tES artifacts, show that current techniques fail to fully remove them, and pave the way for new artifact rejection methods. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Comparing machine learning and logistic regression methods for predicting hypertension using a combination of gene expression and next-generation sequencing data.

    PubMed

    Held, Elizabeth; Cape, Joshua; Tintle, Nathan

    2016-01-01

    Machine learning methods continue to show promise in the analysis of data from genetic association studies because of the high number of variables relative to the number of observations. However, few best practices exist for the application of these methods. We extend a recently proposed supervised machine learning approach for predicting disease risk by genotypes to be able to incorporate gene expression data and rare variants. We then apply 2 different versions of the approach (radial and linear support vector machines) to simulated data from Genetic Analysis Workshop 19 and compare performance to logistic regression. Method performance was not radically different across the 3 methods, although the linear support vector machine tended to show small gains in predictive ability relative to a radial support vector machine and logistic regression. Importantly, as the number of genes in the models was increased, even when those genes contained causal rare variants, model predictive ability showed a statistically significant decrease in performance for both the radial support vector machine and logistic regression. The linear support vector machine showed more robust performance to the inclusion of additional genes. Further work is needed to evaluate machine learning approaches on larger samples and to evaluate the relative improvement in model prediction from the incorporation of gene expression data.

  4. A Conjugate Gradient Algorithm with Function Value Information and N-Step Quadratic Convergence for Unconstrained Optimization

    PubMed Central

    Li, Xiangrong; Zhao, Xupei; Duan, Xiabin; Wang, Xiaoliang

    2015-01-01

    It is generally acknowledged that the conjugate gradient (CG) method achieves global convergence—with at most a linear convergence rate—because CG formulas are generated by linear approximations of the objective functions. The quadratically convergent results are very limited. We introduce a new PRP method in which the restart strategy is also used. Moreover, the method we developed includes not only n-step quadratic convergence but also both the function value information and gradient value information. In this paper, we will show that the new PRP method (with either the Armijo line search or the Wolfe line search) is both linearly and quadratically convergent. The numerical experiments demonstrate that the new PRP algorithm is competitive with the normal CG method. PMID:26381742
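
    A PRP-type conjugate gradient iteration with a restart strategy and an Armijo line search, as referenced above, can be sketched as follows (this is a generic PRP+ variant on a hypothetical ill-conditioned quadratic, not the authors' exact algorithm):

```python
import numpy as np

def prp_cg(f, grad, x0, max_iter=500, tol=1e-8):
    """Polak-Ribiere-Polyak CG with Armijo backtracking and a
    periodic restart (generic sketch, not the paper's method)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    n = x.size
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        t, fx = 1.0, f(x)                    # Armijo backtracking line search
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+ formula
        d = -g_new + beta * d
        if (k + 1) % n == 0 or g_new @ d >= 0:           # restart strategy
            d = -g_new
        x, g = x_new, g_new
    return x

diag = np.arange(1.0, 11.0)                  # condition number 10
f = lambda v: 0.5 * v @ (diag * v)
grad = lambda v: diag * v
x_star = prp_cg(f, grad, np.ones(10))        # minimizer is the origin
```

    The restart (reset to steepest descent) guards against non-descent directions, which is the role the restart strategy plays in the paper's method as well.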

  6. Estimating linear effects in ANOVA designs: the easy way.

    PubMed

    Pinhas, Michal; Tzelgov, Joseph; Ganor-Stern, Dana

    2012-09-01

    Research in cognitive science has documented numerous phenomena that are approximated by linear relationships. In the domain of numerical cognition, the use of linear regression for estimating linear effects (e.g., distance and SNARC effects) became common following Fias, Brysbaert, Geypens, and d'Ydewalle's (1996) study on the SNARC effect. While their work has become the model for analyzing linear effects in the field, it requires statistical analysis of individual participants and does not provide measures of the proportions of variability accounted for (cf. Lorch & Myers, 1990). In the present methodological note, using both the distance and SNARC effects as examples, we demonstrate how linear effects can be estimated in a simple way within the framework of repeated measures analysis of variance. This method allows for estimating effect sizes in terms of both slope and proportions of variability accounted for. Finally, we show that our method can easily be extended to estimate linear interaction effects, not just linear effects calculated as main effects.
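
    The contrast-weight computation at the heart of this approach can be sketched on simulated reaction-time data (the numbers are hypothetical): apply centered linear weights to each subject's condition means to get a per-subject slope, then test the slopes against zero; the square of the resulting t equals the F statistic for the linear trend.

```python
import numpy as np

rng = np.random.default_rng(4)
levels = np.array([1.0, 2.0, 3.0, 4.0])       # within-subject factor (e.g. distance)
n_subj = 20
# hypothetical RT data: 500 ms baseline, a -15 ms/unit linear trend, noise
rt = 500.0 - 15.0 * levels + rng.normal(0.0, 10.0, size=(n_subj, levels.size))

c = levels - levels.mean()                     # centered linear contrast weights
slopes = rt @ c / (c @ c)                      # per-subject linear effect (slope)
t = slopes.mean() / (slopes.std(ddof=1) / np.sqrt(n_subj))
# t**2 equals the F(1, n_subj - 1) statistic for the linear trend
```

    Because the slope is estimated per subject, effect size can be reported both as a mean slope (here, ms per unit) and as proportion of variance accounted for, which is the advantage the note emphasizes.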

  7. Performance of Nonlinear Finite-Difference Poisson-Boltzmann Solvers

    PubMed Central

    Cai, Qin; Hsieh, Meng-Juei; Wang, Jun; Luo, Ray

    2014-01-01

    We implemented and optimized seven finite-difference solvers for the full nonlinear Poisson-Boltzmann equation in biomolecular applications, including four relaxation methods, one conjugate gradient method, and two inexact Newton methods. The performance of the seven solvers was extensively evaluated with a large number of nucleic acids and proteins. Worth noting is the inexact Newton method in our analysis. We investigated the role of linear solvers in its performance by incorporating the incomplete Cholesky conjugate gradient and the geometric multigrid into its inner linear loop. We tailored and optimized both linear solvers for faster convergence rate. In addition, we explored strategies to optimize the successive over-relaxation method to reduce its convergence failures without too much sacrifice in its convergence rate. Specifically we attempted to adaptively change the relaxation parameter and to utilize the damping strategy from the inexact Newton method to improve the successive over-relaxation method. Our analysis shows that the nonlinear methods accompanied with a functional-assisted strategy, such as the conjugate gradient method and the inexact Newton method, can guarantee convergence in the tested molecules. Especially the inexact Newton method exhibits impressive performance when it is combined with highly efficient linear solvers that are tailored for its special requirement. PMID:24723843
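
    The successive over-relaxation update discussed above blends the Gauss-Seidel value with the current iterate. A minimal sketch on a 1-D linear model problem (not the nonlinear Poisson-Boltzmann equation; grid size and relaxation parameter are hypothetical):

```python
import numpy as np

# 1-D model problem -u'' = f on (0,1), u(0) = u(1) = 0, f = pi^2 sin(pi x),
# with exact solution u = sin(pi x); standard 3-point finite-difference stencil
N = 64
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(N + 1)
omega = 1.7                                   # over-relaxation parameter (0 < omega < 2)
for _ in range(2000):
    for i in range(1, N):
        gs = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])  # Gauss-Seidel value
        u[i] = (1.0 - omega) * u[i] + omega * gs         # over-relaxed blend
err = float(np.max(np.abs(u - np.sin(np.pi * x))))
```

    Choosing omega adaptively, as explored in the paper, targets exactly this relaxation parameter: too small and convergence is slow, too large and the iteration can fail for the nonlinear problem.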

  8. Stability indicating high performance thin-layer chromatographic method for simultaneous estimation of pantoprazole sodium and itopride hydrochloride in combined dosage form

    PubMed Central

    Bageshwar, Deepak; Khanvilkar, Vineeta; Kadam, Vilasrao

    2011-01-01

    A specific, precise and stability indicating high-performance thin-layer chromatographic method for simultaneous estimation of pantoprazole sodium and itopride hydrochloride in pharmaceutical formulations was developed and validated. The method employed TLC aluminium plates precoated with silica gel 60F254 as the stationary phase. The solvent system consisted of methanol:water:ammonium acetate; 4.0:1.0:0.5 (v/v/v). This system was found to give compact and dense spots for both itopride hydrochloride (Rf value of 0.55±0.02) and pantoprazole sodium (Rf value of 0.85±0.04). Densitometric analysis of both drugs was carried out in the reflectance–absorbance mode at 289 nm. The linear regression analysis data for the calibration plots showed a good linear relationship with R2=0.9988±0.0012 in the concentration range of 100–400 ng for pantoprazole sodium. Also, the linear regression analysis data for the calibration plots showed a good linear relationship with R2=0.9990±0.0008 in the concentration range of 200–1200 ng for itopride hydrochloride. The method was validated for specificity, precision, robustness and recovery. Statistical analysis proves that the method is repeatable and selective for the estimation of both the said drugs. As the method could effectively separate the drug from its degradation products, it can be employed as a stability indicating method. PMID:29403710

  10. Non-Linear System Identification for Aeroelastic Systems with Application to Experimental Data

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.

    2008-01-01

    Representation and identification of a non-linear aeroelastic pitch-plunge system as a model of the NARMAX class is considered. A non-linear difference equation describing this aircraft model is derived theoretically and shown to be of the NARMAX form. Identification methods for NARMAX models are applied to aeroelastic dynamics and its properties demonstrated via continuous-time simulations of experimental conditions. Simulation results show that (i) the outputs of the NARMAX model match closely those generated using continuous-time methods and (ii) NARMAX identification methods applied to aeroelastic dynamics provide accurate discrete-time parameter estimates. Application of NARMAX identification to experimental pitch-plunge dynamics data gives a high percent fit for cross-validated data.

  11. Application of linearized inverse scattering methods for the inspection in steel plates embedded in concrete structures

    NASA Astrophysics Data System (ADS)

    Tsunoda, Takaya; Suzuki, Keigo; Saitoh, Takahiro

    2018-04-01

    This study develops a method to visualize the state of the steel-concrete interface with ultrasonic testing. Scattered waves are obtained in the UT pitch-catch mode from the surface of the concrete. The discrete wavelet transform is applied in order to extract echoes scattered from the steel-concrete interface. Then Linearized Inverse Scattering Methods (LISM) are used for imaging the interface. The results show that LISM with the Born and Kirchhoff approximations provides clear images of the target.

  12. Comparative Study of SVM Methods Combined with Voxel Selection for Object Category Classification on fMRI Data

    PubMed Central

    Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li

    2011-01-01

    Background: Support vector machine (SVM) has been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM versus linear SVM. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Different from traditional studies, which focused either merely on the evaluation of different types of SVM or on voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification together with voxel selection schemes, in terms of classification accuracy and time consumption. Methodology/Principal Findings: Six different voxel selection methods were employed to decide which voxels of fMRI data would be included in SVM classifiers with linear and RBF kernels in classifying 4-category objects. Then the overall performances of the voxel selection and classification methods were compared. Results showed that: (1) voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM outperformed linear SVM significantly; in a relatively high-dimensional space, linear SVM performed better than its counterpart; (2) considering classification accuracy and time consumption holistically, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) achieved the better accuracy at a shorter time cost. Conclusions/Significance: The present work provides the first empirical result on linear and RBF SVM in the classification of fMRI data combined with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are the two suggested solutions; if computational time matters more, RBF SVM with a relatively small set of voxels, keeping part of the principal components as features, is the better choice. PMID:21359184

  14. Computing the Evans function via solving a linear boundary value ODE

    NASA Astrophysics Data System (ADS)

    Wahl, Colin; Nguyen, Rose; Ventura, Nathaniel; Barker, Blake; Sandstede, Bjorn

    2015-11-01

    Determining the stability of traveling wave solutions to partial differential equations can oftentimes be computationally intensive but of great importance to understanding the effects of perturbations on the physical systems (chemical reactions, hydrodynamics, etc.) they model. For waves in one spatial dimension, one may linearize around the wave and form an Evans function - an analytic Wronskian-like function which has zeros that correspond in multiplicity to the eigenvalues of the linearized system. If eigenvalues with a positive real part do not exist, the traveling wave will be stable. Two methods exist for calculating the Evans function numerically: the exterior-product method and the method of continuous orthogonalization. The first is numerically expensive, and the second reformulates the originally linear system as a nonlinear system. We develop a new algorithm for computing the Evans function through appropriate linear boundary-value problems. This algorithm is cheaper than the previous methods, and we prove that it preserves analyticity of the Evans function. We also provide error estimates and implement it on some classical one- and two-dimensional systems, one being the Swift-Hohenberg equation in a channel, to show the advantages.

  15. Prediction of Unsteady Flows in Turbomachinery Using the Linearized Euler Equations on Deforming Grids

    NASA Technical Reports Server (NTRS)

    Clark, William S.; Hall, Kenneth C.

    1994-01-01

    A linearized Euler solver for calculating unsteady flows in turbomachinery blade rows due to both incident gusts and blade motion is presented. The model accounts for blade loading, blade geometry, shock motion, and wake motion. Assuming that the unsteadiness in the flow is small relative to the nonlinear mean solution, the unsteady Euler equations can be linearized about the mean flow. This yields a set of linear variable-coefficient equations that describe the small-amplitude harmonic motion of the fluid. These linear equations are then discretized on a computational grid and solved using standard numerical techniques. For transonic flows, however, one must use a linear discretization that is a conservative linearization of the non-linear discretized Euler equations to ensure that shock impulse loads are accurately captured. Other important features of this analysis include a continuously deforming grid, which eliminates extrapolation errors and hence increases accuracy, and a new numerically exact, nonreflecting far-field boundary condition treatment based on an eigenanalysis of the discretized equations. Computational results are presented which demonstrate the accuracy and efficiency of the method and the effectiveness of the deforming grid, far-field nonreflecting boundary conditions, and shock-capturing techniques. A comparison of the present unsteady flow predictions to other numerical, semi-analytical, and experimental methods shows excellent agreement. In addition, the linearized Euler method presented requires one to two orders of magnitude less computational time than traditional time-marching techniques, making the present method a viable design tool for aeroelastic analyses.

  16. On High-Order Upwind Methods for Advection

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    2017-01-01

    Schemes III (piecewise linear) and V (piecewise parabolic) of Van Leer are shown to yield identical solutions provided the initial conditions are chosen in an appropriate manner. This result is counterintuitive, since it is generally believed that piecewise-linear and piecewise-parabolic methods cannot produce the same solutions due to their different degrees of approximation. The result also reveals a key connection between the discontinuous and continuous representation approaches.

  17. Investigation of ODE integrators using interactive graphics. [Ordinary Differential Equations

    NASA Technical Reports Server (NTRS)

    Brown, R. L.

    1978-01-01

    Two FORTRAN programs using an interactive graphics terminal to generate accuracy and stability plots for given multistep ordinary differential equation (ODE) integrators are described. The first treats the fixed-stepsize linear case with complex-variable solutions, and generates plots showing the accuracy and error response of a numerical solution to a step driving function, as well as the linear stability region. The second generates an analog of the stability region for classes of non-linear ODEs, as well as accuracy plots. Both systems can compute method coefficients from a simple specification of the method. Example plots are given.

  18. Feasibility of combining linear theory and impact theory methods for the analysis and design of high speed configurations

    NASA Technical Reports Server (NTRS)

    Brooke, D.; Vondrasek, D. V.

    1978-01-01

    The aerodynamic influence coefficients calculated using an existing linear theory program were used to modify the pressures calculated using impact theory. Application of the combined approach to several wing-alone configurations shows that it gives improved predictions of the local pressures and loadings over either linear theory or impact theory alone. The approach not only removes most of the shortcomings of the individual methods, as applied in the Mach 4 to 8 range, but also provides the basis for an inverse design procedure applicable to high-speed configurations.

  19. An exact noniterative linear method for locating sources based on measuring receiver arrival times.

    PubMed

    Militello, C; Buenafuente, S R

    2007-06-01

    In this paper an exact, linear solution to the source localization problem based on the times of arrival at the receivers is presented. The method is unique in that the source's position can be obtained by solving a system of linear equations: three for a plane and four for a volume. The price of this simplification is one receiver beyond the mathematical minimum (3+1 in two dimensions and 4+1 in three dimensions). The equations are easily worked out for any receiver configuration, and their geometrical interpretation is straightforward. Unlike other methods, the system of reference used to describe the receivers' positions is completely arbitrary. The relationship between this method and previously published ones is discussed, showing how the present, more general method overcomes nonlinearity and unknown-dependency issues.
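The flavor of such a noniterative linear solution can be illustrated with the standard trick of differencing the squared time-of-arrival equations, which cancels the quadratic terms. This is a hedged sketch in the spirit of the abstract (the 3+1 receiver case in two dimensions), not necessarily the authors' exact formulation; all names and numbers are synthetic.

```python
import numpy as np

def locate_source(receivers, times, c=1.0):
    """Locate a 2-D source from arrival times at 4 receivers by solving a
    linear system. Differencing the squared-distance equations
    |s - r_i|^2 = c^2 (t_i - t0)^2 against receiver 0 cancels the quadratic
    terms |s|^2 and t0^2, leaving equations linear in (sx, sy, t0)."""
    r0, t0 = receivers[0], times[0]
    A, b = [], []
    for r, t in zip(receivers[1:], times[1:]):
        A.append([2.0 * (r[0] - r0[0]),
                  2.0 * (r[1] - r0[1]),
                  -2.0 * c**2 * (t - t0)])
        b.append(r[0]**2 + r[1]**2 - r0[0]**2 - r0[1]**2
                 - c**2 * (t**2 - t0**2))
    sx, sy, t_emit = np.linalg.solve(np.array(A), np.array(b))
    return np.array([sx, sy]), t_emit

# Synthetic check: known source and emission time (hypothetical values).
source = np.array([1.0, 2.0])
t_emit_true = 0.5
receivers = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (5.0, 4.0)]
times = [t_emit_true + np.hypot(*(source - np.array(r))) for r in receivers]
est, t_est = locate_source(receivers, times)
```

With exact arrival times the linear solve recovers the source position and emission time directly, with no iteration.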

  20. Two new modified Gauss-Seidel methods for linear system with M-matrices

    NASA Astrophysics Data System (ADS)

    Zheng, Bing; Miao, Shu-Xin

    2009-12-01

    In 2002, H. Kotakemori et al. proposed the modified Gauss-Seidel (MGS) method for solving linear systems with the preconditioner [H. Kotakemori, K. Harada, M. Morimoto, H. Niki, A comparison theorem for the iterative method with the preconditioner () J. Comput. Appl. Math. 145 (2002) 373-378]. Since this preconditioner is constructed from only the largest element on each row of the upper triangular part of the coefficient matrix, the preconditioning effect is not observed on the nth row. In the present paper, to deal with this drawback, we propose two new preconditioners. The convergence and comparison theorems of the modified Gauss-Seidel methods with these two preconditioners for solving the linear system are established, and the convergence rates of the newly proposed preconditioned methods are compared. In addition, numerical experiments are used to show the effectiveness of the new MGS methods.
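For reference, the underlying (unpreconditioned) Gauss-Seidel sweep can be sketched as follows. The paper's MGS variants apply the same iteration to a preconditioned system, whose exact form the abstract does not give, so only the basic method is shown; the matrix and right-hand side are hypothetical.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Basic Gauss-Seidel iteration for A x = b. The MGS methods in the
    paper apply the same sweep to a preconditioned system P A x = P b."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated components x[:i] and old components beyond i.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# A small M-matrix example (diagonally dominant, nonpositive off-diagonals).
A = np.array([[4.0, -1.0, -1.0],
              [-1.0, 4.0, -1.0],
              [-1.0, -1.0, 4.0]])
b = np.array([2.0, 2.0, 2.0])
x = gauss_seidel(A, b)
```

For M-matrices such as this one, the sweep converges for any starting vector; the preconditioners discussed above aim to speed that convergence up.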

  1. Coupled dimensionality reduction and classification for supervised and semi-supervised multilabel learning

    PubMed Central

    Gönen, Mehmet

    2014-01-01

    Coupled training of dimensionality reduction and classification has previously been proposed to improve the prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning, and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find the intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of Hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and in semi-supervised learning tasks. PMID:24532862

  2. Coupled dimensionality reduction and classification for supervised and semi-supervised multilabel learning.

    PubMed

    Gönen, Mehmet

    2014-03-01

    Coupled training of dimensionality reduction and classification has previously been proposed to improve the prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning, and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find the intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of Hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and in semi-supervised learning tasks.

  3. Graph-cut based discrete-valued image reconstruction.

    PubMed

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañòn, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete-amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods is the variable mixing caused by the presence of the linear sensing operator. We then propose a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete-valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and its stability to changes in the regularization parameter. Then we focus on nonlocal, tomographic examples in which we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms the state of the art in challenging scenarios involving discrete-valued unknowns.

  4. New adaptive method to optimize the secondary reflector of linear Fresnel collectors

    DOE PAGES

    Zhu, Guangdong

    2017-01-16

    Performance of linear Fresnel collectors may largely depend on the secondary-reflector profile design when small-aperture absorbers are used. Optimization of the secondary-reflector profile is an extremely challenging task because there is no established theory to ensure superior performance of derived profiles. In this work, an innovative optimization method is proposed to optimize the secondary-reflector profile of a generic linear Fresnel configuration. The method correctly and accurately captures the impacts of both geometric and optical aspects of a linear Fresnel collector on secondary-reflector design. The proposed method is an adaptive approach that does not assume a secondary shape of any particular form, but rather starts at a single edge point and adaptively constructs the next surface point to maximize the power reflected to the absorber(s). As a test case, the proposed optimization method is applied to an industrial linear Fresnel configuration, and the results show that the derived optimal secondary reflector is able to redirect more than 90% of the power to the absorber over a wide range of incidence angles. The proposed method can be naturally extended to other types of solar collectors as well, and it will be a valuable tool for solar-collector designs with a secondary reflector.
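The adaptive construction hinges on aiming each reflector point so that incoming rays are specularly redirected toward the absorber. Below is a minimal sketch of that geometric core only, with hypothetical vectors; the full profile-construction and flux-weighting logic of the method is not reproduced here.

```python
import numpy as np

def reflect(d, n):
    """Specular reflection of ray direction d off a surface with normal n:
    d' = d - 2 (d . n) n, with n normalized internally."""
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

def normal_for(d, t):
    """Unit surface normal that reflects direction d into direction t
    (valid for |d| = |t|): the normalized bisector n = (t - d) / |t - d|.
    This is how a reflector segment can be aimed at an absorber."""
    n = t - d
    return n / np.linalg.norm(n)

# A downward ray hitting a horizontal mirror bounces upward.
d_in = np.array([1.0, -1.0])
d_out = reflect(d_in, np.array([0.0, 1.0]))

# Aim a mirror so an incoming ray is sent toward the absorber direction.
d_aim = np.array([0.0, -1.0])   # incoming ray (hypothetical)
t_aim = np.array([1.0, 0.0])    # direction toward the absorber (hypothetical)
n_aim = normal_for(d_aim, t_aim)
```

Repeating the `normal_for` aiming step point by point along the surface is the spirit of the adaptive construction described above.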

  5. New adaptive method to optimize the secondary reflector of linear Fresnel collectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Guangdong

    Performance of linear Fresnel collectors may largely depend on the secondary-reflector profile design when small-aperture absorbers are used. Optimization of the secondary-reflector profile is an extremely challenging task because there is no established theory to ensure superior performance of derived profiles. In this work, an innovative optimization method is proposed to optimize the secondary-reflector profile of a generic linear Fresnel configuration. The method correctly and accurately captures the impacts of both geometric and optical aspects of a linear Fresnel collector on secondary-reflector design. The proposed method is an adaptive approach that does not assume a secondary shape of any particular form, but rather starts at a single edge point and adaptively constructs the next surface point to maximize the power reflected to the absorber(s). As a test case, the proposed optimization method is applied to an industrial linear Fresnel configuration, and the results show that the derived optimal secondary reflector is able to redirect more than 90% of the power to the absorber over a wide range of incidence angles. The proposed method can be naturally extended to other types of solar collectors as well, and it will be a valuable tool for solar-collector designs with a secondary reflector.

  6. Global optimization algorithm for heat exchanger networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quesada, I.; Grossmann, I.E.

    This paper deals with the global optimization of heat exchanger networks with fixed topology. It is shown that if linear area cost functions are assumed, as well as arithmetic-mean driving-force temperature differences in networks with isothermal mixing, the corresponding nonlinear programming (NLP) optimization problem involves linear constraints and a sum of linear fractional functions in the objective, which are nonconvex. A rigorous algorithm is proposed that is based on a convex NLP underestimator involving linear and nonlinear estimators for fractional and bilinear terms, which provide a tight lower bound to the global optimum. This NLP problem is used within a spatial branch and bound method for which branching rules are given. Basic properties of the proposed method are presented, and its application is illustrated with several example problems. The results show that the proposed method requires only a few nodes in the branch and bound search.

  7. Students’ difficulties in solving linear equation problems

    NASA Astrophysics Data System (ADS)

    Wati, S.; Fitriana, L.; Mardiyana

    2018-03-01

    A linear equation is an algebra topic taught from junior high school through university, and an important prerequisite for more advanced mathematics topics; it is therefore essential that students master it. However, the results of the 2016 national examination in Indonesia showed that students' achievement in solving linear equation problems was low. This fact became the background for investigating students' difficulties in solving linear equation problems. This study used a qualitative descriptive method. An individual written test on linear equation tasks was administered, followed by interviews. Twenty-one sample students of grade VIII of SMPIT Insan Kamil Karanganyar took the written test, and 6 of them were interviewed afterward. The results showed that students with high mathematics achievement do not have difficulties, students with medium mathematics achievement have factual difficulties, and students with low mathematics achievement have factual, conceptual, operational, and principle difficulties. Based on these results, there is a need for meaningful teaching strategies to help students overcome difficulties in solving linear equation problems.

  8. Multimodal Deep Autoencoder for Human Pose Recovery.

    PubMed

    Hong, Chaoqun; Yu, Jun; Wan, Jian; Tao, Dacheng; Wang, Meng

    2015-12-01

    Video-based human pose recovery is usually conducted by retrieving relevant poses using image features. In the retrieving process, the mapping between 2D images and 3D poses is assumed to be linear in most of the traditional methods. However, their relationships are inherently non-linear, which limits recovery performance of these methods. In this paper, we propose a novel pose recovery method using non-linear mapping with multi-layered deep neural network. It is based on feature extraction with multimodal fusion and back-propagation deep learning. In multimodal fusion, we construct hypergraph Laplacian with low-rank representation. In this way, we obtain a unified feature description by standard eigen-decomposition of the hypergraph Laplacian matrix. In back-propagation deep learning, we learn a non-linear mapping from 2D images to 3D poses with parameter fine-tuning. The experimental results on three data sets show that the recovery error has been reduced by 20%-25%, which demonstrates the effectiveness of the proposed method.

  9. Early Dose Response to Yttrium-90 Microsphere Treatment of Metastatic Liver Cancer by a Patient-Specific Method Using Single Photon Emission Computed Tomography and Positron Emission Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Janice M.; Department of Radiation Oncology, Wayne State University, Detroit, MI; Wong, C. Oliver

    2009-05-01

    Purpose: To evaluate a patient-specific single photon emission computed tomography (SPECT)-based method of dose calculation for treatment planning of yttrium-90 (90Y) microsphere selective internal radiotherapy (SIRT). Methods and Materials: Fourteen consecutive 90Y SIRTs for colorectal liver metastasis were retrospectively analyzed. Absorbed dose to tumor and normal liver tissue was calculated by partition methods with two different tumor/normal-liver vascularity ratios: an average 3:1 ratio and a patient-specific ratio derived from pretreatment technetium-99m macroaggregated albumin SPECT. Tumor response was quantitatively evaluated from fluorine-18 fluoro-2-deoxy-D-glucose positron emission tomography scans. Results: Positron emission tomography showed a significant decrease in total tumor standardized uptake value (average, 52%). There was a significant difference in the tumor absorbed dose between the average and specific methods (p = 0.009). Response-versus-dose curves fit by linear and linear-quadratic modeling showed similar results. Linear-fit r values increased for all tumor response parameters with the specific method (+0.20 for mean standardized uptake value). Conclusion: Tumor dose calculated with the patient-specific method was more predictive of response in liver-directed 90Y SIRT.

  10. Balancing Chemical Reactions With Matrix Methods and Computer Assistance. Applications of Linear Algebra to Chemistry. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Unit 339.

    ERIC Educational Resources Information Center

    Grimaldi, Ralph P.

    This material was developed to provide an application of matrix mathematics in chemistry, and to show the concepts of linear independence and dependence in vector spaces of dimensions greater than three in a concrete setting. The techniques presented are not intended to be considered as replacements for such chemical methods as oxidation-reduction…

  11. Sampling Based Influence Maximization on Linear Threshold Model

    NASA Astrophysics Data System (ADS)

    Jia, Su; Chen, Ling

    2018-04-01

    A sampling-based influence maximization method on the linear threshold (LT) model is presented. The method samples the routes in the possible worlds of the social network, and uses the Chernoff bound to estimate the number of samples needed so that the error can be constrained within a given bound. The activation probabilities of the routes in the possible worlds are then calculated and used to compute the influence spread of each node in the network. Our experimental results show that our method can effectively select an appropriate seed node set that spreads larger influence than other similar methods.
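A hedged sketch of Monte Carlo spread estimation under the LT model follows. It samples node thresholds per run rather than reproducing the authors' route sampling with Chernoff bounds, and the graph, weights, and function names are hypothetical.

```python
import random

def lt_spread(graph, weights, seeds, runs=200, rng=None):
    """Monte Carlo estimate of expected influence spread under the linear
    threshold model: each node draws a threshold U[0, 1) and activates once
    the summed weights of its active in-neighbours reach that threshold."""
    rng = rng or random.Random(0)
    nodes = list(graph)
    total = 0
    for _ in range(runs):
        theta = {v: rng.random() for v in nodes}
        active = set(seeds)
        changed = True
        while changed:
            changed = False
            for v in nodes:
                if v in active:
                    continue
                influence = sum(weights[(u, v)] for u in graph[v] if u in active)
                if influence >= theta[v]:
                    active.add(v)
                    changed = True
        total += len(active)
    return total / runs

# Hypothetical 4-node chain with edge weight 1.0 on each link,
# so activation always propagates down the chain from the seed.
graph = {0: [], 1: [0], 2: [1], 3: [2]}       # in-neighbour lists
weights = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0}
spread = lt_spread(graph, weights, seeds={0})
```

An influence-maximization loop would call such an estimator for candidate seed sets and greedily keep the nodes with the largest marginal spread.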

  12. Estimation of parameters in rational reaction rates of molecular biological systems via weighted least squares

    NASA Astrophysics Data System (ADS)

    Wu, Fang-Xiang; Mu, Lei; Shi, Zhong-Ke

    2010-01-01

    The models of gene regulatory networks are often derived from the statistical thermodynamics principle or the Michaelis-Menten kinetics equation. As a result, the models contain rational reaction rates which are nonlinear in both parameters and states. Estimating parameters that enter a model nonlinearly is challenging, despite the many traditional nonlinear parameter estimation methods such as the Gauss-Newton iteration method and its variants. In this article, we develop a two-step method to estimate the parameters in rational reaction rates of gene regulatory networks via weighted linear least squares. This method takes the special structure of rational reaction rates into consideration: in a rational reaction rate, the numerator and the denominator are each linear in the parameters. By designing a special weight matrix for the linear least squares, parameters in the numerator and the denominator can be estimated by solving two linear least squares problems. The main advantage of the developed method is that it produces analytical solutions to the estimation of parameters in rational reaction rates, which is originally a nonlinear parameter estimation problem. The developed method is applied to a couple of gene regulatory networks. The simulation results show superior performance over the Gauss-Newton method.
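The key idea — that a rational rate is linear in the parameters of its numerator and denominator once cleared of fractions — can be sketched on a Michaelis-Menten rate. The sketch below uses ordinary (unweighted) least squares on noise-free synthetic data; the paper's special weight matrix is omitted, and the parameter values are hypothetical.

```python
import numpy as np

def fit_michaelis_menten(s, v):
    """Estimate (Vmax, Km) of v = Vmax * s / (Km + s) by linearising:
    v * (Km + s) = Vmax * s  =>  Km * v - Vmax * s = -v * s,
    which is linear in the parameters and solvable by least squares."""
    A = np.column_stack([v, -s])
    b = -v * s
    km, vmax = np.linalg.lstsq(A, b, rcond=None)[0]
    return vmax, km

# Noise-free synthetic data (hypothetical parameter values).
vmax_true, km_true = 2.0, 0.5
s = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0])
v = vmax_true * s / (km_true + s)
vmax, km = fit_michaelis_menten(s, v)
```

With noisy data this plain linearization distorts the error distribution, which is exactly what the weighted scheme described above is designed to correct.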

  13. Computation of nonlinear ultrasound fields using a linearized contrast source method.

    PubMed

    Verweij, Martin D; Demi, Libertario; van Dongen, Koen W A

    2013-08-01

    Nonlinear ultrasound is important in medical diagnostics because imaging of the higher harmonics improves resolution and reduces scattering artifacts. Second harmonic imaging is currently standard, and higher harmonic imaging is under investigation. The efficient development of novel imaging modalities and equipment requires accurate simulations of nonlinear wave fields in large volumes of realistic (lossy, inhomogeneous) media. The Iterative Nonlinear Contrast Source (INCS) method has been developed to deal with spatiotemporal domains measuring hundreds of wavelengths and periods. This full wave method considers the nonlinear term of the Westervelt equation as a nonlinear contrast source, and solves the equivalent integral equation via the Neumann iterative solution. Recently, the method has been extended with a contrast source that accounts for spatially varying attenuation. The current paper addresses the problem that the Neumann iterative solution converges badly for strong contrast sources. The remedy is linearization of the nonlinear contrast source, combined with application of more advanced methods for solving the resulting integral equation. Numerical results show that linearization in combination with a Bi-Conjugate Gradient Stabilized method allows the INCS method to deal with fairly strong, inhomogeneous attenuation, while the error due to the linearization can be eliminated by restarting the iterative scheme.

  14. Application of a microplate-based ORAC-pyrogallol red assay for the estimation of antioxidant capacity: First Action 2012.03.

    PubMed

    Ortiz, Rocío; Antilén, Mónica; Speisky, Hernán; Aliaga, Margarita E; López-Alarcón, Camilo; Baugh, Steve

    2012-01-01

    A method was developed for microplate-based oxygen radicals absorbance capacity (ORAC) using pyrogallol red (PGR) as probe (ORAC-PGR). The method was evaluated for linearity, precision, and accuracy. In addition, the antioxidant capacity of commercial beverages, such as wines, fruit juices, and iced teas, was measured. Linearity of the area under the curve (AUC) versus Trolox concentration plots was [AUC = (845 +/- 110) + (23 +/- 2) [Trolox, microM]; R = 0.9961, n = 19]. Analyses showed better precision and accuracy at the highest Trolox concentration (40 microM) with RSD and recovery (REC) values of 1.7 and 101.0%, respectively. The method also showed good linearity for red wine [AUC = (787 +/- 77) + (690 +/- 60) [red wine, microL/mL]; R = 0.9926, n = 17], precision and accuracy with RSD values from 1.4 to 8.3%, and REC values that ranged from 89.7 to 103.8%. Red wines showed higher ORAC-PGR values than white wines, while the ORAC-PGR index of fruit juices and iced teas presented a wide range of results, from 0.6 to 21.6 mM of Trolox equivalents. Product-to-product variability was also observed for juices of the same fruit, showing the differences between brands on the ORAC-PGR index.
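The calibration logic of an ORAC assay — area under the probe-decay curve regressed linearly on Trolox concentration — can be sketched as follows. All numeric values are synthetic, chosen only to mirror the AUC = a + b·[Trolox] regression form reported above.

```python
import numpy as np

def auc(t, y):
    """Area under a fluorescence-decay curve by the trapezoidal rule."""
    t, y = np.asarray(t, dtype=float), np.asarray(y, dtype=float)
    return float(np.sum((t[1:] - t[:-1]) * (y[1:] + y[:-1]) / 2.0))

# Hypothetical calibration: AUC grows linearly with Trolox concentration.
conc = np.array([5.0, 10.0, 20.0, 40.0])          # microM Trolox
aucs = np.array([960.0, 1075.0, 1305.0, 1765.0])  # synthetic AUC values
slope, intercept = np.polyfit(conc, aucs, 1)

# Quantify an unknown sample from its AUC via the calibration line.
sample_auc = 1420.0
sample_conc = (sample_auc - intercept) / slope    # Trolox equivalents, microM
```

A real assay would first compute each well's net AUC with `auc` (sample minus blank) before reading it off the calibration line.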

  15. Scilab software as an alternative low-cost computing in solving the linear equations problem

    NASA Astrophysics Data System (ADS)

    Agus, Fahrul; Haviluddin

    2017-02-01

    Numerical computation packages are widely used in both teaching and research. These packages may be either proprietary (licensed) or open-source (non-proprietary) software. One reason to use such a package is the complexity of the mathematical functions involved (e.g., linear problems); moreover, the number of variables in linear and non-linear functions has increased. The aim of this paper was to reflect on key aspects related to method, didactics, and creative praxis in the teaching of linear equations in higher education. If implemented, this could contribute to better learning in mathematics (i.e., solving simultaneous linear equations), which is essential for future engineers. The focus of this study was to introduce the numerical computation package Scilab as an alternative low-cost computing environment. In this paper, Scilab was used for activities related to the mathematical models. In the experiment, four numerical methods were implemented: Gaussian elimination, Gauss-Jordan, inverse matrix, and lower-upper (LU) decomposition. The results of this study showed that routines for these numerical methods can be created and explored using Scilab procedures, and that these routines can be exploited as teaching material for a course.
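One of the four methods mentioned, Gaussian elimination, can be sketched in Python as an analogue of such a Scilab routine. This is an illustrative implementation with partial pivoting, not the authors' code, and the example system is hypothetical.

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting,
    followed by back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))       # pivot row
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
x = gaussian_elimination(A, b)
```

The Gauss-Jordan, inverse-matrix, and LU variants differ only in how the triangular factors are formed and applied; all four produce the same solution on a well-conditioned system.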

  16. Preconditioned alternating direction method of multipliers for inverse problems with constraints

    NASA Astrophysics Data System (ADS)

    Jiao, Yuling; Jin, Qinian; Lu, Xiliang; Wang, Weijie

    2017-02-01

    We propose a preconditioned alternating direction method of multipliers (ADMM) to solve linear inverse problems in Hilbert spaces with constraints, where the feature of the sought solution under a linear transformation is captured by a possibly non-smooth convex function. During each iteration step, our method avoids solving large linear systems by choosing a suitable preconditioning operator. In case the data is given exactly, we prove the convergence of our preconditioned ADMM without assuming the existence of a Lagrange multiplier. In case the data is corrupted by noise, we propose a stopping rule using information on noise level and show that our preconditioned ADMM is a regularization method; we also propose a heuristic rule when the information on noise level is unavailable or unreliable and give its detailed analysis. Numerical examples are presented to test the performance of the proposed method.
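A common concrete instance of this framework is ADMM for an l1-regularized linear inverse problem. The sketch below is a generic textbook ADMM loop, not the authors' preconditioned Hilbert-space method; in their setting the exact x-update solve would be replaced by a preconditioned step, and all names and values here are hypothetical.

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of k * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    """ADMM for min_x 0.5 ||A x - b||^2 + lam ||x||_1. The x-update solves a
    small linear system exactly; for large problems this is the step a
    preconditioning operator would approximate."""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    Q = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(Q, Atb + rho * (z - u))   # x-update
        z = soft_threshold(x + u, lam / rho)          # z-update (prox)
        u = u + x - z                                 # dual update
    return z

# With A = I the minimiser is soft_threshold(b, lam), giving an easy check.
A = np.eye(3)
b = np.array([1.0, -0.05, 0.5])
x = admm_lasso(A, b, lam=0.1)
```

The split into an x-update, a proximal z-update, and a dual update is what makes the alternating direction structure amenable to preconditioning and to the stopping rules discussed above.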

  17. Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.

    We present a code implementing the linearized self-consistent quasiparticle GW method (QSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary-time representation in the same way as in the space-time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N³ scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method.

  18. Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals

    DOE PAGES

    Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.

    2017-06-23

    We present a code implementing the linearized self-consistent quasiparticle GW method (QSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary-time representation in the same way as in the space-time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N³ scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method.

  19. What is the best method for assessing lower limb force-velocity relationship?

    PubMed

    Giroux, C; Rabita, G; Chollet, D; Guilhem, G

    2015-02-01

    This study determined the concurrent validity and reliability of force, velocity and power measurements provided by accelerometry, linear position transducer and Samozino's methods during loaded squat jumps. 17 subjects performed squat jumps on 2 separate occasions in 7 loading conditions (0-60% of the maximal concentric load). Force, velocity and power patterns were averaged over the push-off phase using accelerometry, linear position transducer and a method based on key position measurements during the squat jump, and compared to force plate measurements. Concurrent validity analyses indicated very good agreement with the reference method (CV=6.4-14.5%). Comparison of the force, velocity and power patterns confirmed the agreement, with slight differences for high-velocity movements. The validity of measurements was equivalent for all tested methods (r=0.87-0.98). Bland-Altman plots showed a lower agreement for velocity and power compared to force. Mean force, velocity and power were reliable for all methods (ICC=0.84-0.99), especially for Samozino's method (CV=2.7-8.6%). Our findings show that these methods are valid and reliable in different loading conditions and permit between-session comparisons and characterization of training-induced effects. While the linear position transducer and accelerometer allow for examining the whole time-course of kinetic patterns, Samozino's method benefits from better reliability and ease of processing. © Georg Thieme Verlag KG Stuttgart · New York.

  20. Least median of squares and iteratively re-weighted least squares as robust linear regression methods for fluorimetric determination of α-lipoic acid in capsules in ideal and non-ideal cases of linearity.

    PubMed

    Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F

    2018-06-01

    This study outlines two robust regression approaches, namely least median of squares (LMS) and iteratively re-weighted least squares (IRLS), to investigate their application in instrumental analysis of nutraceuticals (that is, fluorescence quenching of merbromin reagent upon lipoic acid addition). These robust regression methods were used to calculate calibration data from the fluorescence quenching reaction (∆F and F-ratio) under ideal or non-ideal linearity conditions. For each condition, data were treated using three regression fittings: Ordinary Least Squares (OLS), LMS and IRLS. Assessment of linearity, limits of detection (LOD) and quantitation (LOQ), accuracy and precision were carefully studied for each condition. LMS and IRLS regression line fittings showed significant improvement in correlation coefficients and all regression parameters for both methods and both conditions. In the ideal linearity condition, the intercept and slope changed insignificantly, but a dramatic change was observed for the non-ideal condition and linearity intercept. Under both linearity conditions, LOD and LOQ values after the robust regression line fitting of data were lower than those obtained before data treatment. The results obtained after statistical treatment indicated that the linearity ranges for drug determination could be expanded to lower limits of quantitation by enhancing the regression equation parameters after data treatment. Analysis results for lipoic acid in capsules, using both fluorimetric methods, treated by parametric OLS and after treatment by robust LMS and IRLS were compared for both linearity conditions. Copyright © 2018 John Wiley & Sons, Ltd.
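
    The IRLS idea discussed above can be sketched in a few lines. The following is an illustrative sketch only, not the authors' procedure: synthetic calibration data with one gross outlier, Huber weights with the commonly used tuning constant c = 1.345, and a MAD-based robust scale.

```python
import numpy as np

def ols_fit(x, y):
    """Ordinary least squares fit y ~ a + b*x; returns (intercept, slope)."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def irls_fit(x, y, c=1.345, n_iter=50):
    """Iteratively re-weighted least squares with Huber weights.
    c = 1.345 is the usual Huber constant (~95% efficiency under
    Gaussian errors); scale is estimated robustly via the MAD."""
    X = np.column_stack([np.ones_like(x), x])
    coef = ols_fit(x, y)                       # start from the OLS fit
    for _ in range(n_iter):
        r = y - X @ coef                       # residuals
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
        u = r / (c * s)
        w = 1.0 / np.maximum(np.abs(u), 1.0)   # Huber weight: min(1, 1/|u|)
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
    return coef

# Calibration-like data with a single gross outlier at the top of the range
x = np.arange(10, dtype=float)
y = 1.0 + 2.0 * x
y[-1] += 30.0                                  # the outlier

print(ols_fit(x, y)[1], irls_fit(x, y)[1])     # OLS slope is pulled up; IRLS stays near 2
```

Down-weighting the outlier leaves the fit dominated by the nine clean points, which is why IRLS-type fits can rescue the low end of a calibration range.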

  1. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    NASA Astrophysics Data System (ADS)

    Rahmalia, Dinita

    2017-08-01

    Linear Transportation Problem (LTP) is a case of constrained optimization in which we want to minimize cost subject to the balance between total supply and total demand. Exact methods such as the northwest corner, Vogel, Russell, and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), for solving the linear transportation problem at any size of decision variable. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the optimal solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the optimal solution obtained by PSO.
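
    The PSO-with-GA-mutation loop described above can be sketched as follows. All of the instance data, the penalty weight, the PSO coefficients and the mutation rate are illustrative assumptions for the sketch, not the paper's settings; constraints are handled with a quadratic penalty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small balanced transportation instance (all data hypothetical)
cost   = np.array([[4., 8., 8.], [16., 24., 16.], [8., 16., 24.]])
supply = np.array([10., 15., 5.])
demand = np.array([12., 8., 10.])

def fitness(x):
    """Transportation cost plus a quadratic penalty for violated
    supply/demand balance and for negative shipments."""
    X = x.reshape(3, 3)
    pen = (np.sum((X.sum(1) - supply) ** 2)
           + np.sum((X.sum(0) - demand) ** 2)
           + np.sum(np.minimum(X, 0.0) ** 2))
    return np.sum(cost * np.maximum(X, 0.0)) + 100.0 * pen

n, dim = 40, 9
pos = rng.uniform(0.0, 15.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pval = pos.copy(), np.array([fitness(p) for p in pos])
g = pbest[np.argmin(pval)].copy()
initial_best = pval.min()

for _ in range(300):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    # GA-style mutation: randomly perturb a few coordinates of a few particles
    mask = rng.random((n, dim)) < 0.02
    pos = np.where(mask, pos + rng.normal(0.0, 2.0, (n, dim)), pos)
    val = np.array([fitness(p) for p in pos])
    better = val < pval
    pbest[better], pval[better] = pos[better], val[better]
    g = pbest[np.argmin(pval)].copy()

print(initial_best, pval.min())   # best penalized cost before vs after the search
```

The mutation step is the PSOGA ingredient: it reinjects diversity so the swarm is less likely to stall on a local minimum of the penalized objective.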

  2. Bounded Linear Stability Margin Analysis of Nonlinear Hybrid Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Boskovic, Jovan D.

    2008-01-01

    This paper presents a bounded linear stability analysis for a hybrid adaptive control that blends both direct and indirect adaptive control. Stability and convergence of nonlinear adaptive control are analyzed using an approximate linear equivalent system. A stability margin analysis shows that a large adaptive gain can lead to a reduced phase margin. This method can enable metrics-driven adaptive control whereby the adaptive gain is adjusted to meet stability margin requirements.

  3. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    PubMed

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation problem which is equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and results for several example problems and a small randomized experiment show that the proposed algorithm is feasible and efficient.

  4. A new preconditioner update strategy for the solution of sequences of linear systems in structural mechanics: application to saddle point problems in elasticity

    NASA Astrophysics Data System (ADS)

    Mercier, Sylvain; Gratton, Serge; Tardieu, Nicolas; Vasseur, Xavier

    2017-12-01

    Many applications in structural mechanics require the numerical solution of sequences of linear systems typically issued from a finite element discretization of the governing equations on fine meshes. The method of Lagrange multipliers is often used to take into account mechanical constraints. The resulting matrices then exhibit a saddle point structure and the iterative solution of such preconditioned linear systems is considered as challenging. A popular strategy is then to combine preconditioning and deflation to yield an efficient method. We propose an alternative that is applicable to the general case and not only to matrices with a saddle point structure. In this approach, we consider updating an existing algebraic or application-based preconditioner, using specific available information such as the knowledge of an approximate invariant subspace or of matrix-vector products. The resulting preconditioner has the form of a limited memory quasi-Newton matrix and requires a small number of linearly independent vectors. Numerical experiments performed on three large-scale applications in elasticity highlight the relevance of the new approach. We show that the proposed method outperforms the deflation method when considering sequences of linear systems with varying matrices.

  5. A shape-based quality evaluation and reconstruction method for electrical impedance tomography.

    PubMed

    Antink, Christoph Hoog; Pikkemaat, Robert; Malmivuo, Jaakko; Leonhardt, Steffen

    2015-06-01

    Linear methods of reconstruction play an important role in medical electrical impedance tomography (EIT) and there is a wide variety of algorithms based on several assumptions. With the Graz consensus reconstruction algorithm for EIT (GREIT), a novel linear reconstruction algorithm as well as a standardized framework for evaluating and comparing methods of reconstruction were introduced that found widespread acceptance in the community. In this paper, we propose a two-sided extension of this concept by first introducing a novel method of evaluation. Instead of being based on point-shaped resistivity distributions, we use 2759 pairs of real lung shapes for evaluation that were automatically segmented from human CT data. Necessarily, the figures of merit defined in GREIT were adjusted. Second, a linear method of reconstruction that uses orthonormal eigenimages as training data and a tunable desired point spread function are proposed. Using our novel method of evaluation, this approach is compared to the classical point-shaped approach. Results show that most figures of merit improve with the use of eigenimages as training data. Moreover, the possibility of tuning the reconstruction by modifying the desired point spread function is shown. Finally, the reconstruction of real EIT data shows that higher contrasts and fewer artifacts can be achieved in ventilation- and perfusion-related images.
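
    As background to the linear reconstructions discussed above: a linear EIT-style reconstructor is just a matrix applied to the measurement vector. The following is a generic one-step Tikhonov sketch, not GREIT or the proposed eigenimage-trained method, and the sensitivity matrix here is simulated random data rather than a physical model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linearized forward model: measurements y = J @ x, with x the
# conductivity-change image (16 "pixels") and J a simulated sensitivity
# matrix (32 boundary measurements).
J = rng.normal(size=(32, 16))
x_true = np.zeros(16)
x_true[5] = 1.0                       # a single small inclusion

def linear_reconstructor(J, lam):
    """One-step Tikhonov-regularized reconstruction matrix
    R = (J^T J + lam*I)^{-1} J^T, applied as x_hat = R @ y."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T)

R = linear_reconstructor(J, lam=1e-3)
y = J @ x_true                        # noiseless data for the sanity check
x_hat = R @ y
print(np.argmax(np.abs(x_hat)))       # the inclusion pixel should dominate
```

Training-based methods such as GREIT replace the generic penalty with a matrix tuned so that R applied to simulated targets reproduces desired images; the evaluation framework in the paper scores exactly such R matrices.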

  6. A Bayes linear Bayes method for estimation of correlated event rates.

    PubMed

    Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim

    2013-12-01

    Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
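
    The empirical method-of-moments route to a conjugate gamma prior for Poisson rates, mentioned above as an alternative to subjective elicitation, can be sketched as follows. This is a minimal single-rate sketch with hypothetical data; the paper's Bayes linear Bayes machinery, correlation structure and homogenization factors are not reproduced.

```python
import numpy as np

def gamma_prior_mom(counts):
    """Method-of-moments fit of a Gamma(shape a, rate b) prior to historical
    per-interval event counts, assuming counts ~ Poisson(lam) with
    lam ~ Gamma(a, b).  Marginal mean m = a/b, variance v = a/b + a/b**2."""
    m, v = np.mean(counts), np.var(counts, ddof=1)
    if v <= m:                 # no detectable over-dispersion: fall back to a
        v = m * 1.01           # nearly-Poisson (large shape) prior
    b = m / (v - m)
    a = m * b
    return a, b

def posterior_rate(a, b, x, n):
    """Conjugate update: after observing a total count x over n intervals,
    lam | data ~ Gamma(a + x, b + n); return the posterior mean."""
    return (a + x) / (b + n)

hist = np.array([3, 5, 2, 8, 4, 1, 6, 9, 2, 5])   # historical counts (hypothetical)
a, b = gamma_prior_mom(hist)
x, n = 12, 2                                       # new data: 12 events in 2 intervals
print(a / b, x / n, posterior_rate(a, b, x, n))    # prior mean, MLE, posterior mean
```

The posterior mean sits between the prior mean and the raw rate estimate, which is the shrinkage behavior that makes such empirical priors useful when per-user data are sparse.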

  7. Reconstructing baryon oscillations: A Lagrangian theory perspective

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Cohn, J. D.

    2009-03-01

    Recently Eisenstein and collaborators introduced a method to “reconstruct” the linear power spectrum from a nonlinearly evolved galaxy distribution in order to improve precision in measurements of baryon acoustic oscillations. We reformulate this method within the Lagrangian picture of structure formation, to better understand what such a method does, and what the resulting power spectra are. We show that reconstruction does not reproduce the linear density field, at second order. We however show that it does reduce the damping of the oscillations due to nonlinear structure formation, explaining the improvements seen in simulations. Our results suggest that the reconstructed power spectrum is potentially better modeled as the sum of three different power spectra, each dominating over different wavelength ranges and with different nonlinear damping terms. Finally, we also show that reconstruction reduces the mode-coupling term in the power spectrum, explaining why miscalibrations of the acoustic scale are reduced when one considers the reconstructed power spectrum.

  8. Nonlinear and parallel algorithms for finite element discretizations of the incompressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Arteaga, Santiago Egido

    1998-12-01

    The steady-state Navier-Stokes equations are of considerable interest because they are used to model numerous common physical phenomena. The applications encountered in practice often involve small viscosities and complicated domain geometries, and they result in challenging problems in spite of the vast attention that has been dedicated to them. In this thesis we examine methods for computing the numerical solution of the primitive variable formulation of the incompressible equations on distributed memory parallel computers. We use the Galerkin method to discretize the differential equations, although most results are stated so that they apply also to stabilized methods. We also reformulate some classical results in a single framework and discuss some issues frequently dismissed in the literature, such as the implementation of the pressure space basis and non-homogeneous boundary values. We consider three nonlinear methods: Newton's method, Oseen's (or Picard) iteration, and sequences of Stokes problems. All these iterative nonlinear methods require solving a linear system at every step. Newton's method has quadratic convergence while that of the others is only linear; however, we obtain theoretical bounds showing that Oseen's iteration is more robust, and we confirm it experimentally. In addition, although Oseen's iteration usually requires more iterations than Newton's method, the linear systems it generates tend to be simpler and its overall costs (in CPU time) are lower. The Stokes problems result in linear systems which are easier to solve, but their convergence is much slower, so they are competitive only for large viscosities. Inexact versions of these methods are studied, and we explain why the best timings are obtained using relatively modest error tolerances in solving the corresponding linear systems.
We also present a new damping optimization strategy based on the quadratic nature of the Navier-Stokes equations, which improves the robustness of all the linearization strategies considered and whose computational cost is negligible. The algebraic properties of these systems depend on both the discretization and the nonlinear method used. We study in detail the positive definiteness and skew-symmetry of the advection submatrices (essentially, convection-diffusion problems). We propose a discretization based on a new trilinear form for Newton's method. We solve the linear systems using three Krylov subspace methods, GMRES, QMR and TFQMR, and compare the advantages of each. Our emphasis is on parallel algorithms, and so we consider preconditioners suitable for parallel computers such as line variants of the Jacobi and Gauss-Seidel methods, alternating direction implicit methods, and Chebyshev and least squares polynomial preconditioners. These work well for moderate viscosities (moderate Reynolds number). For small viscosities we show that effective parallel solution of the advection subproblem is a critical factor to improve performance. Implementation details on a CM-5 are presented.

  9. Non-Linear Structural Dynamics Characterization using a Scanning Laser Vibrometer

    NASA Technical Reports Server (NTRS)

    Pai, P. F.; Lee, S.-Y.

    2003-01-01

    This paper presents the use of a scanning laser vibrometer and a signal decomposition method to characterize non-linear dynamics of highly flexible structures. A Polytec PI PSV-200 scanning laser vibrometer is used to measure transverse velocities of points on a structure subjected to a harmonic excitation. Velocity profiles at different times are constructed using the measured velocities, and then each velocity profile is decomposed using the first four linear mode shapes and a least-squares curve-fitting method. From the variations of the obtained modal velocities with time we search for possible non-linear phenomena. A cantilevered titanium alloy beam subjected to harmonic base excitations around the second, third, and fourth natural frequencies is examined in detail. Influences of the fixture mass, gravity, mass centers of mode shapes, and non-linearities are evaluated. Geometrically exact equations governing the planar, harmonic large-amplitude vibrations of beams are solved for operational deflection shapes using the multiple shooting method. Experimental results show the existence of 1:3 and 1:2:3 external and internal resonances, energy transfer from high-frequency modes to the first mode, and amplitude- and phase-modulation among several modes. Moreover, the existence of non-linear normal modes is found to be questionable.

  10. Isotropic-resolution linear-array-based photoacoustic computed tomography through inverse Radon transform

    NASA Astrophysics Data System (ADS)

    Li, Guo; Xia, Jun; Li, Lei; Wang, Lidai; Wang, Lihong V.

    2015-03-01

    Linear transducer arrays are readily available for ultrasonic detection in photoacoustic computed tomography. They offer low cost, hand-held convenience, and conventional ultrasonic imaging. However, the elevational resolution of linear transducer arrays, which is usually determined by the weak focus of the cylindrical acoustic lens, is about one order of magnitude worse than the in-plane axial and lateral spatial resolutions. Therefore, conventional linear scanning along the elevational direction cannot provide high-quality three-dimensional photoacoustic images due to the anisotropic spatial resolutions. Here we propose an innovative method to achieve isotropic resolutions for three-dimensional photoacoustic images through combined linear and rotational scanning. In each scan step, we first elevationally scan the linear transducer array, and then rotate the linear transducer array about its center in small steps, and scan again until 180 degrees have been covered. To reconstruct isotropic three-dimensional images from the multiple-directional scanning dataset, we use the standard inverse Radon transform originating from X-ray CT. We acquired a three-dimensional microsphere phantom image through the inverse Radon transform method and compared it with a single-elevational-scan three-dimensional image. The comparison shows that our method improves the elevational resolution by up to one order of magnitude, approaching the in-plane lateral-direction resolution. In vivo rat images were also acquired.

  11. When linearity prevails over hierarchy in syntax

    PubMed Central

    Willer Gold, Jana; Arsenijević, Boban; Batinić, Mia; Becker, Michael; Čordalija, Nermina; Kresić, Marijana; Leko, Nedžad; Marušič, Franc Lanko; Milićev, Tanja; Milićević, Nataša; Mitić, Ivana; Peti-Stantić, Anita; Stanković, Branimir; Šuligoj, Tina; Tušek, Jelena; Nevins, Andrew

    2018-01-01

    Hierarchical structure has been cherished as a grammatical universal. We use experimental methods to show where linear order is also a relevant syntactic relation. An identical methodology and design were used across six research sites on South Slavic languages. Experimental results show that in certain configurations, grammatical production can in fact favor linear order over hierarchical structure. However, these findings are limited to coordinate structures and distinct from the kind of production errors found with comparable configurations such as “attraction” errors. The results demonstrate that agreement morphology may be computed in a series of steps, one of which is partly independent from syntactic hierarchy. PMID:29288218

  12. Sintering behavior and mechanical properties of zirconia compacts fabricated by uniaxial press forming

    PubMed Central

    Oh, Gye-Jeong; Yun, Kwi-Dug; Lee, Kwang-Min; Lim, Hyun-Pil

    2010-01-01

    PURPOSE The purpose of this study was to compare the linear sintering behavior of presintered zirconia blocks of various densities. The mechanical properties of the resulting sintered zirconia blocks were then analyzed. MATERIALS AND METHODS Three experimental groups of dental zirconia blocks, each with a different presintering density, were designed in the present study. Kavo Everest® ZS blanks (Kavo, Biberach, Germany) were used as a control group. The experimental group blocks were fabricated from commercial yttria-stabilized tetragonal zirconia powder (KZ-3YF (SD) Type A, KCM. Corporation, Nagoya, Japan). The biaxial flexural strengths, microhardnesses, and microstructures of the sintered blocks were then investigated. The linear sintering shrinkages of the blocks were calculated and compared. RESULTS Despite their different presintered densities, the sintered blocks of the control and experimental groups showed similar mechanical properties. However, the sintered blocks had different linear sintering shrinkage rates depending on the density of the presintered block. As the density of the presintered block increased, the linear sintering shrinkage decreased. In the experimental blocks, the three sectioned pieces of each block showed different linear shrinkage depending on the area. The tops of the experimental blocks showed the lowest linear sintering shrinkage, whereas the bottoms of the experimental blocks showed the highest linear sintering shrinkage. CONCLUSION Within the limitations of this study, the density difference of the presintered zirconia block did not affect the mechanical properties of the sintered zirconia block, but affected the linear sintering shrinkage of the zirconia block. PMID:21165274

  13. Object matching using a locally affine invariant and linear programming techniques.

    PubMed

    Li, Hongsheng; Huang, Xiaolei; He, Lei

    2013-02-01

    In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires far fewer auxiliary variables than other linear programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. Errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
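
    The key idea above, that each template point is an affine combination of its neighbors and that such weights survive affine maps, can be checked in a few lines. This is a minimal sketch with made-up 2-D points, not the paper's matching pipeline or its linear programming formulation.

```python
import numpy as np

def affine_weights(p, nbrs):
    """Least-squares weights w with sum(w) = 1 such that
    sum_i w_i * nbrs[i] reconstructs p.  The sum-to-one (affine)
    constraint is encoded by appending a row of ones to the system."""
    A = np.vstack([nbrs.T, np.ones((1, len(nbrs)))])   # (d+1) x k system
    b = np.concatenate([p, [1.0]])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

p = np.array([1.0, 2.0])
nbrs = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0], [3.0, 3.0]])
w = affine_weights(p, nbrs)

# Affine invariance: the SAME weights reconstruct the point after any
# affine map q -> M q + t, which is what makes the constraint usable
# for matching under local affine distortions.
M = np.array([[2.0, 1.0], [-1.0, 3.0]])
t = np.array([5.0, -7.0])
p_t = M @ p + t
nbrs_t = nbrs @ M.T + t
err = np.linalg.norm(w @ nbrs_t - p_t)
print(err)   # ~0: the reconstruction survives the affine map
```

Because the weights sum to one, the translation part t cancels exactly, and linearity handles the matrix part M; this is the reconstruction-error penalty the paper linearizes.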

  14. Primal Barrier Methods for Linear Programming

    DTIC Science & Technology

    1989-06-01

    A Theoretical Bound Concerning the difficulties introduced by an ill-conditioned H⁻¹, Dikin [Dik67] and Stewart [Stew87] show for a full-rank A... [Dik67] I. I. Dikin (1967). Iterative solution of problems of linear and quadratic programming, Doklady Akademii Nauk SSSR, Tom 174, No. 4. [Fia79] A. V

  15. Polarization of skylight in the O(2)A band: effects of aerosol properties.

    PubMed

    Boesche, Eyk; Stammes, Piet; Preusker, Réne; Bennartz, Ralf; Knap, Wouter; Fischer, Juergen

    2008-07-01

    Motivated by several observations of the degree of linear polarization of skylight in the oxygen A (O(2)A) band that do not yet have a quantitative explanation, we analyze the influence of aerosol altitude, microphysics, and optical thickness on the degree of linear polarization of the zenith skylight in the spectral region of the O(2)A band, between 755 and 775 nm. It is shown that the degree of linear polarization inside the O(2)A band is particularly sensitive to aerosol altitude. The sensitivity is strongest for aerosols within the troposphere and depends also on their microphysical properties and optical thickness. The polarization of the O(2)A band can be larger than the polarization of the continuum, which typically occurs for strongly polarizing aerosols in an elevated layer, or smaller, which typically occurs for depolarizing aerosols or cirrus clouds in an elevated layer. We show that in the case of a single aerosol layer in the atmosphere a determination of the aerosol layer altitude may be obtained. Furthermore, we show limitations of the aerosol layer altitude determination in the case of multiple aerosol layers. To perform these simulations we developed a fast method for multiple scattering radiative transfer calculations in gaseous absorption bands including polarization. The method is a combination of doubling-adding and k-binning methods. We present an error estimation of this method by comparing with accurate line-by-line radiative transfer simulations. For the O(2)A band, the errors in the degree of linear polarization are less than 0.11% for transmitted light, and less than 0.31% for reflected light.

  16. Sieve estimation of Cox models with latent structures.

    PubMed

    Cao, Yongxiu; Huang, Jian; Liu, Yanyan; Zhao, Xingqiu

    2016-12-01

    This article considers sieve estimation in the Cox model with an unknown regression structure based on right-censored data. We propose a semiparametric pursuit method to simultaneously identify and estimate linear and nonparametric covariate effects based on B-spline expansions through a penalized group selection method with concave penalties. We show that the estimators of the linear effects and the nonparametric component are consistent. Furthermore, we establish the asymptotic normality of the estimator of the linear effects. To compute the proposed estimators, we develop a modified blockwise majorization descent algorithm that is efficient and easy to implement. Simulation studies demonstrate that the proposed method performs well in finite sample situations. We also use the primary biliary cirrhosis data to illustrate its application. © 2016, The International Biometric Society.

  17. On linearization and preconditioning for radiation diffusion coupled to material thermal conduction equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Tao; An, Hengbin

    2013-03-01

    The Jacobian-free Newton–Krylov (JFNK) method is an effective algorithm for solving large scale nonlinear equations. One of the most important advantages of the JFNK method is that there is no need to form and store the Jacobian matrix of the nonlinear system when the JFNK method is employed. However, an approximation of the Jacobian is needed for the purpose of preconditioning. In this paper, the JFNK method is employed to solve a class of non-equilibrium radiation diffusion equations coupled to material thermal conduction equations, and two preconditioners are designed by linearizing the equations in two different ways. Numerical results show that the two preconditioning methods can improve the convergence behavior and efficiency of the JFNK method.
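
    The "Jacobian-free" ingredient of JFNK is the approximation J(u)v ≈ (F(u + εv) − F(u))/ε, which supplies Jacobian-vector products without ever forming J. The sketch below demonstrates it on a toy 2x2 system; in a real JFNK solver these products would feed a Krylov method such as GMRES plus a preconditioner, whereas here, purely for the demo, they are used to assemble the tiny J column by column.

```python
import numpy as np

def F(u):
    """A small nonlinear test system with root (1, 1) (illustrative only)."""
    return np.array([u[0] ** 2 + u[1] - 2.0, u[0] + u[1] ** 2 - 2.0])

def jfnk_matvec(F, u, v, eps=1e-7):
    """Matrix-free Jacobian-vector product J(u) @ v via a forward
    finite difference of the residual."""
    return (F(u + eps * v) - F(u)) / eps

u = np.array([2.0, 0.5])                  # initial guess
for _ in range(20):
    r = F(u)
    if np.linalg.norm(r) < 1e-10:
        break
    # Build J from matrix-free products with the unit vectors (demo only;
    # a real JFNK code never materializes J like this)
    J = np.column_stack([jfnk_matvec(F, u, e) for e in np.eye(2)])
    u = u + np.linalg.solve(J, -r)        # Newton step

print(u)   # converges to approximately (1, 1)
```

The preconditioners discussed in the paper come in exactly because the Krylov solver that consumes these products converges slowly without a good approximate Jacobian.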

  18. Conceptual problems in detecting the evolution of dark energy when using distance measurements

    NASA Astrophysics Data System (ADS)

    Bolejko, K.

    2011-01-01

    Context. Dark energy is now one of the most important and topical problems in cosmology. The first step to reveal its nature is to detect the evolution of dark energy or to prove beyond doubt that the cosmological constant is indeed constant. However, in the standard approach to cosmology, the Universe is described by the homogeneous and isotropic Friedmann models. Aims: We aim to show that in the perturbed universe (even if perturbations vanish if averaged over sufficiently large scales) the distance-redshift relation is not the same as in the unperturbed universe. This has a serious consequence when studying the nature of dark energy and, as shown here, can impair the analysis and studies of dark energy. Methods: The analysis is based on two methods: the linear lensing approximation and the non-linear Szekeres Swiss-Cheese model. The inhomogeneity scale is ~50 Mpc, and both models have the same density fluctuations along the line of sight. Results: The comparison between linear and non-linear methods shows that non-linear corrections are not negligible. When inhomogeneities are present the distance changes by several percent. To show how this change influences the measurements of dark energy, ten future observations with 2% uncertainties are generated. It is shown that using the standard methods (i.e., under the assumption of homogeneity), the systematics due to inhomogeneities can distort the analysis and may lead to the conclusion that dark energy evolves when in fact it is constant (or vice versa). Conclusions: Therefore, if future observations are analysed only within the homogeneous framework, then the impact of inhomogeneities (such as voids and superclusters) can be mistaken for evolving dark energy. Since the robust distinction between the evolution and non-evolution of dark energy is the first step to understanding the nature of dark energy, a proper handling of inhomogeneities is essential.

  19. An accelerated proximal augmented Lagrangian method and its application in compressive sensing.

    PubMed

    Sun, Min; Liu, Jing

    2017-01-01

    As a first-order method, the augmented Lagrangian method (ALM) is a benchmark solver for linearly constrained convex programming, and in practice some semi-definite proximal terms are often added to its primal variable's subproblem to make it more implementable. In this paper, we propose an accelerated PALM with indefinite proximal regularization (PALM-IPR) for convex programming with linear constraints, which generalizes the proximal terms from semi-definite to indefinite. Under mild assumptions, we establish the worst-case [Formula: see text] convergence rate of PALM-IPR in a non-ergodic sense. Finally, numerical results show that our new method is feasible and efficient for solving compressive sensing.
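
    PALM-IPR and its indefinite proximal terms are specific to the paper, but the flavor of augmented-Lagrangian splitting methods for sparse recovery can be illustrated with a minimal ADMM for the LASSO: the x-subproblem is a linear solve and the z-subproblem is soft-thresholding. The problem sizes, alpha and rho below are arbitrary demo choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse signal, underdetermined Gaussian measurements (sizes hypothetical)
m, n = 20, 50
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[3, 17, 41]] = [1.5, -2.0, 1.0]
b = A @ x_true

def soft(v, t):
    """Soft-thresholding, the proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_admm(A, b, alpha=0.05, rho=1.0, iters=500):
    """ADMM for min 0.5*||Ax - b||^2 + alpha*||x||_1, an augmented-
    Lagrangian splitting in which each subproblem is cheap."""
    n = A.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))   # factor once, reuse
    Atb = A.T @ b
    for _ in range(iters):
        x = Q @ (Atb + rho * (z - u))              # quadratic subproblem
        z = soft(x + u, alpha / rho)               # l1 proximal step
        u = u + x - z                              # scaled dual update
    return z

x_hat = lasso_admm(A, b)
print(np.nonzero(np.abs(x_hat) > 0.1)[0])   # should pick out the true support
```

Proximal and indefinite-proximal variants like the paper's PALM-IPR modify the x-subproblem so that even the linear solve can be replaced by a cheaper linearized step, at the cost of a more delicate convergence analysis.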

  20. A Sensor-Based Method for Diagnostics of Machine Tool Linear Axes.

    PubMed

    Vogl, Gregory W; Weiss, Brian A; Donmez, M Alkan

    2015-01-01

    A linear axis is a vital subsystem of machine tools, which are vital systems within many manufacturing operations. When installed and operating within a manufacturing facility, a machine tool needs to stay in good condition for parts production. All machine tools degrade during operations, yet knowledge of that degradation is elusive; specifically, accurately detecting degradation of linear axes is a manual and time-consuming process. Thus, manufacturers need automated and efficient methods to diagnose the condition of their machine tool linear axes without disruptions to production. The Prognostics and Health Management for Smart Manufacturing Systems (PHM4SMS) project at the National Institute of Standards and Technology (NIST) developed a sensor-based method to quickly estimate the performance degradation of linear axes. The multi-sensor-based method uses data collected from a 'sensor box' to identify changes in linear and angular errors due to axis degradation; the sensor box contains inclinometers, accelerometers, and rate gyroscopes to capture this data. The sensors are expected to be cost effective with respect to savings in production losses and scrapped parts for a machine tool. Numerical simulations, based on sensor bandwidth and noise specifications, show that changes in straightness and angular errors could be known with acceptable test uncertainty ratios. If a sensor box resides on a machine tool and data is collected periodically, then the degradation of the linear axes can be determined and used for diagnostics and prognostics to help optimize maintenance, production schedules, and ultimately part quality.

  1. A Sensor-Based Method for Diagnostics of Machine Tool Linear Axes

    PubMed Central

    Vogl, Gregory W.; Weiss, Brian A.; Donmez, M. Alkan

    2017-01-01

    A linear axis is a vital subsystem of machine tools, which are vital systems within many manufacturing operations. When installed and operating within a manufacturing facility, a machine tool needs to stay in good condition for parts production. All machine tools degrade during operations, yet knowledge of that degradation is elusive; specifically, accurately detecting degradation of linear axes is a manual and time-consuming process. Thus, manufacturers need automated and efficient methods to diagnose the condition of their machine tool linear axes without disruptions to production. The Prognostics and Health Management for Smart Manufacturing Systems (PHM4SMS) project at the National Institute of Standards and Technology (NIST) developed a sensor-based method to quickly estimate the performance degradation of linear axes. The multi-sensor-based method uses data collected from a ‘sensor box’ to identify changes in linear and angular errors due to axis degradation; the sensor box contains inclinometers, accelerometers, and rate gyroscopes to capture this data. The sensors are expected to be cost effective with respect to savings in production losses and scrapped parts for a machine tool. Numerical simulations, based on sensor bandwidth and noise specifications, show that changes in straightness and angular errors could be known with acceptable test uncertainty ratios. If a sensor box resides on a machine tool and data is collected periodically, then the degradation of the linear axes can be determined and used for diagnostics and prognostics to help optimize maintenance, production schedules, and ultimately part quality. PMID:28691039

  2. Self-optimizing Pitch Control for Large Scale Wind Turbine Based on ADRC

    NASA Astrophysics Data System (ADS)

    Xia, Anjun; Hu, Guoqing; Li, Zheng; Huang, Dongxiao; Wang, Fengxiang

    2018-01-01

    Since a wind turbine is a complex nonlinear and strongly coupled system, the traditional PI control method can hardly achieve good control performance. A self-optimizing pitch control method based on active-disturbance-rejection control theory is proposed in this paper. A linear model of the wind turbine is derived by linearizing the aerodynamic torque equation, and the dynamic response of the wind turbine is transformed into a first-order linear system. An expert system is designed to optimize the amplification coefficient according to the pitch rate and the speed deviation. The purpose of the proposed control method is to regulate the amplification coefficient automatically and keep the variations of pitch rate and rotor speed in proper ranges. Simulation results show that the proposed pitch control method can effectively modify the amplification coefficient when it is not suitable and keep the variations of pitch rate and rotor speed in proper ranges.

  3. Homogenizing microwave illumination in thermoacoustic tomography by a linear-to-circular polarizer based on frequency selective surfaces

    NASA Astrophysics Data System (ADS)

    He, Yu; Shen, Yuecheng; Feng, Xiaohua; Liu, Changjun; Wang, Lihong V.

    2017-08-01

    A circularly polarized antenna, providing more homogeneous illumination compared to a linearly polarized antenna, is more suitable for microwave induced thermoacoustic tomography (TAT). The conventional realization of circular polarization is by using a helical antenna, but it suffers from low efficiency, low power capacity, and limited aperture in TAT systems. Here, we report an implementation of a circularly polarized illumination method in TAT by inserting a single-layer linear-to-circular polarizer based on frequency selective surfaces between a pyramidal horn antenna and an imaging object. The performance of the proposed method was validated by both simulations and experimental imaging of a breast tumor phantom. The results showed that circular polarization was achieved, and the resultant thermoacoustic signal-to-noise ratio was twice that in the helical antenna case. The proposed method is more desirable in a waveguide-based TAT system than the conventional method.

  4. An oscillatory kernel function method for lifting surfaces in mixed transonic flow

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1974-01-01

    A study was conducted on the use of combined subsonic and supersonic linear theory to obtain economical and yet realistic solutions to unsteady transonic flow problems. With some modification, existing linear theory methods were combined into a single computer program. The method was applied to problems for which measured steady Mach number distributions and unsteady pressure distributions were available. By comparing theory and experiment, the transonic method showed a significant improvement over uniform flow methods. The results also indicated that more exact local Mach number effects and normal shock boundary conditions on the perturbation potential were needed. The validity of these improvements was demonstrated by application to steady flow.

  5. Solution of second order quasi-linear boundary value problems by a wavelet method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lei; Zhou, Youhe; Wang, Jizeng, E-mail: jzwang@lzu.edu.cn

    2015-03-10

    A wavelet Galerkin method based on expansions of Coiflet-like scaling function bases is applied to solve second order quasi-linear boundary value problems, which represent a class of typical nonlinear differential equations. Two types of typical engineering problems are selected as test examples: one is about nonlinear heat conduction and the other is on bending of elastic beams. Numerical results are obtained by the proposed wavelet method. Through comparison with relevant analytical solutions as well as solutions obtained by other methods, we find that the method shows better efficiency and accuracy than several others, and the rate of convergence can even reach orders of 5.8.

  6. Evaluation of Two Statistical Methods Provides Insights into the Complex Patterns of Alternative Polyadenylation Site Switching

    PubMed Central

    Li, Jie; Li, Rui; You, Leiming; Xu, Anlong; Fu, Yonggui; Huang, Shengfeng

    2015-01-01

    Switching between different alternative polyadenylation (APA) sites plays an important role in the fine tuning of gene expression. New technologies for the execution of 3’-end enriched RNA-seq allow genome-wide detection of the genes that exhibit significant APA site switching between different samples. Here, we show that the independence test gives better results than the linear trend test in detecting APA site-switching events. Further examination suggests that the discrepancy between these two statistical methods arises from complex APA site-switching events that cannot be represented by a simple change of average 3’-UTR length. In theory, the linear trend test is only effective in detecting these simple changes. We classify the switching events into four switching patterns: two simple patterns (3’-UTR shortening and lengthening) and two complex patterns. By comparing the results of the two statistical methods, we show that complex patterns account for 1/4 of all observed switching events that happen between normal and cancerous human breast cell lines. Because simple and complex switching patterns may convey different biological meanings, they merit separate study. We therefore propose to combine both the independence test and the linear trend test in practice. First, the independence test should be used to detect APA site switching; second, the linear trend test should be invoked to identify simple switching events; and third, those complex switching events that pass independence testing but fail linear trend testing can be identified. PMID:25875641
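    The distinction between the two tests can be made concrete with a toy 2 x k read-count table (the numbers are illustrative, not from the study): a symmetric "complex" switch is flagged by the chi-squared independence test but produces no linear trend, so a Cochran-Armitage-style trend statistic (one standard form of the linear trend test) misses it:

```python
import numpy as np
from scipy.stats import chi2_contingency, norm

def cochran_armitage_trend(table, scores):
    """Trend test for a 2 x k contingency table with ordered column scores."""
    table = np.asarray(table, dtype=float)
    n_col = table.sum(axis=0)
    n1, N = table[0].sum(), table.sum()
    s = np.asarray(scores, dtype=float)
    num = N * (s * table[0]).sum() - n1 * (s * n_col).sum()
    var = n1 * (N - n1) * (N * (s**2 * n_col).sum() - (s * n_col).sum()**2) / N
    z = num / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))

# Reads mapped to three ordered poly(A) sites in two samples: the middle site
# loses reads to BOTH flanks, so average 3'-UTR length barely changes.
table = [[100, 100, 100],   # sample 1
         [150,  50, 150]]   # sample 2
chi2, p_indep, _, _ = chi2_contingency(table)
z, p_trend = cochran_armitage_trend(table, scores=[0, 1, 2])
print(p_indep, p_trend)  # independence rejects; the trend test does not
```

This mirrors the paper's proposal: run the independence test first to find all switching events, then use the trend test to separate simple shortening/lengthening from complex patterns like the one above.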

  7. Establishment of a Method for Measuring Antioxidant Capacity in Urine, Based on Oxidation Reduction Potential and Redox Couple I2/KI

    PubMed Central

    Cao, Tinghui; He, Min; Bai, Tianyu

    2016-01-01

    Objectives. To establish a new method for determination of the antioxidant capacity of human urine based on the redox couple I2/KI and to evaluate the redox status of healthy and diseased individuals. Methods. The method was based on the linear relationship between oxidation reduction potential (ORP) and the logarithm of the concentration ratio of I2/KI. The ORP of a solution with a known concentration ratio of I2/KI will change when reacted with urine. To determine the accuracy of the method, both vitamin C and urine were reacted separately with the I2/KI solution. The new method was compared with the traditional method of iodine titration and then used to measure the antioxidant capacity of urine samples from 30 diabetic patients and 30 healthy subjects. Results. A linear relationship was found between the logarithm of the concentration ratio of I2/KI and ORP (R² = 0.998). Both vitamin C and urine concentration showed a linear relationship with ORP (R² = 0.994 and 0.986, resp.). The precision of the method was in the acceptable range and the results of the two methods had a linear correlation (R² = 0.987). Differences in ORP values between the diabetic group and the control group were statistically significant (P < 0.05). Conclusions. A new method for measuring the antioxidant capacity of clinical urine has been established. PMID:28115919

  8. Mode Identification of High-Amplitude Pressure Waves in Liquid Rocket Engines

    NASA Astrophysics Data System (ADS)

    EBRAHIMI, R.; MAZAHERI, K.; GHAFOURIAN, A.

    2000-01-01

    Identification of existing instability modes from experimental pressure measurements of rocket engines is difficult, especially when steep waves are present. Actual pressure waves are often non-linear and include steep shocks followed by gradual expansions. It is generally believed that the interaction of these non-linear waves is difficult to analyze. A method of mode identification is introduced. After presumption of the constituent modes, they are superposed by using a standard finite difference scheme for solution of the classical wave equation. Waves are numerically produced at each end of the combustion tube with different wavelengths, amplitudes, and phases with respect to each other. Pressure amplitude histories and phase diagrams along the tube are computed. To determine the validity of the presented method for steep non-linear waves, the Euler equations are numerically solved for non-linear waves, and negligible interactions between these waves are observed. To show the applicability of this method, others' experimental results in which modes were identified are used. Results indicate that this simple method can be used in analyzing complicated pressure signal measurements.

  9. Integrating conventional and inverse representation for face recognition.

    PubMed

    Xu, Yong; Li, Xuelong; Yang, Jian; Lai, Zhihui; Zhang, David

    2014-10-01

    Representation-based classification methods are all constructed on the basis of the conventional representation, which first expresses the test sample as a linear combination of the training samples and then exploits the deviation between the test sample and the expression result of every class to perform classification. However, this deviation does not always well reflect the difference between the test sample and each class. In this paper, we propose a novel representation-based classification method for face recognition. This method integrates the conventional and the inverse representation-based classification to better recognize the face. It first produces the conventional representation of the test sample, i.e., uses a linear combination of the training samples to represent the test sample. Then it obtains the inverse representation, i.e., provides an approximate representation of each training sample of a subject by exploiting the test sample and the training samples of the other subjects. Finally, the proposed method exploits the conventional and inverse representations to generate two kinds of scores of the test sample with respect to each class and combines them to recognize the face. The paper shows the theoretical foundation and rationale of the proposed method. Moreover, this paper for the first time shows that a basic property of the human face, i.e., its symmetry, can be exploited to generate new training and test samples. As these new samples really reflect some possible appearances of the face, their use enables us to obtain higher accuracy. The experiments show that the proposed conventional and inverse representation-based linear regression classification (CIRLRC), an improvement to linear regression classification (LRC), can obtain very high accuracy and greatly outperforms the naive LRC and other state-of-the-art conventional representation-based face recognition methods; the accuracy of CIRLRC can be 10% greater than that of LRC.
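    The baseline LRC step that CIRLRC builds on — express the test sample as a linear combination of each class's training samples and pick the class with the smallest representation residual — can be sketched on synthetic data (the subspace dimensions and sample counts below are arbitrary choices, not from the paper):

```python
import numpy as np

def lrc_predict(y, class_samples):
    """Linear regression classification: represent the test sample with each
    class's training samples and pick the class with the smallest residual."""
    best, best_res = None, np.inf
    for label, X in class_samples.items():       # X: (dim, n_samples) per class
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        res = np.linalg.norm(y - X @ beta)       # class-wise deviation
        if res < best_res:
            best, best_res = label, res
    return best

rng = np.random.default_rng(0)
basis_a = rng.standard_normal((50, 3))           # toy class subspaces ("faces")
basis_b = rng.standard_normal((50, 3))
train = {"A": basis_a @ rng.standard_normal((3, 5)),
         "B": basis_b @ rng.standard_normal((3, 5))}
test = basis_a @ rng.standard_normal(3) + 0.01 * rng.standard_normal(50)
print(lrc_predict(test, train))  # → A
```

CIRLRC then adds the inverse-representation score (representing each training sample with the test sample and the other subjects' samples) and fuses the two scores, which this minimal sketch omits.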

  10. [Spectral scatter correction of coal samples based on quasi-linear local weighted method].

    PubMed

    Lei, Meng; Li, Ming; Ma, Xiao-Ping; Miao, Yan-Zi; Wang, Jian-Sheng

    2014-07-01

    The present paper puts forth a new spectral correction method based on quasi-linear expressions and a local weighted function. The first stage of the method is to search three quasi-linear expressions to replace the original linear expression in the MSC method, namely quadratic, cubic and growth curve expressions. Then the local weighted function is constructed by introducing four kernel functions: the Gaussian, Epanechnikov, Biweight and Triweight kernel functions. After adding the function to the basic estimation equation, the dependency between the original and ideal spectra is described more accurately and meticulously at each wavelength point. Furthermore, two analytical models were established based on the PLS and PCA-BP neural network methods, respectively, which can be used for estimating the accuracy of the corrected spectra. At last, the optimal correction mode was determined from the analytical results for different combinations of quasi-linear expression and local weighted function. The spectra of the same coal sample have different noise ratios when the sample is prepared at different particle sizes. To validate the effectiveness of this method, the experiment analyzed the correction results of three spectral data sets with particle sizes of 0.2, 1 and 3 mm. The results show that the proposed method can eliminate the scattering influence and also enhance the information of spectral peaks. This paper provides a more efficient way to significantly enhance the correlation between corrected spectra and coal qualities, and to substantially improve the accuracy and stability of the analytical model.

  11. Accelerate quasi Monte Carlo method for solving systems of linear algebraic equations through shared memory

    NASA Astrophysics Data System (ADS)

    Lai, Siyan; Xu, Ying; Shao, Bo; Guo, Menghan; Lin, Xiaola

    2017-04-01

    In this paper we study the Monte Carlo method for solving systems of linear algebraic equations (SLAE) based on shared memory. Former research demonstrated that GPUs can effectively speed up the computations for this problem. Our purpose is to optimize the Monte Carlo simulation for the GPU memory architecture specifically. Random numbers are organized and stored in shared memory, which accelerates the parallel algorithm. Bank conflicts can be avoided by our Collaborative Thread Arrays (CTA) scheme. The results of experiments show that the shared-memory-based strategy can speed up the computations by over 3X in the best case.
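    The underlying Monte Carlo scheme (independent of the GPU shared-memory optimization the paper focuses on) estimates the Neumann series of x = Hx + f by weighted random walks; a minimal single-threaded sketch, assuming the spectral condition ||H|| < 1 holds:

```python
import numpy as np

def mc_solve(H, f, n_walks=20000, walk_len=20, rng=None):
    """Estimate x solving x = H x + f (i.e. x = sum_k H^k f) by random walks.
    Transitions are uniform; weights correct for the actual kernel values."""
    rng = rng or np.random.default_rng(0)
    n = len(f)
    x = np.zeros(n)
    for i in range(n):
        total = 0.0
        for _ in range(n_walks):
            state, w, acc = i, 1.0, f[i]        # k = 0 term of the series
            for _ in range(walk_len):
                nxt = rng.integers(n)
                w *= n * H[state, nxt]          # importance weight for uniform moves
                state = nxt
                acc += w * f[state]             # unbiased estimate of (H^k f)_i
            total += acc
        x[i] = total / n_walks
    return x

# SLAE Ax = b rewritten as x = Hx + f with A = I - H
H = np.array([[0.1, 0.2], [0.3, 0.1]])
f = np.array([1.0, 2.0])
x_mc = mc_solve(H, f)
x_exact = np.linalg.solve(np.eye(2) - H, f)
print(x_mc, x_exact)  # the estimates agree to a few parts in a hundred
```

Each walk is independent, which is what makes the method embarrassingly parallel on a GPU; the paper's contribution is staging the per-walk random numbers in shared memory and avoiding bank conflicts while doing so.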

  12. Application of High-Performance Liquid Chromatography Coupled with Linear Ion Trap Quadrupole Orbitrap Mass Spectrometry for Qualitative and Quantitative Assessment of Shejin-Liyan Granule Supplements.

    PubMed

    Gu, Jifeng; Wu, Weijun; Huang, Mengwei; Long, Fen; Liu, Xinhua; Zhu, Yizhun

    2018-04-11

    A method for high-performance liquid chromatography coupled with linear ion trap quadrupole Orbitrap high-resolution mass spectrometry (HPLC-LTQ-Orbitrap MS) was developed and validated for the qualitative and quantitative assessment of Shejin-liyan Granule. According to the fragmentation mechanism and high-resolution MS data, 54 compounds, including fourteen isoflavones, eleven lignans, eight flavonoids, six physalins, six organic acids, four triterpenoid saponins, two xanthones, two alkaloids, and one licorice coumarin, were identified or tentatively characterized. In addition, ten of the representative compounds (matrine, galuteolin, tectoridin, iridin, arctiin, tectorigenin, glycyrrhizic acid, irigenin, arctigenin, and irisflorentin) were quantified using the validated HPLC-LTQ-Orbitrap MS method. The method validation showed good linearity with coefficients of determination (r²) above 0.9914 for all analytes. The accuracy of the intra- and inter-day variation of the investigated compounds was 95.0-105.0%, and the precision values were less than 4.89%. The mean recoveries and reproducibilities of each analyte were 95.1-104.8%, with relative standard deviations below 4.91%. The method successfully quantified the ten compounds in Shejin-liyan Granule, and the results show that the method is accurate, sensitive, and reliable.
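    The linearity and recovery figures reported in such a validation come from an ordinary least-squares calibration per analyte; a sketch with made-up peak-area data (not the paper's measurements):

```python
import numpy as np

# Hypothetical calibration for one analyte: peak area vs concentration (ug/mL)
conc = np.array([0.5, 1, 2, 5, 10, 20.0])
area = np.array([1.02e4, 2.05e4, 4.00e4, 1.01e5, 2.02e5, 3.98e5])

slope, intercept = np.polyfit(conc, area, 1)     # linear calibration curve
pred = slope * conc + intercept
r2 = 1 - ((area - pred) ** 2).sum() / ((area - area.mean()) ** 2).sum()

spiked_area = 1.5e5                              # measured area of a spiked sample
found = (spiked_area - intercept) / slope        # back-calculated concentration
recovery = 100 * found / 7.5                     # 7.5 ug/mL was spiked
print(round(r2, 4), round(recovery, 1))          # r2 close to 1, recovery near 100%
```

A validation accepts the analyte when r² exceeds the preset threshold (here the paper requires at least 0.9914) and recovery stays inside the stated window (95.1-104.8% in the paper).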

  13. Nonlinear programming extensions to rational function approximations of unsteady aerodynamics

    NASA Technical Reports Server (NTRS)

    Tiffany, Sherwood H.; Adams, William M., Jr.

    1987-01-01

    This paper deals with approximating unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft. Two methods of formulating these approximations are extended to include both the same flexibility in constraining them and the same methodology in optimizing nonlinear parameters as another currently used 'extended least-squares' method. Optimal selection of 'nonlinear' parameters is made in each of the three methods by use of the same nonlinear (nongradient) optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free 'linear' parameters are determined using least-squares matrix techniques on a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented which show comparative evaluations from application of each of the extended methods to a numerical example. The results obtained for the example problem show a significant (up to 63 percent) reduction in the number of differential equations used to represent the unsteady aerodynamic forces in linear time-invariant equations of motion as compared to a conventional method in which nonlinear terms are not optimized.

  14. Fast correction approach for wavefront sensorless adaptive optics based on a linear phase diversity technique.

    PubMed

    Yue, Dan; Nie, Haitao; Li, Ye; Ying, Changsheng

    2018-03-01

    Wavefront sensorless (WFSless) adaptive optics (AO) systems have been widely studied in recent years. To reach optimum results, such systems require an efficient correction method. This paper presents a fast wavefront correction approach for a WFSless AO system mainly based on the linear phase diversity (PD) technique. The fast closed-loop control algorithm is set up based on the linear relationship between the drive voltage of the deformable mirror (DM) and the far-field images of the system, which is obtained through the linear PD algorithm combined with the influence function of the DM. A large number of phase screens under different turbulence strengths are simulated to test the performance of the proposed method. The numerical simulation results show that the method has a fast convergence rate and strong correction ability: a few correction iterations achieve good results and effectively improve the imaging quality of the system while requiring fewer CCD measurements.

  15. Evolution of inviscid Kelvin-Helmholtz instability from a piecewise linear shear layer

    NASA Astrophysics Data System (ADS)

    Guha, Anirban; Rahmani, Mona; Lawrence, Gregory

    2012-11-01

    Here we study the evolution of 2D, inviscid Kelvin-Helmholtz instability (KH) ensuing from a piecewise linear shear layer. Although KH pertaining to smooth shear layers (e.g. the hyperbolic tangent profile) has been thoroughly investigated in the past, very little is known about KH resulting from sharp shear layers. Pozrikidis and Higdon (1985) have shown that a piecewise linear shear layer evolves into elliptical vortex patches. This non-linear state is dramatically different from the well known spiral-billow structure of KH. In fact, there is little acknowledgement that elliptical vortex patches can represent non-linear KH. In this work, we show how such patches evolve through the interaction of vorticity waves. Our work is based on two types of computational methods: (i) contour dynamics, a boundary-element method which tracks the evolution of the contour of a vortex patch using Lagrangian marker points, and (ii) direct numerical simulation (DNS), an Eulerian pseudo-spectral method heavily used in studying hydrodynamic instability and turbulence.

  16. The whole number axis integer linear transformation reversible information hiding algorithm on wavelet domain

    NASA Astrophysics Data System (ADS)

    Jiang, Zhuo; Xie, Chengjun

    2013-12-01

    This paper improves the algorithm of reversible integer linear transform on the finite interval [0,255], extending it to a reversible integer linear transform on the whole number axis that shields the data LSB (least significant bit). First, the method applies an integer wavelet transform based on the lifting scheme to the original image and selects the transformed high-frequency areas as the information-hiding region; the high-frequency coefficient blocks are then transformed in an integer linear way and the secret information is embedded in the LSB of each coefficient. To extract the data bits and recover the host image, a similar reverse procedure is conducted, and the original host image can be losslessly recovered. Simulation results show that this method has good secrecy and concealment after the CDF (m, n) and DD (m, n) series of wavelet transforms are applied. The method can be applied to information-security domains such as medicine, law and the military.
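    The embedding step itself is plain LSB substitution on integer (e.g. wavelet) coefficients; the reversibility in the paper comes from the integer linear transform that frees up the LSBs beforehand, which this minimal sketch with toy coefficients omits:

```python
import numpy as np

def embed_lsb(coeffs, bits):
    """Hide one bit in the LSB of each (integer) coefficient."""
    coeffs = np.asarray(coeffs).copy()
    coeffs[: len(bits)] = (coeffs[: len(bits)] & ~1) | np.asarray(bits)
    return coeffs

def extract_lsb(coeffs, n_bits):
    """Read the hidden bits back out of the LSBs."""
    return [int(v) for v in np.asarray(coeffs[:n_bits]) & 1]

high_freq = np.array([12, -7, 5, 0, 9, -3])   # toy high-frequency coefficients
secret = [1, 0, 1, 1]
stego = embed_lsb(high_freq, secret)
print(extract_lsb(stego, 4))  # → [1, 0, 1, 1]
```

Note that plain substitution destroys the original LSBs; the whole-axis integer linear transform in the paper is what guarantees those bits can be restored exactly.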

  17. Analysis and comparison of end effects in linear switched reluctance and hybrid motors

    NASA Astrophysics Data System (ADS)

    Barhoumi, El Manaa; Abo-Khalil, Ahmed Galal; Berrouche, Youcef; Wurtz, Frederic

    2017-03-01

    This paper presents and discusses the longitudinal and transversal end effects which affect the propulsive force of linear motors. Generally, the modeling of linear machines considers the force distortion due to the specific geometry of linear actuators. The insertion of permanent magnets in the stator improves the propulsive force produced by switched reluctance linear motors. The permanent magnets inserted in the hybrid structure also considerably reduce the end effects observed in linear motors. The analysis was conducted using the 2D and 3D finite element method. The permanent magnet reinforces the flux produced by the winding and reorients it, which modifies the impact of the end effects. The presented simulations and discussions show the importance of this study for characterizing the end effects in two different linear motors.

  18. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1987-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree-structured barriers show good performance when synchronizing fixed-length work, while linear self-scheduled barriers show better performance when synchronizing fixed-length work with an embedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen-processor Flex/32 shared memory multiprocessor, that support these conclusions are detailed.
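    A central-counter barrier is the simplest linear-depth variant of the kind compared above; a compact sketch in Python threads (the paper's experiments ran on a Flex/32, and this is an illustration of the mechanism, not a reconstruction of those algorithms):

```python
import threading

class LinearBarrier:
    """Central-counter barrier: each arrival updates one shared counter,
    so the arrival phase is inherently sequential (linear depth)."""
    def __init__(self, n):
        self.n, self.count, self.phase = n, 0, 0
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            my_phase = self.phase
            self.count += 1
            if self.count == self.n:        # last arriver releases everyone
                self.count = 0
                self.phase += 1
                self.cond.notify_all()
            else:                           # predicate handles spurious wakeups
                self.cond.wait_for(lambda: self.phase != my_phase)

results = []
barrier = LinearBarrier(4)

def worker(i):
    results.append(("before", i))
    barrier.wait()                          # no "after" may precede any "before"
    results.append(("after", i))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(all(tag == "before" for tag, _ in results[:4]))  # → True
```

A logarithmic-depth (tree) barrier replaces the single shared counter with pairwise combining up a tree, trading the O(n) sequential arrivals here for O(log n) rounds.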

  19. Influence of tungsten fiber’s slow drift on the measurement of G with angular acceleration method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Jie; Wu, Wei-Huang; Zhan, Wen-Ze

    In the measurement of the gravitational constant G with angular acceleration method, the equilibrium position of torsion pendulum with tungsten fiber undergoes a linear slow drift, which results in a quadratic slow drift on the angular velocity of the torsion balance turntable under feedback control unit. The accurate amplitude determination of the useful angular acceleration signal with known frequency is biased by the linear slow drift and the coupling effect of the drifting equilibrium position and the room fixed gravitational background signal. We calculate the influences of the linear slow drift and the complex coupling effect on the value of G, respectively. The result shows that the bias of the linear slow drift on G is 7 ppm, and the influence of the coupling effect is less than 1 ppm.

  20. Influence of tungsten fiber's slow drift on the measurement of G with angular acceleration method.

    PubMed

    Luo, Jie; Wu, Wei-Huang; Xue, Chao; Shao, Cheng-Gang; Zhan, Wen-Ze; Wu, Jun-Fei; Milyukov, Vadim

    2016-08-01

    In the measurement of the gravitational constant G with angular acceleration method, the equilibrium position of torsion pendulum with tungsten fiber undergoes a linear slow drift, which results in a quadratic slow drift on the angular velocity of the torsion balance turntable under feedback control unit. The accurate amplitude determination of the useful angular acceleration signal with known frequency is biased by the linear slow drift and the coupling effect of the drifting equilibrium position and the room fixed gravitational background signal. We calculate the influences of the linear slow drift and the complex coupling effect on the value of G, respectively. The result shows that the bias of the linear slow drift on G is 7 ppm, and the influence of the coupling effect is less than 1 ppm.

  1. Influence of tungsten fiber's slow drift on the measurement of G with angular acceleration method

    NASA Astrophysics Data System (ADS)

    Luo, Jie; Wu, Wei-Huang; Xue, Chao; Shao, Cheng-Gang; Zhan, Wen-Ze; Wu, Jun-Fei; Milyukov, Vadim

    2016-08-01

    In the measurement of the gravitational constant G with angular acceleration method, the equilibrium position of torsion pendulum with tungsten fiber undergoes a linear slow drift, which results in a quadratic slow drift on the angular velocity of the torsion balance turntable under feedback control unit. The accurate amplitude determination of the useful angular acceleration signal with known frequency is biased by the linear slow drift and the coupling effect of the drifting equilibrium position and the room fixed gravitational background signal. We calculate the influences of the linear slow drift and the complex coupling effect on the value of G, respectively. The result shows that the bias of the linear slow drift on G is 7 ppm, and the influence of the coupling effect is less than 1 ppm.

  2. A phase match based frequency estimation method for sinusoidal signals

    NASA Astrophysics Data System (ADS)

    Shen, Yan-Lin; Tu, Ya-Qing; Chen, Lin-Jun; Shen, Ting-Ao

    2015-04-01

    Accurate frequency estimation affects the ranging precision of linear frequency modulated continuous wave (LFMCW) radars significantly. To improve the ranging precision of LFMCW radars, a phase match based frequency estimation method is proposed. To obtain the frequency estimate, the linear prediction property, autocorrelation, and cross correlation of sinusoidal signals are utilized. The analysis of computational complexity shows that the computational load of the proposed method is smaller than those of two-stage autocorrelation (TSA) and maximum likelihood. Simulations and field experiments are performed to validate the proposed method, and the results demonstrate that the proposed method has better frequency estimation precision than the Pisarenko harmonic decomposition, modified covariance, and TSA methods, which contributes to effectively improving the ranging precision of LFMCW radars.
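    The linear prediction property of sinusoids mentioned above — x[k+1] + x[k-1] = 2·cos(w)·x[k] — already yields a least-squares frequency estimator on its own; a sketch with assumed sampling parameters (this is a baseline ingredient, not the paper's full phase-match method):

```python
import numpy as np

fs = 1000.0                      # sampling rate, Hz (assumed)
f_true = 73.4                    # tone frequency, Hz (assumed)
n = np.arange(256)
x = np.cos(2 * np.pi * f_true / fs * n + 0.3)
x += 0.01 * np.random.default_rng(2).standard_normal(n.size)  # light noise

# Least-squares solution of x[k+1] + x[k-1] = 2*cos(w)*x[k] for cos(w)
num = np.sum(x[1:-1] * (x[2:] + x[:-2]))
den = 2 * np.sum(x[1:-1] ** 2)
w_hat = np.arccos(num / den)     # digital frequency, rad/sample
f_hat = w_hat * fs / (2 * np.pi)
print(round(f_hat, 1))           # close to 73.4 Hz
```

In an LFMCW radar the beat signal frequency estimated this way maps linearly to target range, which is why the estimator's precision drives the ranging precision.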

  3. Time-Frequency Analysis of Non-Stationary Biological Signals with Sparse Linear Regression Based Fourier Linear Combiner.

    PubMed

    Wang, Yubo; Veluvolu, Kalyana C

    2017-06-14

    It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics. This necessitates the usage of time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to an underlying phenomenon. This paper presents a new approach to analyze the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to the earlier designs, we first identified the sparsity imposed on the signal model in order to reformulate the model to a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy ratio metric is employed to quantify the spectral performance, and the results show that the proposed Sparse-BMFLC method has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), the continuous wavelet transform (CWT) and the BMFLC Kalman smoother. Furthermore, the proposed method provides an overall reconstruction error of 6.22%.

  4. Total dose bias dependency and ELDRS effects in bipolar linear devices

    NASA Technical Reports Server (NTRS)

    Yui, C. C.; McClure, S. S.; Rex, B. G.; Lehman, J. M.; Minto, T. D.; Wiedeman, M.

    2002-01-01

    Total dose tests of several bipolar linear devices show sensitivity to both dose rate and bias during exposure. All devices exhibited Enhanced Low Dose Rate Sensitivity (ELDRS). An accelerated ELDRS test method for three different devices demonstrates results similar to tests at low dose rate. Behavior and critical parameters from these tests are compared and discussed.

  5. Solution of the Schrodinger Equation for a Diatomic Oscillator Using Linear Algebra: An Undergraduate Computational Experiment

    ERIC Educational Resources Information Center

    Gasyna, Zbigniew L.

    2008-01-01

    A computational experiment is proposed in which a linear algebra method is applied to the solution of the Schrodinger equation for a diatomic oscillator. Calculations of the vibration-rotation spectrum for the HCl molecule are presented, and the results show excellent agreement with experimental data. (Contains 1 table and 1 figure.)
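
    The linear algebra approach can be sketched in a few lines. The example below is not the article's HCl calculation; it discretizes a dimensionless harmonic oscillator on a grid (illustrative parameters), builds the Hamiltonian matrix, and diagonalizes it, recovering the exact eigenvalues E_n = n + 1/2:

```python
import numpy as np

# Dimensionless harmonic oscillator: H = -1/2 d^2/dx^2 + 1/2 x^2,
# with exact eigenvalues E_n = n + 1/2.
N, L = 1000, 20.0                       # grid points, box size
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Kinetic energy by central finite differences, plus diagonal potential.
main = np.full(N, -2.0)
off = np.ones(N - 1)
T = -0.5 * (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2
V = np.diag(0.5 * x**2)
H = T + V

E = np.linalg.eigvalsh(H)[:4]           # four lowest vibrational levels
```

    A realistic diatomic calculation would replace the harmonic potential with a Morse-type curve and physical constants, but the diagonalization step is identical.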

  6. Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Chang, K. C.

    2005-05-01

    Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution under a time constraint. Several simulation methods are currently available. They include logic sampling (the first stochastic method proposed for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods; then we propose an improved importance sampling algorithm called the linear Gaussian importance sampling algorithm for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function and Gaussian additive noise to approximate the true conditional probability distribution of a continuous variable given both its parents and evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. A performance comparison with other well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
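
    The core of any importance sampling scheme can be illustrated with a toy continuous node. The sketch below is not LGIS itself; it is a generic self-normalized Gaussian importance sampler (all distributions are illustrative) recovering the posterior mean of a Gaussian variable given Gaussian evidence:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy target: posterior ∝ N(x; 0, 1) * N(y = 2; x, 0.5^2).
# Closed form: posterior mean = 1.6, posterior variance = 0.2.
def unnorm_target(x, y=2.0):
    return np.exp(-0.5 * x**2) * np.exp(-0.5 * ((y - x) / 0.5) ** 2)

# Gaussian importance (proposal) distribution, deliberately broad.
mu_q, sig_q = 0.0, 2.0
xs = rng.normal(mu_q, sig_q, size=200_000)
q = np.exp(-0.5 * ((xs - mu_q) / sig_q) ** 2) / (sig_q * np.sqrt(2 * np.pi))

w = unnorm_target(xs) / q                # importance weights
w /= w.sum()                             # self-normalize
post_mean = np.sum(w * xs)
```

    In LGIS-style schemes the proposal would instead be a learned linear-Gaussian approximation of the conditional distribution, adapted from previous samples; the weighting and normalization steps are the same.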

  7. Determining polarizable force fields with electrostatic potentials from quantum mechanical linear response theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hao; Yang, Weitao, E-mail: weitao.yang@duke.edu; Department of Physics, Duke University, Durham, North Carolina 27708

    We developed a new method to calculate the atomic polarizabilities by fitting to the electrostatic potentials (ESPs) obtained from quantum mechanical (QM) calculations within the linear response theory. This parallels the conventional approach of fitting atomic charges based on electrostatic potentials from the electron density. Our ESP fitting is combined with the induced dipole model under the perturbation of uniform external electric fields of all orientations. QM calculations for the linear response to the external electric fields are used as input, fully consistent with the induced dipole model, which itself is a linear response model. The orientation of the uniform external electric fields is integrated in all directions. The integration of orientation and QM linear response calculations together makes the fitting results independent of the orientations and magnitudes of the uniform external electric fields applied. Another advantage of our method is that QM calculation is only needed once, in contrast to the conventional approach, where many QM calculations are needed for many different applied electric fields. The molecular polarizabilities obtained from our method show comparable accuracy with those from fitting directly to the experimental or theoretical molecular polarizabilities. Since ESP is directly fitted, atomic polarizabilities obtained from our method are expected to reproduce the electrostatic interactions better. Our method was used to calculate both transferable atomic polarizabilities for polarizable molecular mechanics force fields and nontransferable molecule-specific atomic polarizabilities.

  8. Structure-preserving interpolation of temporal and spatial image sequences using an optical flow-based method.

    PubMed

    Ehrhardt, J; Säring, D; Handels, H

    2007-01-01

    Modern tomographic imaging devices enable the acquisition of spatial and temporal image sequences. However, the spatial and temporal resolution of such devices is limited, and image interpolation techniques are therefore needed to represent images at a desired level of discretization. This paper presents a method for structure-preserving interpolation between neighboring slices in temporal or spatial image sequences. In a first step, the spatiotemporal velocity field between image slices is determined using an optical flow-based registration method in order to establish spatial correspondence between adjacent slices. An iterative algorithm is applied using the spatial and temporal image derivatives and a spatiotemporal smoothing step. Afterwards, the calculated velocity field is used to generate an interpolated image at the desired time by averaging intensities between corresponding points. Three quantitative measures are defined to evaluate the performance of the interpolation method. The behavior and capability of the algorithm are demonstrated on synthetic images. A population of 17 temporal and spatial image sequences is used to compare the optical flow-based interpolation method with linear and shape-based interpolation. The quantitative results show that the optical flow-based method outperforms linear and shape-based interpolation with statistical significance. The interpolation method presented is able to generate image sequences with the spatial or temporal resolution needed for image comparison, analysis or visualization tasks. Quantitative and qualitative measures extracted from synthetic phantoms and medical image data show that the new method has clear advantages over linear and shape-based interpolation.

  9. Locally linear regression for pose-invariant face recognition.

    PubMed

    Chai, Xiujuan; Shan, Shiguang; Chen, Xilin; Gao, Wen

    2007-07-01

    The variation of facial appearance due to viewpoint (pose) degrades face recognition systems considerably and is one of the bottlenecks in face recognition. One possible solution is to generate a virtual frontal view from any given nonfrontal view to obtain a virtual gallery/probe face. Following this idea, this paper proposes a simple but efficient novel locally linear regression (LLR) method, which generates the virtual frontal view from a given nonfrontal face image. We first justify the basic assumption of the paper that there exists an approximate linear mapping between a nonfrontal face image and its frontal counterpart. Then, by formulating the estimation of the linear mapping as a prediction problem, we present the regression-based solution, i.e., globally linear regression. To improve the prediction accuracy in the case of coarse alignment, LLR is further proposed. In LLR, we first perform dense sampling in the nonfrontal face image to obtain many overlapping local patches. Then, the linear regression technique is applied to each small patch to predict its virtual frontal patch. Through the combination of all these patches, the virtual frontal view is generated. Experimental results on the CMU PIE database show a distinct advantage of the proposed method over the Eigen light-field method.

  10. One step linear reconstruction method for continuous wave diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Ukhrowiyah, N.; Yasin, M.

    2017-09-01

    A one-step linear reconstruction method for continuous wave diffuse optical tomography is proposed and demonstrated for a polyvinyl chloride based material and a breast phantom. The approximation used in this method consists of selecting a regularization coefficient and evaluating the difference between two states, corresponding to data acquired without and with a change in optical properties. The method recovers optical parameters from measured boundary data of light propagation in the object. The approach is demonstrated with both simulated and experimental data: a numerical object is used to produce the simulation data, while the polyvinyl chloride based material and breast phantom samples are used to produce the experimental data. Comparisons between experimental and simulation results are conducted to validate the proposed method. The reconstructed images produced by the one-step linear reconstruction method closely match the original objects. This approach provides a means of imaging that is sensitive to changes in optical properties, which may be particularly useful for functional imaging with continuous wave diffuse optical tomography in the early diagnosis of breast cancer.
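
    A one-step linearized reconstruction of this kind typically amounts to a single regularized linear solve. The sketch below is a generic illustration, not the authors' implementation: a hypothetical sensitivity matrix J maps a change in optical properties to difference data, and Tikhonov regularization (with an assumed regularization coefficient lam) recovers the change in one step:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linearized forward model: changes in boundary data are
# related to changes in optical properties by a sensitivity (Jacobian) J.
n_meas, n_vox = 64, 32
J = rng.standard_normal((n_meas, n_vox))

# A localized change in absorption (the "inclusion" to recover).
dx_true = np.zeros(n_vox)
dx_true[10:14] = 0.05

# Difference data: with-change minus without-change, plus noise.
dy = J @ dx_true + 1e-3 * rng.standard_normal(n_meas)

# One-step linear reconstruction with Tikhonov regularization coefficient lam.
lam = 1e-2
dx_rec = np.linalg.solve(J.T @ J + lam * np.eye(n_vox), J.T @ dy)
```

    The choice of lam plays the role of the regularization coefficient discussed in the abstract; in real diffuse optical tomography J would come from a diffusion-model forward solver rather than random numbers.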

  11. Validated modified Lycopodium spore method development for standardisation of ingredients of an ayurvedic powdered formulation Shatavaryadi churna.

    PubMed

    Kumar, Puspendra; Jha, Shivesh; Naved, Tanveer

    2013-01-01

    A validated modified Lycopodium spore method has been developed for simple and rapid quantification of powdered herbal drugs. The Lycopodium spore method was performed on the ingredients of Shatavaryadi churna, an ayurvedic formulation used as an immunomodulator, galactagogue, aphrodisiac and rejuvenator. The diagnostic characters of each ingredient of Shatavaryadi churna were estimated individually. Microscopic determination, counting of identifying characters, and measurement of the area, length and breadth of identifying characters were performed using a Leica DMLS-2 microscope. The method was validated for intraday precision, linearity, specificity, repeatability, accuracy and system suitability. The method is simple, precise, sensitive and accurate, and can be used for routine standardisation of raw materials of herbal drugs. It gives the ratio of individual ingredients in the powdered drug, so that any adulteration of the genuine drug with its adulterant can be detected. The method shows very good linearity, with values between 0.988 and 0.999 for the number and area of identifying characters. The percentage purity of a sample drug can be determined using the linear equation of the standard genuine drug.

  12. UV Spectrophotometric Method for Estimation of Polypeptide-K in Bulk and Tablet Dosage Forms

    NASA Astrophysics Data System (ADS)

    Kaur, P.; Singh, S. Kumar; Gulati, M.; Vaidya, Y.

    2016-01-01

    An analytical method for estimation of polypeptide-k using UV spectrophotometry has been developed and validated for bulk as well as tablet dosage form. The developed method was validated for linearity, precision, accuracy, specificity, robustness, detection, and quantitation limits. The method has shown good linearity over the range from 100.0 to 300.0 μg/ml with a correlation coefficient of 0.9943. The percentage recovery of 99.88% showed that the method was highly accurate. The precision demonstrated relative standard deviation of less than 2.0%. The LOD and LOQ of the method were found to be 4.4 and 13.33, respectively. The study established that the proposed method is reliable, specific, reproducible, and cost-effective for the determination of polypeptide-k.
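
    The linearity and limit calculations described here follow standard validation practice and can be sketched as below. The calibration data are synthetic and purely illustrative (not the paper's measurements); the limits use the common ICH formulas LOD = 3.3*sigma/S and LOQ = 10*sigma/S, with sigma the residual standard deviation and S the slope:

```python
import numpy as np

# Synthetic calibration data over a validated range (concentration in
# ug/ml vs. absorbance); the values are illustrative only.
conc = np.array([100.0, 150.0, 200.0, 250.0, 300.0])
absb = np.array([0.21, 0.31, 0.40, 0.52, 0.61])

# Least-squares calibration line and correlation coefficient.
slope, intercept = np.polyfit(conc, absb, 1)
r = np.corrcoef(conc, absb)[0, 1]

# ICH-style limits from the residual standard deviation and the slope:
# LOD = 3.3*sigma/S, LOQ = 10*sigma/S.
resid = absb - (slope * conc + intercept)
sigma = np.std(resid, ddof=2)            # ddof=2: two fitted parameters
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
```

    With real data, sigma may alternatively be taken from the standard deviation of blank responses; the formulas are otherwise unchanged.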

  13. A BiCGStab2 variant of the IDR(s) method for solving linear equations

    NASA Astrophysics Data System (ADS)

    Abe, Kuniyoshi; Sleijpen, Gerard L. G.

    2012-09-01

    The hybrid Bi-Conjugate Gradient (Bi-CG) methods, such as the BiCG STABilized (BiCGSTAB), BiCGstab(l), BiCGStab2 and BiCG×MR2 methods, are well-known solvers for linear equations with nonsymmetric matrices. The Induced Dimension Reduction method IDR(s) has recently been proposed, and it has been reported that IDR(s) is often more effective than the hybrid Bi-CG methods. A variant of IDR(s) combining the stabilization polynomial of BiCGstab(l) has been designed to improve the convergence of the original IDR(s) method. We therefore propose IDR(s) combined with the stabilization polynomial of BiCGStab2. Numerical experiments show that our proposed variant of IDR(s) is more effective than the original IDR(s) and BiCGStab2 methods.
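
    For reference, the baseline BiCGSTAB iteration that these hybrid methods build on can be written compactly. The sketch below is a textbook BiCGSTAB in NumPy (neither IDR(s) nor the proposed variant), applied to a small nonsymmetric, diagonally dominant system:

```python
import numpy as np

def bicgstab(A, b, tol=1e-10, max_iter=1000):
    """Textbook BiCGSTAB for a nonsymmetric system Ax = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    r_hat = r.copy()                     # fixed shadow residual
    rho = alpha = omega = 1.0
    v = np.zeros_like(b)
    p = np.zeros_like(b)
    b_norm = np.linalg.norm(b)
    for _ in range(max_iter):
        rho_new = r_hat @ r
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho / (r_hat @ v)
        s = r - alpha * v
        if np.linalg.norm(s) < tol * b_norm:   # half-step already converged
            x = x + alpha * p
            break
        t = A @ s
        omega = (t @ s) / (t @ t)
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol * b_norm:
            break
    return x

# Small nonsymmetric, diagonally dominant test system.
rng = np.random.default_rng(7)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)
x_true = rng.standard_normal(n)
b = A @ x_true
x = bicgstab(A, b)
```

    The stabilization-polynomial variants (BiCGstab(l), BiCGStab2, and the IDR(s) combinations discussed above) replace the single omega minimization step with higher-degree polynomial steps.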

  14. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    PubMed Central

    2011-01-01

    Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven non-parametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results Press' Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (median (Me) = 0.76) and area under the ROC curve (Me = 0.90). However, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with a high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed overall classification accuracy above a median value of 0.63, but for most, sensitivity was around or even lower than a median value of 0.5. Conclusions When sensitivity, specificity and overall classification accuracy are taken into account, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested for the prediction of dementia using several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing. PMID:21849043

  15. Linear network representation of multistate models of transport.

    PubMed Central

    Sandblom, J; Ring, A; Eisenman, G

    1982-01-01

    By introducing external driving forces in rate-theory models of transport, we show how the Eyring rate equations can be transformed into Ohm's law with potentials that obey Kirchhoff's second law. With such a formalism, the state diagram of a multioccupancy, multicomponent system can be directly converted into a linear network with resistors connecting nodal (branch) points and with capacitances connecting each nodal point to a reference point. The external forces appear as emf or current generators in the network. This theory allows the algebraic methods of linear network theory to be used in solving the flux equations for multistate models and is particularly useful for making proper simplifying approximations in models of complex membrane structure. Some general properties of the linear network representation are also deduced. It is shown, for instance, that Maxwell's reciprocity relationships of linear networks lead directly to Onsager's relationships in the near-equilibrium region. Finally, as an example of the procedure, the equivalent circuit method is used to solve the equations for a few transport models. PMID:7093425

  16. Optimization of cutting parameters for machining time in turning process

    NASA Astrophysics Data System (ADS)

    Mavliutov, A. R.; Zlotnikov, E. G.

    2018-03-01

    This paper describes the most effective methods for nonlinear constrained optimization of cutting parameters in the turning process. Among them are the Linearization Programming Method with a dual-simplex algorithm, the Interior Point method, and the Augmented Lagrangian Genetic Algorithm (ALGA). Each of them is tested on an actual example: the minimization of machining time in the turning process. The computation was conducted in the MATLAB environment. The comparative results obtained from the application of these methods show that the optimal values of the linearized objective and the original function are the same, and that ALGA gives sufficiently accurate values; however, when the algorithm uses the hybrid function with the Interior Point algorithm, the resulting values have the maximal accuracy.

  17. MTF measurement of LCDs by a linear CCD imager: I. Monochrome case

    NASA Astrophysics Data System (ADS)

    Kim, Tae-hee; Choe, O. S.; Lee, Yun Woo; Cho, Hyun-Mo; Lee, In Won

    1997-11-01

    We construct a modulation transfer function (MTF) measurement system for an LCD using a linear charge-coupled device (CCD) imager. The MTF as used for optical systems cannot describe the combined effect of resolution and contrast on the image quality of a display. We therefore present a new measurement method based on the transmission property of an LCD. The MTF is measured while the contrast and brightness levels are controlled. From the results, we show that the method is useful for describing image quality. The new measurement method and its conditions are described. To demonstrate its validity, the method is applied to compare the performance of two different LCDs.
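
    A common way to obtain such an MTF is to Fourier-transform a measured line-spread function (LSF) from the linear CCD scan. The sketch below uses a synthetic Gaussian LSF with an illustrative pixel pitch and blur width, for which the MTF has the closed form exp(-2*(pi*sigma*f)^2):

```python
import numpy as np

# Simulated line-spread function (LSF) as sampled by a linear CCD imager:
# a Gaussian blur whose width sets the resolution loss.
pitch = 0.01                      # CCD pixel pitch in mm (illustrative)
x = (np.arange(512) - 256) * pitch
sigma = 0.05                      # blur width in mm (illustrative)
lsf = np.exp(-0.5 * (x / sigma) ** 2)

# MTF is the magnitude of the Fourier transform of the LSF,
# normalized to 1 at zero spatial frequency.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(len(lsf), d=pitch)   # cycles/mm

# For a Gaussian LSF the analytic result is MTF(f) = exp(-2*(pi*sigma*f)^2).
```

    A measured LSF would replace the synthetic Gaussian; the transform and normalization steps are the same.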

  18. Polarization ratio property and material classification method in passive millimeter wave polarimetric imaging

    NASA Astrophysics Data System (ADS)

    Cheng, Yayun; Qi, Bo; Liu, Siyuan; Hu, Fei; Gui, Liangqi; Peng, Xiaohui

    2016-10-01

    Polarimetric measurements can provide additional information compared to unpolarized ones. In this paper, the linear polarization ratio (LPR) is introduced as a feature discriminator. The LPR properties of several materials are investigated using Fresnel theory. The theoretical results show that the LPR is sensitive to the material type (metal or dielectric). A linear polarization ratio-based (LPR-based) method is then presented to distinguish between metal and dielectric materials. In order to apply this method in practice, the optimal range of incident angles has been discussed. Typical outdoor experiments, including various objects such as an aluminum plate, grass, concrete, soil and wood, have been conducted to validate the presented classification method.
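
    The Fresnel-theory contrast underlying an LPR discriminator can be sketched numerically. The refractive indices below are illustrative, and the specific LPR definition used here (the ratio of s- to p-polarized power reflectance) is an assumption for illustration, not necessarily the paper's exact definition:

```python
import numpy as np

def fresnel_reflectances(n, theta_deg):
    """Fresnel power reflectances (Rs, Rp) at a smooth interface from air
    into a medium with a (possibly complex) refractive index n."""
    th = np.radians(theta_deg)
    cos_i = np.cos(th)
    # Snell's law; complex arithmetic handles lossy (metallic) media.
    sin_t = np.sin(th) / n
    cos_t = np.sqrt(1 - sin_t**2 + 0j)
    rs = (cos_i - n * cos_t) / (cos_i + n * cos_t)
    rp = (n * cos_i - cos_t) / (n * cos_i + cos_t)
    return abs(rs) ** 2, abs(rp) ** 2

theta = 50.0
# Dielectric-like index vs. a highly conductive metal at millimeter
# wavelengths (large complex index; both values are illustrative).
Rs_d, Rp_d = fresnel_reflectances(2.0 + 0.05j, theta)
Rs_m, Rp_m = fresnel_reflectances(200.0 + 200.0j, theta)

lpr_dielectric = Rs_d / Rp_d     # assumed LPR discriminator: Rs/Rp
lpr_metal = Rs_m / Rp_m
```

    For the metal both polarizations reflect almost totally, so the ratio stays near unity, while the dielectric shows a strong s/p contrast at oblique incidence; this is the sensitivity to material type the abstract describes.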

  19. Linear and nonlinear regression techniques for simultaneous and proportional myoelectric control.

    PubMed

    Hahne, J M; Biessmann, F; Jiang, N; Rehbaum, H; Farina, D; Meinecke, F C; Muller, K-R; Parra, L C

    2014-03-01

    In recent years the number of actively controllable joints in electrically powered hand prostheses has increased significantly. However, the control strategies for these devices in current clinical use are inadequate, as they require separate and sequential control of each degree of freedom (DoF). In this study we systematically compare linear and nonlinear regression techniques for independent, simultaneous and proportional myoelectric control of wrist movements with two DoF. These techniques include linear regression, mixture of linear experts (ME), the multilayer perceptron, and kernel ridge regression (KRR). They are investigated offline with electromyographic signals acquired from ten able-bodied subjects and one person with congenital upper limb deficiency. The control accuracy is reported as a function of the number of electrodes and the amount and diversity of training data, providing guidance for the requirements in clinical practice. The results showed that KRR, a nonparametric statistical learning method, outperformed the other methods. However, simple transformations in the feature space could linearize the problem, so that linear models could achieve similar performance to KRR at much lower computational cost. In particular ME, a physiologically inspired extension of linear regression, represents a promising candidate for the next generation of prosthetic devices.
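
    Kernel ridge regression itself is compact enough to sketch. The example below is a generic NumPy KRR with an RBF kernel on toy data standing in for the EMG-to-kinematics mapping (all parameters are illustrative, not the study's settings):

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """RBF (Gaussian) kernel matrix between the rows of X1 and X2."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=1e-3, gamma=1.0):
    """Kernel ridge regression: solve (K + lam*I) alpha = y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Toy nonlinear mapping standing in for EMG-to-wrist-angle regression.
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1]          # nonlinear target
alpha = krr_fit(X, y, lam=1e-3, gamma=5.0)
y_hat = krr_predict(X, alpha, X, gamma=5.0)
mse = np.mean((y - y_hat) ** 2)
```

    The cubic cost of the solve in the number of training samples is the computational burden the abstract contrasts with the cheaper linear models.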

  20. Hyperspectral and multispectral data fusion based on linear-quadratic nonnegative matrix factorization

    NASA Astrophysics Data System (ADS)

    Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz

    2017-04-01

    This paper proposes three multisharpening approaches to enhance the spatial resolution of urban hyperspectral remote sensing images. These approaches, related to linear-quadratic spectral unmixing techniques, use a linear-quadratic nonnegative matrix factorization (NMF) multiplicative algorithm. These methods begin by unmixing the observable high-spectral/low-spatial resolution hyperspectral and high-spatial/low-spectral resolution multispectral images. The obtained high-spectral/high-spatial resolution features are then recombined, according to the linear-quadratic mixing model, to obtain an unobservable multisharpened high-spectral/high-spatial resolution hyperspectral image. In the first designed approach, hyperspectral and multispectral variables are independently optimized, once they have been coherently initialized. These variables are alternately updated in the second designed approach. In the third approach, the considered hyperspectral and multispectral variables are jointly updated. Experiments, using synthetic and real data, are conducted to assess the efficiency, in spatial and spectral domains, of the designed approaches and of linear NMF-based approaches from the literature. Experimental results show that the designed methods globally yield very satisfactory spectral and spatial fidelities for the multisharpened hyperspectral data. They also prove that these methods significantly outperform the used literature approaches.
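
    For orientation, the standard (purely linear) NMF multiplicative updates that the linear-quadratic algorithm extends can be sketched as follows; the data here are synthetic and low-rank, not hyperspectral imagery:

```python
import numpy as np

def nmf_multiplicative(V, r, n_iter=500, seed=0):
    """Standard NMF, V ≈ W @ H, via Lee-Seung multiplicative updates
    minimizing the Frobenius norm. All factors stay nonnegative."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    eps = 1e-12                          # avoid division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Synthetic nonnegative data: 3 "endmembers", 100 pixels, 50 bands.
rng = np.random.default_rng(1)
W_true = rng.random((100, 3))
H_true = rng.random((3, 50))
V = W_true @ H_true
W, H = nmf_multiplicative(V, 3)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

    The linear-quadratic variant in the paper augments the mixing model with quadratic cross terms and modifies these updates accordingly; the alternating multiplicative structure is the same.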

  1. Dual linear structured support vector machine tracking method via scale correlation filter

    NASA Astrophysics Data System (ADS)

    Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen

    2018-01-01

    Adaptive tracking-by-detection methods based on the structured support vector machine (SVM) have performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy for object scale estimation, which limits the overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, composed of a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark of 100 challenging video sequences, the average precision of the proposed method is 82.8%.

  2. EEG feature selection method based on decision tree.

    PubMed

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain-computer interfaces (BCI). In order to automate the feature selection process, we propose a novel EEG feature selection method based on a decision tree (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) is used, and the selection process based on the decision tree is performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are a series of non-linear signals, a generalized linear classifier, the support vector machine (SVM), was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on a decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
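
    The PCA step of such a pipeline can be sketched in NumPy. This is a generic illustration (synthetic correlated features, not EEG data) of projecting epochs onto the top principal components before feature selection and classification:

```python
import numpy as np

def pca_transform(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalue order
    comps = evecs[:, ::-1][:, :k]               # top-k eigenvectors
    explained = evals[::-1][:k] / evals.sum()   # fraction of variance kept
    return Xc @ comps, explained

# Toy "EEG epoch" features: 8 channels driven by 2 latent sources plus
# noise, so a couple of components capture almost all the variance.
rng = np.random.default_rng(0)
latent = rng.standard_normal((300, 2))
mix = rng.standard_normal((2, 8))
X = latent @ mix + 0.05 * rng.standard_normal((300, 8))

Z, explained = pca_transform(X, 2)
```

    In the paper's pipeline the projected features Z would then be searched by the decision tree and fed to the SVM.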

  3. Sparse 4D TomoSAR imaging in the presence of non-linear deformation

    NASA Astrophysics Data System (ADS)

    Khwaja, Ahmed Shaharyar; Çetin, Müjdat

    2018-04-01

    In this paper, we present a sparse four-dimensional tomographic synthetic aperture radar (4D TomoSAR) imaging scheme that can estimate elevation and linear as well as non-linear seasonal deformation rates of scatterers using the interferometric phase. Unlike existing sparse processing techniques that use fixed dictionaries based on a linear deformation model, we use a variable dictionary for the non-linear deformation in the form of seasonal sinusoidal deformation, in addition to the fixed dictionary for the linear deformation. We estimate the amplitude of the sinusoidal deformation using an optimization method and create the variable dictionary using the estimated amplitude. We show preliminary results using simulated data that demonstrate the soundness of our proposed technique for sparse 4D TomoSAR imaging in the presence of non-linear deformation.

  4. On the derivation of linear irreversible thermodynamics for classical fluids

    PubMed Central

    Theodosopulu, M.; Grecos, A.; Prigogine, I.

    1978-01-01

    We consider the microscopic derivation of the linearized hydrodynamic equations for an arbitrary simple fluid. Our discussion is based on the concept of hydrodynamical modes, and use is made of the ideas and methods of the theory of subdynamics. We also show that this analysis leads to the Gibbs relation for the entropy of the system. PMID:16592516

  5. [Baseline correction of spectrum for the inversion of chlorophyll-a concentration in the turbidity water].

    PubMed

    Wei, Yu-Chun; Wang, Guo-Xiang; Cheng, Chun-Mei; Zhang, Jing; Sun, Xiao-Peng

    2012-09-01

    Suspended particulate material is the main factor affecting remote sensing inversion of chlorophyll-a concentration (Chla) in turbid water. Based on the optical properties of suspended material in water, the present paper proposes a linear baseline correction method to weaken the suspended-particle contribution to the spectrum above the turbid water surface. The linear baseline is defined as the line connecting the reflectance at 450 and 750 nm, and baseline correction subtracts this baseline from the spectral reflectance. Analysis of in situ field data from Meiliangwan, Taihu Lake, collected in April 2011 and March 2010, shows that linear baseline correction of the spectrum can improve the inversion precision of Chla and produce better model diagnostics. For the March 2010 data, the RMSE of the band-ratio model built from the original spectrum is 4.11 mg x m(-3), while that built from the baseline-corrected spectrum is 3.58 mg x m(-3). Meanwhile, the residual distribution and homoscedasticity of the model built from the baseline-corrected spectrum are clearly improved. The RMSE for the April 2011 data shows a similar result. The authors suggest using linear baseline correction as a spectrum processing method to improve Chla inversion accuracy in turbid water without algal bloom.
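
    The described baseline correction is simple to express in code. The sketch below implements the paper's definition, subtracting the line connecting the reflectance at 450 and 750 nm, on a synthetic spectrum whose shape is purely illustrative:

```python
import numpy as np

def linear_baseline_correct(wl, refl, lo=450.0, hi=750.0):
    """Subtract the straight line connecting the reflectance at `lo` and
    `hi` nm (the linear baseline) from the whole spectrum."""
    i_lo, i_hi = np.searchsorted(wl, [lo, hi])
    slope = (refl[i_hi] - refl[i_lo]) / (wl[i_hi] - wl[i_lo])
    baseline = refl[i_lo] + slope * (wl - wl[i_lo])
    return refl - baseline

# Synthetic water-surface reflectance: a broad suspended-sediment trend
# plus a chlorophyll-a feature near 700 nm (values illustrative).
wl = np.arange(400.0, 801.0, 1.0)
background = 0.02 + 5e-5 * (wl - 400.0)            # sediment-driven trend
chl_peak = 0.008 * np.exp(-0.5 * ((wl - 700.0) / 15.0) ** 2)
refl = background + chl_peak

corrected = linear_baseline_correct(wl, refl)
```

    After correction the sediment-driven linear trend vanishes by construction at the two anchor wavelengths, leaving the chlorophyll feature near 700 nm as the dominant signal for the band-ratio model.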

  6. Gait event detection using linear accelerometers or angular velocity transducers in able-bodied and spinal-cord injured individuals.

    PubMed

    Jasiewicz, Jan M; Allum, John H J; Middleton, James W; Barriskill, Andrew; Condie, Peter; Purcell, Brendan; Li, Raymond Che Tin

    2006-12-01

    We report on three different methods of gait event detection (toe-off and heel strike) using miniature linear accelerometers and angular velocity transducers, in comparison to standard pressure-sensitive foot switches. Detection was performed with normal and spinal-cord injured subjects. The detection of end contact (EC), normally toe-off, and of initial contact (IC), normally heel strike, was based on either foot linear accelerations, foot sagittal angular velocity, or shank sagittal angular velocity. The results showed that all three methods were as accurate as foot switches in estimating the times of IC and EC for normal gait patterns. In spinal-cord injured subjects, shank angular velocity was significantly less accurate (p<0.02). We conclude that detection based on foot linear accelerations or foot angular velocity can correctly identify the timing of IC and EC events in both normal and spinal-cord injured subjects.

  7. Iterative algorithms for a non-linear inverse problem in atmospheric lidar

    NASA Astrophysics Data System (ADS)

    Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto

    2017-08-01

    We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative, and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect the noise statistics. In this study we show that proper modelling of the noise distribution can substantially improve the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with a non-negativity constraint and propose two iterative algorithms derived from the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.

  8. Three-dimensional instabilities of natural convection between two differentially heated vertical plates: Linear and nonlinear complementary approaches

    NASA Astrophysics Data System (ADS)

    Gao, Zhenlan; Podvin, Berengere; Sergent, Anne; Xin, Shihe; Chergui, Jalel

    2018-05-01

    The transition to chaos of the air flow between two vertical plates maintained at different temperatures is studied in the Boussinesq approximation. After the first bifurcation at the critical Rayleigh number Rac, the flow consists of two-dimensional (2D) corotating rolls. The stability of the 2D rolls is examined, confronting linear predictions with nonlinear integration. In all cases the 2D rolls are destabilized in the spanwise direction. Efficient linear stability analysis based on an Arnoldi method shows competition between two eigenmodes, corresponding to different spanwise wavelengths and different types of roll distortion. Nonlinear integration shows that the lower-wave-number mode is always dominant. A partial route to chaos is established through the nonlinear simulations. The flow becomes temporally chaotic for Ra = 1.05 Rac, but remains characterized by the spatial patterns identified by linear stability analysis. This highlights the complementary role of linear stability analysis and nonlinear simulation.

  9. Construction of Protograph LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.

  10. Translocation of double strand DNA into a biological nanopore

    NASA Astrophysics Data System (ADS)

    Chatkaew, Sunita; Mlayeh, Lamia; Leonetti, Marc; Homble, Fabrice

    2009-03-01

    Translocation of double-strand DNA across a unique mitochondrial biological nanopore (VDAC) is observed by an electrophysiological method. Characteristics of the open and sub-conductance states of VDAC are studied. When the applied electric potential exceeds ±20 mV, VDAC transits to a sub-conductance state. Plasmids (circular double-strand DNA) with a diameter greater than that of the channel show a current reduction in the channel during the interaction, but the zero-current state is not observed. In contrast, the interaction of linear double-strand DNA with the channel shows a current reduction along with the zero-current state. These results indicate the passage of linear double-strand DNA across the channel and an electrostatic effect, due to the surface charges of the DNA and the channel, for both circular and linear double-strand DNA.

  11. Implementing general quantum measurements on linear optical and solid-state qubits

    NASA Astrophysics Data System (ADS)

    Ota, Yukihiro; Ashhab, Sahel; Nori, Franco

    2013-03-01

    We show a systematic construction for implementing general measurements on a single qubit, including both strong (or projection) and weak measurements. We mainly focus on linear optical qubits. The present approach is composed of simple and feasible elements, i.e., beam splitters, wave plates, and polarizing beam splitters. We show how the parameters characterizing the measurement operators are controlled by the linear optical elements. We also propose a method for the implementation of general measurements in solid-state qubits. Furthermore, we show an interesting application of the general measurements, i.e., entanglement amplification. YO is partially supported by the SPDR Program, RIKEN. SA and FN acknowledge ARO, NSF grant No. 0726909, JSPS-RFBR contract No. 12-02-92100, Grant-in-Aid for Scientific Research (S), MEXT Kakenhi on Quantum Cybernetics, and the JSPS via its FIRST program.

  12. Quantum State Tomography via Linear Regression Estimation

    PubMed Central

    Qi, Bo; Hou, Zhibo; Li, Li; Dong, Daoyi; Xiang, Guoyong; Guo, Guangcan

    2013-01-01

    A simple yet efficient state reconstruction algorithm of linear regression estimation (LRE) is presented for quantum state tomography. In this method, quantum state reconstruction is converted into a parameter estimation problem of a linear regression model and the least-squares method is employed to estimate the unknown parameters. An asymptotic mean squared error (MSE) upper bound for all possible states to be estimated is given analytically, which depends explicitly upon the involved measurement bases. This analytical MSE upper bound can guide one to choose optimal measurement sets. The computational complexity of LRE is O(d⁴) where d is the dimension of the quantum state. Numerical examples show that LRE is much faster than maximum-likelihood estimation for quantum state tomography. PMID:24336519
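    The reduction of tomography to linear regression can be sketched for a single qubit (a hedged toy version, not the paper's general construction): measuring spin along direction n gives outcome probability p = (1 + n·r)/2 for Bloch vector r, so y = 2p - 1 = n·r is an ordinary linear model and least squares recovers r. The state and measurement directions below are illustrative assumptions.

```python
# Hedged single-qubit sketch of the LRE idea: probabilities are linear in
# the Bloch vector r, so tomography becomes linear regression.  With the
# six axis-aligned directions the design matrix N satisfies N^T N = 2I,
# so the least-squares solution has the closed form r = N^T y / 2.
dirs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
r_true = (0.3, -0.2, 0.5)                      # assumed test state
p = [(1 + sum(nk * rk for nk, rk in zip(n, r_true))) / 2 for n in dirs]
y = [2 * prob - 1 for prob in p]               # regression targets
r_hat = [sum(n[k] * yi for n, yi in zip(dirs, y)) / 2 for k in range(3)]
```

With finite measurement counts, the exact probabilities would be replaced by observed frequencies and the same least-squares step gives the estimate, which is what makes the method fast compared to maximum likelihood.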

  13. Retrieval of all effective susceptibilities in nonlinear metamaterials

    NASA Astrophysics Data System (ADS)

    Larouche, Stéphane; Radisic, Vesna

    2018-04-01

    Electromagnetic metamaterials offer a great avenue to engineer and amplify the nonlinear response of materials. Their electric, magnetic, and magnetoelectric linear and nonlinear response are related to their structure, providing unprecedented liberty to control those properties. Both the linear and the nonlinear properties of metamaterials are typically anisotropic. While the methods to retrieve the effective linear properties are well established, existing nonlinear retrieval methods have serious limitations. In this work, we generalize a nonlinear transfer matrix approach to account for all nonlinear susceptibility terms and show how to use this approach to retrieve all effective nonlinear susceptibilities of metamaterial elements. The approach is demonstrated using sum frequency generation, but can be applied to other second-order or higher-order processes.

  14. Deflection angle detecting system for the large-angle and high-linearity fast steering mirror using quadrant detector

    NASA Astrophysics Data System (ADS)

    Ni, Yingxue; Wu, Jiabin; San, Xiaogang; Gao, Shijie; Ding, Shaohang; Wang, Jing; Wang, Tao

    2018-02-01

    A deflection angle detecting system (DADS) using a quadrant detector (QD) is developed to achieve a large deflection angle and high linearity for the fast steering mirror (FSM). The mathematical model of the DADS is established by analyzing the position-detecting principle and error characteristics of the QD. Based on this mathematical model, a method for optimizing the deflection angle and linearity of the FSM is demonstrated, and its feasibility is proved by simulation and experimental results. Finally, a QD-based FSM is designed and tested. The results show that it achieves 0.72% nonlinearity, ±2.0 deg deflection angle, and 1.11-μrad resolution. The application of this method will therefore be beneficial to the design of FSMs.

  15. Bayesian dynamical systems modelling in the social sciences.

    PubMed

    Ranganathan, Shyam; Spaiser, Viktoria; Mann, Richard P; Sumpter, David J T

    2014-01-01

    Data arising from social systems are often highly complex, involving non-linear relationships between the macro-level variables that characterize these systems. We present a method for analyzing this type of longitudinal or panel data using differential equations. We identify the best non-linear functions that capture interactions between variables, employing Bayes factors to decide how many interaction terms should be included in the model. This method penalizes overly complicated models and identifies models with the most explanatory power. We illustrate our approach on the classic example of relating democracy and economic growth, identifying non-linear relationships between these two variables. We show how multiple variables and variable lags can be accounted for and provide a toolbox in R to implement our approach.

  16. Linearity optimizations of analog ring resonator modulators through bias voltage adjustments

    NASA Astrophysics Data System (ADS)

    Hosseinzadeh, Arash; Middlebrook, Christopher T.

    2018-03-01

    The linearity of a ring resonator modulator (RRM) in microwave photonic links is studied in terms of instantaneous bandwidth, fabrication tolerances, and operational bandwidth. A proposed bias voltage adjustment method is shown to maximize spur-free dynamic range (SFDR) at the instantaneous bandwidths required by microwave photonic link (MPL) applications while also mitigating the effects of RRM fabrication tolerances. The proposed bias voltage adjustment method shows an RRM SFDR improvement of ~5.8 dB over common Mach-Zehnder modulators at 500 MHz instantaneous bandwidth. Analysis of operational bandwidth effects on SFDR shows that RRMs can be promising electro-optic modulators for MPL applications that require high operational frequencies within a limited bandwidth, such as radio-over-fiber 60 GHz wireless network access.

  17. Adaptive nonlinear control for autonomous ground vehicles

    NASA Astrophysics Data System (ADS)

    Black, William S.

    We present the background and motivation for ground vehicle autonomy, with a focus on uses for space exploration. Using a simple design example of an autonomous ground vehicle we derive the equations of motion. After providing the mathematical background for nonlinear systems and control, we present two common methods for exactly linearizing nonlinear systems: feedback linearization and backstepping. We use these in combination with three adaptive control methods: model reference adaptive control, adaptive sliding mode control, and extremum-seeking model reference adaptive control. We show the performance of each combination through several simulation results. We then consider disturbances in the system, and design nonlinear disturbance observers for both single-input-single-output and multi-input-multi-output systems. Finally, we show the performance of these observers with simulation results.
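    The feedback-linearization idea can be illustrated on a textbook pendulum (a hypothetical example with assumed gains and model, not taken from the dissertation): the control input cancels the nonlinearity exactly, leaving a linear closed-loop system whose poles are placed by the gains.

```python
import math

# Hypothetical example of feedback linearization on a pendulum:
#   theta'' = -(g/l) sin(theta) + u
# Choosing u = (g/l) sin(theta) + v cancels the nonlinearity exactly,
# leaving theta'' = v; then v = -k1*theta - k2*dtheta gives closed-loop
# poles at the roots of s^2 + k2 s + k1 (here a double pole at s = -2).
g_over_l = 9.81
k1, k2 = 4.0, 4.0
theta, dtheta = 1.0, 0.0     # initial condition (radians, rad/s)
dt = 1e-3
for _ in range(10000):       # simulate 10 s with forward Euler
    v = -k1 * theta - k2 * dtheta
    u = g_over_l * math.sin(theta) + v
    ddtheta = -g_over_l * math.sin(theta) + u   # reduces exactly to v
    theta += dt * dtheta
    dtheta += dt * ddtheta
```

The adaptive variants discussed in the dissertation replace the known model term g_over_l * sin(theta) with an online estimate, but the cancellation structure is the same.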

  18. Microwave imaging by three-dimensional Born linearization of electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Caorsi, S.; Gragnani, G. L.; Pastorino, M.

    1990-11-01

    An approach to microwave imaging is proposed that uses a three-dimensional vectorial form of the Born approximation to linearize the equation of electromagnetic scattering. The inverse scattering problem is numerically solved for three-dimensional geometries by means of the moment method. A pseudoinversion algorithm is adopted to overcome ill conditioning. Results show that the method is well suited for qualitative imaging purposes, while its capability for exactly reconstructing the complex dielectric permittivity is affected by the limitations inherent in the Born approximation and in ill conditioning.

  19. Comparison of Conjugate Gradient Density Matrix Search and Chebyshev Expansion Methods for Avoiding Diagonalization in Large-Scale Electronic Structure Calculations

    NASA Technical Reports Server (NTRS)

    Bates, Kevin R.; Daniels, Andrew D.; Scuseria, Gustavo E.

    1998-01-01

    We report a comparison of two linear-scaling methods which avoid the diagonalization bottleneck of traditional electronic structure algorithms. The Chebyshev expansion method (CEM) is implemented for carbon tight-binding calculations of large systems and its memory and timing requirements compared to those of our previously implemented conjugate gradient density matrix search (CG-DMS). Benchmark calculations are carried out on icosahedral fullerenes from C60 to C8640 and the linear scaling memory and CPU requirements of the CEM demonstrated. We show that the CPU requisites of the CEM and CG-DMS are similar for calculations with comparable accuracy.

  20. A Large-Particle Monte Carlo Code for Simulating Non-Linear High-Energy Processes Near Compact Objects

    NASA Technical Reports Server (NTRS)

    Stern, Boris E.; Svensson, Roland; Begelman, Mitchell C.; Sikora, Marek

    1995-01-01

    High-energy radiation processes in compact cosmic objects are often expected to have a strongly non-linear behavior. Such behavior is shown, for example, by electron-positron pair cascades and the time evolution of relativistic proton distributions in dense radiation fields. Three independent techniques have been developed to simulate these non-linear problems: the kinetic equation approach; the phase-space density (PSD) Monte Carlo method; and the large-particle (LP) Monte Carlo method. In this paper, we present the latest version of the LP method and compare it with the other methods. The efficiency of the method in treating geometrically complex problems is illustrated by showing results of simulations of 1D, 2D and 3D systems. The method is shown to be powerful enough to treat non-spherical geometries, including such effects as bulk motion of the background plasma, reflection of radiation from cold matter, and anisotropic distributions of radiating particles. It can therefore be applied to simulate high-energy processes in such astrophysical systems as accretion discs with coronae, relativistic jets, pulsar magnetospheres and gamma-ray bursts.

  1. Anderson acceleration of the Jacobi iterative method: An efficient alternative to Krylov methods for large, sparse linear systems

    DOE PAGES

    Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.

    2015-12-01

    We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.
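    The structure of such a scheme can be sketched in a deliberately simplified form (a depth-one Anderson step applied every p-th Jacobi sweep; the authors' AAJ uses a deeper extrapolation history, so this is an assumption-laden toy, not their implementation):

```python
# Simplified sketch: Jacobi sweeps with a periodic depth-1 Anderson step.
# Every p-th iteration, the iterate is replaced by the combination of the
# last two fixed-point maps that minimizes the norm of the mixed residual.
def jacobi_anderson1(A, b, n_iter=60, p=4):
    n = len(b)
    x = [0.0] * n
    g_prev = f_prev = None
    for k in range(n_iter):
        # one Jacobi sweep: g(x) = D^{-1} (b - R x)
        g = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
        f = [gi - xi for gi, xi in zip(g, x)]        # fixed-point residual
        if k % p == p - 1 and f_prev is not None:
            df = [a - c for a, c in zip(f, f_prev)]
            denom = sum(d * d for d in df)
            if denom > 0.0:
                gam = sum(fi * di for fi, di in zip(f, df)) / denom
                g = [gi - gam * (gi - gpi) for gi, gpi in zip(g, g_prev)]
        x, g_prev, f_prev = g, g, f
    return x

# diagonally dominant test system; exact solution is (1/11, 7/11)
x = jacobi_anderson1([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

The appeal noted in the abstract is that each step is as cheap and parallelizable as plain Jacobi, with the occasional small least-squares extrapolation supplying the acceleration.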

  3. Simultaneous quantification of coumarins, flavonoids and limonoids in Fructus Citri Sarcodactylis by high performance liquid chromatography coupled with diode array detector.

    PubMed

    Chu, Jun; Li, Song-Lin; Yin, Zhi-Qi; Ye, Wen-Cai; Zhang, Qing-Wen

    2012-07-01

    A high performance liquid chromatography coupled with diode array detector (HPLC-DAD) method was developed for simultaneous quantification of eleven major bioactive components including six coumarins, three flavonoids and two limonoids in Fructus Citri Sarcodactylis. The analysis was performed on a Cosmosil 5 C18-MS-II column (4.6 mm × 250 mm, 5 μm) with water-acetonitrile gradient elution. The method was validated in terms of linearity, sensitivity, precision, stability and accuracy. It was found that the calibration curves for all analytes showed good linearity (R² > 0.9993) within the test ranges. The overall limit of detection (LOD) and limit of quantification (LOQ) were less than 3.0 and 10.2 ng. The relative standard deviations (RSDs) for intra- and inter-day repeatability were not more than 4.99% and 4.92%, respectively. The sample was stable for at least 48 h. The spike recoveries of eleven components were 95.1-104.9%. The established method was successfully applied to determine eleven components in three samples from different locations. The results showed that the newly developed HPLC-DAD method was linear, sensitive, precise and accurate, and could be used for quality control of Fructus Citri Sarcodactylis. Copyright © 2012 Elsevier B.V. All rights reserved.
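    The linearity figure quoted above (R² of a calibration curve) comes from an ordinary least-squares fit of detector response against standard concentration. A minimal sketch, with entirely made-up data values:

```python
# Illustrative only (made-up calibration data): least-squares calibration
# line and the coefficient of determination R^2 used as the linearity
# metric in method validation.
def calibration_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

conc = [1.0, 2.0, 4.0, 8.0, 16.0]       # hypothetical standards (µg/mL)
area = [2.1, 4.0, 8.2, 16.1, 31.9]      # hypothetical peak areas
slope, intercept, r2 = calibration_line(conc, area)
```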

  4. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples.

    PubMed

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-05

    Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the spectra may be measured on different instruments and the difference between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model from the spectra of the samples measured on two instruments, named the master and slave instrument, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. Consequently, the coefficients of the linear models constructed from the spectra measured on different instruments are similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical applications. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Non-linear molecular pattern classification using molecular beacons with multiple targets.

    PubMed

    Lee, In-Hee; Lee, Seung Hwan; Park, Tai Hyun; Zhang, Byoung-Tak

    2013-12-01

    In vitro pattern classification has been highlighted as an important future application of DNA computing. Previous work has demonstrated the feasibility of linear classifiers using DNA-based molecular computing. However, complex tasks require non-linear classification capability. Here we design a molecular beacon that can interact with multiple targets and experimentally show that its fluorescent signals form a complex radial-basis function, enabling it to be used as a building block for non-linear molecular classification in vitro. The proposed method was successfully applied to solving artificial and real-world classification problems: XOR and microRNA expression patterns. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  6. Coupling and decoupling of the accelerating units for pulsed synchronous linear accelerator

    NASA Astrophysics Data System (ADS)

    Shen, Yi; Liu, Yi; Ye, Mao; Zhang, Huang; Wang, Wei; Xia, Liansheng; Wang, Zhiwen; Yang, Chao; Shi, Jinshui; Zhang, Linwen; Deng, Jianjun

    2017-12-01

    A pulsed synchronous linear accelerator (PSLA), based on solid-state pulse forming line, photoconductive semiconductor switch, and high gradient insulator technologies, is a novel linear accelerator. During commissioning of the prototype PSLA, the energy gain of the proton beams was found to be much lower than expected. In this paper, the degradation of the energy gain is explained by the circuit and cavity coupling effects of the accelerating units. The coupling effects of the accelerating units are studied, and the circuit topologies of these two kinds of coupling effects are presented. Two methods, utilizing inductance and membrane isolation, respectively, are proposed to reduce the circuit coupling effects. The effectiveness of the membrane isolation method is also supported by simulations. The decoupling efficiency of the metal drift tube is also investigated. We carried out experiments on circuit decoupling of the multiple accelerating cavity. The results show that both circuit decoupling methods can increase the normalized voltage.

  7. Error Estimation for the Linearized Auto-Localization Algorithm

    PubMed Central

    Guevara, Jorge; Jiménez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando

    2012-01-01

    The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965
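    The first-order (Taylor) error-propagation idea that underlies the estimate above can be sketched generically; the function and input sigmas below are placeholders, not the actual LAL trilateration equations:

```python
# Generic first-order (delta-method) error propagation with a numerical
# gradient; f and the input sigmas are placeholder assumptions, not the
# LAL trilateration equations.
def propagate_sigma(f, x, sigma, h=1e-6):
    grad = []
    for k in range(len(x)):
        xp, xm = list(x), list(x)
        xp[k] += h
        xm[k] -= h
        grad.append((f(xp) - f(xm)) / (2 * h))   # central difference
    # for independent inputs: var(f) ~ sum_k (df/dx_k)^2 sigma_k^2
    return sum((g * s) ** 2 for g, s in zip(grad, sigma)) ** 0.5

# sanity check: f = x0 + x1 with input sigmas 3 and 4 gives sigma_f = 5
sig = propagate_sigma(lambda x: x[0] + x[1], [1.0, 2.0], [3.0, 4.0])
```

Because the approximation is only first-order, its reliability degrades where the function is strongly curved, which is exactly why the paper introduces the confidence parameter τ.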

  8. Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kookjin; Carlberg, Kevin; Elman, Howard C.

    Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov--Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted $\ell^2$-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted $\ell^2$-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.

  9. Square-wave stripping voltammetric determination of caffeic acid on electrochemically reduced graphene oxide-Nafion composite film.

    PubMed

    Filik, Hayati; Çetintaş, Gamze; Avan, Asiye Aslıhan; Aydar, Sevda; Koç, Serkan Naci; Boz, İsmail

    2013-11-15

    An electrochemical sensor composed of a Nafion-graphene nanocomposite film for the voltammetric determination of caffeic acid (CA) was studied. A Nafion graphene oxide-modified glassy carbon electrode was fabricated by a simple drop-casting method, and the graphene oxide was then electrochemically reduced over the glassy carbon electrode. The electrochemical analysis method was based on the adsorption of CA on Nafion/ER-GO/GCE and the subsequent oxidation of CA during the stripping step. The resulting electrode showed an excellent electrocatalytic response to the oxidation of CA. The electrochemistry of CA on the Nafion/ER-GO modified glassy carbon electrode (GCE) was studied by cyclic voltammetry and square-wave adsorptive stripping voltammetry (SW-AdSV). Under optimized conditions, the calibration curve for CA showed two linear segments: the first from 0.1 to 1.5 µM and the second up to 10 µM. The detection limit was determined as 9.1×10⁻⁸ mol L⁻¹ using SW-AdSV. Finally, the proposed method was successfully used to determine CA in white wine samples. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Q-Method Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato; Ainscough, Thomas; Christian, John; Spanos, Pol D.

    2012-01-01

    A new algorithm is proposed that smoothly integrates non-linear estimation of the attitude quaternion using Davenport's q-method and estimation of non-attitude states through an extended Kalman filter. The new method is compared to a similar existing algorithm, highlighting their similarities and differences. The validity of the proposed approach is confirmed through numerical simulations.

  11. Optimal Stratification of Item Pools in a-Stratified Computerized Adaptive Testing.

    ERIC Educational Resources Information Center

    Chang, Hua-Hua; van der Linden, Wim J.

    2003-01-01

    Developed a method based on 0-1 linear programming to stratify an item pool optimally for use in alpha-stratified adaptive testing. Applied the method to a previous item pool from the computerized adaptive test of the Graduate Record Examinations. Results show the new method performs well in practical situations. (SLD)

  12. Quantitative body DW-MRI biomarkers uncertainty estimation using unscented wild-bootstrap.

    PubMed

    Freiman, M; Voss, S D; Mulkern, R V; Perez-Rossello, J M; Warfield, S K

    2011-01-01

    We present a new method for the uncertainty estimation of diffusion parameters for quantitative body DW-MRI assessment. Diffusion parameter uncertainty estimation from DW-MRI is necessary for clinical applications that use these parameters to assess pathology. However, uncertainty estimation using traditional techniques requires repeated acquisitions, which is undesirable in routine clinical use. Model-based bootstrap techniques, for example, assume an underlying linear model for residual rescaling and cannot be utilized directly for body diffusion parameter uncertainty estimation due to the non-linearity of the body diffusion model. To offset this limitation, our method uses the Unscented transform to compute the residual rescaling parameters from the non-linear body diffusion model, and then applies the wild-bootstrap method to infer the body diffusion parameter uncertainty. Validation through phantom and human subject experiments shows that our method correctly identifies the regions with higher uncertainty in body DW-MRI model parameters, with a relative error of -36% in the uncertainty values.

  13. [Application of ordinary Kriging method in entomologic ecology].

    PubMed

    Zhang, Runjie; Zhou, Qiang; Chen, Cuixian; Wang, Shousong

    2003-01-01

    Geostatistics is a statistical method based on regionalized variables that uses the variogram to analyze the spatial structure and patterns of organisms. When simulating the variogram over a wide range, an optimal fit cannot always be obtained automatically, but an interactive human-computer procedure can be used to optimize the parameters of the spherical models. In this paper, this method and weighted polynomial regression were used to fit the one-step spherical model, the two-step spherical model and a linear function model, and the available neighboring samples were used in the ordinary Kriging procedure, which provides the best linear unbiased estimate under the unbiasedness constraint. The sum of squared deviations between the estimated and measured values was computed for the different theoretical models, and the corresponding graphs are shown. The simulation based on the two-step spherical model gave the best fit, and the one-step spherical model was better than the linear function model.
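    The general ordinary Kriging procedure described above can be sketched as follows, using a one-step spherical variogram; the parameter values and sample configuration are invented for illustration, not taken from the paper:

```python
import math

# Hedged sketch of ordinary Kriging with a spherical variogram model.
def spherical(h, nugget, sill, rng):
    if h <= 0.0:
        return 0.0
    if h >= rng:
        return nugget + sill
    r = h / rng
    return nugget + sill * (1.5 * r - 0.5 * r ** 3)

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting (kriging matrices
    # have zeros on the diagonal, so pivoting is required).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * d for a, d in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ordinary_kriging(pts, vals, target, nugget=0.0, sill=1.0, rng=2.0):
    n = len(pts)
    gam = lambda p, q: spherical(math.dist(p, q), nugget, sill, rng)
    A = [[gam(pts[i], pts[j]) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])          # Lagrange row: weights sum to 1
    b = [gam(p, target) for p in pts] + [1.0]
    w = solve(A, b)[:n]
    return sum(wi * zi for wi, zi in zip(w, vals)), w

# symmetric configuration: all four weights should come out equal (0.25)
est, w = ordinary_kriging([(0, 0), (1, 0), (0, 1), (1, 1)],
                          [1.0, 2.0, 3.0, 4.0], (0.5, 0.5))
```

The unbiasedness constraint appears as the appended Lagrange row forcing the weights to sum to one, which is the "best linear unbiased estimate" property the abstract refers to.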

  14. Tuning graphitic oxide for initiator- and metal-free aerobic epoxidation of linear alkenes

    NASA Astrophysics Data System (ADS)

    Pattisson, Samuel; Nowicka, Ewa; Gupta, Upendra N.; Shaw, Greg; Jenkins, Robert L.; Morgan, David J.; Knight, David W.; Hutchings, Graham J.

    2016-09-01

    Graphitic oxide has potential as a carbocatalyst for a wide range of reactions. Interest in this material has risen enormously due to it being a precursor to graphene via the chemical oxidation of graphite. Despite some studies suggesting that the chosen method of graphite oxidation can influence the physical properties of the graphitic oxide, the preparation method and extent of oxidation remain unresolved for catalytic applications. Here we show that tuning the graphitic oxide surface can be achieved by varying the amount and type of oxidant. The resulting materials differ in level of oxidation, surface oxygen content and functionality. Most importantly, we show that these graphitic oxide materials are active as unique carbocatalysts for low-temperature aerobic epoxidation of linear alkenes in the absence of initiator or metal. An optimum level of oxidation is necessary and materials produced via conventional permanganate-based methods are far from optimal.

  15. Supervised Learning for Dynamical System Learning.

    PubMed

    Hefny, Ahmed; Downey, Carlton; Gordon, Geoffrey J

    2015-01-01

    Recently there has been substantial interest in spectral methods for learning dynamical systems. These methods are popular since they often offer a good tradeoff between computational and statistical efficiency. Unfortunately, they can be difficult to use and extend in practice: e.g., they can make it difficult to incorporate prior information such as sparsity or structure. To address this problem, we present a new view of dynamical system learning: we show how to learn dynamical systems by solving a sequence of ordinary supervised learning problems, thereby allowing users to incorporate prior knowledge via standard techniques such as L1 regularization. Many existing spectral methods are special cases of this new framework, using linear regression as the supervised learner. We demonstrate the effectiveness of our framework by showing examples where nonlinear regression or lasso let us learn better state representations than plain linear regression does; the correctness of these instances follows directly from our general analysis.
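    The "dynamical system learning as supervised learning" reduction can be shown in its very simplest form (a one-dimensional linear system with synthetic, noise-free data; the paper's framework handles far richer state representations and regressors):

```python
# Minimal synthetic sketch of the reduction: learn x_{t+1} = a * x_t by
# ordinary supervised regression of next state on current state.  Here
# the regressor is closed-form least squares; the framework lets users
# swap in lasso, ridge, or nonlinear regressors instead.
a_true = 0.8
xs = [1.0]
for _ in range(50):                        # synthetic trajectory
    xs.append(a_true * xs[-1])
inputs, targets = xs[:-1], xs[1:]          # (x_t, x_{t+1}) training pairs
a_hat = (sum(u * v for u, v in zip(inputs, targets))
         / sum(u * u for u in inputs))     # least-squares slope
```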

  16. Thermal stresses due to cooling of a viscoelastic oceanic lithosphere

    USGS Publications Warehouse

    Denlinger, R.P.; Savage, W.Z.

    1989-01-01

    Instant-freezing methods inaccurately predict transient thermal stresses in rapidly cooling silicate glass plates because of the temperature-dependent rheology of the material. The temperature-dependent rheology of the lithosphere may affect the transient thermal stress distribution in a similar way, and for this reason we use a thermoviscoelastic model to estimate thermal stresses in young oceanic lithosphere. The theory is formulated here for linear creep processes that have an Arrhenius rate dependence on temperature. Our results show that the stress differences between instant freezing and linear thermoviscoelastic theory are most pronounced at early times (0-20 m.y.), when the instant-freezing stresses may be twice as large. The solutions for the two methods asymptotically approach each other with time. A comparison with intraplate seismicity shows that both methods systematically underestimate the depth of compressional stresses inferred from the seismicity. -from Authors

  17. The Use of Linear Instrumental Variables Methods in Health Services Research and Health Economics: A Cautionary Note

    PubMed Central

    Terza, Joseph V; Bradford, W David; Dismuke, Clara E

    2008-01-01

    Objective To investigate potential bias in the use of the conventional linear instrumental variables (IV) method for the estimation of causal effects in inherently nonlinear regression settings. Data Sources Smoking Supplement to the 1979 National Health Interview Survey, National Longitudinal Alcohol Epidemiologic Survey, and simulated data. Study Design Potential bias from the use of the linear IV method in nonlinear models is assessed via simulation studies and real world data analyses in two commonly encountered regression settings: (1) models with a nonnegative outcome (e.g., a count) and a continuous endogenous regressor; and (2) models with a binary outcome and a binary endogenous regressor. Principal Findings The simulation analyses show that substantial bias in the estimation of causal effects can result from applying the conventional IV method in inherently nonlinear regression settings. Moreover, the bias is not attenuated as the sample size increases. This point is further illustrated in the survey data analyses, in which IV-based estimates of the relevant causal effects diverge substantially from those obtained with appropriate nonlinear estimation methods. Conclusions We offer this research as a cautionary note to those who would opt for the use of linear specifications in inherently nonlinear settings involving endogeneity. PMID:18546544

  18. A look-ahead variant of the Lanczos algorithm and its application to the quasi-minimal residual method for non-Hermitian linear systems. Ph.D. Thesis - Massachusetts Inst. of Technology, Aug. 1991

    NASA Technical Reports Server (NTRS)

    Nachtigal, Noel M.

    1991-01-01

    The Lanczos algorithm can be used both for eigenvalue problems and to solve linear systems. However, when applied to non-Hermitian matrices, the classical Lanczos algorithm is susceptible to breakdowns and potential instabilities. In addition, the biconjugate gradient (BCG) algorithm, which is the natural generalization of the conjugate gradient algorithm to non-Hermitian linear systems, has a second source of breakdowns, independent of the Lanczos breakdowns. Here, we present two new results. We propose an implementation of a look-ahead variant of the Lanczos algorithm which overcomes the breakdowns by skipping over those steps where a breakdown or a near-breakdown would occur. The new algorithm can handle look-ahead steps of any length and requires the same number of matrix-vector products and inner products per step as the classical Lanczos algorithm without look-ahead. Based on the proposed look-ahead Lanczos algorithm, we then present a novel BCG-like approach, the quasi-minimal residual (QMR) method, which avoids the second source of breakdowns in the BCG algorithm. We present details of the new method and discuss some of its properties. In particular, we discuss the relationship between QMR and BCG, showing how one can recover the BCG iterates, when they exist, from the QMR iterates. We also present convergence results for QMR, showing the connection between QMR and the generalized minimal residual (GMRES) algorithm, the optimal method in this class of methods. Finally, we give some numerical examples, both for eigenvalue computations and for non-Hermitian linear systems.
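
    For readers who want to experiment, SciPy ships an implementation of the QMR method described here; a minimal sketch on a toy nonsymmetric (non-Hermitian) tridiagonal system:

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import qmr

    # A small non-Hermitian (nonsymmetric) tridiagonal system.
    n = 100
    A = diags([-1.0, 4.0, -2.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    x, info = qmr(A, b)          # info == 0 signals convergence
    residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
    print(info, residual)
    ```

    The diagonally dominant toy matrix converges quickly; the thesis's contribution is of course what happens inside such a solver (look-ahead Lanczos, quasi-minimization), not the call itself.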

  19. Linear ultrasonic motor for absolute gravimeter.

    PubMed

    Jian, Yue; Yao, Zhiyuan; Silberschmidt, Vadim V

    2017-05-01

    Thanks to their compactness and suitability for vacuum applications, linear ultrasonic motors are considered as substitutes for classical electromagnetic motors as driving elements in absolute gravimeters. Still, their application is prevented by relatively low power output. To overcome this limitation and provide better stability, a V-type linear ultrasonic motor with a new clamping method is proposed for a gravimeter. In this paper, a mechanical model of stators with flexible clamping components is suggested, according to a design criterion for clamps of linear ultrasonic motors. After that, an effect of tangential and normal rigidity of the clamping components on mechanical output is studied. It is followed by discussion of a new clamping method with sufficient tangential rigidity and a capability to facilitate pre-load. Additionally, a prototype of the motor with the proposed clamping method was fabricated and the performance tests in vertical direction were implemented. Experimental results show that the suggested motor has structural stability and high dynamic performance, such as no-load speed of 1.4 m/s and maximal thrust of 43 N, meeting the requirements for absolute gravimeters. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. [Study of the Detecting System of CH4 and SO2 Based on Spectral Absorption Method and UV Fluorescence Method].

    PubMed

    Wang, Shu-tao; Wang, Zhi-fang; Liu, Ming-hua; Wei, Meng; Chen, Dong-ying; Wang, Xing-long

    2016-01-01

    According to the spectral absorption characteristics of polluting gases and their fluorescence characteristics, a time-division multiplexing detection system is designed. With this system, methane (CH4) and sulfur dioxide (SO2) can be detected by the spectral absorption method, and SO2 can also be detected by the UV fluorescence method. The system consists of four parts: a switchable combined light source, the common optical path, the gas chamber, and the signal processing section. The spectral absorption and fluorescence characteristics were measured first. Then experiments detecting CH4 and SO2 by the spectral absorption method and detecting SO2 by the UV fluorescence method were conducted, respectively. The measured characteristics show that the absorption-peak excitation wavelengths of SO2 and CH4 for the spectral absorption method are 280 nm and 1.64 μm, respectively, and that the optimal excitation wavelength of SO2 for the UV fluorescence method is 220 nm. The spectral absorption experiments yielded a linear relation between the concentration of CH4 and relative intensity and between the concentration of SO2 and output voltage, with linearities of 98.7% and 99.2%, respectively. The UV fluorescence experiment showed that the relation between the concentration of SO2 and the voltage is also linear, with a linearity of 99.5%. This research shows that the system can be applied to detect polluting gases by both the absorption spectrum method and the UV fluorescence method. Combining the two measurement methods reduces cost and volume, and the system can also be used to measure other gases, so it has practical application value.

  1. Weighted least squares phase unwrapping based on the wavelet transform

    NASA Astrophysics Data System (ADS)

    Chen, Jiafeng; Chen, Haiqin; Yang, Zhengang; Ren, Haixia

    2007-01-01

    The weighted least squares phase unwrapping algorithm is a robust and accurate method for solving the phase unwrapping problem. This method usually leads to a large sparse linear equation system. The Gauss-Seidel relaxation iterative method is usually used to solve this large system; however, it is not practical due to its extremely slow convergence. The multigrid method is an efficient algorithm for improving the convergence rate, but it needs an additional weight restriction operator which is very complicated. For this reason, a multiresolution analysis method based on the wavelet transform is proposed. By applying the wavelet transform, the original system is decomposed into its coarse and fine resolution levels, and an equivalent equation system with a better convergence condition can be obtained. Fast convergence in separate coarse resolution levels speeds up the overall system convergence rate. The simulated experiment shows that the proposed method converges faster and provides better results than the multigrid method.
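
    The slow Gauss-Seidel convergence that motivates multigrid and wavelet acceleration is easy to observe; below is a minimal sketch of plain Gauss-Seidel sweeps on a toy 1-d Laplacian system, a stand-in for the large sparse systems of weighted least-squares unwrapping (not actual phase data):

    ```python
    import numpy as np

    def gauss_seidel(A, b, iters):
        """Plain Gauss-Seidel sweeps: x_i <- (b_i - sum_{j!=i} A_ij x_j) / A_ii."""
        x = np.zeros_like(b)
        for _ in range(iters):
            for i in range(len(b)):
                sigma = A[i] @ x - A[i, i] * x[i]
                x[i] = (b[i] - sigma) / A[i, i]
        return x

    # 1-d discrete Laplacian: the kind of SPD system least-squares unwrapping yields.
    n = 30
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)

    # Even this tiny system needs thousands of sweeps -- the convergence factor
    # approaches 1 as the grid is refined, which is what multigrid fixes.
    x = gauss_seidel(A, b, iters=3000)
    print(np.linalg.norm(A @ x - b))
    ```

    On an n-point grid the Gauss-Seidel convergence factor behaves like 1 - O(1/n²), so the sweep count grows quadratically with resolution; coarse-level corrections (multigrid or, here, wavelet decomposition) remove that dependence.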

  2. Using Set Covering with Item Sampling to Analyze the Infeasibility of Linear Programming Test Assembly Models

    ERIC Educational Resources Information Center

    Huitzing, Hiddo A.

    2004-01-01

    This article shows how set covering with item sampling (SCIS) methods can be used in the analysis and preanalysis of linear programming models for test assembly (LPTA). LPTA models can construct tests, fulfilling a set of constraints set by the test assembler. Sometimes, no solution to the LPTA model exists. The model is then said to be…

  3. The Programming Language Python In Earth System Simulations

    NASA Astrophysics Data System (ADS)

    Gross, L.; Imranullah, A.; Mora, P.; Saez, E.; Smillie, J.; Wang, C.

    2004-12-01

    Mathematical models in earth sciences are based on the solution of systems of coupled, non-linear, time-dependent partial differential equations (PDEs). The spatial and temporal scales vary from planetary scale and millions of years for convection problems to 100 km and 10 years for fault-system simulations. Various techniques are in use to deal with the time dependency (e.g. Crank-Nicolson), with the non-linearity (e.g. Newton-Raphson) and with weakly coupled equations (e.g. non-linear Gauss-Seidel). Besides these high-level solution algorithms, discretization methods (e.g. the finite element method (FEM) and the boundary element method (BEM)) are used to deal with spatial derivatives. Typically, large-scale, three-dimensional meshes are required to resolve geometrical complexity (e.g. in the case of fault systems) or features in the solution (e.g. in mantle convection simulations). The modelling environment escript allows the rapid implementation of new physics as required for the development of simulation codes in earth sciences. Its main objective is to provide a programming language in which the user can define new models and rapidly develop high-level solution algorithms. The current implementation is linked with the finite element package finley as a PDE solver. However, the design is open, and other discretization technologies such as finite differences and boundary element methods could be included. escript is implemented as an extension of the interactive programming environment python (see www.python.org). Key concepts introduced are Data objects, which hold values on nodes or elements of the finite element mesh, and linearPDE objects, which define linear partial differential equations to be solved by the underlying discretization technology. In this paper we present the basic concepts of escript and show how it is used to implement a simulation code for interacting fault systems.
We will show some results of large-scale, parallel simulations on an SGI Altix system. Acknowledgements: Project work is supported by Australian Commonwealth Government through the Australian Computational Earth Systems Simulator Major National Research Facility, Queensland State Government Smart State Research Facility Fund, The University of Queensland and SGI.

  4. Linear least-squares method for global luminescent oil film skin friction field analysis

    NASA Astrophysics Data System (ADS)

    Lee, Taekjin; Nonomura, Taku; Asai, Keisuke; Liu, Tianshu

    2018-06-01

    A data analysis method based on the linear least-squares (LLS) method was developed for the extraction of high-resolution skin friction fields from global luminescent oil film (GLOF) visualization images of a surface in an aerodynamic flow. In this method, the oil film thickness distribution and its spatiotemporal development are measured by detecting the luminescence intensity of the thin oil film. From the resulting set of GLOF images, the thin oil film equation is solved to obtain an ensemble-averaged (steady) skin friction field as an inverse problem. In this paper, the formulation of a discrete linear system of equations for the LLS method is described, and an error analysis is given to identify the main error sources and the relevant parameters. Simulations were conducted to evaluate the accuracy of the LLS method and the effects of the image patterns, image noise, and sample numbers on the results in comparison with the previous snapshot-solution-averaging (SSA) method. An experimental case is shown to enable the comparison of the results obtained using conventional oil flow visualization and those obtained using both the LLS and SSA methods. The overall results show that the LLS method is more reliable than the SSA method and the LLS method can yield a more detailed skin friction topology in an objective way.
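
    The core step, solving an overdetermined discrete linear system in the least-squares sense, can be sketched with synthetic data (the matrix and unknowns below are toy stand-ins, not the discretized thin-oil-film equations):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Overdetermined system M q = d: many pixel/frame constraints, few unknowns,
    # standing in for the discrete thin-oil-film equations (toy data only).
    m, n = 500, 3
    M = rng.standard_normal((m, n))
    q_true = np.array([1.5, -0.7, 0.3])
    d = M @ q_true + 0.01 * rng.standard_normal(m)   # small measurement noise

    # np.linalg.lstsq minimizes ||M q - d||_2, the standard LLS solution.
    q_hat, res, rank, sv = np.linalg.lstsq(M, d, rcond=None)
    print(np.abs(q_hat - q_true).max())
    ```

    Because the least-squares solution averages over all constraints at once, it is less noise-sensitive than averaging per-snapshot solutions, which is the intuition behind the LLS method's advantage over the SSA method.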

  5. A comparative study of first-derivative spectrophotometry and column high-performance liquid chromatography applied to the determination of repaglinide in tablets and for dissolution testing.

    PubMed

    AlKhalidi, Bashar A; Shtaiwi, Majed; AlKhatib, Hatim S; Mohammad, Mohammad; Bustanji, Yasser

    2008-01-01

    A fast and reliable method for the determination of repaglinide is highly desirable to support formulation screening and quality control. A first-derivative UV spectroscopic method was developed for the determination of repaglinide in tablet dosage form and for dissolution testing. First-derivative UV absorbance was measured at 253 nm. The developed method was validated for linearity, accuracy, precision, limit of detection (LOD), and limit of quantitation (LOQ) in comparison to the U.S. Pharmacopeia (USP) column high-performance liquid chromatographic (HPLC) method. The first-derivative UV spectrophotometric method showed excellent linearity [correlation coefficient (r) = 0.9999] in the concentration range of 1-35 microg/mL and precision (relative standard deviation < 1.5%). The LOD and LOQ were 0.23 and 0.72 microg/mL, respectively, and good recoveries were achieved (98-101.8%). Statistical comparison of results of the first-derivative UV spectrophotometric and the USP HPLC methods using the t-test showed that there was no significant difference between the 2 methods. Additionally, the method was successfully used for the dissolution test of repaglinide and was found to be reliable, simple, fast, and inexpensive.

  6. A comparison of radiometric correction techniques in the evaluation of the relationship between LST and NDVI in Landsat imagery.

    PubMed

    Tan, Kok Chooi; Lim, Hwee San; Matjafri, Mohd Zubir; Abdullah, Khiruddin

    2012-06-01

    Atmospheric corrections for multi-temporal optical satellite images are necessary, especially in change detection analyses, such as normalized difference vegetation index (NDVI) rationing. Abrupt change detection analysis using remote-sensing techniques requires radiometric congruity and atmospheric correction to monitor terrestrial surfaces over time. Two atmospheric correction methods were used for this study: relative radiometric normalization and the simplified method for atmospheric correction (SMAC) in the solar spectrum. A multi-temporal data set consisting of two sets of Landsat images from the period between 1991 and 2002 of Penang Island, Malaysia, was used to compare NDVI maps, which were generated using the proposed atmospheric correction methods. Land surface temperature (LST) was retrieved using ATCOR3_T in PCI Geomatica 10.1 image processing software. Linear regression analysis was utilized to analyze the relationship between NDVI and LST. This study reveals that both of the proposed atmospheric correction methods yielded high accuracy through examination of the linear correlation coefficients. To check the accuracy of the equation obtained through linear regression analysis for every single satellite image, 20 points were randomly chosen. The results showed that the SMAC method yielded a consistent error when predicting the NDVI value from the equation derived by linear regression analysis. The errors (average) from both proposed atmospheric correction methods were less than 10%.

  7. An orthogonal return method for linearly polarized beam based on the Faraday effect and its application in interferometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Benyong, E-mail: chenby@zstu.edu.cn; Zhang, Enzheng; Yan, Liping

    2014-10-15

    Correct return of the measuring beam is essential for laser interferometers to carry out measurement. In practice, because the measured object inevitably rotates or moves laterally, measurement accuracy decreases, and measurement may even become impossible. To solve this problem, a novel orthogonal return method for a linearly polarized beam based on the Faraday effect is presented. The orthogonal return of the incident linearly polarized beam is realized by using a Faraday rotator with a rotational angle of 45°. The optical configuration of the method is designed and analyzed in detail. To verify its practicability in polarization interferometry, a laser heterodyne interferometer based on this method was constructed and precision displacement measurement experiments were performed. These results show that the advantage of the method is that the correct return of the incident measuring beam is ensured even when large lateral displacement or angular rotation of the measured object occurs, so that interferometric measurement can still be carried out.

  8. Method validation using weighted linear regression models for quantification of UV filters in water samples.

    PubMed

    da Silva, Claudia Pereira; Emídio, Elissandro Soares; de Marchi, Mary Rosa Rodrigues

    2015-01-01

    This paper describes the validation of a method consisting of solid-phase extraction followed by gas chromatography-tandem mass spectrometry for the analysis of the ultraviolet (UV) filters benzophenone-3, ethylhexyl salicylate, ethylhexyl methoxycinnamate and octocrylene. The method validation criteria included evaluation of selectivity, analytical curve, trueness, precision, limits of detection and limits of quantification. The non-weighted linear regression model has traditionally been used for calibration, but it is not necessarily the optimal model in all cases. Because the assumption of homoscedasticity was not met for the analytical data in this work, a weighted least squares linear regression was used for the calibration method. The evaluated analytical parameters were satisfactory for the analytes and showed recoveries at four fortification levels between 62% and 107%, with relative standard deviations less than 14%. The detection limits ranged from 7.6 to 24.1 ng L(-1). The proposed method was used to determine the amount of UV filters in water samples from water treatment plants in Araraquara and Jau in São Paulo, Brazil. Copyright © 2014 Elsevier B.V. All rights reserved.
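
    A weighted least-squares calibration line of the kind used here can be sketched with made-up heteroscedastic calibration data, using weights w_i = 1/s_i² in the weighted normal equations (all concentrations and the noise model are illustrative, not the paper's data):

    ```python
    import numpy as np

    # Toy calibration: response scatter grows with concentration (heteroscedastic),
    # so ordinary least squares would over-weight the noisy high end.
    conc = np.array([5.0, 10.0, 50.0, 100.0, 250.0, 500.0])   # ng/L (made-up levels)
    resp = np.array([0.9, 2.1, 10.2, 19.8, 50.5, 99.0])        # instrument response
    sigma = 0.02 * resp + 0.05                                  # assumed noise model

    # WLS: minimize sum w_i (y_i - a - b x_i)^2 with w_i = 1 / sigma_i^2,
    # solved via the weighted normal equations (X' W X) beta = X' W y.
    w = 1.0 / sigma**2
    X = np.column_stack([np.ones_like(conc), conc])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ resp)
    intercept, slope = beta
    print(slope, intercept)
    ```

    Relative to the unweighted fit, the weighted line tracks the precise low-concentration standards more closely, which is exactly why WLS is preferred when homoscedasticity fails.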

  9. Sparse signals recovered by non-convex penalty in quasi-linear systems.

    PubMed

    Cui, Angang; Li, Haiyang; Wen, Meng; Peng, Jigen

    2018-01-01

    The goal of compressed sensing is to reconstruct a sparse signal from a few linear measurements, far fewer than the dimension of the ambient space of the signal. However, many real-life applications in physics and the biomedical sciences carry strongly nonlinear structures, and the linear model is no longer suitable. Compared with compressed sensing in the linear setting, this nonlinear compressed sensing is much more difficult, in fact an NP-hard combinatorial problem, because of the discrete and discontinuous nature of the [Formula: see text]-norm and the nonlinearity. To make sparse signal recovery tractable, we assume in this paper that the nonlinear models have a smooth quasi-linear nature, and we study a non-convex fraction function [Formula: see text] in this quasi-linear compressed sensing. We propose an iterative fraction thresholding algorithm to solve the regularization problem [Formula: see text] for all [Formula: see text]. With the change of parameter [Formula: see text], our algorithm yields promising results, which is one of its advantages over some state-of-the-art algorithms. Numerical experiments show that our method performs much better than some state-of-the-art methods.

  10. Multigrid approaches to non-linear diffusion problems on unstructured meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    The efficiency of three multigrid methods for solving highly non-linear diffusion problems on two-dimensional unstructured meshes is examined. The three multigrid methods differ mainly in the manner in which the nonlinearities of the governing equations are handled. These comprise a non-linear full approximation storage (FAS) multigrid method which is used to solve the non-linear equations directly, a linear multigrid method which is used to solve the linear system arising from a Newton linearization of the non-linear system, and a hybrid scheme which is based on a non-linear FAS multigrid scheme, but employs a linear solver on each level as a smoother. Results indicate that all methods are equally effective at converging the non-linear residual in a given number of grid sweeps, but that the linear solver is more efficient in CPU time due to the lower cost of linear versus non-linear grid sweeps.

  11. Predicting flight delay based on multiple linear regression

    NASA Astrophysics Data System (ADS)

    Ding, Yi

    2017-08-01

    Delay of flights has been regarded as one of the toughest difficulties in aviation control, and establishing an effective model to handle the delay prediction problem is significant work. To address the difficulty of predicting flight delay, this study proposes a method to model arriving flights and a multiple linear regression algorithm to predict delay, comparing them with the Naive Bayes and C4.5 approaches. Experiments based on a realistic dataset of domestic airports show that the accuracy of the proposed model approximates 80%, an improvement over the Naive Bayes and C4.5 approaches. Testing shows that this method is convenient for calculation and can also predict flight delays effectively. It can provide a decision basis for airport authorities.

  12. A comparison of methods for the analysis of binomial clustered outcomes in behavioral research.

    PubMed

    Ferrari, Alberto; Comelli, Mario

    2016-12-01

    In behavioral research, data consisting of a per-subject proportion of "successes" and "failures" over a finite number of trials often arise. Such clustered binary data are usually non-normally distributed, which can distort inference if the usual general linear model is applied and the sample size is small. A number of more advanced methods are available, but they are often technically challenging, and a comparative assessment of their performance in behavioral setups has not been performed. We studied the performance of some methods applicable to the analysis of proportions; namely, linear regression, Poisson regression, beta-binomial regression and generalized linear mixed models (GLMMs). We report on a simulation study evaluating power and Type I error rate of these models in hypothetical scenarios met by behavioral researchers; in addition, we describe results from the application of these methods to data from real experiments. Our results show that, while GLMMs are powerful instruments for the analysis of clustered binary outcomes, beta-binomial regression can outperform them in a range of scenarios. Linear regression gave results consistent with the nominal level of significance but was overall less powerful. Poisson regression, instead, mostly led to anticonservative inference. GLMMs and beta-binomial regression are generally more powerful than linear regression; yet linear regression is robust to model misspecification in some conditions, whereas Poisson regression suffers heavily from violations of the assumptions when used to model proportion data. We conclude by providing directions to behavioral scientists dealing with clustered binary data and small sample sizes. Copyright © 2016 Elsevier B.V. All rights reserved.
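
    The overdispersion that makes plain binomial or normal-theory models misbehave on clustered binary data is easy to reproduce by simulation; a sketch with made-up beta-binomial parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Each subject has a latent success probability drawn from Beta(a, b);
    # the counts are then Binomial(n_trials, p_i), i.e. beta-binomial marginally.
    a, b, n_trials, n_subj = 2.0, 3.0, 20, 5000
    p = rng.beta(a, b, size=n_subj)
    successes = rng.binomial(n_trials, p)

    mean_p = a / (a + b)
    var_binom = n_trials * mean_p * (1 - mean_p)   # variance if p were fixed
    var_obs = successes.var()

    # Observed variance greatly exceeds the plain binomial variance.
    print(var_obs, var_binom)
    ```

    This between-subject heterogeneity is precisely what beta-binomial regression models explicitly and what Poisson or plain binomial models ignore, leading to the anticonservative inference the abstract reports.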

  13. An improved conjugate gradient scheme to the solution of least squares SVM.

    PubMed

    Chu, Wei; Ong, Chong Jin; Keerthi, S Sathiya

    2005-03-01

    The least squares support vector machine (LS-SVM) formulation corresponds to the solution of a linear system of equations. Several approaches to its numerical solution have been proposed in the literature. In this letter, we propose an improved method for the numerical solution of LS-SVM and show that the problem can be solved using one reduced system of linear equations. Compared with the existing algorithm for LS-SVM, the approach used in this letter is about twice as efficient. Numerical results using the proposed method are provided for comparison with other existing algorithms.
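
    The reduced linear system is typically attacked with a conjugate gradient scheme; below is a textbook CG sketch on a toy symmetric positive definite system (generic CG, not the letter's improved scheme):

    ```python
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, maxiter=1000):
        """Textbook CG for a symmetric positive definite matrix A."""
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rs = r @ r
        for _ in range(maxiter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    rng = np.random.default_rng(3)
    n = 50
    B = rng.standard_normal((n, n))
    A = B @ B.T + n * np.eye(n)   # SPD and well conditioned, like a kernel matrix + ridge
    b = rng.standard_normal(n)
    x = conjugate_gradient(A, b)
    print(np.linalg.norm(A @ x - b))
    ```

    CG touches A only through matrix-vector products, so reducing the LS-SVM problem to one such system (rather than two, as in earlier algorithms) roughly halves the work per training run.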

  14. Linear and non-linear interdependence of EEG and HRV frequency bands in human sleep.

    PubMed

    Chaparro-Vargas, Ramiro; Dissanayaka, P Chamila; Patti, Chanakya Reddy; Schilling, Claudia; Schredl, Michael; Cvetkovic, Dean

    2014-01-01

    The characterisation of functional interdependencies of the autonomic nervous system (ANS) is of ever-growing interest for unveiling electroencephalographic (EEG) and heart rate variability (HRV) interactions. This paper presents a biosignal processing approach as a supportive computational resource in the estimation of sleep dynamics. The application of linear and non-linear methods and statistical tests to 10 overnight polysomnographic (PSG) recordings allowed the computation of wavelet coherence and phase-locking values, in order to identify discriminating features among clinically healthy subjects. Our findings showed that neuronal oscillations θ, α and σ interact with cardiac power bands at mid-to-high levels of coherence and phase locking, particularly during NREM sleep stages.

  15. Combined linear theory/impact theory method for analysis and design of high speed configurations

    NASA Technical Reports Server (NTRS)

    Brooke, D.; Vondrasek, D. V.

    1980-01-01

    Pressure distributions on a wing body at Mach 4.63 are calculated. The combined theory is shown to give improved predictions over either linear theory or impact theory alone. The combined theory is also applied in the inverse design mode to calculate optimum camber slopes at Mach 4.63. Comparisons with optimum camber slopes obtained from unmodified linear theory show large differences. Analysis of the results indicates that the combined theory correctly predicts the effect of thickness on the loading distributions at high Mach numbers, and that finite thickness wings optimized at high Mach numbers using unmodified linear theory will not achieve the minimum drag characteristics for which they are designed.

  16. Parameter estimation of Monod model by the Least-Squares method for microalgae Botryococcus Braunii sp

    NASA Astrophysics Data System (ADS)

    See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.

    2018-04-01

    This research aims to estimate the parameters of the Monod model for the growth of the microalga Botryococcus braunii sp. by the least-squares method. The Monod equation is a non-linear equation that can be transformed into linear form and solved by least-squares linear regression. Meanwhile, the Gauss-Newton method is an alternative that solves the non-linear least-squares problem directly, obtaining the parameter values of the Monod model by minimizing the sum of squared errors (SSE). As a result, the parameters of the Monod model for the microalga Botryococcus braunii sp. can be estimated by the least-squares method. However, the parameter estimates obtained by the non-linear least-squares method are more accurate than those from the linear least-squares method, since the SSE of the non-linear method is smaller.
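
    Both estimation routes can be sketched on synthetic data: a linear fit of the Lineweaver-Burk transform 1/μ = (Ks/μmax)(1/S) + 1/μmax, then Gauss-Newton on the untransformed Monod residuals. All parameter and substrate values below are made up for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def monod(S, mu_max, Ks):
        return mu_max * S / (Ks + S)

    # Synthetic growth-rate data (made-up "true" parameters and substrate levels).
    mu_max_true, Ks_true = 1.2, 0.5
    S = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
    mu = monod(S, mu_max_true, Ks_true) * (1 + 0.02 * rng.standard_normal(S.size))

    # Linear route: fit the Lineweaver-Burk transform 1/mu vs 1/S.
    slope, intercept = np.polyfit(1.0 / S, 1.0 / mu, 1)
    mu_max_lin, Ks_lin = 1.0 / intercept, slope / intercept

    # Non-linear route: Gauss-Newton on the untransformed residuals,
    # warm-started from the linear estimates.
    theta = np.array([mu_max_lin, Ks_lin])
    for _ in range(20):
        r = mu - monod(S, *theta)
        J = np.column_stack([S / (theta[1] + S),
                             -theta[0] * S / (theta[1] + S) ** 2])
        theta += np.linalg.solve(J.T @ J, J.T @ r)

    print(theta)   # roughly (1.2, 0.5), up to the injected noise
    ```

    The transform distorts the error structure (low-S points dominate 1/μ), which is why minimizing the SSE on the original scale, as Gauss-Newton does, generally gives the more accurate estimates reported in the abstract.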

  17. Robust estimation for partially linear models with large-dimensional covariates

    PubMed Central

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2014-01-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures. PMID:24955087
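
    A robust linear fit of the kind used for the linear component can be sketched with a Huber-loss IRLS loop (a generic robust regression on toy data, not the paper's nonconcave-penalized procedure; the tuning constant and data are illustrative, and the residual scale is not estimated):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def huber_irls(X, y, delta=1.345, iters=50):
        """Iteratively reweighted least squares for the Huber loss."""
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        for _ in range(iters):
            r = y - X @ beta
            # Huber weights: 1 for small residuals, delta/|r| for large ones.
            w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))
            beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
        return beta

    n, p = 200, 3
    X = rng.standard_normal((n, p))
    beta_true = np.array([1.0, -2.0, 0.5])
    y = X @ beta_true + 0.1 * rng.standard_normal(n)
    y[:10] += 20.0                      # gross outliers in the response

    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    beta_rob = huber_irls(X, y)
    print(np.abs(beta_rob - beta_true).max(), np.abs(beta_ols - beta_true).max())
    ```

    The outliers typically pull the OLS coefficients well away from the truth, while the downweighted Huber fit stays close, which is the motivation for robust estimation of the linear component.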

  18. Robust estimation for partially linear models with large-dimensional covariates.

    PubMed

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2013-10-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of [Formula: see text], where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures.

  19. [Biometric identification method for ECG based on the piecewise linear representation (PLR) and dynamic time warping (DTW)].

    PubMed

    Yang, Licai; Shen, Jun; Bao, Shudi; Wei, Shoushui

    2013-10-01

    To address the problems of identification performance and algorithm complexity, we proposed a piecewise linear representation and dynamic time warping (PLR-DTW) method for ECG biometric identification. First, we detected R peaks to obtain the heartbeats after denoising preprocessing. Then we used the PLR method to keep the important information of an ECG signal segment while reducing the data dimension at the same time. An improved DTW method was used for similarity measurements between the test data and the templates. The performance evaluation was carried out on two ECG databases: PTB and MIT-BIH. The results showed that, compared to the discrete wavelet transform method, the proposed PLR-DTW method achieved an accuracy rate nearly 8% higher and saved about 30% of the operation time, demonstrating that the proposed method can provide better performance.
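
    The DTW similarity measure at the heart of the method can be sketched with the classic dynamic-programming recursion (generic DTW on toy waveforms, without the PLR step or real ECG data):

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Classic O(len(a)*len(b)) dynamic-programming DTW on 1-d sequences."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # A time-warped copy of a beat-like template stays closer (in DTW terms)
    # than an unrelated flat sequence -- the basis of template matching.
    t = np.linspace(0, 1, 80)
    template = np.exp(-((t - 0.5) ** 2) / 0.005)           # toy "R-peak" bump
    warped = np.exp(-((t ** 1.3 - 0.5) ** 2) / 0.005)      # non-linearly warped copy
    flat = np.zeros_like(t)

    print(dtw_distance(template, warped), dtw_distance(template, flat))
    ```

    Because DTW aligns sequences non-linearly in time, it tolerates the beat-to-beat warping that defeats plain Euclidean matching; applying PLR first shortens both sequences, which cuts the quadratic cost of this recursion, hence the reported ~30% time saving.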

  20. Explicit criteria for prioritization of cataract surgery

    PubMed Central

    Ma Quintana, José; Escobar, Antonio; Bilbao, Amaia

    2006-01-01

    Background Consensus techniques have been used previously to create explicit criteria to prioritize cataract extraction; however, the appropriateness of the intervention was not included explicitly in previous studies. We developed a prioritization tool for cataract extraction according to the RAND method. Methods Criteria were developed using a modified Delphi panel judgment process. A panel of 11 ophthalmologists was assembled. Ratings were analyzed regarding the level of agreement among panelists. We studied the effect of all variables on the final panel score using general linear and logistic regression models. Priority scoring systems were developed by means of optimal scaling and general linear models. The explicit criteria developed were summarized by means of regression tree analysis. Results Eight variables were considered to create the indications. Of the 310 indications that the panel evaluated, 22.6% were considered high priority, 52.3% intermediate priority, and 25.2% low priority. Agreement was reached for 31.9% of the indications and disagreement for 0.3%. Logistic regression and general linear models showed that the preoperative visual acuity of the cataractous eye, visual function, and anticipated visual acuity postoperatively were the most influential variables. Alternative and simple scoring systems were obtained by optimal scaling and general linear models where the previous variables were also the most important. The decision tree also shows the importance of the previous variables and the appropriateness of the intervention. Conclusion Our results showed acceptable validity as an evaluation and management tool for prioritizing cataract extraction. It also provides easy algorithms for use in clinical practice. PMID:16512893

  1. Nondestructive Measurement of Dynamic Modulus for Cellulose Nanofibril Films

    Treesearch

    Yan Qing; Robert J. Ross; Zhiyong Cai; Yiqiang Wu

    2013-01-01

    Nondestructive evaluation of cellulose nanofibril (CNF) films was performed using cantilever beam vibration (CBV) and acoustic methods to measure dynamic modulus. Static modulus was measured using the tensile test method. Correlation analysis shows that the data measured by CBV have little linear relationship with static modulus, with a correlation coefficient (R

  2. ANOVA with Rasch Measures.

    ERIC Educational Resources Information Center

    Linacre, John Michael

    Various methods of estimating main effects from ordinal data are presented and contrasted. Problems discussed include: (1) at what level to accumulate ordinal data into linear measures; (2) how to maintain scaling across analyses; and (3) the inevitable confounding of within cell variance with measurement error. An example shows three methods of…

  3. Thin Cloud Detection Method by Linear Combination Model of Cloud Image

    NASA Astrophysics Data System (ADS)

    Liu, L.; Li, J.; Wang, Y.; Xiao, Y.; Zhang, W.; Zhang, S.

    2018-04-01

    Existing cloud detection methods in photogrammetry often extract image features directly from remote sensing images and then use them to classify pixels as cloud or non-cloud. When the cloud is thin and small, however, these methods become inaccurate. In this paper, a linear combination model of cloud images is proposed; using this model, the underlying surface information of remote sensing images can be removed, so the cloud detection result becomes more accurate. The automatic cloud detection program first uses the linear combination model to separate the cloud information from the surface information in semi-transparent cloud images, and then uses different image features to recognize the cloud parts. For computational efficiency, an AdaBoost classifier was introduced to combine the different features into a single cloud classifier. AdaBoost can select the most effective features from many candidate features, which greatly reduces the calculation time. Finally, we compared the proposed method with a cloud detection method based on a tree structure and with a multiple-feature detection method using an SVM classifier; the experimental data show that the proposed cloud detection program achieves high accuracy and fast calculation speed.
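    The abstract does not give the exact form of the linear combination model. A common assumption for thin cloud is a per-pixel mixing model in which the observed radiance is a transmittance-weighted blend of a cloud signal and the surface signal; under that assumption (hypothetical here, not taken from the paper), removing the surface term is simple algebra:

```python
def remove_surface(observed, surface, alpha):
    """Invert a per-pixel linear mixing model:
    observed = alpha * cloud + (1 - alpha) * surface.
    alpha is the (assumed known) cloud mixing weight."""
    return [(o - (1.0 - alpha) * s) / alpha for o, s in zip(observed, surface)]

# Hypothetical pixel row: thin cloud (alpha = 0.4) over a varying surface.
cloud_truth = [0.9, 0.8, 0.85]
surface = [0.2, 0.6, 0.3]
alpha = 0.4
observed = [alpha * c + (1 - alpha) * s for c, s in zip(cloud_truth, surface)]
recovered = remove_surface(observed, surface, alpha)
```

    In practice the surface estimate and the mixing weight must themselves be inferred; this sketch only shows why subtracting the surface component makes the residual cloud signal easier to classify.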

  4. A linear stability analysis for nonlinear, grey, thermal radiative transfer problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaber, Allan B., E-mail: wollaber@lanl.go; Larsen, Edward W., E-mail: edlarsen@umich.ed

    2011-02-20

    We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used 'Implicit Monte Carlo' (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or 'Semi-Analog Monte Carlo' (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ≤ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.

  5. A linear stability analysis for nonlinear, grey, thermal radiative transfer problems

    NASA Astrophysics Data System (ADS)

    Wollaber, Allan B.; Larsen, Edward W.

    2011-02-01

    We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used “Implicit Monte Carlo” (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or “Semi-Analog Monte Carlo” (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ⩽ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.

  6. Blood Density Is Nearly Equal to Water Density: A Validation Study of the Gravimetric Method of Measuring Intraoperative Blood Loss.

    PubMed

    Vitello, Dominic J; Ripper, Richard M; Fettiplace, Michael R; Weinberg, Guy L; Vitello, Joseph M

    2015-01-01

    Purpose. The gravimetric method of weighing surgical sponges is used to quantify intraoperative blood loss. The wet mass of the gauze minus its dry mass, divided by the density of blood, equals the volume of blood lost. This method assumes that the density of blood is equivalent to that of water (1 g/mL). This study's purpose was to validate the assumption that the density of blood is equivalent to water and to correlate density with hematocrit. Methods. 50 µL of whole blood was weighed from eighteen rats. A distilled water control was weighed for each blood sample. The averages of the blood and water were compared using a Student's unpaired, one-tailed t-test. The masses of the blood samples and the hematocrits were compared using linear regression. Results. The average mass of the eighteen blood samples was 0.0489 g and that of the distilled water controls was 0.0492 g. The t-test showed P = 0.2269 and R² = 0.03154. The hematocrit values ranged from 24% to 48%. The linear regression R² value was 0.1767. Conclusions. The comparison of the blood and distilled water masses suggests close agreement between the two populations. Linear regression showed the hematocrit was not proportional to the mass of the blood. The study confirmed that the measured density of blood is similar to that of water.
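    The gravimetric calculation itself is a one-liner once blood density is taken as 1 g/mL, which is the approximation this study validates (the sponge masses below are made-up examples):

```python
BLOOD_DENSITY_G_PER_ML = 1.0  # validated approximation: blood density ~ water

def blood_loss_ml(dry_mass_g, wet_mass_g, density=BLOOD_DENSITY_G_PER_ML):
    """Gravimetric estimate: mass gained by the sponge divided by density."""
    return (wet_mass_g - dry_mass_g) / density

# A hypothetical sponge weighing 12.5 g dry and 37.5 g soaked.
loss = blood_loss_ml(12.5, 37.5)  # 25.0 mL
```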

  7. Calibration test of the temperature and strain sensitivity coefficient in regional reference grating method

    NASA Astrophysics Data System (ADS)

    Wu, Jing; Huang, Junbing; Wu, Hanping; Gu, Hongcan; Tang, Bo

    2014-12-01

    To verify the validity of the regional reference grating method for solving the strain/temperature cross-sensitivity problem in an actual ship structural health monitoring system, and to meet engineering requirements, national standard measurement equipment was used to calibrate the temperature sensitivity coefficient of the selected FBG temperature sensor and the strain sensitivity coefficient of the FBG strain sensor in this model. The thermal expansion sensitivity coefficient of the ship steel was calibrated with the water bath method. The calibration results show that the temperature sensitivity coefficient of the FBG temperature sensor is 28.16 pm/°C within -10 to 30 °C, with a linearity greater than 0.999; the strain sensitivity coefficient of the FBG strain sensor is 1.32 pm/με within -2900 to 2900 με, with a linearity of nearly 1; and the thermal expansion sensitivity coefficient of the ship steel is 23.438 pm/°C within 30 to 90 °C, with a linearity greater than 0.998. Finally, the calibration parameters were used in the actual ship structural health monitoring system for temperature compensation. The results show that the temperature compensation is effective and the calibration parameters meet the engineering requirements, providing an important reference for the wide engineering use of fiber Bragg grating sensors.
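    Each sensitivity coefficient is, in effect, the slope of a least-squares line fitted to calibration points (wavelength shift versus temperature or strain), and the reported "linearity" is a measure of how well the points fall on that line. A minimal sketch, with hypothetical calibration data generated at the reported 28.16 pm/°C:

```python
def sensitivity_slope(x, y):
    """Least-squares slope of y against x, e.g. wavelength shift in pm
    versus temperature in deg C -- the calibrated sensitivity coefficient."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx

# Hypothetical calibration points at exactly 28.16 pm/degC.
temps = [-10, 0, 10, 20, 30]
shifts = [28.16 * t for t in temps]
coeff = sensitivity_slope(temps, shifts)  # pm per degC
```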

  8. A preliminary study of the thermal measurement with nMAG gel dosimeter by MRI

    NASA Astrophysics Data System (ADS)

    Chuang, Chun-Chao; Shao, Chia-Ho; Shih, Cheng-Ting; Yeh, Yu-Chen; Lu, Cheng-Chang; Chuang, Keh-Shih; Wu, Jay

    2014-11-01

    The methacrylic acid (nMAG) gel dosimeter is an effective tool for 3-dimensional quality assurance of radiation therapy. In addition to radiation-induced polymerization effects, the nMAG gel also responds to temperature variation. In this study, we proposed a new method to evaluate the thermal response in thermal therapy using nMAG gel and magnetic resonance imaging (MRI) scans. Several properties of nMAG were investigated, including the R2 relaxation rate, temperature sensitivity, and temperature linearity of the thermal dose response. nMAG was heated by the double-boiling method in the range of 37-45 °C. MRI scans were performed with the head coil receiver. The temperature to R2 response curve was analyzed and simple linear regression was performed, giving an R-square value of 0.9835. The measured data showed a good inverse linear relationship between R2 and temperature. We conclude that the nMAG polymer gel dosimeter shows great potential as a technique to evaluate the temperature rise during thermal surgery.

  9. Extraction of linear features on SAR imagery

    NASA Astrophysics Data System (ADS)

    Liu, Junyi; Li, Deren; Mei, Xin

    2006-10-01

    Linear features are usually extracted from SAR imagery by a few edge detectors derived from the contrast ratio edge detector with a constant probability of false alarm. On the other hand, the Hough Transform (HT) is an elegant way of extracting global features like curve segments from binary edge images. The Randomized Hough Transform can reduce the computation time and memory usage of the HT drastically, but it invalidates a great many accumulator cells during random sampling. In this paper, we propose a new approach to extract linear features from SAR imagery: an almost automatic algorithm based on edge detection and the Randomized Hough Transform. The improved method makes full use of the directional information of each candidate edge point to solve the invalid accumulation problem. Applied results are in good agreement with the theoretical study, and the main linear features in the SAR imagery were extracted automatically. The method saves storage space and computational time, which shows its effectiveness and applicability.

  10. Stiffness optimization of non-linear elastic structures

    DOE PAGES

    Wallin, Mathias; Ivarsson, Niklas; Tortorelli, Daniel

    2017-11-13

    Our paper revisits stiffness optimization of non-linear elastic structures. Due to the non-linearity, several possible stiffness measures can be identified, and in this work conventional compliance, i.e. secant stiffness designs, are compared to tangent stiffness designs. The optimization problem is solved by the method of moving asymptotes and the sensitivities are calculated using the adjoint method. For the tangent cost function it is shown that, although the objective involves the third derivative of the strain energy, an efficient formulation for calculating the sensitivity can be obtained. Loss of convergence due to large deformations in void regions is addressed by using a fictitious strain energy such that small-strain linear elasticity is approached in the void regions. We formulate a well-posed topology optimization problem by using restriction, which is achieved via a Helmholtz-type filter. The numerical examples provided show that for low load levels the designs obtained from the different stiffness measures coincide, whereas for large deformations significant differences are observed.

  11. Improving Prediction Accuracy for WSN Data Reduction by Applying Multivariate Spatio-Temporal Correlation

    PubMed Central

    Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman

    2011-01-01

    This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate. In addition, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, this is the first work to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
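    Multiple linear regression of the kind compared here can be fitted with the normal equations (X'X)b = X'y. The sketch below uses hypothetical sensor readings (temperature and light predicting humidity, generated from an exact linear rule so the fit recovers it); it is not the paper's simulation setup:

```python
def fit_mlr(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y.
    X: list of predictor rows; an intercept column of 1s is prepended."""
    rows = [[1.0] + list(r) for r in X]
    p = len(rows[0])
    # Build X'X and X'y.
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    # Gaussian elimination with partial pivoting.
    for k in range(p):
        piv = max(range(k, p), key=lambda i: abs(A[i][k]))
        A[k], A[piv] = A[piv], A[k]
        b[k], b[piv] = b[piv], b[k]
        for i in range(k + 1, p):
            f = A[i][k] / A[k][k]
            for j in range(k, p):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    coef = [0.0] * p
    for i in reversed(range(p)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, p))) / A[i][i]
    return coef  # [intercept, slope_1, slope_2, ...]

# Hypothetical readings: humidity = 2 + 0.5*temp - 0.1*light (exact).
X = [[20, 100], [22, 150], [25, 120], [30, 90], [18, 200]]
y = [2 + 0.5 * t - 0.1 * l for t, l in X]
coef = fit_mlr(X, y)
```

    The fitted coefficients can then predict readings the node did not transmit; the sink only needs an update when the residual exceeds a tolerance.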

  12. Stiffness optimization of non-linear elastic structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wallin, Mathias; Ivarsson, Niklas; Tortorelli, Daniel

    Our paper revisits stiffness optimization of non-linear elastic structures. Due to the non-linearity, several possible stiffness measures can be identified, and in this work conventional compliance, i.e. secant stiffness designs, are compared to tangent stiffness designs. The optimization problem is solved by the method of moving asymptotes and the sensitivities are calculated using the adjoint method. For the tangent cost function it is shown that, although the objective involves the third derivative of the strain energy, an efficient formulation for calculating the sensitivity can be obtained. Loss of convergence due to large deformations in void regions is addressed by using a fictitious strain energy such that small-strain linear elasticity is approached in the void regions. We formulate a well-posed topology optimization problem by using restriction, which is achieved via a Helmholtz-type filter. The numerical examples provided show that for low load levels the designs obtained from the different stiffness measures coincide, whereas for large deformations significant differences are observed.

  13. Drug-Target Interaction Prediction through Label Propagation with Linear Neighborhood Information.

    PubMed

    Zhang, Wen; Chen, Yanlin; Li, Dingfang

    2017-11-25

    Interactions between drugs and target proteins provide important information for drug discovery. Currently, experiments have identified only a small number of drug-target interactions. Therefore, the development of computational methods for drug-target interaction prediction is an urgent task of theoretical interest and practical significance. In this paper, we propose a label propagation method with linear neighborhood information (LPLNI) for predicting unobserved drug-target interactions. First, we calculate drug-drug linear neighborhood similarity in the feature spaces by considering how to reconstruct data points from their neighbors. Then, we take the similarities as the manifold of drugs, and assume the manifold is unchanged in the interaction space. Finally, we predict unobserved interactions between known drugs and targets by using drug-drug linear neighborhood similarity and known drug-target interactions. The experiments show that LPLNI can use only known drug-target interactions to make high-accuracy predictions on four benchmark datasets. Furthermore, we consider incorporating chemical structures into LPLNI models. Experimental results demonstrate that the model with integrated information (LPLNI-II) produces improved performance, better than other state-of-the-art methods. The known drug-target interactions are an important information source for computational predictions. The usefulness of the proposed method is demonstrated by cross validation and a case study.
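    Label propagation of this kind is typically the iteration F ← αWF + (1 − α)Y over a drug-drug similarity graph, where Y holds the known interactions. A toy sketch (the similarity matrix, interaction matrix, and α below are hypothetical, not the paper's data or its exact neighborhood-reconstruction weights):

```python
def label_propagation(W, Y, alpha=0.5, iters=200):
    """Iterate F <- alpha * W F + (1 - alpha) * Y.
    W: row-normalized drug-drug similarity (n x n);
    Y: known drug-target interaction matrix (n x m)."""
    n, m = len(Y), len(Y[0])
    F = [row[:] for row in Y]
    for _ in range(iters):
        F = [[alpha * sum(W[i][k] * F[k][j] for k in range(n))
              + (1 - alpha) * Y[i][j]
              for j in range(m)] for i in range(n)]
    return F

# Toy data: drugs 0 and 2 are each similar only to drug 1; drug 1 hits
# target 0, so drugs 0 and 2 should receive a score for target 0 too.
W = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 1.0, 0.0]]
Y = [[0, 1],
     [1, 0],
     [0, 1]]
scores = label_propagation(W, Y)
```

    Because W is row-stochastic and α < 1, the iteration converges; unobserved pairs end up ranked by how strongly known interactions propagate to them.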

  14. Explicit methods in extended phase space for inseparable Hamiltonian problems

    NASA Astrophysics Data System (ADS)

    Pihajoki, Pauli

    2015-03-01

    We present a method for explicit leapfrog integration of inseparable Hamiltonian systems by means of an extended phase space. A suitably defined new Hamiltonian on the extended phase space leads to equations of motion that can be numerically integrated by standard symplectic leapfrog (splitting) methods. When the leapfrog is combined with coordinate mixing transformations, the resulting algorithm shows good long term stability and error behaviour. We extend the method to non-Hamiltonian problems as well, and investigate optimal methods of projecting the extended phase space back to original dimension. Finally, we apply the methods to a Hamiltonian problem of geodesics in a curved space, and a non-Hamiltonian problem of a forced non-linear oscillator. We compare the performance of the methods to a general purpose differential equation solver LSODE, and the implicit midpoint method, a symplectic one-step method. We find the extended phase space methods to compare favorably to both for the Hamiltonian problem, and to the implicit midpoint method in the case of the non-linear oscillator.
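    The splitting idea can be illustrated on a toy Hamiltonian. In the extended phase space (q, p, x, y) one integrates H̃ = H(q, y) + H(x, p); each term has an exact flow because the variables it depends on are held fixed by that flow. For the separable oscillator below the scheme reduces to two independent leapfrogs, but the update pattern is unchanged for inseparable H. This is a hedged sketch without the coordinate-mixing transformations the paper adds:

```python
def vprime(q):
    """Potential gradient for the toy Hamiltonian H = p**2/2 + q**2/2."""
    return q

def extended_leapfrog(q, p, h, steps):
    """Leapfrog on the extended phase space (q, p, x, y) with
    H~ = H(q, y) + H(x, p); each part is integrated exactly."""
    x, y = q, p                                   # duplicate the initial state
    for _ in range(steps):
        # half step of H(q, y): updates p and x (q, y fixed)
        p -= 0.5 * h * vprime(q); x += 0.5 * h * y
        # full step of H(x, p): updates q and y (x, p fixed)
        q += h * p; y -= h * vprime(x)
        # half step of H(q, y)
        p -= 0.5 * h * vprime(q); x += 0.5 * h * y
    # project back to the original dimension by averaging the two copies
    return 0.5 * (q + x), 0.5 * (p + y)

qf, pf = extended_leapfrog(1.0, 0.0, h=0.01, steps=10000)  # integrate to t=100
energy = 0.5 * pf ** 2 + 0.5 * qf ** 2             # should stay near 0.5
```

    Simple averaging is only one of the projection choices the paper investigates; for inseparable H the two copies drift apart and the mixing maps become important.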

  15. Simultaneous fitting of genomic-BLUP and Bayes-C components in a genomic prediction model.

    PubMed

    Iheshiulor, Oscar O M; Woolliams, John A; Svendsen, Morten; Solberg, Trygve; Meuwissen, Theo H E

    2017-08-24

    The rapid adoption of genomic selection is due to two key factors: availability of both high-throughput dense genotyping and statistical methods to estimate and predict breeding values. The development of such methods is still ongoing and, so far, there is no consensus on the best approach. Currently, the linear and non-linear methods for genomic prediction (GP) are treated as distinct approaches. The aim of this study was to evaluate the implementation of an iterative method (called GBC) that incorporates aspects of both linear [genomic-best linear unbiased prediction (G-BLUP)] and non-linear (Bayes-C) methods for GP. The iterative nature of GBC makes it less computationally demanding similar to other non-Markov chain Monte Carlo (MCMC) approaches. However, as a Bayesian method, GBC differs from both MCMC- and non-MCMC-based methods by combining some aspects of G-BLUP and Bayes-C methods for GP. Its relative performance was compared to those of G-BLUP and Bayes-C. We used an imputed 50 K single-nucleotide polymorphism (SNP) dataset based on the Illumina Bovine50K BeadChip, which included 48,249 SNPs and 3244 records. Daughter yield deviations for somatic cell count, fat yield, milk yield, and protein yield were used as response variables. GBC was frequently (marginally) superior to G-BLUP and Bayes-C in terms of prediction accuracy and was significantly better than G-BLUP only for fat yield. On average across the four traits, GBC yielded a 0.009 and 0.006 increase in prediction accuracy over G-BLUP and Bayes-C, respectively. Computationally, GBC was very much faster than Bayes-C and similar to G-BLUP. Our results show that incorporating some aspects of G-BLUP and Bayes-C in a single model can improve accuracy of GP over the commonly used method: G-BLUP. Generally, GBC did not statistically perform better than G-BLUP and Bayes-C, probably due to the close relationships between reference and validation individuals. 
Nevertheless, it is a flexible tool in the sense that it simultaneously incorporates some aspects of linear and non-linear models for GP, thereby exploiting family relationships while also accounting for linkage disequilibrium between SNPs and genes with large effects. The application of GBC in GP merits further exploration.

  16. Noise and linearity optimization methods for a 1.9GHz low noise amplifier.

    PubMed

    Guo, Wei; Huang, Da-Quan

    2003-01-01

    Noise and linearity performances are critical characteristics for radio frequency integrated circuits (RFICs), especially for low noise amplifiers (LNAs). In this paper, a detailed analysis of noise and linearity for the cascode architecture, a widely used circuit structure in LNA designs, is presented. Noise and linearity improvement techniques for cascode structures are also developed and have been verified by computer simulation experiments. Theoretical analysis and simulation results showed that, for cascode-structure LNAs, the first metal-oxide-semiconductor field-effect transistor (MOSFET) dominates the noise performance of the LNA, while the second MOSFET contributes more to the linearity. It follows that the first and second MOSFETs of the LNA can be designed to optimize the noise performance and the linearity performance separately, without trade-offs. The 1.9 GHz Complementary Metal-Oxide-Semiconductor (CMOS) LNA simulation results are also given as an application of the developed theory.

  17. Optimal blood glucose control in diabetes mellitus treatment using dynamic programming based on Ackerman’s linear model

    NASA Astrophysics Data System (ADS)

    Pradanti, Paskalia; Hartono

    2018-03-01

    Determination of the insulin injection dose in diabetes mellitus treatment can be considered an optimal control problem. This article aims to simulate optimal blood glucose control for a patient with diabetes mellitus. The blood glucose regulation of the diabetic patient is represented by Ackerman's Linear Model. The problem is then solved using the dynamic programming method. The desired blood glucose level is obtained by minimizing the performance index in Lagrange form. The results show that dynamic programming based on Ackerman's Linear Model solves the problem well.
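    For a discretized linear model with a quadratic performance index, the dynamic-programming solution is the backward Riccati recursion. The scalar sketch below uses illustrative coefficients (x is glucose deviation from the target, u is insulin input); they are not Ackerman's published parameter values:

```python
def lqr_gains(a, b, q, r, qf, N):
    """Backward dynamic-programming (Riccati) recursion for the scalar
    system x[k+1] = a*x[k] + b*u[k] with cost sum q*x^2 + r*u^2 + qf*xN^2."""
    P = qf
    gains = []
    for _ in range(N):
        K = (a * b * P) / (r + b * b * P)   # optimal feedback gain
        P = q + a * a * P - K * a * b * P   # cost-to-go update
        gains.append(K)
    gains.reverse()                         # gains[k] is for stage k
    return gains

def simulate(x0, a, b, gains):
    """Apply the optimal feedback u[k] = -K[k]*x[k]."""
    x, traj = x0, [x0]
    for K in gains:
        x = a * x + b * (-K * x)
        traj.append(x)
    return traj

# Illustrative coefficients only: mild glucose decay, insulin lowers glucose.
gains = lqr_gains(a=0.95, b=-0.1, q=1.0, r=0.1, qf=1.0, N=50)
traj = simulate(100.0, 0.95, -0.1, gains)  # deviation driven toward zero
```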

  18. A tutorial description of an interior point method and its applications to security-constrained economic dispatch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vargas, L.S.; Quintana, V.H.; Vannelli, A.

    This paper deals with the use of Successive Linear Programming (SLP) for the solution of the Security-Constrained Economic Dispatch (SCED) problem. The authors tutorially describe an Interior Point Method (IPM) for the solution of Linear Programming (LP) problems, discussing important implementation issues that really make this method far superior to the simplex method. A study of the convergence of the SLP technique and a practical criterion to avoid oscillatory behavior in the iteration process are also proposed. A comparison of the proposed method with an efficient simplex code (MINOS) is carried out by solving SCED problems on two standard IEEE systems. The results show that the interior point technique is reliable, accurate and more than two times faster than the simplex algorithm.

  19. Comparison of various methods for mathematical analysis of the Foucault knife edge test pattern to determine optical imperfections

    NASA Technical Reports Server (NTRS)

    Gatewood, B. E.

    1971-01-01

    The linearized integral equation for the Foucault test of a solid mirror was solved by various methods: power series, Fourier series, collocation, iteration, and inversion integral. The case of the Cassegrain mirror was solved by a particular power series method, collocation, and inversion integral. The inversion integral method appears to be the best overall method for both the solid and Cassegrain mirrors. Certain particular types of power series and Fourier series are satisfactory for the Cassegrain mirror. Numerical integration of the nonlinear equation for selected surface imperfections showed that results start to deviate from those given by the linearized equation at a surface deviation of about 3 percent of the wavelength of light. Several possible procedures for calibrating and scaling the input data for the integral equation are described.

  20. A new approach for assessment of wear in metal-backed acetabular cups using computed tomography: a phantom study with retrievals.

    PubMed

    Jedenmalm, Anneli; Noz, Marilyn E; Olivecrona, Henrik; Olivecrona, Lotta; Stark, Andre

    2008-04-01

    Polyethylene wear is an important cause of aseptic loosening in hip arthroplasty. Detection of significant wear usually happens late, since available diagnostic techniques are either not sensitive enough or too complicated and expensive for routine use. This study evaluates a new approach for measurement of linear wear of metal-backed acetabular cups using CT as the intended clinically feasible method. 8 retrieved uncemented metal-backed acetabular cups were scanned twice ex vivo using CT. The linear penetration depth of the femoral head into the cup was measured in the CT volumes using dedicated software. Landmark points were placed on the CT images of cup and head, and also on a reference plane, in order to calculate the wear vector magnitude and its angle to one of the axes. A coordinate-measuring machine was used to test the accuracy of the proposed CT method. For this purpose, the head diameters were also measured by both methods. Accuracy of the CT method was 0.6 mm for linear wear measurements and 27 degrees for the wear vector angle. No systematic difference was found between CT scans. This study on explanted acetabular cups shows that CT is capable of reliable measurement of linear wear in acetabular cups at a clinically relevant level of accuracy. It was also possible to use the method to assess the direction of wear.

  1. An accuracy improvement method for the topology measurement of an atomic force microscope using a 2D wavelet transform.

    PubMed

    Yoon, Yeomin; Noh, Suwoo; Jeong, Jiseong; Park, Kyihwan

    2018-05-01

    The topology image is constructed from the 2D matrix (XY directions) of heights Z captured from the force-feedback loop controller. For small height variations, nonlinear effects such as hysteresis or creep of the PZT-driven Z nano scanner can be neglected and its calibration is straightforward. For large height variations, the linear approximation of the PZT-driven Z nano scanner fails and its nonlinear behavior must be considered, because it causes inaccuracies in the measured image. To avoid such inaccuracies, an additional strain gauge sensor is used to directly measure the displacement of the PZT-driven Z nano scanner. However, this approach has the disadvantage of relatively low precision. To obtain high-precision data with good linearity, we propose a method that overcomes the low precision of the strain gauge while preserving its good linearity. The topology image obtained from the strain gauge sensor shows significant noise at high frequencies, whereas the topology image obtained from the controller output shows low noise at high frequencies. If the low- and high-frequency signals can be separated from both topology images, an image can be constructed with high accuracy and low noise. To separate the low frequencies from the high frequencies, a 2D Haar wavelet transform is used. The proposed method uses the 2D wavelet transform to obtain good linearity from the strain gauge sensor and good precision from the controller output. The advantages of the proposed method are experimentally validated using topology images. Copyright © 2018 Elsevier B.V. All rights reserved.
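    The fusion idea can be sketched with a one-level 2D Haar transform: keep the low-frequency (LL) quadrant from the strain-gauge image and the detail quadrants from the controller image, then invert. The images below are placeholders, and the decomposition depth and exact fusion rule in the paper may differ:

```python
def haar1d(v):
    """One level of the 1-D Haar transform: pairwise averages then differences."""
    n = len(v) // 2
    lo = [(v[2 * i] + v[2 * i + 1]) / 2 for i in range(n)]
    hi = [(v[2 * i] - v[2 * i + 1]) / 2 for i in range(n)]
    return lo + hi

def ihaar1d(v):
    """Exact inverse of haar1d."""
    n = len(v) // 2
    out = []
    for i in range(n):
        out += [v[i] + v[n + i], v[i] - v[n + i]]
    return out

def transform2d(img, f):
    """Apply f to every row, then to every column (separable 2-D transform)."""
    rows = [f(list(r)) for r in img]
    cols = [f(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def fuse(strain_img, ctrl_img):
    """Keep the LL quadrant of the strain-gauge image (good linearity) and
    the detail quadrants of the controller image (good precision)."""
    A = transform2d(strain_img, haar1d)
    B = transform2d(ctrl_img, haar1d)
    n, m = len(A) // 2, len(A[0]) // 2
    fused = [[A[i][j] if i < n and j < m else B[i][j]
              for j in range(len(A[0]))] for i in range(len(A))]
    return transform2d(fused, ihaar1d)
```

    When both inputs are identical the round trip reconstructs the image exactly, which is a quick sanity check that the transform pair is correct.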

  2. A Novel Discrete Optimal Transport Method for Bayesian Inverse Problems

    NASA Astrophysics Data System (ADS)

    Bui-Thanh, T.; Myers, A.; Wang, K.; Thiery, A.

    2017-12-01

    We present the Augmented Ensemble Transform (AET) method for generating approximate samples from a high-dimensional posterior distribution as a solution to Bayesian inverse problems. Solving large-scale inverse problems is critical for some of the most relevant and impactful scientific endeavors of our time. Therefore, constructing novel methods for solving the Bayesian inverse problem in more computationally efficient ways can have a profound impact on the science community. This research derives the novel AET method for exploring a posterior by solving a sequence of linear programming problems, resulting in a series of transport maps which map prior samples to posterior samples, allowing for the computation of moments of the posterior. We show both theoretical and numerical results, indicating this method can offer superior computational efficiency when compared to other SMC methods. Most of this efficiency is derived from matrix scaling methods to solve the linear programming problem and derivative-free optimization for particle movement. We use this method to determine inter-well connectivity in a reservoir and the associated uncertainty related to certain parameters. The attached file shows the difference between the true parameter and the AET parameter in an example 3D reservoir problem. The error is within the Morozov discrepancy allowance with lower computational cost than other particle methods.

  3. A robust, efficient equidistribution 2D grid generation method

    NASA Astrophysics Data System (ADS)

    Chacon, Luis; Delzanno, Gian Luca; Finn, John; Chung, Jeojin; Lapenta, Giovanni

    2007-11-01

    We present a new cell-area equidistribution method for two-dimensional grid adaptation [1]. The method is able to satisfy the equidistribution constraint to arbitrary precision while optimizing desired grid properties (such as isotropy and smoothness). The method is based on the minimization of the grid smoothness integral, constrained to producing a given positive-definite cell volume distribution. The procedure gives rise to a single, non-linear scalar equation with no free parameters. We solve this equation numerically with the Newton-Krylov technique. The ellipticity property of the linearized scalar equation allows multigrid preconditioning techniques to be used effectively. We demonstrate that a solution exists and is unique. Therefore, once the solution is found, the adapted grid cannot be folded, due to the positivity of the constraint on the cell volumes. We present several challenging tests to show that our new method produces optimal grids in which the constraint is satisfied numerically to arbitrary precision. We also compare the new method to the deformation method [2] and show that our new method produces better quality grids. [1] G.L. Delzanno, L. Chacón, J.M. Finn, Y. Chung, G. Lapenta, A new, robust equidistribution method for two-dimensional grid generation, in preparation. [2] G. Liao and D. Anderson, A new approach to grid generation, Appl. Anal. 44, 285-297 (1992).

  4. A Lagrangian meshfree method applied to linear and nonlinear elasticity.

    PubMed

    Walker, Wade A

    2017-01-01

    The repeated replacement method (RRM) is a Lagrangian meshfree method which we have previously applied to the Euler equations for compressible fluid flow. In this paper we present new enhancements to RRM, and we apply the enhanced method to both linear and nonlinear elasticity. We compare the results of ten test problems to those of analytic solvers, to demonstrate that RRM can successfully simulate these elastic systems without many of the requirements of traditional numerical methods such as numerical derivatives, equation system solvers, or Riemann solvers. We also show the relationship between error and computational effort for RRM on these systems, and compare RRM to other methods to highlight its strengths and weaknesses. To further explain the two elastic equations used in the paper, we demonstrate the mathematical procedure used to create Riemann and Sedov-Taylor solvers for them, and detail the numerical techniques needed to embody those solvers in code.

  5. A Lagrangian meshfree method applied to linear and nonlinear elasticity

    PubMed Central

    2017-01-01

    The repeated replacement method (RRM) is a Lagrangian meshfree method which we have previously applied to the Euler equations for compressible fluid flow. In this paper we present new enhancements to RRM, and we apply the enhanced method to both linear and nonlinear elasticity. We compare the results of ten test problems to those of analytic solvers, to demonstrate that RRM can successfully simulate these elastic systems without many of the requirements of traditional numerical methods such as numerical derivatives, equation system solvers, or Riemann solvers. We also show the relationship between error and computational effort for RRM on these systems, and compare RRM to other methods to highlight its strengths and weaknesses. To further explain the two elastic equations used in the paper, we demonstrate the mathematical procedure used to create Riemann and Sedov-Taylor solvers for them, and detail the numerical techniques needed to embody those solvers in code. PMID:29045443

  6. A spline-based non-linear diffeomorphism for multimodal prostate registration.

    PubMed

    Mitra, Jhimli; Kato, Zoltan; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Sidibé, Désiré; Ghose, Soumya; Vilanova, Joan C; Comet, Josep; Meriaudeau, Fabrice

    2012-08-01

    This paper presents a novel method for non-rigid registration of transrectal ultrasound and magnetic resonance prostate images based on a non-linear regularized framework of point correspondences obtained from a statistical measure of shape-contexts. The segmented prostate shapes are represented by shape-contexts and the Bhattacharyya distance between the shape representations is used to find the point correspondences between the 2D fixed and moving images. The registration method involves parametric estimation of the non-linear diffeomorphism between the multimodal images and has its basis in solving a set of non-linear equations of thin-plate splines. The solution is obtained as the least-squares solution of an over-determined system of non-linear equations constructed by integrating a set of non-linear functions over the fixed and moving images. However, this may not result in clinically acceptable transformations of the anatomical targets. Therefore, the regularized bending energy of the thin-plate splines along with the localization error of established correspondences should be included in the system of equations. The registration accuracies of the proposed method are evaluated in 20 pairs of prostate mid-gland ultrasound and magnetic resonance images. The results obtained in terms of Dice similarity coefficient show an average of 0.980±0.004, average 95% Hausdorff distance of 1.63±0.48 mm and mean target registration and target localization errors of 1.60±1.17 mm and 0.15±0.12 mm respectively. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. A hybrid robust fault tolerant control based on adaptive joint unscented Kalman filter.

    PubMed

    Shabbouei Hagh, Yashar; Mohammadi Asl, Reza; Cocquempot, Vincent

    2017-01-01

    In this paper, a new hybrid robust fault tolerant control scheme is proposed. A robust H∞ control law is used in the non-faulty situation, while a Non-Singular Terminal Sliding Mode (NTSM) controller is activated as soon as an actuator fault is detected. Since a linear robust controller is designed, the system is first linearized through the feedback linearization method. To switch from one controller to the other, a fuzzy-based switching system is used. An Adaptive Joint Unscented Kalman Filter (AJUKF) is used for fault detection and diagnosis. The proposed method is based on the simultaneous estimation of the system states and parameters. In order to show the efficiency of the proposed scheme, a simulated 3-DOF robotic manipulator is used. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Rapid detection of Escherichia coli and enterococci in recreational water using an immunomagnetic separation/adenosine triphosphate technique

    USGS Publications Warehouse

    Bushon, R.N.; Brady, A.M.; Likirdopulos, C.A.; Cireddu, J.V.

    2009-01-01

    Aims: The aim of this study was to examine a rapid method for detecting Escherichia coli and enterococci in recreational water. Methods and Results: Water samples were assayed for E. coli and enterococci by traditional and immunomagnetic separation/adenosine triphosphate (IMS/ATP) methods. Three sample treatments were evaluated for the IMS/ATP method: double filtration, single filtration, and direct analysis. Pearson's correlation analysis showed strong, significant, linear relations between IMS/ATP and traditional methods for all sample treatments; strongest linear correlations were with the direct analysis (r = 0.62 and 0.77 for E. coli and enterococci, respectively). Additionally, simple linear regression was used to estimate bacteria concentrations as a function of IMS/ATP results. The correct classification of water-quality criteria was 67% for E. coli and 80% for enterococci. Conclusions: The IMS/ATP method is a viable alternative to traditional methods for faecal-indicator bacteria. Significance and Impact of the Study: The IMS/ATP method addresses critical public health needs for the rapid detection of faecal-indicator contamination and has potential for satisfying US legislative mandates requiring methods to detect bathing water contamination in 2 h or less. Moreover, IMS/ATP equipment is considerably less costly and more portable than that for molecular methods, making the method suitable for field applications. © 2009 The Authors.
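    The calibration step described above (estimating bacteria concentrations as a function of IMS/ATP results) amounts to an ordinary least-squares line fit, which can be sketched as follows. The paired numbers are invented for illustration and are not the study's data:

```python
def linear_fit(x, y):
    """Return slope and intercept of the least-squares line y = m*x + c."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    m = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    c = my - m * mx
    return m, c

# Hypothetical paired measurements: IMS/ATP signal (relative light units)
# versus traditional culture counts (synthetic, exactly linear data).
ims_atp = [10.0, 20.0, 30.0, 40.0]
culture = [25.0, 45.0, 65.0, 85.0]
slope, intercept = linear_fit(ims_atp, culture)  # slope 2.0, intercept 5.0

# Predict a culture-equivalent concentration from a new IMS/ATP reading:
predicted = slope * 25.0 + intercept  # 55.0
```

    In practice the regression would be fitted on log-transformed concentrations and assessed against the water-quality criterion thresholds mentioned in the abstract.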

  9. Rapid Development and Validation of Improved Reversed-Phase High-performance Liquid Chromatography Method for the Quantification of Mangiferin, a Polyphenol Xanthone Glycoside in Mangifera indica

    PubMed Central

    Naveen, P.; Lingaraju, H. B.; Prasad, K. Shyam

    2017-01-01

    Mangiferin, a polyphenolic xanthone glycoside from Mangifera indica, is used as traditional medicine for the treatment of numerous diseases. The present study aimed to develop and validate a reversed-phase high-performance liquid chromatography (RP-HPLC) method for the quantification of mangiferin from the bark extract of M. indica. RP-HPLC analysis was performed by isocratic elution with a low-pressure gradient using 0.1% formic acid:acetonitrile (87:13) as the mobile phase at a flow rate of 1.5 ml/min. The separation was done at 26°C using a Kinetex XB-C18 column as the stationary phase, with detection at 256 nm. The proposed method was validated for linearity, precision, accuracy, limit of detection, limit of quantification, and robustness according to the International Conference on Harmonisation guidelines. In the linearity study, a correlation coefficient greater than 0.999 indicated good curve fitting and good linearity. The intra- and inter-day precision showed <1% relative standard deviation of peak area, indicating high reliability and reproducibility of the method. The recovery values at three spiked levels (50%, 100%, and 150%) were found to be 100.47, 100.89, and 100.99, respectively, and the low standard deviation (<1%) shows the high accuracy of the method. The results remained unaffected by small variations in the analytical parameters, which shows the robustness of the method. Liquid chromatography-mass spectrometry analysis confirmed the presence of mangiferin with an M/Z value of 421. The developed HPLC assay is simple, rapid, and reliable for the determination of mangiferin from M. indica. SUMMARY The present study was intended to develop and validate an RP-HPLC method for the quantification of mangiferin from the bark extract of M. indica. 
The developed method was validated for linearity, precision, accuracy, limit of detection, limit of quantification, and robustness according to the International Conference on Harmonisation guidelines. This study proved that the developed HPLC assay is a simple, rapid, and reliable method for the quantification of mangiferin from M. indica. Abbreviations Used: M. indica: Mangifera indica, RP-HPLC: Reversed-phase high-performance liquid chromatography, M/Z: Mass to charge ratio, ICH: International Conference on Harmonisation, % RSD: Percentage of relative standard deviation, ppm: Parts per million, LOD: Limit of detection, LOQ: Limit of quantification. PMID:28539748

  10. Rapid Development and Validation of Improved Reversed-Phase High-performance Liquid Chromatography Method for the Quantification of Mangiferin, a Polyphenol Xanthone Glycoside in Mangifera indica.

    PubMed

    Naveen, P; Lingaraju, H B; Prasad, K Shyam

    2017-01-01

    Mangiferin, a polyphenolic xanthone glycoside from Mangifera indica, is used as traditional medicine for the treatment of numerous diseases. The present study aimed to develop and validate a reversed-phase high-performance liquid chromatography (RP-HPLC) method for the quantification of mangiferin from the bark extract of M. indica. RP-HPLC analysis was performed by isocratic elution with a low-pressure gradient using 0.1% formic acid:acetonitrile (87:13) as the mobile phase at a flow rate of 1.5 ml/min. The separation was done at 26°C using a Kinetex XB-C18 column as the stationary phase, with detection at 256 nm. The proposed method was validated for linearity, precision, accuracy, limit of detection, limit of quantification, and robustness according to the International Conference on Harmonisation guidelines. In the linearity study, a correlation coefficient greater than 0.999 indicated good curve fitting and good linearity. The intra- and inter-day precision showed <1% relative standard deviation of peak area, indicating high reliability and reproducibility of the method. The recovery values at three spiked levels (50%, 100%, and 150%) were found to be 100.47, 100.89, and 100.99, respectively, and the low standard deviation (<1%) shows the high accuracy of the method. The results remained unaffected by small variations in the analytical parameters, which shows the robustness of the method. Liquid chromatography-mass spectrometry analysis confirmed the presence of mangiferin with an M/Z value of 421. The developed HPLC assay is simple, rapid, and reliable for the determination of mangiferin from M. indica. The present study was intended to develop and validate an RP-HPLC method for the quantification of mangiferin from the bark extract of M. indica. 
The developed method was validated for linearity, precision, accuracy, limit of detection, limit of quantification, and robustness according to the International Conference on Harmonisation guidelines. This study proved that the developed HPLC assay is a simple, rapid, and reliable method for the quantification of mangiferin from M. indica. Abbreviations Used: M. indica: Mangifera indica, RP-HPLC: Reversed-phase high-performance liquid chromatography, M/Z: Mass to charge ratio, ICH: International Conference on Harmonisation, % RSD: Percentage of relative standard deviation, ppm: Parts per million, LOD: Limit of detection, LOQ: Limit of quantification.

  11. Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D., E-mail: sergei.ivanov@uni-rostock.de

    2015-06-28

    Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom, which are treated most accurately, and others which constitute a thermal bath. The linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique, attracts particular attention in this respect. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel in turn is parametrized for the particular system studied, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we discuss that this task is more naturally achieved in the frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Very surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom.

  12. On some Aitken-like acceleration of the Schwarz method

    NASA Astrophysics Data System (ADS)

    Garbey, M.; Tromeur-Dervout, D.

    2002-12-01

    In this paper we present a family of domain decomposition methods based on Aitken-like acceleration of the Schwarz method, seen as an iterative procedure with a linear rate of convergence. We first present the so-called Aitken-Schwarz procedure for linear differential operators. The solver can be a direct solver when applied to the Helmholtz problem with a five-point finite difference scheme on regular grids. We then introduce the Steffensen-Schwarz variant, which is an iterative domain decomposition solver that can be applied to linear and nonlinear problems. We show that these solvers have reasonable numerical efficiency compared to classical fast solvers for the Poisson problem or to multigrid for more general linear and nonlinear elliptic problems. However, the salient feature of our method is that our algorithm has high tolerance to slow networks in the context of distributed parallel computing and is attractive, generally speaking, for use with computer architectures whose performance is limited by memory bandwidth rather than by the flop performance of the CPU. This is nowadays the case for most parallel computers using the RISC processor architecture. We will illustrate this highly desirable property of our algorithm with large-scale computing experiments.
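    The Aitken acceleration underlying the method can be illustrated on a scalar fixed-point iteration with a linear convergence rate; the paper applies the same idea to Schwarz interface traces rather than scalar iterates. A minimal sketch under that simplification:

```python
def aitken(seq):
    """Aitken delta-squared extrapolation of a scalar sequence."""
    out = []
    for x0, x1, x2 in zip(seq, seq[1:], seq[2:]):
        denom = x2 - 2 * x1 + x0
        out.append(x2 - (x2 - x1) ** 2 / denom if denom != 0 else x2)
    return out

# Fixed-point iteration x <- 0.5*x + 1 converges linearly to 2.
xs = [0.0]
for _ in range(6):
    xs.append(0.5 * xs[-1] + 1.0)

acc = aitken(xs)  # every extrapolated value equals the limit 2.0
```

    For an exactly linear iteration the extrapolation is exact after three iterates, which is the sense in which an Aitken-accelerated Schwarz sweep can act as a direct solver in the linear constant-coefficient case.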

  13. What kind of Relationship is Between Body Mass Index and Body Fat Percentage?

    PubMed

    Kupusinac, Aleksandar; Stokić, Edita; Sukić, Enes; Rankov, Olivera; Katić, Andrea

    2017-01-01

    Although body mass index (BMI) and body fat percentage (BF%) are well known as indicators of nutritional status, there are insufficient data on whether the relationship between them is linear or not. There are appropriate linear and quadratic formulas available to predict BF% from age, gender and BMI. On the other hand, our previous research has shown that an artificial neural network (ANN) is a more accurate method for that. The aim of this study is to analyze the relationship between BMI and BF% by using an ANN and a big dataset (3058 persons). Our results show that this relationship is quadratic rather than linear for both genders and all age groups. Comparing genders, the quadratic relationship is more pronounced in women, while the linear relationship is more pronounced in men. Additionally, our results show that the quadratic relationship is more pronounced in old than in young and middle-aged men, and it is slightly more pronounced in young and middle-aged than in old women.
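    The linear-versus-quadratic comparison can be sketched by fitting both forms with ordinary least squares and comparing residual sums of squares. The BMI/BF% pairs below are synthetic, chosen only to show the mechanics, and are not the study's data:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via the normal equations."""
    A = [[sum(x ** (i + j) for x in xs) for j in range(deg + 1)]
         for i in range(deg + 1)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(deg + 1)]
    return solve(A, b)

def sse(xs, ys, coef):
    """Residual sum of squares of a polynomial fit."""
    return sum((y - sum(c * x ** i for i, c in enumerate(coef))) ** 2
               for x, y in zip(xs, ys))

# Synthetic, made-up BMI/BF% pairs with mild curvature:
bmi = [18.0, 22.0, 26.0, 30.0, 34.0]
bf = [15.0, 21.0, 28.0, 36.0, 45.0]
lin = polyfit(bmi, bf, 1)
quad = polyfit(bmi, bf, 2)
# sse(bmi, bf, lin) is about 3.5, while the quadratic residual is ~0,
# so the quadratic form fits these curved data far better.
```

    The study's actual comparison is against an ANN on 3058 persons; this sketch only demonstrates how the two functional forms would be discriminated by residual error.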

  14. A METHOD TO EXTRACT THE REDSHIFT DISTORTION β PARAMETER IN CONFIGURATION SPACE FROM MINIMAL COSMOLOGICAL ASSUMPTIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tocchini-Valentini, Domenico; Barnard, Michael; Bennett, Charles L.

    2012-10-01

    We present a method to extract the redshift-space distortion β parameter in configuration space with a minimal set of cosmological assumptions. We show that a novel combination of the observed monopole and quadrupole correlation functions can efficiently remove the impact of mild nonlinearities and redshift errors. The method offers a series of convenient properties: it does not depend on the theoretical linear correlation function, the mean galaxy density is irrelevant, only convolutions are used, and there is no explicit dependence on linear bias. Analyses based on dark matter N-body simulations and Fisher matrix demonstrate that errors of a few percent on β are possible with a full-sky, 1 (h⁻¹ Gpc)³ survey centered at a redshift of unity and with negligible shot noise. We also find a baryonic feature in the normalized quadrupole in configuration space that should complicate the extraction of the growth parameter from the linear theory asymptote, but that does not have a major impact on our method.

  15. Estimating population trends with a linear model

    USGS Publications Warehouse

    Bart, Jonathan; Collins, Brian D.; Morrison, R.I.G.

    2003-01-01

    We describe a simple and robust method for estimating trends in population size. The method may be used with Breeding Bird Survey data, aerial surveys, point counts, or any other program of repeated surveys at permanent locations. Surveys need not be made at each location during each survey period. The method differs from most existing methods in being design based, rather than model based. The only assumptions are that the nominal sampling plan is followed and that sample size is large enough for use of the t-distribution. Simulations based on two bird data sets from natural populations showed that the point estimate produced by the linear model was essentially unbiased even when counts varied substantially and 25% of the complete data set was missing. The estimating-equation approach, often used to analyze Breeding Bird Survey data, performed similarly on one data set but had substantial bias on the second data set, in which counts were highly variable. The advantages of the linear model are its simplicity, flexibility, and that it is self-weighting. A user-friendly computer program to carry out the calculations is available from the senior author.
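    A design-based trend estimate of the kind described here can be sketched as a per-location least-squares slope, averaged across locations, with missing survey years simply omitted. The site names and counts below are invented for illustration:

```python
def slope(xs, ys):
    """Least-squares slope of ys against xs."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    return sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
           sum((x - xbar) ** 2 for x in xs)

# site -> {year: count}; sites need not be surveyed every year.
surveys = {
    "site_a": {2000: 10.0, 2001: 12.0, 2003: 16.0},
    "site_b": {2000: 20.0, 2002: 18.0, 2003: 17.0},
}

slopes = []
for obs in surveys.values():
    yrs = sorted(obs)
    slopes.append(slope(yrs, [obs[y] for y in yrs]))

trend = sum(slopes) / len(slopes)  # average change in count per year
```

    In the abstract's design-based setting, a confidence interval for the trend would then come from the t-distribution applied to the sample of per-site slopes (possibly with survey weights), rather than from a population model.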

  16. Improvement of resolution in full-view linear-array photoacoustic computed tomography using a novel adaptive weighting method

    NASA Astrophysics Data System (ADS)

    Omidi, Parsa; Diop, Mamadou; Carson, Jeffrey; Nasiriavanaki, Mohammadreza

    2017-03-01

    Linear-array-based photoacoustic computed tomography is a popular methodology for deep and high resolution imaging. However, issues such as phase aberration, side-lobe effects, and propagation limitations deteriorate the resolution. The effect of phase aberration due to acoustic attenuation and the assumption of a constant speed of sound (SoS) can be reduced by applying an adaptive weighting method such as the coherence factor (CF). Utilizing an adaptive beamforming algorithm such as minimum variance (MV) can improve the resolution at the focal point by eliminating the side-lobes. Moreover, the invisibility of directional objects emitting parallel to the detection plane, such as vessels and other absorbing structures stretched in the direction perpendicular to the detection plane, can degrade resolution. In this study, we propose a full-view array-level weighting algorithm in which different weights are assigned to different positions of the linear array based on an orientation algorithm that uses the histogram of oriented gradients (HOG). Simulation results obtained from a synthetic phantom show the superior performance of the proposed method over existing reconstruction methods.

  17. The calculation of steady non-linear transonic flow over finite wings with linear theory aerodynamics

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1976-01-01

    The feasibility of calculating steady mean flow solutions for nonlinear transonic flow over finite wings with a linear theory aerodynamic computer program is studied. The methodology is based on independent solutions for upper and lower surface pressures that are coupled through the external flow fields. Two approaches for coupling the solutions are investigated which include the diaphragm and the edge singularity method. The final method is a combination of both where a line source along the wing leading edge is used to account for blunt nose airfoil effects; and the upper and lower surface flow fields are coupled through a diaphragm in the plane of the wing. An iterative solution is used to arrive at the nonuniform flow solution for both nonlifting and lifting cases. Final results for a swept tapered wing in subcritical flow show that the method converges in three iterations and gives excellent agreement with experiment at alpha = 0 deg and 2 deg. Recommendations are made for development of a procedure for routine application.

  18. Survey and analysis of research on supersonic drag-due-to-lift minimization with recommendations for wing design

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Mann, Michael J.

    1992-01-01

    A survey of research on drag-due-to-lift minimization at supersonic speeds, including a study of the effectiveness of current design and analysis methods was conducted. The results show that a linearized theory analysis with estimated attainable thrust and vortex force effects can predict with reasonable accuracy the lifting efficiency of flat wings. Significantly better wing performance can be achieved through the use of twist and camber. Although linearized theory methods tend to overestimate the amount of twist and camber required for a given application and provide an overly optimistic performance prediction, these deficiencies can be overcome by implementation of recently developed empirical corrections. Numerous examples of the correlation of experiment and theory are presented to demonstrate the applicability and limitations of linearized theory methods with and without empirical corrections. The use of an Euler code for the estimation of aerodynamic characteristics of a twisted and cambered wing and its application to design by iteration are discussed.

  19. Neural network and multiple linear regression to predict school children dimensions for ergonomic school furniture design.

    PubMed

    Agha, Salah R; Alnahhal, Mohammed J

    2012-11-01

    The current study investigates the possibility of obtaining the anthropometric dimensions, critical to school furniture design, without measuring all of them. The study first selects some anthropometric dimensions that are easy to measure. Two methods are then used to check if these easy-to-measure dimensions can predict the dimensions critical to the furniture design. These methods are multiple linear regression and neural networks. Each dimension that is deemed necessary to ergonomically design school furniture is expressed as a function of some other measured anthropometric dimensions. Results show that out of the five dimensions needed for chair design, four can be related to other dimensions that can be measured while children are standing. Therefore, the method suggested here would definitely save time and effort and avoid the difficulty of dealing with students while measuring these dimensions. In general, it was found that neural networks perform better than multiple linear regression in the current study. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  20. A stable high-order perturbation of surfaces method for numerical simulation of diffraction problems in triply layered media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Youngjoon, E-mail: hongy@uic.edu; Nicholls, David P., E-mail: davidn@uic.edu

    The accurate numerical simulation of linear waves interacting with periodic layered media is a crucial capability in engineering applications. In this contribution we study the stable and high-order accurate numerical simulation of the interaction of linear, time-harmonic waves with a periodic, triply layered medium with irregular interfaces. In contrast with volumetric approaches, High-Order Perturbation of Surfaces (HOPS) algorithms are inexpensive interfacial methods which rapidly and recursively estimate scattering returns by perturbation of the interface shape. In comparison with Boundary Integral/Element Methods, the stable HOPS algorithm we describe here does not require specialized quadrature rules, periodization strategies, or the solution of dense non-symmetric positive definite linear systems. In addition, the algorithm is provably stable, as opposed to other classical HOPS approaches. With numerical experiments we show the remarkable efficiency, fidelity, and accuracy one can achieve with an implementation of this algorithm.

  1. Quasi-closed phase forward-backward linear prediction analysis of speech for accurate formant detection and estimation.

    PubMed

    Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo

    2017-09-01

    Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis, which belongs to the family of temporally weighted linear prediction (WLP) methods, uses the conventional forward type of sample prediction. This may not be the best choice, especially when computing WLP models with a hard-limiting weighting function. A sample-selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted from its past as well as its future samples, thereby utilizing the available samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach, as well as on natural speech utterances, show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.

  2. Experimental validation of spatial Fourier transform-based multiple sound zone generation with a linear loudspeaker array.

    PubMed

    Okamoto, Takuma; Sakaguchi, Atsushi

    2017-03-01

    Generating acoustically bright and dark zones using loudspeakers is gaining attention as one of the most important acoustic communication techniques for such uses as personal sound systems and multilingual guide services. Although most conventional methods are based on numerical solutions, an analytical approach based on the spatial Fourier transform with a linear loudspeaker array has been proposed, and its effectiveness over conventional acoustic energy difference maximization has been demonstrated in computer simulations. To establish the effectiveness of the proposal in actual environments, this paper experimentally validates the proposed approach with rectangular and Hann windows and compares it with three conventional methods: simple delay-and-sum beamforming, contrast maximization, and least squares-based pressure matching, using an actually implemented linear array of 64 loudspeakers in an anechoic chamber. The results of both the computer simulations and the actual experiments show that the proposed approach with a Hann window controls the bright and dark zones more accurately than the conventional methods.

  3. Design of high-linear CMOS circuit using a constant transconductance method for gamma-ray spectroscopy system

    NASA Astrophysics Data System (ADS)

    Jung, I. I.; Lee, J. H.; Lee, C. S.; Choi, Y.-W.

    2011-02-01

    We propose a novel circuit to be applied to the front-end integrated circuits of gamma-ray spectroscopy systems. Our circuit is designed as a type of current conveyor (ICON) employing a constant-gm (transconductance) method, which can significantly improve the linearity of the amplified signals by using a large time constant and the time-invariant characteristics of an amplifier. The constant-gm behavior is obtained by a feedback control which keeps the transconductance of the input transistor constant. To verify the performance of the proposed circuit, the time constant variations for the channel resistances are simulated with the TSMC 0.18 μm transistor parameters using HSPICE, and then compared with those of a conventional ICON. As a result, the proposed ICON shows only 0.02% output linearity variation and 0.19% time constant variation for input amplitudes up to 100 mV. These are significantly smaller values than a conventional ICON's 1.39% and 19.43%, respectively, under the same conditions.

  4. A non-linear regression method for CT brain perfusion analysis

    NASA Astrophysics Data System (ADS)

    Bennink, E.; Oosterbroek, J.; Viergever, M. A.; Velthuis, B. K.; de Jong, H. W. A. M.

    2015-03-01

    CT perfusion (CTP) imaging allows for rapid diagnosis of ischemic stroke. Generation of perfusion maps from CTP data usually involves deconvolution algorithms providing estimates of the impulse response function in the tissue. We propose the use of a fast non-linear regression (NLR) method that we postulate has similar performance to the current academic state-of-the-art method (bSVD), but that has some important advantages, including the estimation of vascular permeability, improved robustness to tracer delay, and very few tuning parameters, all of which are important in stroke assessment. The aim of this study is to evaluate the fast NLR method against bSVD and a commercial clinical state-of-the-art method. The three methods were tested against a published digital perfusion phantom earlier used to illustrate the superiority of bSVD. In addition, the NLR and clinical methods were also tested against bSVD on 20 clinical scans. Pearson correlation coefficients were calculated for each of the tested methods. All three methods showed high correlation coefficients (>0.9) with the ground truth in the phantom. With respect to the clinical scans, the NLR perfusion maps showed higher correlation with bSVD than the perfusion maps from the clinical method. Furthermore, the perfusion maps showed that the fast NLR estimates are robust to tracer delay. In conclusion, the proposed fast NLR method provides a simple and flexible way of estimating perfusion parameters from CT perfusion scans, with high correlation coefficients. This suggests that it could be a better alternative to the current clinical and academic state-of-the-art methods.
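    Non-linear regression of a parametric impulse-response model can be illustrated with a deliberately simplified sketch: fit a mono-exponential residue r(t) = A·exp(-t/tau) by scanning tau over a grid and solving the amplitude A in closed form at each candidate. The model, data, and function names here are ours, far simpler than the paper's CTP model:

```python
import math

def fit_exponential(ts, ys, tau_grid):
    """Grid-search non-linear fit of y = A*exp(-t/tau).
    For each candidate tau, A has a closed-form least-squares solution."""
    best = None
    for tau in tau_grid:
        e = [math.exp(-t / tau) for t in ts]
        A = sum(y * ei for y, ei in zip(ys, e)) / sum(ei * ei for ei in e)
        res = sum((y - A * ei) ** 2 for y, ei in zip(ys, e))
        if best is None or res < best[0]:
            best = (res, A, tau)
    return best[1], best[2]

# Noise-free synthetic samples of a known exponential:
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
true_A, true_tau = 5.0, 2.0
ys = [true_A * math.exp(-t / true_tau) for t in ts]

A_hat, tau_hat = fit_exponential(ts, ys, [0.5 + 0.1 * k for k in range(40)])
```

    A production NLR fitter would use a derivative-based optimizer and a physiologically motivated residue model, but the separation of linear (amplitude) and non-linear (time-constant) parameters shown here is a common trick for keeping such fits fast and stable.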

  5. Extended linear detection range for optical tweezers using image-plane detection scheme

    NASA Astrophysics Data System (ADS)

    Hajizadeh, Faegheh; Masoumeh Mousavi, S.; Khaksar, Zeinab S.; Reihani, S. Nader S.

    2014-10-01

    The ability to measure pico- and femto-Newton range forces using optical tweezers (OT) strongly relies on the sensitivity of the detection system. We show that the commonly used back-focal-plane detection method provides a linear response range which is shorter than that of the restoring force of OT for large beads, limiting the measurable force range of OT. We show, both theoretically and experimentally, that utilizing a second laser beam for tracking could solve the problem. We also propose a new detection scheme in which the quadrant photodiode is positioned at the plane optically conjugate to the object plane (image plane). This method solves the problem without the need for a second laser beam for the bead sizes that are commonly used in force spectroscopy applications of OT, such as biopolymer stretching.

  6. RP-HPLC method development and validation for simultaneous estimation of atorvastatin calcium and pioglitazone hydrochloride in pharmaceutical dosage form.

    PubMed

    Peraman, Ramalingam; Mallikarjuna, Sasikala; Ammineni, Pravalika; Kondreddy, Vinod kumar

    2014-10-01

    A simple, selective, rapid, precise and economical reversed-phase high-performance liquid chromatographic (RP-HPLC) method has been developed for simultaneous estimation of atorvastatin calcium (ATV) and pioglitazone hydrochloride (PIO) from pharmaceutical formulation. The method is carried out on a C8 (25 cm × 4.6 mm i.d., 5 μm) column with a mobile phase consisting of acetonitrile (ACN):water (pH adjusted to 6.2 using o-phosphoric acid) in the ratio of 45:55 (v/v). The retention times of ATV and PIO are 4.1 and 8.1 min, respectively, at a flow rate of 1 mL/min with diode-array detection at 232 nm. The linear regression analysis data from the linearity plot showed a good linear relationship, with correlation coefficient (R(2)) values for ATV and PIO of 0.9998 and 0.9997 in the concentration range of 10-80 µg mL(-1), respectively. The relative standard deviation for intraday precision was found to be <2.0%. The method is validated according to the ICH guidelines in terms of specificity, selectivity, accuracy, precision, linearity, limit of detection, limit of quantitation and solution stability. The proposed method can be used for simultaneous estimation of these drugs in marketed dosage forms.
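    The reported linearity figures come from an ordinary least-squares calibration fit. A minimal sketch of computing the slope, intercept and coefficient of determination over the 10-80 µg/mL range, using invented peak areas:

    ```python
    import numpy as np

    # Hypothetical calibration data (concentration in µg/mL vs. peak area);
    # the numbers are illustrative, not taken from the paper.
    conc = np.array([10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
    area = np.array([152, 305, 449, 604, 751, 902, 1049, 1201], dtype=float)

    slope, intercept = np.polyfit(conc, area, 1)
    pred = slope * conc + intercept
    ss_res = np.sum((area - pred) ** 2)
    ss_tot = np.sum((area - area.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot          # coefficient of determination

    print(f"slope={slope:.2f}, intercept={intercept:.2f}, r^2={r2:.4f}")
    ```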

  7. Analytical parameters of the microplate-based ORAC-pyrogallol red assay.

    PubMed

    Ortiz, Rocío; Antilén, Mónica; Speisky, Hernán; Aliaga, Margarita E; López-Alarcón, Camilo

    2011-01-01

    The analytical parameters of the microplate-based oxygen radical absorbance capacity (ORAC) method using pyrogallol red (PGR) as probe (ORAC-PGR) are presented. In addition, the antioxidant capacity of commercial beverages, such as wines, fruit juices, and iced teas, is estimated. A good linearity of the area under the curve (AUC) versus Trolox concentration plots was obtained [AUC = (845 ± 110) + (23 ± 2) [Trolox, µM], R = 0.9961, n = 19]. QC experiments showed better precision and accuracy at the highest Trolox concentration (40 µM), with RSD and REC (recovery) values of 1.7 and 101.0%, respectively. When red wine was used as sample, the method also showed good linearity [AUC = (787 ± 77) + (690 ± 60) [red wine, µL/mL]; R = 0.9926, n = 17], precision and accuracy, with RSD values from 1.4 to 8.3% and REC values that ranged from 89.7 to 103.8%. Additivity assays using solutions containing gallic acid and Trolox (or red wine) showed an additive protection of PGR given by the samples. Red wines showed higher ORAC-PGR values than white wines, while the ORAC-PGR index of fruit juices and iced teas presented great variability, ranging from 0.6 to 21.6 mM of Trolox equivalents. This variability was also observed for juices of the same fruit, showing the influence of the brand on the ORAC-PGR index. The ORAC-PGR methodology can be applied in a microplate reader with good linearity, precision, and accuracy.
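    The AUC-versus-Trolox calibration can be sketched as follows; the probe-decay kinetics and all numbers are invented for illustration, and only the workflow (trapezoidal AUC, then a linear fit) mirrors the assay.

    ```python
    import numpy as np

    # Illustrative ORAC-style analysis: area under the probe-decay curve (AUC)
    # for several hypothetical Trolox standards, then the linear
    # AUC-vs-concentration fit used for the calibration plot.
    t = np.linspace(0.0, 60.0, 61)                 # minutes

    def probe_signal(trolox_um):
        # Toy kinetics: more antioxidant delays probe consumption.
        lag = 0.5 * trolox_um
        return np.clip(1.0 - (t - lag) / 30.0, 0.0, 1.0)

    def auc(y):
        # Trapezoidal area under the curve.
        return float(np.sum((y[1:] + y[:-1]) * np.diff(t) / 2.0))

    trolox = np.array([5.0, 10.0, 20.0, 40.0])     # µM standards
    areas = np.array([auc(probe_signal(c)) for c in trolox])

    slope, intercept = np.polyfit(trolox, areas, 1)
    print(slope, intercept)
    ```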

  8. Sampling with poling-based flux balance analysis: optimal versus sub-optimal flux space analysis of Actinobacillus succinogenes.

    PubMed

    Binns, Michael; de Atauri, Pedro; Vlysidis, Anestis; Cascante, Marta; Theodoropoulos, Constantinos

    2015-02-18

    Flux balance analysis is traditionally implemented to identify the maximum theoretical flux for some specified reaction and a single distribution of flux values for all the reactions present which achieve this maximum value. However, it is well known that the uncertainty in reaction networks due to branches, cycles and experimental errors results in a large number of combinations of internal reaction fluxes which can achieve the same optimal flux value. In this work, we have modified the applied linear objective of flux balance analysis to include a poling penalty function, which pushes each new set of reaction fluxes away from previously generated solutions. Repeated poling-based flux balance analysis generates a sample of different solutions (a characteristic set), which represents all the possible functionality of the reaction network. For the purpose of generating a relatively "small" characteristic set, our new method is shown to obtain higher coverage than competing sampling methods under most conditions. The influence of the linear objective function on the sampling (the linear bias) constrains optimisation results to a subspace of optimal solutions all producing the same maximal fluxes. Visualisation of reaction fluxes plotted against each other in two dimensions, with and without the linear bias, indicates the existence of correlations between fluxes. This method of sampling is applied to the organism Actinobacillus succinogenes for the production of succinic acid from glycerol. A new method of sampling for the generation of different flux distributions (sets of individual fluxes satisfying constraints on the steady-state mass balances of intermediates) has been developed using a relatively simple modification of flux balance analysis to include a poling penalty function inside the resulting optimisation objective function.
This new methodology can achieve a high coverage of the possible flux space and can be used with and without linear bias to show optimal versus sub-optimal solution spaces. Basic analysis of the Actinobacillus succinogenes system using sampling shows that in order to achieve the maximal succinic acid production CO₂ must be taken into the system. Solutions involving release of CO₂ all give sub-optimal succinic acid production.
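    The abstract does not give the exact poling penalty, so the sketch below uses an assumed inverse-square-distance penalty on a toy three-reaction network, with SciPy's SLSQP standing in for a dedicated LP solver:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy sketch of poling-based FBA (assumed penalty form, not the authors'
    # exact function). One internal metabolite B, three reactions:
    #   v0: -> B,  v1: B -> product (target),  v2: B -> byproduct
    S = np.array([[1.0, -1.0, -1.0]])        # steady state: S @ v = 0
    c = np.array([0.0, 1.0, 0.0])            # maximise flux to product (v1)
    bounds = [(0.0, 10.0)] * 3
    cons = {"type": "eq", "fun": lambda v: S @ v}

    solutions = []
    for _ in range(3):
        def obj(v):
            # Linear FBA objective plus a poling penalty pushing v away from
            # previously found flux distributions.
            penalty = sum(50.0 / (np.sum((v - p) ** 2) + 1.0) for p in solutions)
            return -c @ v + penalty
        res = minimize(obj, x0=np.array([5.0, 2.5, 2.5]), bounds=bounds,
                       constraints=cons, method="SLSQP")
        solutions.append(res.x)

    for v in solutions:
        print(np.round(v, 2))
    ```

    The first solve returns the plain FBA optimum; each repeat lands on a different feasible flux distribution, building up a small characteristic set.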

  9. Switched capacitor charge pump used for low-distortion imaging in atomic force microscope.

    PubMed

    Zhang, Jie; Zhang, Lian Sheng; Feng, Zhi Hua

    2015-01-01

    The switched capacitor charge pump (SCCP) is an effective method of linearizing charges on piezoelectric actuators and therefore constitutes a significant approach to nano-positioning. In this work, it was for the first time implemented in an atomic force microscope for low-distortion imaging. Experimental results showed that the image quality was evidently improved under the SCCP drive compared with that under a traditional linear voltage drive.

  10. FAST TRACK PAPER: Non-iterative multiple-attenuation methods: linear inverse solutions to non-linear inverse problems - II. BMG approximation

    NASA Astrophysics Data System (ADS)

    Ikelle, Luc T.; Osen, Are; Amundsen, Lasse; Shen, Yunqing

    2004-12-01

    The classical linear solutions to the problem of multiple attenuation, like predictive deconvolution, τ-p filtering, or F-K filtering, are generally fast, stable, and robust compared to non-linear solutions, which are generally either iterative or in the form of a series with an infinite number of terms. These qualities have made the linear solutions more attractive to seismic data-processing practitioners. However, most linear solutions, including predictive deconvolution or F-K filtering, contain severe assumptions about the model of the subsurface and the class of free-surface multiples they can attenuate. These assumptions limit their usefulness. In a recent paper, we described an exception to this assertion for OBS data. We showed in that paper that a linear and non-iterative solution to the problem of attenuating free-surface multiples which is as accurate as iterative non-linear solutions can be constructed for OBS data. We here present a similar linear and non-iterative solution for attenuating free-surface multiples in towed-streamer data. For most practical purposes, this linear solution is as accurate as the non-linear ones.

  11. Using air/water/sediment temperature contrasts to identify groundwater seepage locations in small streams

    NASA Astrophysics Data System (ADS)

    Karan, S.; Sebok, E.; Engesgaard, P. K.

    2016-12-01

    To identify groundwater seepage locations in small streams within a headwater catchment, we present a method that expands on the linear regression of air and stream temperatures. Temperatures are measured at dual depths, in the stream column and at the streambed-water interface (SWI), and metrics are derived from linear regression of air/stream and air/SWI temperatures (slope, intercept and coefficient of determination) and from the daily mean temperatures (temperature variance and the average difference between the minimum and maximum daily temperatures). Our study shows that metrics from single-depth stream temperature measurements alone are not sufficient to identify substantial groundwater seepage locations within a headwater stream. In contrast, comparing the metrics from dual-depth temperatures reveals significant differences: at groundwater seepage locations, temperatures at the SWI explain only 43-75 % of the variation, as opposed to ≥91 % for the corresponding stream column temperatures. A box plot of the variation in daily mean temperature shows that at several locations there is a large difference in range between the upper and lower loggers due to groundwater seepage. In general, the linear regression shows that at these locations the slopes (<0.25) at the SWI are substantially lower and the intercepts (>6.5 °C) substantially higher, while the mean diel amplitudes (<0.98 °C) are reduced compared to the remaining locations. The dual-depth approach was applied in a post-glacial fluvial setting, where the metrics analyses overall corresponded to field measurements of groundwater fluxes deduced from vertical streambed temperatures and stream flow accretions. Thus, we propose a method for reliably identifying groundwater seepage locations along the streambed in such settings.
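    The regression metrics described above can be sketched on synthetic temperature series; the damping coefficients are invented to mimic a seepage-affected SWI record and a stream-column record that tracks air temperature.

    ```python
    import numpy as np
    from scipy.stats import linregress

    # Synthetic daily mean temperatures (degrees C); illustrative values only.
    # Groundwater damps the streambed-water interface (SWI) response to air
    # temperature, so a seepage site should show a low slope, a high intercept
    # and a reduced r^2 in the air/SWI regression.
    rng = np.random.default_rng(1)
    air = 15 + 5 * np.sin(np.linspace(0, 3 * np.pi, 30)) + rng.normal(0, 0.5, 30)

    stream_column = 0.85 * air + 1.5 + rng.normal(0, 0.3, 30)  # tracks air
    swi_seepage = 0.15 * air + 7.5 + rng.normal(0, 0.3, 30)    # damped by seepage

    fit_col = linregress(air, stream_column)
    fit_swi = linregress(air, swi_seepage)
    print(f"stream column: slope={fit_col.slope:.2f}, r^2={fit_col.rvalue**2:.2f}")
    print(f"SWI (seepage): slope={fit_swi.slope:.2f}, "
          f"intercept={fit_swi.intercept:.2f}, r^2={fit_swi.rvalue**2:.2f}")
    ```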

  12. Assessing non-linear variation of temperature and precipitation for different growth periods of maize and their impacts on phenology in the Midwest of Jilin Province, China

    NASA Astrophysics Data System (ADS)

    Guo, Enliang; Zhang, Jiquan; Wang, Yongfang; Alu, Si; Wang, Rui; Li, Danjun; Ha, Si

    2018-05-01

    In the past two decades, the regional climate in China has undergone significant change, resulting in crop yield reduction and even complete crop failure. The goal of this study is to detect the variation of temperature and precipitation during different growth periods of maize and assess their impact on phenology. Daily meteorological data for the Midwest of Jilin Province during 1960-2014 were used. The ensemble empirical mode decomposition method was adopted to analyze the non-linear trend and fluctuation in temperature and precipitation, and the sensitivity of the length of the maize growth period to temperature and precipitation was analyzed by the wavelet cross-transformation method. The results show that the trends of temperature and precipitation change are non-linear for the different growth periods of maize. The average temperature in the sowing-jointing stage differed from that in the other growth stages, showing a slightly decreasing trend, while the variation amplitude of the maximum temperature is smaller than that of the minimum temperature, indicating that the temperature difference between day and night is gradually decreasing. Precipitation in the growth period also showed a decreasing non-linear trend, while inter-annual variability with quasi-3-year and quasi-6-year periods dominated the variation of temperature and precipitation. The whole growth period was shortened by 10.7 days, and the sowing date was advanced by approximately 11 days. We also found a significant resonance period among temperature, precipitation, and phenology. Overall, phenology is negatively correlated with temperature and positively correlated with precipitation. The results illustrate that the climate suitability for maize has decreased over the past decades.

  13. [Determination of sennosides and degraded products in the process of sennoside metabolism by HPLC].

    PubMed

    Sun, Yan; Li, Xuetuo; Yu, Xingju

    2004-01-01

    A method for the separation and determination of sennosides A and B and the main components of their degradation products (sennidins A and B) by linear-gradient high performance liquid chromatography has been developed. Separation conditions were as follows: column, a Spherisorb C18 column (250 mm x 4.6 mm i.d., 10 microm); column temperature, 40 degrees C; detection wavelength, 360 nm; mobile phase A, 1.25% acetic acid aqueous solution; mobile phase B, methanol; linear gradient, 100% A --> (20 min) 100% B. The method is effective, quick, accurate and reproducible. The satisfactory results show that this new method has practical value for real-time analysis of the sennoside metabolic process.

  14. Investigation of a tubular dual-stator flux-switching permanent-magnet linear generator for free-piston energy converter

    NASA Astrophysics Data System (ADS)

    Sui, Yi; Zheng, Ping; Tong, Chengde; Yu, Bin; Zhu, Shaohong; Zhu, Jianguo

    2015-05-01

    This paper describes a tubular dual-stator flux-switching permanent-magnet (PM) linear generator for free-piston energy converter. The operating principle, topology, and design considerations of the machine are investigated. Combining the motion characteristic of free-piston Stirling engine, a tubular dual-stator PM linear generator is designed by finite element method. Some major structural parameters, such as the outer and inner radii of the mover, PM thickness, mover tooth width, tooth width of the outer and inner stators, etc., are optimized to improve the machine performances like thrust capability and power density. In comparison with conventional single-stator PM machines like moving-magnet linear machine and flux-switching linear machine, the proposed dual-stator flux-switching PM machine shows advantages in higher mass power density, higher volume power density, and lighter mover.

  15. Application of General Regression Neural Network to the Prediction of LOD Change

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-Hong; Wang, Qi-Jie; Zhu, Jian-Jun; Zhang, Hao

    2012-01-01

    Traditional methods for predicting the change in length of day (LOD change) are mainly based on linear models, such as the least squares model and the autoregression model. However, the LOD change comprises complicated non-linear factors, and the prediction performance of linear models is often not ideal. Thus, a non-linear neural network, the general regression neural network (GRNN) model, is applied to predict the LOD change, and the result is compared with the predictions obtained from the BP (back propagation) neural network model and other models. The comparison shows that applying the GRNN to the prediction of the LOD change is highly effective and feasible.
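    A GRNN is essentially Nadaraya-Watson kernel regression with a single spread parameter σ. A minimal sketch on a synthetic series (a sine curve standing in for a de-trended LOD series, not real Earth-rotation data):

    ```python
    import numpy as np

    # Minimal GRNN: the prediction is a Gaussian-kernel weighted average of the
    # training targets; sigma is the only tuning parameter.
    def grnn_predict(x_train, y_train, x_query, sigma=0.03):
        d2 = (x_query[:, None] - x_train[None, :]) ** 2
        w = np.exp(-d2 / (2.0 * sigma ** 2))       # kernel weight per sample
        return (w @ y_train) / w.sum(axis=1)

    x = np.linspace(0.0, 1.0, 50)
    y = np.sin(2 * np.pi * x)                      # synthetic stand-in series
    xq = np.array([0.25, 0.5, 0.75])
    pred = grnn_predict(x, y, xq)
    print(pred)
    ```

    With a small σ the network reproduces the training signal almost exactly; in practice σ is chosen by cross-validation.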

  16. Parallel Dynamics Simulation Using a Krylov-Schwarz Linear Solution Scheme

    DOE PAGES

    Abhyankar, Shrirang; Constantinescu, Emil M.; Smith, Barry F.; ...

    2016-11-07

    Fast dynamics simulation of large-scale power systems is a computational challenge because of the need to solve a large set of stiff, nonlinear differential-algebraic equations at every time step. The main bottleneck in dynamic simulations is the solution of a linear system during each nonlinear iteration of Newton's method. In this paper, we present a parallel Krylov-Schwarz linear solution scheme that uses the Krylov subspace-based iterative linear solver GMRES with an overlapping restricted additive Schwarz preconditioner. Performance tests of the proposed Krylov-Schwarz scheme for several large test cases ranging from 2,000 to 20,000 buses, including a real utility network, show good scalability on different computing architectures.
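    A much-simplified sketch of the idea: GMRES from scipy.sparse.linalg preconditioned with non-overlapping block-Jacobi solves, i.e. an additive Schwarz preconditioner without the overlap and restriction used in the paper.

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Model problem standing in for the Newton-step linear system.
    n, nblocks = 200, 4
    A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    # Factor each diagonal block once; applying the block inverses is the
    # (non-overlapping) Schwarz-style preconditioner.
    size = n // nblocks
    lus = [spla.splu(A[i:i + size, i:i + size].tocsc()) for i in range(0, n, size)]

    def apply_precond(r):
        z = np.empty_like(r)
        for k, lu in enumerate(lus):
            s = slice(k * size, (k + 1) * size)
            z[s] = lu.solve(r[s])
        return z

    M = spla.LinearOperator((n, n), matvec=apply_precond)
    x, info = spla.gmres(A, b, M=M)
    print(info, np.linalg.norm(A @ x - b))   # info == 0 on convergence
    ```

    In the parallel setting each subdomain block lives on its own process, which is where the scalability reported in the paper comes from.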

  17. Effect of linear energy on the properties of an AL alloy in DPMIG welding

    NASA Astrophysics Data System (ADS)

    Liao, Tianfa; Jin, Li; Xue, Jiaxiang

    2018-01-01

    The effect of different linear energy parameters on the DPMIG welding performance of AA1060 aluminium alloy is studied in this paper. The stability of the welding process is verified with a LabVIEW electrical signal acquisition system, and the microstructure and tensile properties of the welded joint are studied via optical microscopy, scanning electron microscopy and tensile tests. The test results show that the welding process for the DPMIG method is stable and that the weld beads show a fish-scale appearance. Tensile strength results indicate that, with increasing linear energy, the tensile strength first increases and then decreases. The tensile strength of the joint is maximized when the linear energy is 120.5 J/mm.

  18. Parallel Dynamics Simulation Using a Krylov-Schwarz Linear Solution Scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abhyankar, Shrirang; Constantinescu, Emil M.; Smith, Barry F.

    Fast dynamics simulation of large-scale power systems is a computational challenge because of the need to solve a large set of stiff, nonlinear differential-algebraic equations at every time step. The main bottleneck in dynamic simulations is the solution of a linear system during each nonlinear iteration of Newton's method. In this paper, we present a parallel Krylov-Schwarz linear solution scheme that uses the Krylov subspace-based iterative linear solver GMRES with an overlapping restricted additive Schwarz preconditioner. Performance tests of the proposed Krylov-Schwarz scheme for several large test cases ranging from 2,000 to 20,000 buses, including a real utility network, show good scalability on different computing architectures.

  19. Axial calibration methods of piezoelectric load sharing dynamometer

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Chang, Qingbing; Ren, Zongjin; Shao, Jun; Wang, Xinlei; Tian, Yu

    2018-06-01

    The relationship between the input and output of a load sharing dynamometer is strongly non-linear at different loading points in a plane, so precisely calibrating this non-linear relationship is essential for accurate force measurement. In this paper, calibration experiments at different loading points in a plane are first performed on a piezoelectric load sharing dynamometer. The load sharing testing system is then calibrated with both the BP (back propagation) and ELM (Extreme Learning Machine) algorithms. The results show that ELM calibrates the non-linear relationship between the input and output of the load sharing dynamometer at different loading points better than BP, which verifies that the ELM algorithm is feasible for solving this non-linear force measurement problem.
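    ELM trains a single-hidden-layer network by drawing the hidden weights at random and solving only the output weights in one least-squares step. A minimal single-output sketch on a synthetic saturating calibration curve (not the paper's dynamometer data):

    ```python
    import numpy as np

    # Minimal Extreme Learning Machine: random tanh hidden layer, output
    # weights fitted by ordinary least squares.
    rng = np.random.default_rng(42)

    def elm_fit(x, y, n_hidden=60):
        w = rng.uniform(-8.0, 8.0, n_hidden)      # random input weights
        b = rng.uniform(-8.0, 8.0, n_hidden)      # random biases
        H = np.tanh(np.outer(x, w) + b)           # hidden-layer activations
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return w, b, beta

    def elm_predict(model, x):
        w, b, beta = model
        return np.tanh(np.outer(x, w) + b) @ beta

    load = np.linspace(0.0, 1.0, 100)             # normalised load
    reading = np.tanh(4.0 * (load - 0.5))         # synthetic non-linear response

    model = elm_fit(load, reading)
    rmse = float(np.sqrt(np.mean((elm_predict(model, load) - reading) ** 2)))
    print(rmse)
    ```

    Because only the output layer is solved for, training is a single linear solve, which is why ELM calibration is fast compared with iteratively trained BP networks.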

  20. A Method for Generating Reduced-Order Linear Models of Multidimensional Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Chicatelli, Amy; Hartley, Tom T.

    1998-01-01

    Simulation of high-speed propulsion systems may be divided into two categories, nonlinear and linear. The nonlinear simulations are usually based on multidimensional computational fluid dynamics (CFD) methodologies and tend to provide high-resolution results that show the fine detail of the flow. Consequently, these simulations are large, numerically intensive, and run much slower than real-time. The linear simulations are usually based on large lumping techniques that are linearized about a steady-state operating condition. These simplistic models often run at or near real-time but do not always capture the detailed dynamics of the plant. Under a grant sponsored by the NASA Lewis Research Center, Cleveland, Ohio, a new method has been developed that can be used to generate improved linear models for control design from multidimensional steady-state CFD results. This CFD-based linear modeling technique provides a small perturbation model that can be used for control applications and real-time simulations. It is important to note the utility of the modeling procedure; all that is needed to obtain a linear model of the propulsion system is the geometry and steady-state operating conditions from a multidimensional CFD simulation or experiment. This research represents a beginning step in establishing a bridge between the controls discipline and the CFD discipline so that the control engineer is able to effectively use multidimensional CFD results in control system design and analysis.

  1. Scoring and staging systems using cox linear regression modeling and recursive partitioning.

    PubMed

    Lee, J W; Um, S H; Lee, J B; Mun, J; Cho, H

    2006-01-01

    Scoring and staging systems are used to determine the order and class of data according to predictors. Systems used for medical data, such as the Child-Turcotte-Pugh scoring and staging systems for ordering and classifying patients with liver disease, are often derived strictly from physicians' experience and intuition. We construct objective and data-based scoring/staging systems using statistical methods. We consider Cox linear regression modeling and recursive partitioning techniques for censored survival data. In particular, to obtain a target number of stages we propose cross-validation and amalgamation algorithms. We also propose an algorithm for constructing scoring and staging systems by integrating local Cox linear regression models into recursive partitioning, so that we can retain the merits of both methods such as superior predictive accuracy, ease of use, and detection of interactions between predictors. The staging system construction algorithms are compared by cross-validation evaluation of real data. The data-based cross-validation comparison shows that Cox linear regression modeling is somewhat better than recursive partitioning when there are only continuous predictors, while recursive partitioning is better when there are significant categorical predictors. The proposed local Cox linear recursive partitioning has better predictive accuracy than Cox linear modeling and simple recursive partitioning. This study indicates that integrating local linear modeling into recursive partitioning can significantly improve prediction accuracy in constructing scoring and staging systems.

  2. Forecasting volcanic eruptions and other material failure phenomena: An evaluation of the failure forecast method

    NASA Astrophysics Data System (ADS)

    Bell, Andrew F.; Naylor, Mark; Heap, Michael J.; Main, Ian G.

    2011-08-01

    Power-law accelerations in the mean rate of strain, earthquakes and other precursors have been widely reported prior to material failure phenomena, including volcanic eruptions, landslides and laboratory deformation experiments, as predicted by several theoretical models. The Failure Forecast Method (FFM), which linearizes the power-law trend, has been routinely used to forecast the failure time in retrospective analyses; however, its performance has never been formally evaluated. Here we use synthetic and real data, recorded in laboratory brittle creep experiments and at volcanoes, to show that the assumptions of the FFM are inconsistent with the error structure of the data, leading to biased and imprecise forecasts. We show that a Generalized Linear Model method provides higher-quality forecasts that converge more accurately to the eventual failure time, accounting for the appropriate error distributions. This approach should be employed in place of the FFM to provide reliable quantitative forecasts and estimate their associated uncertainties.
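    For the common case in which the precursor rate grows as 1/(tf - t), the FFM linearization amounts to fitting a straight line to the inverse rate and reading the failure time tf off the x-intercept. A sketch on noise-free synthetic data (with real data, the abstract argues, this ordinary least-squares step mis-models the error structure and biases the forecast, which is why a GLM fit is preferred):

    ```python
    import numpy as np

    # FFM sketch: rate(t) = k / (tf - t), so the inverse rate
    # 1/rate = (tf - t) / k falls linearly and hits zero at t = tf.
    tf_true, k = 100.0, 2.0
    t = np.arange(0.0, 90.0, 1.0)
    rate = k / (tf_true - t)

    inv_rate = 1.0 / rate
    slope, intercept = np.polyfit(t, inv_rate, 1)
    tf_est = -intercept / slope          # x-intercept = forecast failure time
    print(tf_est)                        # approx 100.0 for noise-free data
    ```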

  3. Linear and non-linear analyses of Conner's Continuous Performance Test-II discriminate adult patients with attention deficit hyperactivity disorder from patients with mood and anxiety disorders.

    PubMed

    Fasmer, Ole Bernt; Mjeldheim, Kristin; Førland, Wenche; Hansen, Anita L; Syrstad, Vigdis Elin Giæver; Oedegaard, Ketil J; Berle, Jan Øystein

    2016-08-11

    Attention Deficit Hyperactivity Disorder (ADHD) is a heterogeneous disorder. Therefore it is important to look for factors that can contribute to better diagnosis and classification of these patients. The aims of the study were to characterize adult psychiatric out-patients with a mixture of mood, anxiety and attentional problems using an objective neuropsychological test of attention combined with an assessment of mood instability. Newly referred patients (n = 99; aged 18-65 years) requiring diagnostic evaluation of ADHD, mood or anxiety disorders were recruited, and were given a comprehensive diagnostic evaluation including the self-report form of the cyclothymic temperament scale and Conner's Continuous Performance Test II (CPT-II). In addition to the traditional measures from this test we have extracted raw data and analysed time series using linear and non-linear mathematical methods. Fifty patients fulfilled criteria for ADHD, while 49 did not, and were given other psychiatric diagnoses (clinical controls). When compared to the clinical controls the ADHD patients had more omission and commission errors, and higher reaction time variability. Analyses of response times showed higher values for skewness in the ADHD patients, and lower values for sample entropy and symbolic dynamics. Among the ADHD patients 59 % fulfilled criteria for a cyclothymic temperament, and this group had higher reaction time variability and lower scores on complexity than the group without this temperament. The CPT-II is a useful instrument in the assessment of ADHD in adult patients. Additional information from this test was obtained by analyzing response times using linear and non-linear methods, and this showed that ADHD patients with a cyclothymic temperament were different from those without this temperament.
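    Sample entropy, one of the non-linear measures mentioned, can be sketched as follows. This is a common formulation with template length m and tolerance r = 0.2·SD; the series below are synthetic, not CPT-II response times.

    ```python
    import numpy as np

    # Sample entropy: -ln(A/B), where B counts pairs of length-m templates
    # within tolerance r (Chebyshev distance) and A the same for length m+1.
    def sample_entropy(x, m=2, r=None):
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * x.std()
        def count_matches(mm):
            templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            n = 0
            for i in range(len(templates)):
                d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                n += int(np.sum(d <= r))
            return n
        b, a = count_matches(m), count_matches(m + 1)
        return -np.log(a / b)

    rng = np.random.default_rng(0)
    periodic = np.tile([1.0, 2.0, 3.0, 4.0], 100)   # highly regular series
    noise = rng.normal(size=400)                     # irregular series
    se_regular = sample_entropy(periodic)
    se_noise = sample_entropy(noise)
    print(se_regular, se_noise)
    ```

    A regular series yields a value near zero while an irregular one yields a much larger value, matching the abstract's finding of lower entropy (more regular responding) in some patient subgroups.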

  4. Interpreting linear support vector machine models with heat map molecule coloring

    PubMed Central

    2011-01-01

    Background Model-based virtual screening plays an important role in the early drug discovery stage. The outcomes of high-throughput screenings are a valuable source for machine learning algorithms to infer such models. Besides strong performance, the interpretability of a machine learning model is a desired property to guide the optimization of a compound in later drug discovery stages. Linear support vector machines have been shown to deliver convincing performance on large-scale data sets. The goal of this study is to present a heat map molecule coloring technique to interpret linear support vector machine models. Based on the weights of a linear model, the visualization approach colors each atom and bond of a compound according to its importance for activity. Results We evaluated our approach on a toxicity data set, a chromosome aberration data set, and the maximum unbiased validation data sets. The experiments show that our method sensibly visualizes structure-property and structure-activity relationships of a linear support vector machine model. The coloring of ligands in the binding pocket of several crystal structures of a maximum unbiased validation data set target indicates that our approach helps to determine the correct ligand orientation in the binding pocket. Additionally, the heat map coloring enables the identification of substructures important for the binding of an inhibitor. Conclusions In combination with heat map coloring, linear support vector machine models can help to guide the modification of a compound in later stages of drug discovery. Particularly, substructures identified as important by our method might be a starting point for optimization of a lead compound. The heat map coloring should be considered as complementary to structure-based modeling approaches. As such, it helps to get a better understanding of the binding mode of an inhibitor. PMID:21439031

  5. Nonlinear Thermal Instability in Compressible Viscous Flows Without Heat Conductivity

    NASA Astrophysics Data System (ADS)

    Jiang, Fei

    2018-04-01

    We investigate the thermal instability of a smooth equilibrium state, in which the density function satisfies Schwarzschild's (instability) condition, of a compressible viscous flow without heat conductivity in the presence of a uniform gravitational field in a three-dimensional bounded domain. We show that the equilibrium state is linearly unstable by a modified variational method. Then, based on the constructed linearly unstable solutions and a local well-posedness result of classical solutions to the original nonlinear problem, we further construct the initial data of linearly unstable solutions to be the one of the original nonlinear problem, and establish an appropriate energy estimate of Gronwall-type. With the help of the established energy estimate, we finally show that the equilibrium state is nonlinearly unstable in the sense of Hadamard by a careful bootstrap instability argument.

  6. Typical Werner states satisfying all linear Bell inequalities with dichotomic measurements

    NASA Astrophysics Data System (ADS)

    Luo, Ming-Xing

    2018-04-01

    Quantum entanglement as a special resource inspires various distinct applications in quantum information processing. Unfortunately, it is NP-hard to detect general quantum entanglement using Bell testing. Our goal is to investigate quantum entanglement with white noises that appear frequently in experiments and quantum simulations. Surprisingly, for almost all multipartite generalized Greenberger-Horne-Zeilinger states there are entangled noisy states that satisfy all linear Bell inequalities consisting of full correlations with dichotomic inputs and outputs of each local observer. This result shows generic undetectability of mixed entangled states, in contrast to Gisin's theorem for pure bipartite entangled states in terms of Bell nonlocality. We further provide an accessible method to show a nontrivial set of noisy entangled states with a small number of parties satisfying all general linear Bell inequalities. These results imply typical incompleteness of special Bell theory in explaining entanglement.

  7. A simplified method for power-law modelling of metabolic pathways from time-course data and steady-state flux profiles.

    PubMed

    Kitayama, Tomoya; Kinoshita, Ayako; Sugimoto, Masahiro; Nakayama, Yoichi; Tomita, Masaru

    2006-07-17

    In order to improve understanding of metabolic systems there have been attempts to construct S-system models from time courses. Conventionally, non-linear curve-fitting algorithms have been used for modelling, because of the non-linear properties of parameter estimation from time series. However, the huge iterative calculations required have hindered the development of large-scale metabolic pathway models. To solve this problem we propose a novel method involving power-law modelling of metabolic pathways from the Jacobian of the targeted system and the steady-state flux profiles by linearization of S-systems. The results of two case studies modelling a straight and a branched pathway, respectively, showed that our method reduced the number of unknown parameters needing to be estimated. The time-courses simulated by conventional kinetic models and those described by our method behaved similarly under a wide range of perturbations of metabolite concentrations. The proposed method reduces calculation complexity and facilitates the construction of large-scale S-system models of metabolic pathways, realizing a practical application of reverse engineering of dynamic simulation models from the Jacobian of the targeted system and steady-state flux profiles.
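    Linearizing an S-system around a steady state reduces each kinetic order to a logarithmic sensitivity, g = (∂v/∂X)·(X/v), evaluated from the Jacobian and the flux. A sketch assuming a Michaelis-Menten rate law, for which the analytic kinetic order is Km/(Km + S):

    ```python
    import numpy as np

    # Power-law (S-system) kinetic order as a logarithmic sensitivity:
    # g = (dv/dX) * (X / v) at the operating point. For v = Vmax*S/(Km+S)
    # the analytic value is Km/(Km+S).
    vmax, km, s0 = 1.0, 0.5, 2.0

    def v(s):
        return vmax * s / (km + s)

    h = 1e-6
    dv_ds = (v(s0 + h) - v(s0 - h)) / (2 * h)   # central-difference derivative
    g = dv_ds * s0 / v(s0)                      # kinetic order at s0

    print(g, km / (km + s0))                    # both approx 0.2
    ```

    In the paper's setting the derivatives come from the system Jacobian rather than finite differences, but the reduction to one algebraic evaluation per parameter is the same, which is what removes the iterative curve fitting.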

  8. A partially penalty immersed Crouzeix-Raviart finite element method for interface problems.

    PubMed

    An, Na; Yu, Xijun; Chen, Huanzhen; Huang, Chaobao; Liu, Zhongyan

    2017-01-01

    Elliptic equations with discontinuous coefficients are often used to describe problems involving multiple materials or fluids with different densities, conductivities, or diffusivities. In this paper we develop a partially penalty immersed finite element (PIFE) method on triangular grids for anisotropic flow models, in which the diffusion coefficient is a piecewise positive-definite matrix. The standard linear Crouzeix-Raviart type finite element space is used on non-interface elements, and a piecewise linear Crouzeix-Raviart type immersed finite element (IFE) space is constructed on interface elements. The piecewise linear functions satisfying the interface jump conditions are uniquely determined by the integral averages on the edges as degrees of freedom. The PIFE scheme is given based on the symmetric, nonsymmetric or incomplete interior penalty discontinuous Galerkin formulation. The solvability of the method is proved and optimal error estimates in the energy norm are obtained. Numerical experiments are presented to confirm our theoretical analysis and show that the newly developed PIFE method has optimal-order convergence in the [Formula: see text] norm as well. In addition, numerical examples also indicate that this method is valid for both isotropic and anisotropic elliptic interface problems.

  9. Polymeric mercaptosilane-modified platinum electrodes for elimination of interferants in glucose biosensors.

    PubMed

    Jung, S K; Wilson, G S

    1996-02-15

    An oxidase-based glucose sensor has been developed that uses a mercaptosilane-modified platinum electrode to achieve selectivity against electrochemical interferants. A platinum-iridium (9:1) wire (0.178 mm o.d., sensing area of 1.12 mm2) is modified with (3-mercaptopropyl)trimethoxysilane. The modified sensors show excellent operational stability for more than 5 days. Glucose oxidase is immobilized on the modified surface (i) by using 3-maleimidopropionic acid as a linker or (ii) by cross-linking with bovine serum albumin using glutaraldehyde. Sensitivities of about 9.97 nA/mM glucose are observed when the enzyme is immobilized by method ii. Lower sensitivities (1.13 x 10(-1) nA/mM glucose) are observed when immobilization method i is employed. In terms of linear response range, the sensor with enzyme immobilized by method i is superior to that immobilized by method ii. The linearity is improved upon coating the enzyme layer with polyurethane. The sensor immobilized by method ii and coated with polyurethane exhibits a linear range up to 15 mM glucose and excellent selectivity for glucose (0.47 nA/mM) against interferants such as ascorbic acid, uric acid, and acetaminophen.

  10. DOA Finding with Support Vector Regression Based Forward-Backward Linear Prediction.

    PubMed

    Pan, Jingjing; Wang, Yide; Le Bastard, Cédric; Wang, Tianzhen

    2017-05-27

    Direction-of-arrival (DOA) estimation has drawn considerable attention in array signal processing, particularly with coherent signals and a limited number of snapshots. Forward-backward linear prediction (FBLP) can deal directly with coherent signals, while support vector regression (SVR) is robust with small samples. This paper proposes combining the advantages of FBLP and SVR to estimate the DOAs of coherent incoming signals with few snapshots. The performance of the proposed method is validated with numerical simulations in coherent scenarios, in terms of different angle separations, numbers of snapshots, and signal-to-noise ratios (SNRs). Simulation results show the effectiveness of the proposed method.
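The FBLP building block can be sketched without the SVR component: stack forward and backward prediction equations, solve for the prediction coefficients in the least-squares sense, and read the spatial frequency off the root of the prediction polynomial nearest the unit circle. The example below is a noiseless, single-source toy case with an invented frequency; with noise and multiple sources the least-squares step is what the paper replaces by SVR.

```python
import numpy as np

# Hypothetical single-source example: one complex exponential standing in for
# a far-field signal sampled across a uniform linear array (noiseless).
f_true = 0.12                      # normalized spatial frequency (invented)
n = np.arange(32)
x = np.exp(2j * np.pi * f_true * n)

M = 3                              # prediction order
# Forward prediction rows:  x[m] = sum_k a[k] * x[m-1-k]
F = np.array([x[m - 1 - np.arange(M)] for m in range(M, len(x))])
f_rhs = x[M:]
# Backward prediction rows on the conjugated signal: x*[m] = sum_k a[k] * x*[m+1+k]
B = np.array([x[m + 1 + np.arange(M)].conj() for m in range(len(x) - M)])
b_rhs = x[:len(x) - M].conj()

A = np.vstack([F, B])
rhs = np.concatenate([f_rhs, b_rhs])
a, *_ = np.linalg.lstsq(A, rhs, rcond=None)

# Roots of the prediction polynomial z^M - a0*z^(M-1) - ... - a(M-1)
roots = np.roots(np.concatenate([[1.0], -a]))
# The signal root lies on the unit circle; the extraneous roots of the
# minimum-norm solution fall inside it, so pick the root nearest |z| = 1
z = roots[np.argmin(np.abs(np.abs(roots) - 1.0))]
f_est = np.angle(z) / (2 * np.pi)
print(f_est)
```

In a DOA setting the recovered frequency maps to an arrival angle through the array geometry (e.g. sin of the angle times element spacing over wavelength).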

  11. Detecting multiple outliers in linear functional relationship model for circular variables using clustering technique

    NASA Astrophysics Data System (ADS)

    Mokhtar, Nurkhairany Amyra; Zubairi, Yong Zulina; Hussin, Abdul Ghapor

    2017-05-01

    Outlier detection has been used extensively in data analysis to identify anomalous observations, and it has important applications in fraud detection and robust analysis. In this paper, we propose a method for detecting multiple outliers for circular variables in the linear functional relationship model. Using the residual values of the Caires and Wyatt model, we apply a hierarchical clustering procedure, and with a tree diagram we illustrate the graphical detection of outliers. A simulation study is performed to verify the accuracy of the proposed method, and an application to a real data set is given to show its practical applicability.
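The residual-clustering idea can be sketched with SciPy's hierarchical clustering: observations whose residuals fall outside the dominant cluster are flagged as outliers. The residuals below are synthetic stand-ins (the paper uses residuals of the circular functional relationship model), and the cut distance is hand-picked.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)

# Synthetic residuals: most observations sit near zero, five do not (invented)
residuals = np.concatenate([rng.normal(0, 0.05, 95),
                            [1.2, 1.5, -1.3, 1.1, -1.4]])

# Single-linkage hierarchical clustering of the 1-D residuals, cut at a
# hand-picked distance threshold
Z = linkage(residuals[:, None], method="single")
labels = fcluster(Z, t=0.5, criterion="distance")

# Points outside the dominant cluster are flagged as outliers
main = np.bincount(labels).argmax()
outliers = np.where(labels != main)[0]
print(outliers)
```

The tree diagram mentioned in the abstract corresponds to the dendrogram of `Z`; cutting it at the chosen height yields the same cluster labels.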

  12. Stress Induced in Periodontal Ligament under Orthodontic Loading (Part II): A Comparison of Linear Versus Non-Linear Fem Study.

    PubMed

    Hemanth, M; Deoli, Shilpi; Raghuveer, H P; Rani, M S; Hegde, Chatura; Vedavathi, B

    2015-09-01

    Simulation of the periodontal ligament (PDL) using non-linear finite element method (FEM) analysis gives better insight into the biology of tooth movement. The stresses in the PDL were evaluated for intrusion and lingual root torque using non-linear properties. A three-dimensional (3D) FEM model of the maxillary incisors was generated using Solidworks modeling software. Stresses in the PDL were evaluated for intrusive and lingual root torque movements by 3D FEM using ANSYS software, and the linear and non-linear analyses were compared. For intrusive and lingual root torque movements with linear properties, the distribution of stress over the PDL was within the range of optimal stress values proposed by Lee, but exceeded the force system given by Proffit as optimal for orthodontic tooth movement. When the same force load was applied in the non-linear analysis, the stresses were higher than in the linear analysis and were beyond the optimal stress range proposed by Lee for both intrusive and lingual root torque. To obtain the same stress as in the linear analysis, iterations were done using non-linear properties and the force level was reduced. This shows that the force level required for non-linear analysis is lower than that for linear analysis.

  13. Fast radiative transfer models for retrieval of cloud properties in the back-scattering region: application to DSCOVR-EPIC sensor

    NASA Astrophysics Data System (ADS)

    Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego

    2017-04-01

    In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.

  14. Finite Differences and Collocation Methods for the Solution of the Two Dimensional Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules

    1999-01-01

    In this paper we combine finite difference approximations (for the spatial derivatives) and collocation techniques (for the time component) to numerically solve the two dimensional heat equation. We employ second-order and fourth-order schemes for the spatial derivatives, and the discretization method gives rise to a linear system of equations. We show that the matrix of the system is non-singular. Numerical experiments carried out on serial computers show the unconditional stability of the proposed method and the high accuracy achieved by the fourth-order scheme.
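The spatial half of the scheme can be sketched with the standard second-order five-point stencil. The time component below is plain explicit Euler, used only to keep the sketch short (the paper instead uses collocation in time, which gives the unconditional stability mentioned above); the test problem is a single decaying Fourier mode with a known exact solution.

```python
import numpy as np

# u_t = alpha * (u_xx + u_yy) on the unit square, homogeneous Dirichlet BCs,
# second-order central differences in space, explicit Euler in time.
alpha, n = 1.0, 21
h = 1.0 / (n - 1)
dt = 0.2 * h ** 2 / alpha            # well inside the explicit stability limit

x = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(np.pi * X) * np.sin(np.pi * Y)   # exact mode: decays as exp(-2 pi^2 alpha t)

t = 0.0
for _ in range(200):
    # five-point Laplacian on the interior
    lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
           - 4 * u[1:-1, 1:-1]) / h ** 2
    u[1:-1, 1:-1] += dt * alpha * lap
    t += dt

exact = np.exp(-2 * np.pi ** 2 * alpha * t) * np.sin(np.pi * X) * np.sin(np.pi * Y)
print(np.max(np.abs(u - exact)))
```

Replacing the Euler step by collocation in time, as in the paper, turns each time step into a linear system whose matrix is the one shown to be non-singular.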

  15. Reanalysis of cancer mortality in Japanese A-bomb survivors exposed to low doses of radiation: bootstrap and simulation methods

    PubMed Central

    2009-01-01

    Background The International Commission on Radiological Protection (ICRP) recommended annual occupational dose limit is 20 mSv. Cancer mortality in Japanese A-bomb survivors exposed to less than 20 mSv external radiation in 1945 was analysed previously, using a latency model with non-linear dose response. Questions were raised regarding statistical inference with this model. Methods Cancers with over 100 deaths in the 0 - 20 mSv subcohort of the 1950-1990 Life Span Study are analysed with Poisson regression models incorporating latency, allowing linear and non-linear dose response. Bootstrap percentile and bias-corrected accelerated (BCa) methods and simulation of the Likelihood Ratio Test lead to Confidence Intervals for Excess Relative Risk (ERR) and tests against the linear model. Results The linear model shows significant large, positive values of ERR for liver and urinary cancers at latencies from 37 - 43 years. Dose response below 20 mSv is strongly non-linear at the optimal latencies for the stomach (11.89 years), liver (36.9), lung (13.6), leukaemia (23.66), and pancreas (11.86) and across broad latency ranges. Confidence Intervals for ERR are comparable using Bootstrap and Likelihood Ratio Test methods and BCa 95% Confidence Intervals are strictly positive across latency ranges for all 5 cancers. Similar risk estimates for 10 mSv (lagged dose) are obtained from the 0 - 20 mSv and 5 - 500 mSv data for the stomach, liver, lung and leukaemia. Dose response for the latter 3 cancers is significantly non-linear in the 5 - 500 mSv range. Conclusion Liver and urinary cancer mortality risk is significantly raised using a latency model with linear dose response. A non-linear model is strongly superior for the stomach, liver, lung, pancreas and leukaemia. Bootstrap and Likelihood-based confidence intervals are broadly comparable and ERR is strictly positive by bootstrap methods for all 5 cancers. Except for the pancreas, similar estimates of latency and risk from 10 mSv are obtained from the 0 - 20 mSv and 5 - 500 mSv subcohorts. Large and significant cancer risks for Japanese survivors exposed to less than 20 mSv external radiation from the atomic bombs in 1945 cast doubt on the ICRP recommended annual occupational dose limit. PMID:20003238
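The percentile bootstrap used above can be sketched in a few lines. This is a generic version applied to a simple statistic on synthetic data (the paper bootstraps excess relative risk from Poisson regression, and additionally uses the BCa correction, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data; in the paper the resampled quantity is the ERR
# estimate from a Poisson regression fit, not a simple mean.
sample = rng.exponential(scale=2.0, size=200)

def percentile_ci(data, stat, n_boot=5000, alpha=0.05, rng=rng):
    """Percentile bootstrap confidence interval for stat(data)."""
    stats = np.array([stat(rng.choice(data, size=len(data), replace=True))
                      for _ in range(n_boot)])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

lo, hi = percentile_ci(sample, np.mean)
print(lo, hi)
```

A strictly positive interval for the resampled ERR, as reported in the Results, is the bootstrap analogue of rejecting ERR = 0 at the 5% level.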

  16. A composite step conjugate gradients squared algorithm for solving nonsymmetric linear systems

    NASA Astrophysics Data System (ADS)

    Chan, Tony; Szeto, Tedd

    1994-03-01

    We propose a new and more stable variant of the CGS method [27] for solving nonsymmetric linear systems. The method is based on squaring the Composite Step BCG method, introduced recently by Bank and Chan [1,2], which is itself a stabilized variant of BCG in that it skips over steps for which the BCG iterate is not defined, a situation that causes one kind of breakdown in BCG. By doing this, we obtain a method (Composite Step CGS, or CSCGS) which not only handles the breakdowns described above, but does so with the advantages of CGS, namely, no multiplications by the transpose matrix and a faster convergence rate than BCG. Our strategy for deciding whether to skip a step does not involve any machine-dependent parameters and is designed to skip near-breakdowns as well as to produce smoother iterates. Numerical experiments show that the new method does produce improved performance over CGS on practical problems.
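The baseline that CSCGS improves on is available in SciPy as `scipy.sparse.linalg.cgs` (the classical method, not the composite step variant proposed here). A minimal usage sketch on an invented nonsymmetric, diagonally dominant system:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cgs

# A small nonsymmetric, diagonally dominant tridiagonal system. Like BCG
# squared, CGS needs no products with the transpose matrix.
n = 100
A = diags([-1.0, 4.0, -2.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cgs(A, b)
print(info, np.linalg.norm(A @ x - b))   # info == 0 signals convergence
```

On well-conditioned problems like this one plain CGS converges without incident; the composite step logic matters precisely on the near-breakdown cases the abstract describes, where the plain BCG/CGS iterate is undefined or wildly oscillatory.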

  17. [Determination of protein by CdS quantum dot fluorometry].

    PubMed

    Hu, Wei-Ping; Jiao, Man; Dong, Xue-Zhi; Wang, Xin

    2011-02-01

    Determination of protein content by fluorometry was carried out. In this experiment, CdS quantum dots (QDs), which have special spectral properties, were prepared by a hydrothermal synthesis method with sodium hexametaphosphate as stabilizer and mercaptoacetic acid as modifier. Based on the increase in fluorescence intensity after CdS reacted with bovine serum albumin (BSA), a new method for the determination of protein was established. Results show that the fluorescence intensity of the system has a good linear relationship with the concentration of BSA in the range of 0.001 43-0.250 mg x mL(-1); the linear equation was F = 5 444.301 03 + 43.327 39c, the correlation coefficient (r) was 0.996 6, and the limit of detection was 0.001 4 mg x mL(-1). The method was used for the determination of protein in milk and egg and compared with the standard (biuret) method, and the results were satisfactory.
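The calibration step behind a reported fit like F = 5 444.3 + 43.33c is ordinary least squares on intensity versus concentration, with the limit of detection commonly taken as 3 times the blank standard deviation divided by the slope. A sketch with invented data points that mimic the reported fit (the blank standard deviation is likewise assumed):

```python
import numpy as np

# Hypothetical calibration points: concentration c in mg/mL, intensity F in
# arbitrary units, invented to mimic the reported line F = 5444.3 + 43.33*c.
c = np.array([0.0014, 0.01, 0.05, 0.10, 0.15, 0.20, 0.25])
F = 5444.3 + 43.33 * c + np.array([0.3, -0.2, 0.1, -0.3, 0.2, -0.1, 0.1])

slope, intercept = np.polyfit(c, F, 1)
r = np.corrcoef(c, F)[0, 1]

# Limit of detection: 3 * (standard deviation of blank readings) / slope
sigma_blank = 0.02          # assumed blank noise, invented
lod = 3 * sigma_blank / slope

print(slope, intercept, r, lod)
```

The correlation coefficient from such a fit is what the abstract reports as r = 0.996 6.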

  18. Simulation Research on Vehicle Active Suspension Controller Based on G1 Method

    NASA Astrophysics Data System (ADS)

    Li, Gen; Li, Hang; Zhang, Shuaiyang; Luo, Qiuhui

    2017-09-01

    Based on the order relation analysis method (G1 method), an optimal linear controller for a vehicle active suspension is designed. First, the active and passive suspension system of a single-wheel vehicle model is established and the system input signal model is determined. Secondly, the state-space equation of the system motion is derived from the dynamics, and the optimal linear controller design is completed with optimal control theory. The weighting coefficients of the performance index for the suspension are determined by the order relation analysis method. Finally, the model is simulated in Simulink. The simulation results show that, with the optimal weights determined by the order relation analysis method under the given road conditions, the vehicle body acceleration, suspension stroke and tire displacement are optimized, improving the comprehensive performance of the vehicle while keeping the active control within the requirements.
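The "optimal linear controller" here is LQR state feedback: the G1 method supplies the weights of the quadratic performance index, and the gain follows from the algebraic Riccati equation. A generic sketch on a double integrator with hand-picked weights (a stand-in for the quarter-car suspension model, whose states and weights are not given in the abstract):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator stand-in for the suspension dynamics (invented model)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # state weights: where the G1 method would enter
R = np.array([[1.0]])      # control effort weight

# Solve the continuous-time algebraic Riccati equation and form the LQR gain
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)      # optimal state feedback u = -K x

# Closed-loop poles should be strictly stable
poles = np.linalg.eigvals(A - B @ K)
print(K, poles)
```

Changing the Q entries, which is what the G1 weighting procedure does systematically, trades body acceleration against suspension stroke and tire displacement in the closed loop.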

  19. Homotopy perturbation method with Laplace Transform (LT-HPM) for solving Lane-Emden type differential equations (LETDEs).

    PubMed

    Tripathi, Rajnee; Mishra, Hradyesh Kumar

    2016-01-01

    In this communication, we describe the Homotopy Perturbation Method with Laplace Transform (LT-HPM), which is used to solve Lane-Emden type differential equations. Lane-Emden type differential equations are difficult to solve numerically. Here we implement this method for two linear homogeneous, two linear nonhomogeneous, and four nonlinear homogeneous Lane-Emden type differential equations and make appropriate comparisons with exact solutions. In several examples, the method gives power-series results closer to the exact solutions than other existing methods. The Laplace transform is used to accelerate the convergence of the power series, and the results, shown in tables and graphs, are in good agreement with other existing methods in the literature. The results show that LT-HPM is very effective and easy to implement.

  20. Graph-based linear scaling electronic structure theory.

    PubMed

    Niklasson, Anders M N; Mniszewski, Susan M; Negre, Christian F A; Cawkwell, Marc J; Swart, Pieter J; Mohd-Yusof, Jamal; Germann, Timothy C; Wall, Michael E; Bock, Nicolas; Rubensson, Emanuel H; Djidjev, Hristo

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  1. Graph-based linear scaling electronic structure theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niklasson, Anders M. N., E-mail: amn@lanl.gov; Negre, Christian F. A.; Cawkwell, Marc J.

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  2. Can fractal methods applied to video tracking detect the effects of deltamethrin pesticide or mercury on the locomotion behavior of shrimps?

    PubMed

    Tenorio, Bruno Mendes; da Silva Filho, Eurípedes Alves; Neiva, Gentileza Santos Martins; da Silva, Valdemiro Amaro; Tenorio, Fernanda das Chagas Angelo Mendes; da Silva, Themis de Jesus; Silva, Emerson Carlos Soares E; Nogueira, Romildo de Albuquerque

    2017-08-01

    Shrimps can accumulate environmental toxicants and suffer behavioral changes. However, methods to quantitatively detect changes in the behavior of these shrimps are still needed. The present study aims to verify whether mathematical and fractal methods applied to video tracking can adequately describe changes in the locomotion behavior of shrimps exposed to low concentrations of toxic chemicals, such as 0.15 µg L(-1) deltamethrin pesticide or 10 µg L(-1) mercuric chloride. Results showed no change after 1 min or after 4, 24, and 48 h of treatment. However, after 72 and 96 h of treatment, the linear methods describing the track length, mean speed, and mean distance from the current to the previous track point, as well as the non-linear methods of fractal dimension (box counting or information entropy) and multifractal analysis, were able to detect changes in the locomotion behavior of shrimps exposed to deltamethrin. Analysis of the angular parameters of the track point vectors and lacunarity were not sensitive to those changes. None of the methods detected effects of mercury exposure. These mathematical and fractal methods, implemented in software, represent low-cost, useful tools for toxicological analyses of shrimps in food and water quality control and in biomonitoring of ecosystems. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Reconstructing the Initial Density Field of the Local Universe: Methods and Tests with Mock Catalogs

    NASA Astrophysics Data System (ADS)

    Wang, Huiyuan; Mo, H. J.; Yang, Xiaohu; van den Bosch, Frank C.

    2013-07-01

    Our objective in this paper is to reconstruct an initial linear density field, which follows the multivariate Gaussian distribution with variances given by the linear power spectrum of the current cold dark matter model and evolves through gravitational instability into the present-day density field of the local universe. For this purpose, we develop a Hamiltonian Markov Chain Monte Carlo method to obtain the linear density field from a posterior probability function that consists of two components: a prior of a Gaussian density field with a given linear spectrum and a likelihood term that is given by the current density field. The present-day density field can be reconstructed from galaxy groups using the method developed in Wang et al. Using a realistic mock Sloan Digital Sky Survey DR7 catalog, obtained by populating dark matter halos in the Millennium simulation (MS) with galaxies, we show that our method can effectively and accurately recover both the amplitudes and phases of the initial, linear density field. To examine the accuracy of our method, we use N-body simulations to evolve these reconstructed initial conditions to the present day. The resimulated density field thus obtained accurately matches the original density field of the MS in the density range 0.3 ≲ ρ/ρ̄ ≲ 20 without any significant bias. In particular, the Fourier phases of the resimulated density fields are tightly correlated with those of the original simulation down to a scale corresponding to a wavenumber of ~1 h Mpc(-1), much smaller than the translinear scale, which corresponds to a wavenumber of ~0.15 h Mpc(-1).

  4. HCPCF-based in-line fiber Fabry-Perot refractometer and high sensitivity signal processing method

    NASA Astrophysics Data System (ADS)

    Liu, Xiaohui; Jiang, Mingshun; Sui, Qingmei; Geng, Xiangyi; Song, Furong

    2017-12-01

    An in-line fiber Fabry-Perot interferometer (FPI) based on a hollow-core photonic crystal fiber (HCPCF) for refractive index (RI) measurement is proposed in this paper. The FPI is formed by splicing both ends of a short section of the HCPCF to single-mode fibers (SMFs) and cleaving the SMF pigtail to a proper length. The RI response of the sensor is analyzed theoretically and demonstrated experimentally. The results show that the FPI sensor has a linear response to external RI and good repeatability. The sensitivity calculated from the maximum fringe contrast is -136 dB/RIU. A new spectrum differential integration (SDI) method for signal processing is also presented in this study. In this method, the RI is obtained from the integrated intensity of the absolute difference between the interference spectrum and its smoothed spectrum. The results show that the sensitivity obtained from the integrated intensity is about -1.34×10(5) dB/RIU. Compared with the maximum fringe contrast method, the new SDI method provides higher sensitivity, better linearity, and improved reliability and accuracy, and it is also convenient for automatic and fast signal processing in real-time monitoring of RI.
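The SDI quantity described above can be sketched directly: smooth the spectrum, take the absolute difference, and integrate. The fringes below are a toy cosine model with an invented period and wavelength grid; in the real sensor the fringe contrast, and hence the SDI value, varies with the external refractive index.

```python
import numpy as np

def sdi(spectrum, window=15):
    """Spectrum differential integration: integrated absolute difference
    between the interference spectrum and its moving-average smoothed copy."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(spectrum, kernel, mode="same")
    return np.abs(spectrum - smoothed).sum()

# Toy fringes with a 2 nm period on an invented wavelength grid: a higher
# fringe contrast yields a larger SDI value.
wl = np.linspace(1500, 1600, 2000)          # wavelength in nm
for contrast in (0.2, 0.5, 1.0):
    spectrum = 1.0 + contrast * np.cos(2 * np.pi * wl / 2.0)
    print(contrast, sdi(spectrum))
```

Because the whole spectrum contributes to the integral, the SDI value is less sensitive to noise on any single fringe peak than the maximum-fringe-contrast reading, which is consistent with the reliability improvement reported.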

  5. Analytical and numerical analysis of inverse optimization problems: conditions of uniqueness and computational methods

    PubMed Central

    Zatsiorsky, Vladimir M.

    2011-01-01

    One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of infinitely many possible. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907

  6. Effect of Facet Displacement on Radiation Field and Its Application for Panel Adjustment of Large Reflector Antenna

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Lian, Peiyuan; Zhang, Shuxin; Xiang, Binbin; Xu, Qian

    2017-05-01

    Large reflector antennas are widely used in radar, satellite communication, radio astronomy, and so on. The rapid developments in these fields have created demands for better performance and higher surface accuracy. However, low accuracy and low efficiency are common disadvantages of traditional panel alignment and adjustment. In order to improve the surface accuracy of large reflector antennas, a new method is presented to determine panel adjustment values from the far-field pattern. Based on the method of Physical Optics (PO), the effect of panel facet displacement on the radiation field value is derived, and a linear system is constructed between the panel adjustment vector and the far-field pattern. Using Singular Value Decomposition (SVD), the adjustment values for all panel adjusters are obtained by solving the linear equations. An experiment was conducted on a 3.7 m reflector antenna with 12 segmented panels. The results of simulation and test are similar, which shows that the presented method is feasible. Moreover, the discussion of validation shows that the method can be used for many reflector shapes. The proposed research provides guidance for adjusting surface panels efficiently and accurately.
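The SVD solve at the heart of the method can be sketched generically: given a sensitivity matrix mapping adjuster displacements to far-field perturbations, invert it through its singular value decomposition. Both the matrix and the "true" panel errors below are invented random stand-ins for the PO-derived quantities in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear model d = S @ a: a holds the 12 adjuster displacements,
# S is the sensitivity matrix (derived from physical optics in the paper),
# d is the observed far-field pattern perturbation. All values invented.
n_samples, n_adjusters = 40, 12
S = rng.normal(size=(n_samples, n_adjusters))
a_true = rng.normal(scale=0.1, size=n_adjusters)   # "unknown" panel errors
d = S @ a_true

# Solve for the adjustments via the SVD (pseudo-inverse)
U, s, Vt = np.linalg.svd(S, full_matrices=False)
a_est = Vt.T @ ((U.T @ d) / s)

print(np.max(np.abs(a_est - a_true)))
```

In practice small singular values would be truncated before the division to keep measurement noise from blowing up the adjustment estimates; the noiseless sketch skips that step.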

  7. Linear dynamic coupling in geared rotor systems

    NASA Technical Reports Server (NTRS)

    David, J. W.; Mitchell, L. D.

    1986-01-01

    The effects of high frequency oscillations caused by the gear mesh on components of a geared system that can be modeled as rigid discs are analyzed using linear dynamic coupling terms. The coupled, nonlinear equations of motion for a disc attached to a rotating shaft are presented. The results of a trial problem analysis show that the inclusion of the linear dynamic coupling terms can produce significant changes in the predicted response of geared rotor systems, and that the produced sideband responses are greater than the unbalance response. The method is useful in designing gear drives for heavy-lift helicopters, industrial speed reducers, naval propulsion systems, and heavy off-road equipment.

  8. The research of radar target tracking observed information linear filter method

    NASA Astrophysics Data System (ADS)

    Chen, Zheng; Zhao, Xuanzhi; Zhang, Wen

    2018-05-01

    To address the low precision, or even divergence, caused by the nonlinear observation equation in radar target tracking, a new filtering algorithm is proposed in this paper. In this algorithm, local linearization is carried out on the observed distance and angle data respectively, and the Kalman filter is then applied to the linearized data. After the data are filtered, a mapping operation provides the a posteriori estimate of the target state. A large number of simulation results show that this algorithm solves the above problems effectively, and its performance is better than that of the traditional filtering algorithm for nonlinear dynamic systems.
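The idea of linearizing the polar measurements before filtering can be sketched as follows: each noisy (range, azimuth) pair is converted to Cartesian coordinates, after which an ordinary linear Kalman filter applies. The motion model, noise levels, and initial state below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Constant-velocity target; the radar observes range and azimuth.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)                 # process noise (assumed)
R = 4.0 * np.eye(2)                  # noise of the converted measurement (assumed)

truth = np.array([100.0, 50.0, 1.0, -0.5])   # x, y, vx, vy
xest = np.array([90.0, 60.0, 0.0, 0.0])
P = 100.0 * np.eye(4)

for _ in range(50):
    truth = F @ truth
    # noisy polar measurement, converted ("linearized") to Cartesian
    r = np.hypot(truth[0], truth[1]) + rng.normal(0, 1.0)
    th = np.arctan2(truth[1], truth[0]) + rng.normal(0, 0.01)
    z = np.array([r * np.cos(th), r * np.sin(th)])

    # standard linear Kalman predict/update on the converted data
    xest = F @ xest
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    xest = xest + K @ (z - H @ xest)
    P = (np.eye(4) - K @ H) @ P

print(np.hypot(*(xest[:2] - truth[:2])))
```

A more careful version would make R depend on the current range and bearing, since the conversion stretches the angular noise by the range; the constant R here keeps the sketch short.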

  9. [Study on the 3D mathematical model of the muscle groups applied to the human mandible by a linear programming method].

    PubMed

    Wang, Dongmei; Yu, Liniu; Zhou, Xianlian; Wang, Chengtao

    2004-02-01

    Four types of 3D mathematical models of the muscle groups applied to the human mandible have been developed. One is based on electromyography (EMG) and the others are based on linear programming with different objective functions. Each model contains 26 muscle forces and two joint forces, allowing simulation of static bite forces and concomitant joint reaction forces for various bite point locations and mandibular positions. In this paper, an image processing method was used to measure the positions and directions of the muscle forces on a 3D CAD model built from CT data. The Matlab optimization toolbox was applied to solve the three models based on linear programming. Results show that the model with an objective function requiring a minimum sum of muscle tensions is reasonable and agrees very well with normal physiological activity.
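The minimum-sum-of-tensions criterion is a linear program: minimize the total muscle tension subject to force and moment equilibrium and non-negative tensions. A toy planar version with three muscles and invented coefficients, solved with SciPy in place of the Matlab toolbox:

```python
import numpy as np
from scipy.optimize import linprog

# Toy stand-in for the 26-muscle model: three tensions t_i >= 0 must produce
# a required net bite force and joint moment. All coefficients are invented.
A_eq = np.array([[0.9, 0.7, 0.5],    # force contribution of each muscle
                 [0.2, 0.4, 0.8]])   # moment contribution of each muscle
b_eq = np.array([100.0, 60.0])       # required bite force and moment

# Objective: minimum sum of muscle tensions, the criterion the paper found
# most physiological
res = linprog(c=np.ones(3), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
print(res.x, res.fun)
```

As is typical of linear programming, the optimum sits at a vertex of the feasible set, so some muscles come out exactly zero; that sparsity is one reason alternative (e.g. quadratic) criteria are also used in force-sharing models.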

  10. GPU implementation of the linear scaling three dimensional fragment method for large scale electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Jia, Weile; Wang, Jue; Chi, Xuebin; Wang, Lin-Wang

    2017-02-01

    LS3DF, namely the linear scaling three-dimensional fragment method, is an efficient linear scaling ab initio total energy electronic structure calculation code based on a divide-and-conquer strategy. In this paper, we present our GPU implementation of the LS3DF code. Our test results show that the GPU code can calculate systems with about ten thousand atoms fully self-consistently on the order of 10 min using thousands of computing nodes. This makes the electronic structure calculations of 10,000-atom nanosystems routine work. This speed is 4.5-6 times faster than the CPU calculations using the same number of nodes on the Titan machine in the Oak Ridge Leadership Computing Facility (OLCF). Such speedup is achieved by (a) carefully redesigning the computationally heavy kernels and (b) redesigning the communication pattern for heterogeneous supercomputers.

  11. Transfer Alignment Error Compensator Design Based on Robust State Estimation

    NASA Astrophysics Data System (ADS)

    Lyou, Joon; Lim, You-Chol

    This paper examines the transfer alignment problem of the StrapDown Inertial Navigation System (SDINS), which is subject to the ship's roll and pitch. The major error sources for velocity and attitude matching are the lever arm effect, measurement time delay, and ship-body flexure. To reduce these alignment errors, an error compensation method based on state augmentation and robust state estimation is devised. A linearized error model for the velocity and attitude matching transfer alignment system is derived first by linearizing the nonlinear measurement equation with respect to its time delay and the dominant Y-axis flexure, and by augmenting the delay state and flexure state into the conventional linear state equations. Then an H∞ filter is introduced to account for modeling uncertainties of the time delay and the ship-body flexure. The simulation results show that this method considerably decreases azimuth alignment errors.

  12. Factorizing the factorization - a spectral-element solver for elliptic equations with linear operation count

    NASA Astrophysics Data System (ADS)

    Huismann, Immo; Stiller, Jörg; Fröhlich, Jochen

    2017-10-01

    The paper proposes a novel factorization technique for static condensation of a spectral-element discretization matrix that yields a linear operation count of just 13N multiplications for the residual evaluation, where N is the total number of unknowns. In comparison to previous work it saves a factor larger than 3 and outpaces unfactored variants for all polynomial degrees. Using the new technique as a building block for a preconditioned conjugate gradient method yields linear scaling of the runtime with N, which is demonstrated for polynomial degrees from 2 to 32. This makes the spectral-element method cost-effective even for low polynomial degrees. Moreover, the dependence of the iterative solution on the element aspect ratio is addressed, showing only a slight increase in the number of iterations for aspect ratios up to 128. Hence, the solver is very robust for practical applications.

  13. Ultraprecision XY stage using a hybrid bolt-clamped Langevin-type ultrasonic linear motor for continuous motion.

    PubMed

    Lee, Dong-Jin; Lee, Sun-Kyu

    2015-01-01

    This paper presents a design and control system for an XY stage driven by an ultrasonic linear motor. In this study, a hybrid bolt-clamped Langevin-type ultrasonic linear motor was manufactured and then operated at the resonance frequency of the third longitudinal and the sixth lateral modes. These two modes were matched through the preload adjustment and precisely tuned by the frequency matching method based on the impedance matching method with consideration of the different moving weights. The XY stage was evaluated in terms of position and circular motion. To achieve both fine and stable motion, the controller consisted of a nominal characteristics trajectory following (NCTF) control for continuous motion, dead zone compensation, and a switching controller based on the different NCTFs for the macro- and micro-dynamics regimes. The experimental results showed that the developed stage enables positioning and continuous motion with nanometer-level accuracy.

  14. Multiview Locally Linear Embedding for Effective Medical Image Retrieval

    PubMed Central

    Shen, Hualei; Tao, Dacheng; Ma, Dianfu

    2013-01-01

    Content-based medical image retrieval continues to gain attention for its potential to assist radiological image interpretation and decision making. Many approaches have been proposed to improve the performance of medical image retrieval systems, among which visual features such as SIFT, LBP, and intensity histograms play a critical role. Typically, these features are concatenated into a long vector to represent medical images, and thus traditional dimension reduction techniques such as locally linear embedding (LLE), principal component analysis (PCA), or Laplacian eigenmaps (LE) can be employed to reduce the “curse of dimensionality”. Though these approaches show promising performance for medical image retrieval, the feature-concatenating method ignores the fact that different features have distinct physical meanings. In this paper, we propose a new method called multiview locally linear embedding (MLLE) for medical image retrieval. Following the patch alignment framework, MLLE preserves the geometric structure of the local patch in each feature space according to the LLE criterion. To explore complementary properties among a range of features, MLLE assigns different weights to local patches from different feature spaces. Finally, MLLE employs global coordinate alignment and alternating optimization techniques to learn a smooth low-dimensional embedding from different features. To justify the effectiveness of MLLE for medical image retrieval, we compare it with conventional spectral embedding methods. We conduct experiments on a subset of the IRMA medical image data set. Evaluation results show that MLLE outperforms state-of-the-art dimension reduction methods. PMID:24349277

  15. Blood Density Is Nearly Equal to Water Density: A Validation Study of the Gravimetric Method of Measuring Intraoperative Blood Loss

    PubMed Central

    Vitello, Dominic J.; Ripper, Richard M.; Fettiplace, Michael R.; Weinberg, Guy L.; Vitello, Joseph M.

    2015-01-01

    Purpose. The gravimetric method of weighing surgical sponges is used to quantify intraoperative blood loss. The wet mass minus the dry mass of the gauze equals the volume of blood lost. This method assumes that the density of blood is equivalent to that of water (1 g/mL). This study's purpose was to validate the assumption that the density of blood is equivalent to water and to correlate density with hematocrit. Methods. 50 µL of whole blood was weighed from eighteen rats. A distilled water control was weighed for each blood sample. The averages of the blood and water were compared utilizing a Student's unpaired, one-tailed t-test. The masses of the blood samples and the hematocrits were compared using a linear regression. Results. The average mass of the eighteen blood samples was 0.0489 g and that of the distilled water controls was 0.0492 g. The t-test showed P = 0.2269 and R² = 0.03154. The hematocrit values ranged from 24% to 48%. The linear regression R² value was 0.1767. Conclusions. The R² value comparing the blood and distilled water masses suggests high correlation between the two populations. Linear regression showed the hematocrit was not proportional to the mass of the blood. The study confirmed that the measured density of blood is similar to water. PMID:26464949
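
    As a minimal sketch of the gravimetric bookkeeping described above (the function name is illustrative; the 1.0 g/mL default density is precisely the assumption the study validates):

```python
def blood_loss_mL(wet_mass_g, dry_mass_g, density_g_per_mL=1.0):
    """Gravimetric estimate of blood volume absorbed by a sponge.

    The sponge's mass gain divided by blood density gives the volume.
    Density defaults to 1.0 g/mL, i.e. blood is assumed as dense as water.
    """
    return (wet_mass_g - dry_mass_g) / density_g_per_mL

# a sponge weighing 12.3 g dry and 27.8 g wet absorbed ~15.5 mL of blood
volume = blood_loss_mL(27.8, 12.3)
```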

  16. An extended linear scaling method for downscaling temperature and its implication in the Jhelum River basin, Pakistan, and India, using CMIP5 GCMs

    NASA Astrophysics Data System (ADS)

    Mahmood, Rashid; JIA, Shaofeng

    2017-11-01

    In this study, the linear scaling method used for the downscaling of temperature was extended from monthly scaling factors to daily scaling factors (SFs) to improve the daily variations in the corrected temperature. In the original linear scaling (OLS), mean monthly SFs are used to correct the future data, but mean daily SFs are used to correct the future data in the extended linear scaling (ELS) method. The proposed method was evaluated in the Jhelum River basin for the period 1986-2000, using the observed maximum temperature (Tmax) and minimum temperature (Tmin) of 18 climate stations and the simulated Tmax and Tmin of five global climate models (GCMs) (GFDL-ESM2G, NorESM1-ME, HadGEM2-ES, MIROC5, and CanESM2), and the method was also compared with OLS to observe the improvement. Before the evaluation of ELS, these GCMs were also evaluated using their raw data against the observed data for the same period (1986-2000). Four statistical indicators, i.e., error in mean, error in standard deviation, root mean square error, and correlation coefficient, were used for the evaluation process. The evaluation results with GCMs' raw data showed that GFDL-ESM2G and MIROC5 performed better than other GCMs according to all the indicators but with unsatisfactory results that confine their direct application in the basin. Nevertheless, after the correction with ELS, a noticeable improvement was observed in all the indicators except correlation coefficient because this method only adjusts (corrects) the magnitude. It was also noticed that the daily variations of the observed data were better captured by the corrected data with ELS than OLS. Finally, the ELS method was applied for the downscaling of five GCMs' Tmax and Tmin for the period of 2041-2070 under RCP8.5 in the Jhelum basin. 
The results showed that the basin would face a hotter climate in the future relative to the present, which may result in increased water requirements in the public, industrial, and agricultural sectors; changes in the hydrological cycle and monsoon pattern; and loss of glaciers in the basin.
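
    The monthly-to-daily extension can be sketched for the additive (temperature) case as follows; the array names and day-of-year indexing are assumptions for illustration, not the authors' code:

```python
import numpy as np

def daily_scaling_factors(obs, gcm_hist, doy):
    """Additive daily SFs: mean observed minus mean simulated temperature
    for each calendar day over the calibration period (e.g. 1986-2000)."""
    sf = np.zeros(366)
    for d in range(1, 367):
        mask = doy == d
        if mask.any():
            sf[d - 1] = obs[mask].mean() - gcm_hist[mask].mean()
    return sf

def correct_future(gcm_future, doy_future, sf):
    """Apply each calendar day's factor to the future GCM series."""
    return gcm_future + sf[doy_future - 1]
```

    With monthly means in place of daily means this reduces to the original linear scaling (OLS); the daily factors are what let the corrected series track day-to-day variability.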

  17. Field measurements of the linear and nonlinear shear moduli of cemented alluvium using dynamically loaded surface footings

    NASA Astrophysics Data System (ADS)

    Park, Kwangsoo

    In this dissertation, a research effort aimed at the development and implementation of a direct field test method to evaluate the linear and nonlinear shear modulus of soil is presented. The field method utilizes a surface footing that is dynamically loaded horizontally. The test procedure involves applying static and dynamic loads to the surface footing and measuring the soil response beneath the loaded area using embedded geophones. A wide range in dynamic loads under a constant static load permits measurements of linear and nonlinear shear wave propagation from which shear moduli and associated shearing strains are evaluated. Shear wave velocities in the linear and nonlinear strain ranges are calculated from time delays in waveforms monitored by geophone pairs. Shear moduli are then obtained using the shear wave velocities and the mass density of the soil. Shear strains are determined using particle displacements calculated from particle velocities measured at the geophones by assuming a linear variation between geophone pairs. The field test method was validated by conducting an initial field experiment at a sandy site in Austin, Texas. Then, field experiments were performed on cemented alluvium, a complex, hard-to-sample material. Three separate locations at Yucca Mountain, Nevada, were tested. The tests successfully measured: (1) the effect of confining pressure on shear and compression moduli in the linear strain range and (2) the effect of strain on shear moduli at various states of stress in the field. The field measurements were first compared with empirical relationships for uncemented gravel. This comparison showed that the alluvium was clearly cemented. The field measurements were then compared to other independent measurements, including laboratory resonant column tests and field seismic tests using the spectral-analysis-of-surface-waves method.
The results from the field tests were generally in good agreement with the other independent test results, indicating that the proposed method has the ability to directly evaluate complex material like cemented alluvium in the field.

  18. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory

    NASA Astrophysics Data System (ADS)

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Valeev, Edward F.; Neese, Frank

    2016-01-01

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. 
The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate previous implementation.

  19. Finding Dantzig Selectors with a Proximity Operator based Fixed-point Algorithm

    DTIC Science & Technology

    2014-11-01

    experiments showed that this method usually outperforms the method in [2] in terms of CPU time while producing solutions of comparable quality. The... method proposed in [19]. To alleviate the difficulty caused by the subproblem without a closed-form solution, a linearized ADM was proposed for the... a closed-form solution, but the β-related subproblem does not and is solved approximately by using the nonmonotone gradient method in [18]. The

  20. The novel application of artificial neural network on bioelectrical impedance analysis to assess the body composition in elderly

    PubMed Central

    2013-01-01

    Background This study aims to improve the accuracy of Bioelectrical Impedance Analysis (BIA) prediction equations for estimating fat free mass (FFM) of the elderly by using a non-linear Back Propagation Artificial Neural Network (BP-ANN) model, and to compare the predictive accuracy with the linear regression model, using dual-energy X-ray absorptiometry (DXA) as the reference method. Methods A total of 88 Taiwanese elderly adults were recruited in this study as subjects. Linear regression equations and a BP-ANN prediction equation were developed using impedances and other anthropometrics for predicting the reference FFM measured by DXA (FFMDXA) in 36 male and 26 female Taiwanese elderly adults. The FFM estimated by BIA prediction equations using the traditional linear regression model (FFMLR) and the BP-ANN model (FFMANN) were compared to the FFMDXA. Measurements from an additional 26 elderly adults were used to validate the accuracy of the predictive models. Results The results showed the significant predictors were impedance, gender, age, height and weight in the developed FFMLR linear model (LR) for predicting FFM (coefficient of determination, r2 = 0.940; standard error of estimate (SEE) = 2.729 kg; root mean square error (RMSE) = 2.571 kg, P < 0.001). The above predictors were set as the variables of the input layer by using five neurons in the BP-ANN model (r2 = 0.987 with a SD = 1.192 kg and relatively lower RMSE = 1.183 kg), which gave greater accuracy for estimating FFM when compared with the linear model. The results showed a better agreement existed between FFMANN and FFMDXA than that between FFMLR and FFMDXA. Conclusion Comparing the performance of the developed prediction equations for estimating the reference FFMDXA, the linear model has a lower r2 with a larger SD in predictive results than the BP-ANN model, which indicated the ANN model is more suitable for estimating FFM. PMID:23388042
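
    The linear half of that comparison is ordinary least squares with the reported goodness-of-fit metrics; a generic sketch (not the study's code, with variable names assumed):

```python
import numpy as np

def fit_linear_model(X, y):
    """OLS with intercept; returns coefficients, r^2 and RMSE,
    the metrics the study reports for its linear FFM model."""
    A = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ beta
    ss_res = float(((y - pred) ** 2).sum())
    ss_tot = float(((y - y.mean()) ** 2).sum())
    r2 = 1.0 - ss_res / ss_tot
    rmse = float(np.sqrt(((y - pred) ** 2).mean()))
    return beta, r2, rmse
```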

  1. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, the ILP model becomes intractable for large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which significantly reduces solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.

  2. Design of linear quadratic regulator (LQR) control system for flight stability of LSU-05

    NASA Astrophysics Data System (ADS)

    Purnawan, Heri; Mardlijah; Budi Purwanto, Eko

    2017-09-01

    Lapan Surveillance UAV-05 (LSU-05) is an unmanned aerial vehicle designed for a cruise time of 6 hours and a cruise velocity of about 30 m/s. The mission of LSU-05 is surveillance for research and observation, such as traffic and disaster investigations. This paper aims to design a control system that allows the LSU-05 to fly steadily. The method used to stabilize LSU-05 is the Linear Quadratic Regulator (LQR). With the LQR controller, the transient response obtained for longitudinal motion is td = 0.221 s, tr = 0.419 s, ts = 0.719 s, tp = 1.359 s, and Mp = 0%. On the other hand, the transient response for lateral-directional motion showed td = 0.186 s, tr = 0.515 s, ts = 0.87 s, tp = 2.02 s, and Mp = 0%. The simulation results showed good performance for this method.
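
    A discrete-time LQR gain can be obtained by iterating the Riccati equation to a fixed point; the sketch below uses a toy double-integrator model for illustration, not the LSU-05 dynamics:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=1000):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# toy double-integrator model (illustrative only, not the LSU-05 dynamics)
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # control weighting
K = dlqr(A, B, Q, R)   # closed loop x[k+1] = (A - B K) x[k] is stable
```

    Production code would use a dedicated Riccati solver, but the fixed-point form makes the structure of the controller explicit.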

  3. A method to stabilize linear systems using eigenvalue gradient information

    NASA Technical Reports Server (NTRS)

    Wieseman, C. D.

    1985-01-01

    Formal optimization methods and eigenvalue gradient information are used to develop a stabilizing control law for a closed loop linear system that is initially unstable. The method was originally formulated by using direct, constrained optimization methods with the constraints being the real parts of the eigenvalues. However, because of problems in trying to achieve stabilizing control laws, the problem was reformulated to be solved differently. The method described uses the Davidon-Fletcher-Powell minimization technique to solve an indirect, constrained minimization problem in which the performance index is the Kreisselmeier-Steinhauser function of the real parts of all the eigenvalues. The method is applied successfully to solve two different problems: the determination of a fourth-order control law that stabilizes a single-input single-output active flutter suppression system, and the determination of a second-order control law for a multi-input multi-output lateral-directional flight control system. Various sets of design variables and initial starting points were chosen to show the robustness of the method.

  4. Validated spectrofluorimetric method for the determination of tamsulosin in spiked human urine, pure and pharmaceutical preparations.

    PubMed

    Karasakal, A; Ulu, S T

    2014-05-01

    A novel, sensitive and selective spectrofluorimetric method was developed for the determination of tamsulosin in spiked human urine and pharmaceutical preparations. The proposed method is based on the reaction of tamsulosin with 1-dimethylaminonaphthalene-5-sulfonyl chloride in carbonate buffer at pH 10.5 to yield a highly fluorescent derivative. The described method was validated and the analytical parameters of linearity, limit of detection (LOD), limit of quantification (LOQ), accuracy, precision, recovery and robustness were evaluated. The proposed method showed a linear dependence of the fluorescence intensity on drug concentration over the range 1.22 × 10⁻⁷ to 7.35 × 10⁻⁶ M. LOD and LOQ were calculated as 1.07 × 10⁻⁷ and 3.23 × 10⁻⁷ M, respectively. The proposed method was successfully applied for the determination of tamsulosin in pharmaceutical preparations and the obtained results were in good agreement with those obtained using the reference method. Copyright © 2013 John Wiley & Sons, Ltd.
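
    Linearity, LOD and LOQ figures like these come from a least-squares calibration line; a generic sketch using the common ICH-style estimates (3.3σ/slope and 10σ/slope with σ from the fit residuals; a standard convention, not necessarily the exact one used in this study):

```python
import numpy as np

def calibration_stats(conc, signal):
    """Least-squares calibration line plus ICH-style LOD/LOQ estimates.

    sigma is taken as the residual standard deviation of the fit
    (ddof=2 because two parameters, slope and intercept, are estimated).
    """
    slope, intercept = np.polyfit(conc, signal, 1)
    resid = signal - (slope * conc + intercept)
    sigma = resid.std(ddof=2)
    lod = 3.3 * sigma / slope
    loq = 10.0 * sigma / slope
    return slope, intercept, lod, loq

# ideal linear calibration data (illustrative)
conc = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
signal = 2.0 * conc + 1.0
slope, intercept, lod, loq = calibration_stats(conc, signal)
```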

  5. Tri-linear interpolation-based cerebral white matter fiber imaging

    PubMed Central

    Jiang, Shan; Zhang, Pengfei; Han, Tong; Liu, Weihua; Liu, Meixia

    2013-01-01

    Diffusion tensor imaging is a unique method to visualize white matter fibers three-dimensionally, non-invasively and in vivo, and it is therefore an important tool for observing and researching neural regeneration. Different diffusion tensor imaging-based fiber tracking methods have already been investigated, but faster computation, longer and smoother fiber tracking, and clearer detail are still needed for clinical applications. This study proposed a new fiber tracking strategy based on tri-linear interpolation. We selected a patient with acute infarction of the right basal ganglia and designed experiments based on either the tri-linear interpolation algorithm or the tensorline algorithm. Fiber tracking in the same regions of interest (genu of the corpus callosum) was performed separately. The validity of the tri-linear interpolation algorithm was verified by quantitative analysis, and its feasibility in clinical diagnosis was confirmed by the contrast between the tracking results and the disease condition of the patient as well as the actual brain anatomy. Statistical results showed that the maximum length and average length of the white matter fibers tracked by the tri-linear interpolation algorithm were significantly longer. The tracking images of the fibers indicated that this method can obtain smoother tracked fibers, more obvious orientation and clearer details. The tracked fiber abnormalities are in good agreement with the actual condition of the patient, and tracking displayed fibers that passed through the corpus callosum, which was consistent with the anatomical structures of the brain. Therefore, the tri-linear interpolation algorithm can achieve a clear, anatomically correct and reliable tracking result. PMID:25206524
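
    The interpolation kernel at the heart of such a tracking strategy is standard; a minimal sketch (axis ordering assumed, no bounds checking):

```python
import numpy as np

def trilinear(volume, x, y, z):
    """Trilinear interpolation of a 3-D array at fractional coordinates.

    Interpolates along x, then y, then z between the 8 surrounding voxels.
    """
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    x1, y1, z1 = x0 + 1, y0 + 1, z0 + 1
    xd, yd, zd = x - x0, y - y0, z - z0
    c00 = volume[x0, y0, z0] * (1 - xd) + volume[x1, y0, z0] * xd
    c01 = volume[x0, y0, z1] * (1 - xd) + volume[x1, y0, z1] * xd
    c10 = volume[x0, y1, z0] * (1 - xd) + volume[x1, y1, z0] * xd
    c11 = volume[x0, y1, z1] * (1 - xd) + volume[x1, y1, z1] * xd
    c0 = c00 * (1 - yd) + c10 * yd
    c1 = c01 * (1 - yd) + c11 * yd
    return c0 * (1 - zd) + c1 * zd

# a linear field is reproduced exactly by trilinear interpolation
vol = np.fromfunction(lambda i, j, k: i + 2 * j + 3 * k, (4, 4, 4))
value = trilinear(vol, 1.5, 0.25, 2.75)
```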

  6. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets

    PubMed Central

    Xiao, Xun; Geyer, Veikko F.; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F.

    2016-01-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. PMID:27104582

  7. A compact finite element method for elastic bodies

    NASA Technical Reports Server (NTRS)

    Rose, M. E.

    1984-01-01

    A nonconforming finite element method is described for treating linear equilibrium problems, and a convergence proof showing second order accuracy is given. The close relationship to a related compact finite difference scheme due to Phillips and Rose is examined. A condensation technique is shown to preserve the compactness property and suggests an approach to a certain type of homogenization.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Ch.; Gao, X. W.; Sladek, J.

    This paper reports our recent research on crack analysis in continuously non-homogeneous and linear elastic functionally graded materials. A meshless boundary element method is developed for this purpose. Numerical examples are presented and discussed to demonstrate the efficiency and accuracy of the present numerical method, and to show the effects of the material gradation on the crack-opening displacements and the stress intensity factors.

  9. Fourth order Douglas implicit scheme for solving three dimension reaction diffusion equation with non-linear source term

    NASA Astrophysics Data System (ADS)

    Hasnain, Shahid; Saqib, Muhammad; Mashat, Daoud Suleiman

    2017-07-01

    This research paper presents a numerical approximation to a non-linear three-dimensional reaction-diffusion equation with a non-linear source term from population genetics. Since various initial and boundary value problems exist in three-dimensional reaction-diffusion phenomena, which are studied numerically by different numerical methods, here we use finite difference schemes (Alternating Direction Implicit and fourth-order Douglas Implicit) to approximate the solution. Accuracy is studied in terms of the L2, L∞ and relative error norms on randomly selected grids along time levels for comparison with analytical results. The test example demonstrates the accuracy, efficiency and versatility of the proposed schemes. Numerical results showed that the fourth-order Douglas Implicit scheme is very efficient and reliable for solving the 3-D non-linear reaction-diffusion equation.

  10. Quinary excitation method for pulse compression ultrasound measurements.

    PubMed

    Cowell, D M J; Freear, S

    2008-04-01

    A novel switched excitation method for linear frequency modulated excitation of ultrasonic transducers in pulse compression systems is presented that is simple to realise, yet provides reduced signal sidelobes at the output of the matched filter compared to bipolar pseudo-chirp excitation. Pulse compression signal sidelobes are reduced through the use of simple amplitude tapering at the beginning and end of the excitation duration. Amplitude tapering using switched excitation is realised through the use of intermediate voltage switching levels, half that of the main excitation voltages. In total, five excitation voltages are used, creating a quinary excitation system. The absence of analogue signal generation and power amplifiers renders the excitation method attractive for applications with requirements such as a high channel count or low cost per channel. A systematic study of switched linear frequency modulated excitation methods with simulated and laboratory-based experimental verification is presented for 2.25 MHz non-destructive testing immersion transducers. The signal to sidelobe noise level of compressed waveforms generated using quinary and bipolar pseudo-chirp excitation is investigated for transmission through a 0.5 m water and kaolin slurry channel. Quinary linear frequency modulated excitation consistently reduces signal sidelobe power compared to bipolar excitation methods. Experimental results for transmission between two 2.25 MHz transducers separated by a 0.5 m channel of water and 5% kaolin suspension show improvements in signal to sidelobe noise power in the order of 7-8 dB. The reported quinary switched method for linear frequency modulated excitation provides improved performance compared to pseudo-chirp excitation without the need for high performance excitation amplifiers.
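
    The tapering idea can be illustrated on a bipolar pseudo-chirp whose first and last segments are switched at half voltage; the sample rate, sweep band and 10% taper length below are assumptions for illustration:

```python
import numpy as np

fs = 40e6                  # sample rate (assumed)
T = 10e-6                  # excitation duration (assumed)
f0, f1 = 1.5e6, 3.0e6      # sweep band around the 2.25 MHz transducer centre
t = np.arange(0, T, 1 / fs)
phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t**2)

bipolar = np.sign(np.sin(phase))   # two-level pseudo-chirp
quinary = bipolar.copy()           # taper the ends with half-voltage levels
n_edge = len(t) // 10              # 10% taper at each end (assumed)
quinary[:n_edge] *= 0.5
quinary[-n_edge:] *= 0.5           # switching levels now in {-1, -0.5, 0.5, 1} plus 0

ref = np.sin(phase)                                   # matched filter reference
compressed = np.correlate(quinary, ref, mode="full")  # pulse compression
```

    The compressed output peaks at zero lag; comparing the sidelobe levels of the tapered and untapered waveforms reproduces the qualitative effect described in the abstract.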

  11. Modeling and simulation of different and representative engineering problems using Network Simulation Method

    PubMed Central

    2018-01-01

    Mathematical models simulating different and representative engineering problems (atomic dry friction, moving-front problems, and elastic and solid mechanics) are presented in the form of a set of non-linear, coupled or uncoupled differential equations. For different values of the parameters that influence the solution, the problem is numerically solved by the network method, which provides all the variables of the problem. Although the model is extremely sensitive to these parameters, no assumptions are made regarding linearization of the variables. The design of the models, which run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature to show the reliability of the model. PMID:29518121

  12. The instantaneous linear motion information measurement method based on inertial sensors for ships

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Huang, Jing; Gao, Chen; Quan, Wei; Li, Ming; Zhang, Yanshun

    2018-05-01

    Instantaneous linear motion information is an important foundation for ship control and needs to be measured accurately. For this purpose, an instantaneous linear motion measurement method based on inertial sensors is put forward for ships. By introducing a half-fixed coordinate system to separate the instantaneous linear motion from the ship's primary motion, the instantaneous linear motion acceleration of ships can be obtained with higher accuracy. Then, a digital high-pass filter is applied to suppress the velocity error caused by low-frequency signals such as the Schuler period. Finally, the instantaneous linear motion displacement of ships can be measured accurately. Simulation results show that the method is reliable and effective, and can realize precise measurement of the velocity and displacement of instantaneous linear motion for ships.

  13. Modeling and simulation of different and representative engineering problems using Network Simulation Method.

    PubMed

    Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F

    2018-01-01

    Mathematical models simulating different and representative engineering problems (atomic dry friction, moving-front problems, and elastic and solid mechanics) are presented in the form of a set of non-linear, coupled or uncoupled differential equations. For different values of the parameters that influence the solution, the problem is numerically solved by the network method, which provides all the variables of the problem. Although the model is extremely sensitive to these parameters, no assumptions are made regarding linearization of the variables. The design of the models, which run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature to show the reliability of the model.

  14. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    DOE PAGES

    Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.

    2015-09-08

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.
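
    The flavor of such Monte Carlo linear solvers can be sketched with a forward (rather than adjoint) Neumann-Ulam random walk for x = Hx + b; the uniform transition probabilities and the stopping probability below are illustrative choices, valid when the Neumann series converges:

```python
import numpy as np

def mc_solve_component(H, b, i, n_walks=20000, p_stop=0.3, rng=None):
    """Forward Neumann-Ulam Monte Carlo estimate of x_i for x = H x + b.

    Each walk scores the Neumann series sum_k (H^k b)_i with importance
    weights that undo the uniform transition and stopping probabilities.
    Requires the series to converge (e.g. spectral radius of H below 1).
    """
    rng = rng or np.random.default_rng(0)
    n = len(b)
    total = 0.0
    for _ in range(n_walks):
        state, w, score = i, 1.0, b[i]
        while rng.random() > p_stop:            # survive with prob 1 - p_stop
            nxt = int(rng.integers(n))          # uniform transition
            w *= H[state, nxt] * n / (1.0 - p_stop)
            score += w * b[nxt]
            state = nxt
        total += score
    return total / n_walks

# small symmetric test operator with spectral radius well below 1
H = np.array([[0.1, 0.2], [0.2, 0.1]])
b = np.ones(2)
x0_mc = mc_solve_component(H, b, i=0)
x0_exact = np.linalg.solve(np.eye(2) - H, b)[0]
```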

  15. Analyzing linear spatial features in ecology.

    PubMed

    Buettel, Jessie C; Cole, Andrew; Dickey, John M; Brook, Barry W

    2018-06-01

    The spatial analysis of dimensionless points (e.g., tree locations on a plot map) is common in ecology, for instance using point-process statistics to detect and compare patterns. However, the treatment of one-dimensional linear features (fiber processes) is rarely attempted. Here we appropriate the methods of vector sums and dot products, used regularly in fields like astrophysics, to analyze a data set of mapped linear features (logs) measured in 12 × 1-ha forest plots. For this demonstrative case study, we ask two deceptively simple questions: do trees tend to fall downhill, and if so, does slope gradient matter? Despite noisy data and many potential confounders, we show clearly that the topography (slope direction and steepness) of forest plots does matter to treefall. More generally, these results underscore the value of the mathematical methods of physics to problems in the spatial analysis of linear features, and the opportunities that interdisciplinary collaboration provides. This work provides scope for a variety of future ecological analyses of fiber processes in space. © 2018 by the Ecological Society of America.
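
    The vector-sum and dot-product machinery can be sketched for compass azimuths of fallen logs; the function and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def mean_fall_alignment(fall_azimuths_deg, downhill_azimuth_deg):
    """Vector-sum mean direction of fallen logs and its alignment with downhill.

    Returns (r, alignment): r is the length of the mean unit vector
    (1 = all logs fell the same way, 0 = no preferred direction), and
    alignment is the cosine between the mean fall direction and the
    downhill direction (+1 downhill, -1 uphill).
    """
    th = np.radians(fall_azimuths_deg)
    v = np.array([np.sin(th).mean(), np.cos(th).mean()])  # mean unit vector (E, N)
    d = np.radians(downhill_azimuth_deg)
    downhill = np.array([np.sin(d), np.cos(d)])
    r = float(np.linalg.norm(v))
    alignment = float(v @ downhill / r) if r > 0 else 0.0
    return r, alignment

# three logs falling roughly east on an east-facing slope
r, align = mean_fall_alignment(np.array([85.0, 95.0, 90.0]), 90.0)
```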

  16. Rare-Earth Fourth-Order Multipole Moment in Cubic ErCo2 Probed by Linear Dichroism in Core-Level Photoemission

    NASA Astrophysics Data System (ADS)

    Abozeed, Amina A.; Kadono, Toshiharu; Sekiyama, Akira; Fujiwara, Hidenori; Higashiya, Atsushi; Yamasaki, Atsushi; Kanai, Yuina; Yamagami, Kohei; Tamasaku, Kenji; Yabashi, Makina; Ishikawa, Tetsuya; Andreev, Alexander V.; Wada, Hirofumi; Imada, Shin

    2018-03-01

    We developed a method to experimentally quantify the fourth-order multipole moment of the rare-earth 4f orbital. Linear dichroism (LD) in the Er 3d5/2 core-level photoemission spectra of cubic ErCo2 was measured using bulk-sensitive hard X-ray photoemission spectroscopy. Theoretical calculation reproduced the observed LD, and the result showed that the observed result does not contradict the suggested Γ8(3) ground state. Theoretical calculation further showed a linear relationship between the LD size and the size of the fourth-order multipole moment of the Er3+ ion, which is proportional to the expectation value ⟨O40 + 5O44⟩, where Onm are the Stevens operators. These analyses indicate that the LD in 3d photoemission spectra can be used to quantify the average fourth-order multipole moment of rare-earth atoms in a cubic crystal electric field.

  17. Modified chloride diffusion model for concrete under the coupling effect of mechanical load and chloride salt environment

    NASA Astrophysics Data System (ADS)

    Lei, Mingfeng; Lin, Dayong; Liu, Jianwen; Shi, Chenghua; Ma, Jianjun; Yang, Weichao; Yu, Xiaoniu

    2018-03-01

    To investigate lining concrete durability, this study derives a modified chloride diffusion model for concrete based on the odd continuation of boundary conditions and the Fourier transform. The linear stress distribution on a sectional structure is considered, and detailed procedures and methods are presented for model verification and parametric analysis. Simulation results show that the chloride diffusion model can reflect the effects of the linear stress distribution of the sectional structure on chloride diffusivity with reliable accuracy. Along with the natural environmental characteristics of practical engineering structures, reference value ranges of the model parameters are provided. Furthermore, the chloride diffusion model is extended to account for the multi-factor coupling of linear stress distribution, chloride concentration and diffusion time. Comparison between the model simulation and typical current research results shows that the presented model offers more comprehensive predictions and greater generality.

  18. Piezoelectric Power Requirements for Active Vibration Control

    NASA Technical Reports Server (NTRS)

    Brennan, Matthew C.; McGowan, Anna-Maria Rivas

    1997-01-01

    This paper presents a method for predicting the power consumption of piezoelectric actuators utilized for active vibration control. Analytical developments and experimental tests show that the maximum power required to control a structure using surface-bonded piezoelectric actuators is independent of the dynamics between the piezoelectric actuator and the host structure. The results demonstrate that for a perfectly-controlled system, the power consumption is a function of the quantity and type of piezoelectric actuators and the voltage and frequency of the control law output signal. Furthermore, as control effectiveness decreases, the power consumption of the piezoelectric actuators decreases. In addition, experimental results revealed a non-linear behavior in the material properties of piezoelectric actuators. The material non-linearity appeared as a significant increase in capacitance with increasing excitation voltage. Tests show that if this non-linearity of the capacitance is accounted for, a conservative estimate of the power can easily be determined.

  19. Robust consensus control with guaranteed rate of convergence using second-order Hurwitz polynomials

    NASA Astrophysics Data System (ADS)

    Fruhnert, Michael; Corless, Martin

    2017-10-01

    This paper considers homogeneous networks of general, linear time-invariant, second-order systems. We consider linear feedback controllers and require that the directed graph associated with the network contains a spanning tree and that the systems are stabilisable. We show that consensus with a guaranteed rate of convergence can always be achieved using linear state feedback. To achieve this, we provide a new and simple derivation of the conditions for a second-order polynomial with complex coefficients to be Hurwitz. We apply this result to obtain necessary and sufficient conditions to achieve consensus with networks whose graph Laplacian matrix may have complex eigenvalues. Based on the conditions found, methods to compute feedback gains are proposed. We show that gains can be chosen such that consensus is achieved robustly over a variety of communication structures and system dynamics. We also consider the use of static output feedback.

  20. A polynomial based model for cell fate prediction in human diseases.

    PubMed

    Ma, Lichun; Zheng, Jie

    2017-12-21

    Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decision sheds light on key regulators, facilitates understanding the mechanisms, and suggests novel strategies to treat human diseases that are related to abnormal cell development. In this study, we proposed a polynomial based model to predict cell fate. This model was derived from the Taylor series. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e., correlation-based and apoptosis-pathway-based. Then polynomials of different degrees were used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resultant cell fate prediction model by evaluating the ranges of the parameters, as well as assessing the variances of the predicted values at randomly selected points. Results show that, within both of the considered gene selection methods, the prediction accuracies of polynomials of different degrees show little difference. Interestingly, the linear polynomial (degree 1 polynomial) is more stable than the others. When comparing the linear polynomials based on the two gene selection methods, it shows that although the accuracy of the linear polynomial that uses correlation analysis outcomes is a little higher (86.62%), the one based on genes of the apoptosis pathway is much more stable. Considering both the prediction accuracy and the stability of polynomial models of different degrees, the linear model is a preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cells, which may be important for basic research as well as clinical study of cell development related diseases.
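    The degree comparison can be illustrated with a toy sketch. The synthetic feature below merely stands in for the gene-expression data, and a simple hold-out split stands in for the paper's 10-fold cross-validation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in: one feature with a nearly linear relation to a fate score.
x = rng.uniform(-1, 1, 80)
y = 0.8 * x + 0.05 * rng.standard_normal(80)

errors = {}
for deg in (1, 2, 3):
    # Fit on even-indexed samples, test on odd-indexed ones.
    coef = np.polyfit(x[::2], y[::2], deg)
    pred = np.polyval(coef, x[1::2])
    errors[deg] = float(np.mean((pred - y[1::2]) ** 2))
# With a near-linear ground truth, the three test errors are comparable,
# which is when the degree-1 (linear) fit is the more stable choice.
```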

  1. Fourier functional analysis for unsteady aerodynamic modeling

    NASA Technical Reports Server (NTRS)

    Lan, C. Edward; Chin, Suei

    1991-01-01

    A method based on Fourier analysis is developed to analyze the force and moment data obtained in large amplitude forced oscillation tests at high angles of attack. The aerodynamic models for normal force, lift, drag, and pitching moment coefficients are built up from a set of aerodynamic responses to harmonic motions at different frequencies. Based on the aerodynamic models of harmonic data, the indicial responses are formed. The final expressions for the models involve time integrals of the indicial type advocated by Tobak and Schiff. Results from linear two- and three-dimensional unsteady aerodynamic theories as well as test data for a 70-degree delta wing are used to verify the models. It is shown that the present modeling method is accurate in producing the aerodynamic responses to harmonic motions and ramp type motions. The model also produces the correct trend for a 70-degree delta wing in harmonic motion with different mean angles of attack. However, the current model cannot be used to extrapolate data to angles of attack higher than those of the harmonic motions which form the aerodynamic model. For linear ramp motions, a special method is used to calculate the corresponding frequency and phase angle at a given time. The calculated results from modeling show a higher lift peak for linear ramp motion than for harmonic ramp motion. The current model also shows reasonably good results for the lift responses at different angles of attack.

  2. An optimally weighted estimator of the linear power spectrum disentangling the growth of density perturbations across galaxy surveys

    NASA Astrophysics Data System (ADS)

    Sorini, D.

    2017-04-01

    Measuring the clustering of galaxies from surveys allows us to estimate the power spectrum of matter density fluctuations, thus constraining cosmological models. This requires careful modelling of observational effects to avoid misinterpretation of data. In particular, signals coming from different distances encode information from different epochs. This is known as the "light-cone effect" and is going to have a higher impact as upcoming galaxy surveys probe larger redshift ranges. Generalising the method by Feldman, Kaiser and Peacock (1994) [1], I define a minimum-variance estimator of the linear power spectrum at a fixed time, properly taking into account the light-cone effect. An analytic expression for the estimator is provided, which is consistent with the findings of previous works in the literature. I test the method within the context of the Halofit model, assuming Planck 2014 cosmological parameters [2]. I show that the estimator presented recovers the fiducial linear power spectrum at present time within 5% accuracy up to k ~ 0.80 h Mpc-1 and within 10% up to k ~ 0.94 h Mpc-1, well into the non-linear regime of the growth of density perturbations. As such, the method could be useful in the analysis of the data from future large-scale surveys, like Euclid.

  3. Projection of angular momentum via linear algebra

    DOE PAGES

    Johnson, Calvin W.; O'Mara, Kevin D.

    2017-12-01

    Projection of many-body states with good angular momentum from an initial state is usually accomplished by a three-dimensional integral. Here, we show how projection can instead be done by solving a straightforward system of linear equations. We demonstrate the method and give sample applications to 48Cr and 60Fe in the pf shell. This new projection scheme, which is competitive against the standard numerical quadrature, should also be applicable to other quantum numbers such as isospin and particle number.

  4. Projection of angular momentum via linear algebra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Calvin W.; O'Mara, Kevin D.

    Projection of many-body states with good angular momentum from an initial state is usually accomplished by a three-dimensional integral. Here, we show how projection can instead be done by solving a straightforward system of linear equations. We demonstrate the method and give sample applications to 48Cr and 60Fe in the pf shell. This new projection scheme, which is competitive against the standard numerical quadrature, should also be applicable to other quantum numbers such as isospin and particle number.

  5. The circular form of the linear superconducting machine for marine propulsion

    NASA Astrophysics Data System (ADS)

    Rakels, J. H.; Mahtani, J. L.; Rhodes, R. G.

    1981-01-01

    The superconducting linear synchronous machine (LSM) is an efficient method of propulsion for advanced ground transport systems and can also be used in marine engineering for the propulsion of large commercial vessels, tankers, and military ships. It provides high torque at low shaft speeds and ease of reversibility; a circular LSM design is proposed as a drive motor. The design is compared with superconducting homopolar motors, showing flexibility in design, built-in redundancy features, and reliability.

  6. Matrix-Free Polynomial-Based Nonlinear Least Squares Optimized Preconditioning and its Application to Discontinuous Galerkin Discretizations of the Euler Equations

    DTIC Science & Technology

    2015-06-01

    Our method constructs a polynomial preconditioner using a nonlinear least squares (NLLS) algorithm. Such a preconditioner can be very attractive in scenarios where one must repeatedly solve a large system of linear equations and has a highly efficient parallel code for applying the underlying fixed linear operator.

  7. Extending a Lippmann style seismometer's dynamic range by using a non-linear feedback circuit

    NASA Astrophysics Data System (ADS)

    Romeo, Giovanni; Spinelli, Giuseppe

    2013-04-01

    A Lippmann style seismometer uses a single-coil velocity-feedback method in order to extend a geophone's frequency response toward lower frequencies. Strong seismic signals may saturate the electronics, sometimes producing a characteristic whale-shaped recording. Adding a non-linear feedback to the electronic circuit can avoid saturation, allowing strong-motion use of the seismometer without affecting the usual performance. We show results from both simulations and experiments, using a Teledyne Geotech S13 as the mechanical part.

  8. Time domain convergence properties of Lyapunov stable penalty methods

    NASA Technical Reports Server (NTRS)

    Kurdila, A. J.; Sunkel, John

    1991-01-01

    Linear hyperbolic partial differential equations are analyzed using standard techniques to show that a sequence of solutions generated by the Lyapunov stable penalty equations approaches the solution of the differential-algebraic equations governing the dynamics of multibody problems arising in linear vibrations. The analysis does not require that the system be conservative and does not impose any specific integration scheme. Variational statements are derived which bound the error in approximation by the norm of the constraint violation obtained in the approximate solutions.

  9. Projection of angular momentum via linear algebra

    NASA Astrophysics Data System (ADS)

    Johnson, Calvin W.; O'Mara, Kevin D.

    2017-12-01

    Projection of many-body states with good angular momentum from an initial state is usually accomplished by a three-dimensional integral. We show how projection can instead be done by solving a straightforward system of linear equations. We demonstrate the method and give sample applications to 48Cr and 60Fe in the pf shell. This new projection scheme, which is competitive against the standard numerical quadrature, should also be applicable to other quantum numbers such as isospin and particle number.

  10. An accurate method for solving a class of fractional Sturm-Liouville eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Kashkari, Bothayna S. H.; Syam, Muhammed I.

    2018-06-01

    This article is devoted to both the theoretical and numerical study of the eigenvalues of the nonsingular fractional second-order Sturm-Liouville problem. In this paper, we implement a fractional-order Legendre Tau method to approximate the eigenvalues. This method transforms the Sturm-Liouville problem into a sparse nonsingular linear system which is solved using the continuation method. Theoretical results for the considered problem are provided and proved. Numerical results are presented to show the efficiency of the proposed method.

  11. Full-Stokes polarimetry with circularly polarized feeds. Sources with stable linear and circular polarization in the GHz regime

    NASA Astrophysics Data System (ADS)

    Myserlis, I.; Angelakis, E.; Kraus, A.; Liontas, C. A.; Marchili, N.; Aller, M. F.; Aller, H. D.; Karamanavis, V.; Fuhrmann, L.; Krichbaum, T. P.; Zensus, J. A.

    2018-01-01

    We present an analysis pipeline that enables the recovery of reliable information for all four Stokes parameters with high accuracy. Its novelty relies on the effective treatment of the instrumental effects even before the computation of the Stokes parameters, contrary to conventionally used methods such as that based on the Müller matrix. For instance, instrumental linear polarization is corrected across the whole telescope beam and significant Stokes Q and U can be recovered even when the recorded signals are severely corrupted by instrumental effects. The accuracy we reach in terms of polarization degree is of the order of 0.1-0.2%. The polarization angles are determined with an accuracy of almost 1°. The presented methodology was applied to recover the linear and circular polarization of around 150 active galactic nuclei, which were monitored between July 2010 and April 2016 with the Effelsberg 100-m telescope at 4.85 GHz and 8.35 GHz with a median cadence of 1.2 months. The polarized emission of the Moon was used to calibrate the polarization angle measurements. Our analysis showed a small system-induced rotation of about 1° at both observing frequencies. Over the examined period, five sources have significant and stable linear polarization; three sources remain constantly linearly unpolarized; and a total of 11 sources have stable circular polarization degree mc, four of them with non-zero mc. We also identify eight sources that maintain a stable polarization angle. All this is provided to the community as a reference for future polarization observations. We finally show that our analysis method is conceptually different from those traditionally used and performs better than the Müller matrix method. Although it has been developed for a system equipped with circularly polarized feeds, it can easily be generalized to systems with linearly polarized feeds as well. The data used to create Fig. C.1 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/609/A68

  12. Optimal Estimation of Clock Values and Trends from Finite Data

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles

    2005-01-01

    We show how to solve two problems of optimal linear estimation from a finite set of phase data. Clock noise is modeled as a stochastic process with stationary d-th increments. The covariance properties of such a process are contained in the generalized autocovariance function (GACV). We set up two principles for optimal estimation: with the help of the GACV, these principles lead to a set of linear equations for the regression coefficients and some auxiliary parameters. The mean square errors of the estimators are easily calculated. The method can be used to check the results of other methods and to find good suboptimal estimators based on a small subset of the available data.

  13. Parallel computation using boundary elements in solid mechanics

    NASA Technical Reports Server (NTRS)

    Chien, L. S.; Sun, C. T.

    1990-01-01

    The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming the linear variation of displacements and tractions within a line element. Moreover, the MACSYMA symbolic program is employed to obtain the analytical results for influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency in computation. The global coefficient matrix is first formed concurrently. Then, the parallel Gaussian elimination solution scheme is applied to solve the resulting system of equations. Finally, and more importantly, the domain solutions of a given boundary value problem are calculated simultaneously. Linear speedups and high efficiencies are shown for solving a demonstration problem on the Sequent Symmetry S81 parallel computing system.

  14. Parallel algorithms for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Amin-Javaheri, Masoud; Orin, David E.

    1989-01-01

    The development of an O(log2N) parallel algorithm for the manipulator inertia matrix is presented. It is based on the most efficient serial algorithm, which uses the composite rigid body method. Recursive doubling is used to reformulate the linear recurrence equations which are required to compute the diagonal elements of the matrix. It results in O(log2N) levels of computation. Computation of the off-diagonal elements involves N linear recurrences of varying size, and a new method, which avoids redundant computation of position and orientation transforms for the manipulator, is developed. The O(log2N) algorithm is presented in both equation and graphic forms which clearly show the parallelism inherent in the algorithm.
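    As an illustration of the recursive-doubling idea (not the paper's inertia-matrix algorithm itself), a first-order linear recurrence x_i = a_i*x_{i-1} + b_i can be evaluated as a parallel-prefix scan over affine maps. Each doubling sweep below would run concurrently across i, giving O(log2N) levels of computation:

```python
def prefix_affine(a, b):
    """Inclusive prefix composition of affine maps f_i(x) = a[i]*x + b[i].

    After the scan, A[i], B[i] satisfy x_i = A[i]*x0 + B[i], where
    x_i = a[i]*x_{i-1} + b[i] and x_{-1} = x0.
    """
    n = len(a)
    A, B = list(a), list(b)
    step = 1
    while step < n:                      # O(log2 N) doubling sweeps
        A2, B2 = A[:], B[:]
        for i in range(step, n):         # each sweep is fully parallel across i
            # compose map i with the prefix ending 'step' places earlier
            A2[i] = A[i] * A[i - step]
            B2[i] = A[i] * B[i - step] + B[i]
        A, B = A2, B2
        step *= 2
    return A, B

# check against the serial recurrence
a = [0.5, 2.0, -1.0, 3.0, 0.25]
b = [1.0, -2.0, 0.5, 4.0, -1.0]
x0 = 2.0
A, B = prefix_affine(a, b)
x = x0
for i in range(len(a)):
    x = a[i] * x + b[i]
    assert abs((A[i] * x0 + B[i]) - x) < 1e-12
```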

  15. Deployment-based lifetime optimization for linear wireless sensor networks considering both retransmission and discrete power control.

    PubMed

    Li, Ruiying; Ma, Wenting; Huang, Ning; Kang, Rui

    2017-01-01

    A sophisticated method for node deployment can efficiently reduce the energy consumption of a Wireless Sensor Network (WSN) and prolong the corresponding network lifetime. Pioneers have proposed many node-deployment-based lifetime optimization methods for WSNs; however, the retransmission mechanism and the discrete power control strategy, which are widely used in practice and have a large effect on network energy consumption, have often been neglected or assumed to be continuous, respectively, in previous studies. In this paper, both retransmission and discrete power control are considered together, and a more realistic energy-consumption-based network lifetime model for linear WSNs is provided. Using this model, we then propose a generic deployment-based optimization model that maximizes network lifetime under coverage, connectivity and transmission rate success constraints. The more accurate lifetime evaluation leads to a longer optimal network lifetime in the realistic situation. To illustrate the effectiveness of our method, both one-tiered and two-tiered uniformly and non-uniformly distributed linear WSNs are optimized in our case studies, and the comparisons between our optimal results and those based on relatively inaccurate lifetime evaluation show the advantage of our method when investigating WSN lifetime optimization problems.

  16. Reprint of Solution of Ambrosio-Tortorelli model for image segmentation by generalized relaxation method

    NASA Astrophysics Data System (ADS)

    D'Ambra, Pasqua; Tartaglione, Gaetano

    2015-04-01

    Image segmentation addresses the problem of partitioning a given image into its constituent objects and then identifying the boundaries of the objects. This problem can be formulated in terms of a variational model aimed at finding optimal approximations of a bounded function by piecewise-smooth functions, minimizing a given functional. The corresponding Euler-Lagrange equations are a set of two coupled elliptic partial differential equations with varying coefficients. Numerical solution of the above system often relies on alternating minimization techniques involving descent methods coupled with explicit or semi-implicit finite-difference discretization schemes, which are slowly convergent and poorly scalable with respect to image size. In this work we focus on generalized relaxation methods, also coupled with multigrid linear solvers, when a finite-difference discretization is applied to the Euler-Lagrange equations of the Ambrosio-Tortorelli model. We show that non-linear Gauss-Seidel, accelerated by inner linear iterations, is an effective method for large-scale image analysis such as that arising from high-throughput screening platforms for stem cell targeted differentiation, where one of the main goals is the segmentation of thousands of images to analyze cell colony morphology.

  17. Solution of Ambrosio-Tortorelli model for image segmentation by generalized relaxation method

    NASA Astrophysics Data System (ADS)

    D'Ambra, Pasqua; Tartaglione, Gaetano

    2015-03-01

    Image segmentation addresses the problem of partitioning a given image into its constituent objects and then identifying the boundaries of the objects. This problem can be formulated in terms of a variational model aimed at finding optimal approximations of a bounded function by piecewise-smooth functions, minimizing a given functional. The corresponding Euler-Lagrange equations are a set of two coupled elliptic partial differential equations with varying coefficients. Numerical solution of the above system often relies on alternating minimization techniques involving descent methods coupled with explicit or semi-implicit finite-difference discretization schemes, which are slowly convergent and poorly scalable with respect to image size. In this work we focus on generalized relaxation methods, also coupled with multigrid linear solvers, when a finite-difference discretization is applied to the Euler-Lagrange equations of the Ambrosio-Tortorelli model. We show that non-linear Gauss-Seidel, accelerated by inner linear iterations, is an effective method for large-scale image analysis such as that arising from high-throughput screening platforms for stem cell targeted differentiation, where one of the main goals is the segmentation of thousands of images to analyze cell colony morphology.

  18. Brain-state invariant thalamo-cortical coordination revealed by non-linear encoders.

    PubMed

    Viejo, Guillaume; Cortier, Thomas; Peyrache, Adrien

    2018-03-01

    Understanding how neurons cooperate to integrate sensory inputs and guide behavior is a fundamental problem in neuroscience. A large body of methods has been developed to study neuronal firing at the single cell and population levels, generally seeking interpretability as well as predictivity. However, these methods are usually confronted with the lack of ground truth necessary to validate the approach. Here, using neuronal data from the head-direction (HD) system, we present evidence demonstrating how gradient boosted trees, a non-linear and supervised Machine Learning tool, can learn the relationship between behavioral parameters and neuronal responses with high accuracy by optimizing the information rate. Interestingly, and unlike other classes of Machine Learning methods, the intrinsic structure of the trees can be interpreted in relation to behavior (e.g. to recover the tuning curves) or to study how neurons cooperate with their peers in the network. We show how the method, unlike linear analysis, reveals that the coordination in thalamo-cortical circuits is qualitatively the same during wakefulness and sleep, indicating a brain-state independent feed-forward circuit. Machine Learning tools thus open new avenues for benchmarking model-based characterization of spike trains.

  19. Brain-state invariant thalamo-cortical coordination revealed by non-linear encoders

    PubMed Central

    Cortier, Thomas; Peyrache, Adrien

    2018-01-01

    Understanding how neurons cooperate to integrate sensory inputs and guide behavior is a fundamental problem in neuroscience. A large body of methods has been developed to study neuronal firing at the single cell and population levels, generally seeking interpretability as well as predictivity. However, these methods are usually confronted with the lack of ground truth necessary to validate the approach. Here, using neuronal data from the head-direction (HD) system, we present evidence demonstrating how gradient boosted trees, a non-linear and supervised Machine Learning tool, can learn the relationship between behavioral parameters and neuronal responses with high accuracy by optimizing the information rate. Interestingly, and unlike other classes of Machine Learning methods, the intrinsic structure of the trees can be interpreted in relation to behavior (e.g. to recover the tuning curves) or to study how neurons cooperate with their peers in the network. We show how the method, unlike linear analysis, reveals that the coordination in thalamo-cortical circuits is qualitatively the same during wakefulness and sleep, indicating a brain-state independent feed-forward circuit. Machine Learning tools thus open new avenues for benchmarking model-based characterization of spike trains. PMID:29565979

  20. Mixed linear-non-linear inversion of crustal deformation data: Bayesian inference of model, weighting and regularization parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, Jun'ichi; Johnson, Kaj M.

    2010-06-01

    We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
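    The key device, solving the linearly entering parameters analytically inside the sampling over the non-linear ones, can be sketched on a toy exponential model. A grid search stands in for the paper's Monte Carlo sampling, and the model, noise level and parameter names are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 100)
theta_true, m_true = 4.0, 2.5              # non-linear rate, linear amplitude
d = m_true * np.exp(-theta_true * x) + 0.05 * rng.standard_normal(x.size)

best_rss, best_theta, best_m = np.inf, None, None
for theta in np.linspace(1.0, 8.0, 200):   # sweep over the non-linear parameter
    g = np.exp(-theta * x)
    m = (g @ d) / (g @ g)                  # linear part solved analytically (least squares)
    rss = float(np.sum((d - m * g) ** 2))
    if rss < best_rss:
        best_rss, best_theta, best_m = rss, theta, m
```

Because the inner solve is closed-form, the expensive sampling effort is spent only on the parameters that genuinely require it.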

  1. Non-linear HRV indices under autonomic nervous system blockade.

    PubMed

    Bolea, Juan; Pueyo, Esther; Laguna, Pablo; Bailón, Raquel

    2014-01-01

    Heart rate variability (HRV) has been studied as a non-invasive technique to characterize the autonomic nervous system (ANS) regulation of the heart. Non-linear methods based on chaos theory have been used during the last decades as markers for risk stratification. However, interpretation of these non-linear methods in terms of sympathetic and parasympathetic activity is not fully established. In this work we study linear and non-linear HRV indices during ANS blockades in order to assess their relation with sympathetic and parasympathetic activities. Power spectral content in the low frequency (0.04-0.15 Hz) and high frequency (0.15-0.4 Hz) bands of HRV, as well as correlation dimension, sample and approximate entropies, were computed in a database of subjects during single and dual ANS blockade with atropine and/or propranolol. Parasympathetic blockade caused a significant decrease in the low and high frequency power of HRV, as well as in correlation dimension and sample and approximate entropies. Sympathetic blockade caused a significant increase in approximate entropy. Sympathetic activation due to postural change from supine to standing caused a significant decrease in all the investigated non-linear indices and a significant increase in the normalized power in the low frequency band. The other investigated linear indices did not show significant changes. Results suggest that parasympathetic activity has a direct relation with sample and approximate entropies.
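    Of the indices studied, sample entropy is the simplest to illustrate. Below is a simplified SampEn(m, r) sketch (not the authors' pipeline); as expected, a regular signal scores lower than an irregular one:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Simplified SampEn(m, r); r is given as a fraction of the series' std."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n = len(x)

    def matches(length):
        # count template pairs of the given length within Chebyshev distance tol
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= tol))
        return count

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

# A regular (periodic) signal should score lower than an irregular one.
t = np.arange(500)
regular = np.sin(2 * np.pi * t / 50)
irregular = np.random.default_rng(0).standard_normal(500)
```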

  2. Linear MALDI-ToF simultaneous spectrum deconvolution and baseline removal.

    PubMed

    Picaud, Vincent; Giovannelli, Jean-Francois; Truntzer, Caroline; Charrier, Jean-Philippe; Giremus, Audrey; Grangeat, Pierre; Mercier, Catherine

    2018-04-05

    Thanks to its reasonable cost and simple sample preparation procedure, linear MALDI-ToF spectrometry is a growing technology for clinical microbiology. With appropriate spectrum databases, this technology can be used for early identification of pathogens in body fluids. However, due to the low resolution of linear MALDI-ToF instruments, robust and accurate peak picking remains a challenging task. In this context we propose a new peak extraction algorithm that works on the raw spectrum. With this method the spectrum baseline and spectrum peaks are processed jointly. The approach relies on an additive model consisting of a smooth baseline part plus a sparse peak list convolved with a known peak shape. The model is then fitted under a Gaussian noise model. The proposed method is well suited to processing low-resolution spectra with an important baseline and unresolved peaks. We developed a new peak deconvolution procedure. The paper describes the method derivation and discusses some of its interpretations. The algorithm is then described in pseudo-code form, where the required optimization procedure is detailed. For synthetic data the method is compared to a more conventional approach. The new method reduces artifacts caused by the usual two-step procedure of baseline removal followed by peak extraction. Finally, some results on real linear MALDI-ToF spectra are provided. We introduced a new method for peak picking in which peak deconvolution and baseline computation are performed jointly. On simulated data we showed that this global approach performs better than a classical one where baseline and peaks are processed sequentially. A dedicated experiment was conducted on real spectra: a collection of spectra of spiked proteins was acquired and then analyzed. Better performance of the proposed method, in terms of accuracy and reproducibility, was observed and validated by an extended statistical analysis.
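
    The additive model (smooth baseline plus sparse peaks with a known shape) can be fitted jointly in a single linear solve. The sketch below is a simplification of the paper's approach — a polynomial baseline and a non-negativity constraint stand in for the authors' smoothness and sparsity priors, and all peak positions and widths are invented for illustration:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 400)

def peak_shape(center, width=0.01):
    """Known (assumed Gaussian) peak shape on the mass axis."""
    return np.exp(-0.5 * ((t - center) / width) ** 2)

# Synthetic low-resolution spectrum: smooth baseline + two close peaks + noise.
baseline = 0.8 - 0.5 * t + 0.3 * t**2
y = (baseline + 1.0 * peak_shape(0.30) + 0.6 * peak_shape(0.32)
     + 0.01 * rng.normal(size=t.size))

# Joint design matrix: polynomial baseline columns plus one peak-shape
# column per candidate centre (a coarse dictionary).
centers = np.linspace(0.05, 0.95, 91)
P = np.vander(t, 3, increasing=True)            # quadratic baseline
G = np.column_stack([peak_shape(c) for c in centers])
A = np.hstack([P, G])

# Baseline coefficients are free; peak amplitudes are non-negative.
lb = np.r_[np.full(P.shape[1], -np.inf), np.zeros(G.shape[1])]
ub = np.full(A.shape[1], np.inf)
sol = lsq_linear(A, y, bounds=(lb, ub))
base_coef, amps = sol.x[:P.shape[1]], sol.x[P.shape[1]:]
print(centers[amps > 0.1])    # recovered peak locations
```

    Because baseline and peaks are estimated in one pass, the baseline cannot "absorb" unresolved peaks the way it can in a sequential remove-then-pick pipeline.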

  3. Noninvasive and fast measurement of blood glucose in vivo by near infrared (NIR) spectroscopy

    NASA Astrophysics Data System (ADS)

    Jintao, Xue; Liming, Ye; Yufei, Liu; Chunyan, Li; Han, Chen

    2017-05-01

    The aim of this research was to develop a method for noninvasive and fast blood glucose assay in vivo. Near-infrared (NIR) spectroscopy, a more promising technique compared to other methods, was investigated in rats with diabetes and in normal rats. Calibration models were generated by two different multivariate strategies: partial least squares (PLS) as the linear regression method and artificial neural networks (ANN) as the non-linear regression method. The PLS model was optimized by considering the spectral range, spectral pretreatment methods and the number of model factors, while the ANN model was tuned by selecting spectral pretreatment methods, network topology parameters, the number of hidden neurons, and the number of training epochs. The validation results showed that both models were robust, accurate and repeatable. Compared to the ANN model, the performance of the PLS model was much better, with a lower root mean square error of prediction (RMSEP) of 0.419 and a higher correlation coefficient (R) of 96.22%.
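
    PLS calibration and RMSEP evaluation can be sketched with a minimal NIPALS implementation of PLS1. This is not the authors' software: the synthetic "spectra", the number of components, and the train/test split are illustrative assumptions.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """PLS1 regression via the NIPALS algorithm (single response).
    Returns regression coefficients for centred data plus the means."""
    Xk = X - X.mean(axis=0)
    yk = y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)          # weight vector
        t = Xk @ w                      # score vector
        tt = t @ t
        p = Xk.T @ t / tt               # X loading
        qa = yk @ t / tt                # y loading
        Xk = Xk - np.outer(t, p)        # deflate
        yk = yk - qa * t
        W.append(w); P.append(p); q.append(qa)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)  # coefficients in original X space
    return B, X.mean(axis=0), y.mean()

def pls1_predict(X, B, xm, ym):
    return (X - xm) @ B + ym

# Synthetic "spectra": a few latent factors drive both X and y.
rng = np.random.default_rng(3)
n, p = 80, 40
latent = rng.normal(size=(n, 3))
X = latent @ rng.normal(size=(3, p)) + 0.05 * rng.normal(size=(n, p))
y = latent @ np.array([1.0, -0.5, 2.0]) + 0.05 * rng.normal(size=n)

B, xm, ym = pls1_fit(X[:60], y[:60], n_components=3)
pred = pls1_predict(X[60:], B, xm, ym)
rmsep = np.sqrt(np.mean((pred - y[60:]) ** 2))
print(rmsep)
```

    In practice the number of components, like the spectral range and pretreatment above, is chosen to minimize a validation error such as RMSEP.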

  4. Analysis of Waveform Retracking Methods in Antarctic Ice Sheet Based on CRYOSAT-2 Data

    NASA Astrophysics Data System (ADS)

    Xiao, F.; Li, F.; Zhang, S.; Hao, W.; Yuan, L.; Zhu, T.; Zhang, Y.; Zhu, C.

    2017-09-01

    Satellite altimetry plays an important role in many geoscientific and environmental studies of the Antarctic ice sheet. Ranging accuracy is degraded near coasts or over non-ocean surfaces due to waveform contamination. A post-processing technique known as waveform retracking can be used to retrack corrupted waveforms and in turn improve the ranging accuracy. In 2010, the CryoSat-2 satellite was launched with the Synthetic aperture Interferometric Radar ALtimeter (SIRAL) onboard. Satellite altimetry waveform retracking methods are discussed in this paper. Six retracking methods, including the OCOG method, the threshold method with 10 %, 25 % and 50 % threshold levels, and the linear and exponential 5-β parametric methods, are used to retrack CryoSat-2 waveforms over the transect from Zhongshan Station to Dome A. The results show that the threshold retracker performs best when both the waveform retracking success rate and the RMS of the retracking distance corrections are considered. The linear 5-β parametric retracker gives the best waveform retracking precision, but cannot make full use of the waveform data.
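
    The OCOG amplitude and the threshold retracker it feeds are simple enough to sketch directly. The synthetic waveform below is an invented sigmoid-shaped leading edge, not CryoSat-2 data:

```python
import numpy as np

def ocog_amplitude(p):
    """Offset Centre Of Gravity (OCOG) amplitude of a waveform."""
    return np.sqrt(np.sum(p**4) / np.sum(p**2))

def threshold_retrack(p, level=0.5):
    """Threshold retracker: fractional gate index at which the leading
    edge first crosses `level` times the OCOG amplitude, located by
    linear interpolation between gates."""
    thr = level * ocog_amplitude(p)
    above = np.nonzero(p >= thr)[0]
    if above.size == 0 or above[0] == 0:
        return np.nan
    k = above[0]
    return (k - 1) + (thr - p[k - 1]) / (p[k] - p[k - 1])

# Synthetic Brown-like waveform: noise floor, leading edge near gate 50,
# then a slowly decaying trailing edge.
gates = np.arange(128)
wf = 0.02 + (1.0 / (1.0 + np.exp(-(gates - 50.0)))
             * np.exp(-0.005 * np.clip(gates - 50, 0, None)))
retrack_gate = threshold_retrack(wf, level=0.5)
print(retrack_gate)   # near the mid-point of the leading edge (~gate 50)
```

    The retracked gate is converted to a range correction by multiplying the gate offset from the nominal tracking point by the range resolution of the instrument.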

  5. An automatic multigrid method for the solution of sparse linear systems

    NASA Technical Reports Server (NTRS)

    Shapira, Yair; Israeli, Moshe; Sidi, Avram

    1993-01-01

    An automatic version of the multigrid method for the solution of linear systems arising from the discretization of elliptic PDE's is presented. This version is based solely on the structure of the algebraic system and does not use the original partial differential operator. Numerical experiments show that for the Poisson equation the rate of convergence of our method equals that of classical multigrid methods. Moreover, the method is robust in the sense that its high rate of convergence is conserved for other classes of problems: non-symmetric, hyperbolic (even with closed characteristics) and problems on non-uniform grids. No double discretization or special treatment of sub-domains (e.g. boundaries) is needed. When supplemented with a vector extrapolation method, high rates of convergence are achieved also for anisotropic and discontinuous problems and for indefinite Helmholtz equations. A new double discretization strategy is proposed for finite and spectral element schemes and is found to be better than known strategies.
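
    The classical multigrid cycle the paper benchmarks against can be illustrated with a two-grid solver for the 1-D Poisson equation. This is a textbook sketch (geometric, not the paper's algebraic construction), with grid size and smoother settings chosen for illustration:

```python
import numpy as np

def residual(u, f, h):
    """Residual of -u'' = f (Dirichlet BCs) at interior points."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0*u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def jacobi(u, f, h, sweeps, omega=2.0/3.0):
    """Weighted-Jacobi smoother."""
    for _ in range(sweeps):
        u[1:-1] = (1.0 - omega)*u[1:-1] + omega*0.5*(u[:-2] + u[2:] + h*h*f[1:-1])
    return u

def twogrid(u, f, h, nu=3):
    """One two-grid cycle: smooth, restrict the residual, solve the
    coarse correction equation exactly, prolongate, smooth again."""
    u = jacobi(u, f, h, nu)
    r = residual(u, f, h)
    rc = np.zeros((len(u) - 1)//2 + 1)
    rc[1:-1] = 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]  # full weighting
    H = 2.0*h
    nc = len(rc) - 1
    A = (np.diag(2.0*np.ones(nc - 1)) - np.diag(np.ones(nc - 2), 1)
         - np.diag(np.ones(nc - 2), -1)) / H**2
    ec = np.zeros(nc + 1)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.zeros_like(u)
    e[::2] = ec                          # linear interpolation back
    e[1::2] = 0.5*(ec[:-1] + ec[1:])
    return jacobi(u + e, f, h, nu)

n = 64
h = 1.0/n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi*x)           # exact solution: sin(pi x)
u = np.zeros(n + 1)
for _ in range(10):
    u = twogrid(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi*x)))
print(err)   # down to the discretization-error level
```

    A full multigrid method applies this idea recursively; the paper's contribution is building the coarse-level operators from the matrix alone.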

  6. [A novel method of multi-channel feature extraction combining multivariate autoregression and multiple-linear principal component analysis].

    PubMed

    Wang, Jinjia; Zhang, Yanna

    2015-02-01

    Brain-computer interface (BCI) systems identify brain signals by extracting features from them. In view of the limitations of the autoregressive-model feature extraction method and of traditional principal component analysis in dealing with multichannel signals, this paper presents a multichannel feature extraction method in which a multivariate autoregressive (MVAR) model is combined with multiple-linear principal component analysis (MPCA), applied to the recognition of magnetoencephalography (MEG) and electroencephalograph (EEG) signals. Firstly, we calculated the MVAR model coefficient matrix of the MEG/EEG signals, and then reduced its dimensionality using MPCA. Finally, we recognized the brain signals with a Bayes classifier. The key innovation of our investigation is the extension of the traditional single-channel feature extraction method to the multi-channel case. We carried out experiments using the data groups IV-III and IV-I. The experimental results proved that the method proposed in this paper is feasible.
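
    The MVAR step — the coefficient matrices later flattened into features — can be fitted by ordinary least squares on lagged samples. A minimal sketch on a simulated two-channel process (the channel count, model order and coefficients are invented; the MPCA reduction across trials is omitted):

```python
import numpy as np

def fit_mvar(X, order):
    """Least-squares fit of a multivariate autoregressive (MVAR) model
    X[t] = A_1 X[t-1] + ... + A_p X[t-p] + e[t] for a (channels x samples)
    array. Returns coefficients with shape (channels, order, channels):
    entry [i, k, j] is the lag-(k+1) influence of channel j on channel i."""
    c, n = X.shape
    Z = np.hstack([X[:, order - k: n - k].T for k in range(1, order + 1)])
    Y = X[:, order:].T
    A, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return A.T.reshape(c, order, c)

# Simulate a stable 2-channel VAR(2) process with known coefficients.
rng = np.random.default_rng(4)
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[-0.2, 0.0], [0.1, -0.3]])
n = 4000
X = np.zeros((2, n))
for t in range(2, n):
    X[:, t] = A1 @ X[:, t - 1] + A2 @ X[:, t - 2] + 0.5 * rng.normal(size=2)

A_est = fit_mvar(X, order=2)
# The flattened coefficients form the per-trial feature vector that a
# tensor method such as MPCA would subsequently reduce across trials.
features = A_est.reshape(-1)
print(np.round(A_est[:, 0, :], 2))   # close to A1
```

    Unlike per-channel AR fitting, the MVAR coefficients also capture cross-channel (off-diagonal) dependencies, which is what makes the multi-channel extension informative.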

  7. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.

    PubMed

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-06-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap in over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.
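
    The one-step local linear approximation can be sketched for the special case of SCAD-penalized linear regression: weight each coordinate by the SCAD derivative at an initial estimator, then solve one weighted lasso. The proximal-gradient solver, data, and tuning values below are all illustrative assumptions, not the paper's general setting:

```python
import numpy as np

def scad_deriv(t, lam, a=3.7):
    """Derivative of the SCAD penalty (Fan & Li) at |t|."""
    t = np.abs(t)
    return np.where(t <= lam, lam,
                    np.maximum(a * lam - t, 0.0) / (a - 1.0))

def weighted_lasso_ista(X, y, w, step, n_iter=2000):
    """Proximal gradient (ISTA) for
    0.5*||y - X b||^2 / n + sum_j w_j |b_j|."""
    n = len(y)
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        z = b - step * grad
        b = np.sign(z) * np.maximum(np.abs(z) - step * w, 0.0)
    return b

def one_step_lla(X, y, lam, b_init):
    """One-step local linear approximation: SCAD-derivative weights at
    the initial estimator, then a single weighted-lasso solve."""
    w = scad_deriv(b_init, lam)
    L = np.linalg.norm(X, 2) ** 2 / len(y)   # Lipschitz constant
    return weighted_lasso_ista(X, y, w, step=1.0 / L)

rng = np.random.default_rng(5)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + 0.5 * rng.normal(size=n)

b0, *_ = np.linalg.lstsq(X, y, rcond=None)   # well-behaved initial estimator
b_scad = one_step_lla(X, y, lam=0.2, b_init=b0)
print(np.nonzero(np.abs(b_scad) > 1e-3)[0])
```

    Large initial coefficients receive zero weight (hence no shrinkage bias), while small ones keep the full lasso penalty — the mechanism behind the one-step oracle behavior described above.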

  9. Lattice Boltzmann model for simulation of magnetohydrodynamics

    NASA Technical Reports Server (NTRS)

    Chen, Shiyi; Chen, Hudong; Martinez, Daniel; Matthaeus, William

    1991-01-01

    A numerical method, based on a discrete Boltzmann equation, is presented for solving the equations of magnetohydrodynamics (MHD). The algorithm provides advantages similar to the cellular automaton method in that it is local and easily adapted to parallel computing environments. Because of much lower noise levels and less stringent requirements on lattice size, the method appears to be more competitive with traditional solution methods. Examples show that the model accurately reproduces both linear and nonlinear MHD phenomena.

  10. Quantifying relative importance: Computing standardized effects in models with binary outcomes

    USGS Publications Warehouse

    Grace, James B.; Johnson, Darren; Lefcheck, Jonathan S.; Byrnes, Jarrett E.K.

    2018-01-01

    Results from simulation studies show that both the LT and OE methods of standardization support a similarly broad range of coefficient comparisons. The LT method estimates effects that reflect underlying latent-linear propensities, while the OE method computes a linear approximation for the effects of predictors on binary responses. The contrast between the assumptions of the two methods is reflected in persistently weaker standardized effects under OE standardization. Reliance on standard deviations for standardization (the traditional approach) is critically examined and shown to introduce substantial biases when predictors are non-Gaussian. The use of relevant ranges in place of standard deviations has the capacity to place LT and OE standardized coefficients on a more comparable scale. As ecologists address increasingly complex hypotheses, especially those that involve comparing the influences of different controlling factors (e.g., top-down versus bottom-up or biotic versus abiotic controls), comparable coefficients become a necessary component of such evaluations.
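
    The latent-theoretic (LT) standardization for a logistic model can be sketched directly: rescale each slope by sd(x_j) and by the standard deviation of the latent response, whose error variance under the logistic link is π²/3. The MLE routine and simulated data below are illustrative, not the authors' code:

```python
import numpy as np
from scipy.optimize import minimize

def logit_mle(X, y):
    """Maximum-likelihood logistic regression (intercept included)."""
    Xd = np.column_stack([np.ones(len(y)), X])
    def nll(b):
        z = Xd @ b
        return np.sum(np.logaddexp(0.0, z) - y * z)  # -log-likelihood
    res = minimize(nll, np.zeros(Xd.shape[1]), method="BFGS")
    return res.x[0], res.x[1:]

def lt_standardize(X, y):
    """LT-standardized coefficients: beta_j * sd(x_j) / sd(y*), where
    var(y*) = var(X beta) + pi^2 / 3 for the logistic latent error."""
    _, b = logit_mle(X, y)
    eta = X @ b
    sd_ystar = np.sqrt(eta.var() + np.pi**2 / 3.0)
    return b * X.std(axis=0) / sd_ystar

rng = np.random.default_rng(6)
n = 2000
X = rng.normal(size=(n, 2)) * [1.0, 3.0]      # predictors on different scales
eta = 1.0 * X[:, 0] + 0.3 * X[:, 1]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-eta))).astype(float)
s = lt_standardize(X, y)
print(s)
```

    Despite the very different raw slopes and predictor scales, the LT coefficients land on a common latent-propensity scale and can be compared directly.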

  11. Inversion of residual stress profiles from ultrasonic Rayleigh wave dispersion data

    NASA Astrophysics Data System (ADS)

    Mora, P.; Spies, M.

    2018-05-01

    We investigate theoretically and with synthetic data the performance of several inversion methods for inferring a residual stress state from ultrasonic surface wave dispersion data. We show that, for relevant materials, this particular problem can expose undesired behaviors in methods that are otherwise reliably applied to infer other properties. We focus on two methods, one based on a Taylor expansion and another based on a piecewise linear expansion regularized by a singular value decomposition. We explain the instabilities of the Taylor-based method by highlighting singularities in the series of coefficients. At the same time, we show that the other method can successfully provide performance that depends only weakly on the material.
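
    SVD-based regularization of a linearized inversion can be sketched generically: truncate the small singular values that would otherwise amplify data noise. The smoothing kernel and "stress profile" below are invented stand-ins for the paper's dispersion kernels:

```python
import numpy as np

def tsvd_invert(G, d, k):
    """Truncated-SVD least-squares solution of G m = d, keeping only
    the k largest singular values as regularization."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])

# Ill-conditioned synthetic kernel: smooth averaging rows, as in many
# depth-profile inversions.
rng = np.random.default_rng(7)
x = np.linspace(0, 1, 50)
G = np.array([np.exp(-((x - c) / 0.1) ** 2) for c in np.linspace(0, 1, 40)])
m_true = np.sin(2 * np.pi * x)                 # hypothetical profile
d = G @ m_true + 0.01 * rng.normal(size=G.shape[0])

m_full = np.linalg.lstsq(G, d, rcond=None)[0]  # unregularized: noise-dominated
m_tsvd = tsvd_invert(G, d, k=8)
print(np.linalg.norm(m_tsvd - m_true), np.linalg.norm(m_full - m_true))
```

    The truncation level k trades resolution against noise amplification; choosing it from the singular-value spectrum is the usual practice.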

  12. Numerical Manifold Method for the Forced Vibration of Thin Plates during Bending

    PubMed Central

    Jun, Ding; Song, Chen; Wei-Bin, Wen; Shao-Ming, Luo; Xia, Huang

    2014-01-01

    A novel numerical manifold method was derived from the cubic B-spline basis function. The new interpolation function is characterized by high-order coordination at the boundary of a manifold element. The linear elastic-dynamic equation used to solve the bending vibration of thin plates was derived according to the principle of minimum instantaneous potential energy. The method for the initialization of the dynamic equation and its solution process were provided. Moreover, the analysis showed that the calculated stiffness matrix exhibited favorable performance. Numerical results showed that the generalized degrees of freedom were significantly fewer and that the calculation accuracy was higher for the manifold method than for the conventional finite element method. PMID:24883403

  13. Effect of antimony (Sb) addition on the linear and non-linear optical properties of amorphous Ge-Te-Sb thin films

    NASA Astrophysics Data System (ADS)

    Kumar, P.; Kaur, J.; Tripathi, S. K.; Sharma, I.

    2017-12-01

    Non-crystalline thin films of the Ge20Te80-xSbx (x = 0, 2, 4, 6, 10) system were deposited on glass substrates using the thermal evaporation technique. The optical coefficients were accurately determined from transmission spectra using the Swanepoel envelope method in the 400-1600 nm spectral region. The refractive index was found to increase from 2.38 to 2.62 with increasing Sb content over the entire spectral range. The dispersion of the refractive index was discussed in terms of the single-oscillator Wemple-DiDomenico model. The Tauc relation for the allowed indirect transition showed a decrease in the optical band gap. To explore the non-linearity, the spectral dependence of the third-order susceptibility of a-Ge-Te-Sb thin films was evaluated from the change in the index of refraction using Miller's rule. Susceptibility values were found to increase rapidly from 10-13 to 10-12 (esu) with the red shift in the absorption edge. The non-linear refractive index was calculated by the Fourier and Snitzer formula; the values were of the order of 10-12 esu. At the telecommunication wavelength, these non-linear refractive index values are three orders of magnitude higher than that of silica glass. The dielectric constant and optical conductivity were also reported. The prepared Sb-doped thin films, with their improved functional properties, are promising for non-linear optical devices and might be used in high-speed communication fibers. The non-linear parameters showed good agreement with values given in the literature.
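
    Miller's generalized rule, as commonly applied to chalcogenide glasses, estimates the third-order susceptibility from the linear one: χ(3) = A·(χ(1))⁴ with χ(1) = (n₀² − 1)/4π and an empirical constant A ≈ 1.7×10⁻¹⁰ esu. The sketch below applies this to the two refractive indices quoted in the abstract (the constant A and the long-wavelength-limit assumption are ours, not the authors'):

```python
import numpy as np

A = 1.7e-10  # empirical Miller constant, esu (assumed value)

def chi3_from_n0(n0):
    """Third-order susceptibility via Miller's generalized rule."""
    chi1 = (n0**2 - 1.0) / (4.0 * np.pi)   # linear susceptibility
    return A * chi1**4

# Refractive indices reported for the lowest and highest Sb content.
chi3_low = chi3_from_n0(2.38)
chi3_high = chi3_from_n0(2.62)
print(chi3_low, chi3_high)   # both of order 1e-12 esu
```

    The strong (fourth-power) dependence on χ(1) is why the modest refractive-index increase from 2.38 to 2.62 produces the rapid growth in χ(3) reported above.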

  14. On the Convergence of an Implicitly Restarted Arnoldi Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehoucq, Richard B.

    We show that Sorensen's [35] implicitly restarted Arnoldi method (including its block extension) is simultaneous iteration with an implicit projection step to accelerate convergence to the invariant subspace of interest. By using the geometric convergence theory for simultaneous iteration due to Watkins and Elsner [43], we prove that an implicitly restarted Arnoldi method can achieve a super-linear rate of convergence to the dominant invariant subspace of a matrix. Moreover, we show how an IRAM computes a nested sequence of approximations for the partial Schur decomposition associated with the dominant invariant subspace of a matrix.
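
    The implicitly restarted Arnoldi method analyzed here is what ARPACK implements, exposed in SciPy as `scipy.sparse.linalg.eigs`. A small sketch on a matrix with known eigenvalues (the test matrix is ours, chosen for checkability):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigs

# 1-D discrete Laplacian (tridiagonal), whose eigenvalues are known in
# closed form: 2 - 2*cos(k*pi/(n+1)), k = 1..n.
n = 500
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# ARPACK's implicitly restarted Arnoldi iteration: the six
# largest-magnitude eigenvalues of the dominant invariant subspace.
vals, vecs = eigs(A, k=6, which="LM")
approx = np.sort(vals.real)

exact = 2.0 - 2.0 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
print(approx, np.sort(exact)[-6:])
```

    The implicit restarts keep the Krylov basis small while steering it toward the dominant invariant subspace, which is the behavior the convergence theory above describes.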

  15. Determination and importance of temperature dependence of retention coefficient (RPHPLC) in QSAR model of nitrazepams' partition coefficient in bile acid micelles.

    PubMed

    Posa, Mihalj; Pilipović, Ana; Lalić, Mladena; Popović, Jovan

    2011-02-15

    A linear dependence between temperature (t) and the retention coefficient (k, reversed-phase HPLC) of bile acids was obtained. The parameters (intercept a and slope b) of the linear function k=f(t) correlate highly with the bile acids' structures. The investigated bile acids form linear congeneric groups on a principal component (calculated from k=f(t)) score plot that are in accordance with the conformations of the hydroxyl and oxo groups in the bile acid steroid skeleton. The partition coefficient (K(p)) of nitrazepam in bile acid micelles was investigated. Nitrazepam molecules incorporated in micelles show modified bioavailability (depot effect, higher permeability, etc.). Using the multiple linear regression (MLR) method, QSAR models of nitrazepam's partition coefficient K(p) were derived at temperatures of 25°C and 37°C. The linear regression models at both temperatures include experimentally obtained lipophilicity parameters (PC1 from the k=f(t) data) and in silico descriptors of molecular shape, while at the higher temperature molecular polarization is introduced as well. This indicates that the mechanism of incorporation of nitrazepam into BA micelles changes at higher temperatures. QSAR models were also derived using the partial least squares (PLS) method. The experimental k=f(t) parameters were shown to be significant predictive variables. Both QSAR models were validated using cross-validation and internal validation; the PLS models have slightly higher predictive capability than the MLR models. Copyright © 2010 Elsevier B.V. All rights reserved.

  16. [A novel quantitative approach to study dynamic anaerobic process at micro scale].

    PubMed

    Zhang, Zhong-Liang; Wu, Jing; Jiang, Jian-Kai; Jiang, Jie; Li, Huai-Zhi

    2012-11-01

    Anaerobic digestion is attracting increasing interest because of advantages such as low cost and the recovery of clean energy. To overcome the drawbacks of existing methods for studying the dynamic anaerobic process, a novel quantitative approach at the granule (micro) scale was developed, combining microdevice and quantitative image-analysis techniques. The experiment displayed, for the first time, the process and characteristics of gas production under static conditions, and the results indicated that the method has satisfactory repeatability. The gas production process under static conditions could be divided into three stages: a rapid linear increase, a decelerating increase, and a slow linear increase. Under a high initial organic loading rate, the rapid linear stage was long and the biogas rate was high. The results showed that it is feasible to carry out the anaerobic process in the microdevice; furthermore, this novel method is reliable and can clearly display the dynamics of the anaerobic reaction at the micro scale. The results are helpful for understanding the anaerobic process.

  17. Multi-linear sparse reconstruction for SAR imaging based on higher-order SVD

    NASA Astrophysics Data System (ADS)

    Gao, Yu-Fei; Gui, Guan; Cong, Xun-Chao; Yang, Yue; Zou, Yan-Bin; Wan, Qun

    2017-12-01

    This paper focuses on spotlight synthetic aperture radar (SAR) imaging of point scattering targets based on tensor modeling. In real-world scenarios, scatterers are usually distributed in a block-sparse pattern, a feature that has scarcely been utilized in previous studies of SAR imaging. Our work takes advantage of this structural property of the target scene to construct a multi-linear sparse reconstruction algorithm for SAR imaging. Multi-linear block sparsity is introduced into the higher-order singular value decomposition (SVD) together with a dictionary-construction procedure. Simulation experiments on ideal point targets show the robustness of the proposed algorithm to noise and sidelobe disturbance, which often degrade the imaging quality of conventional methods. The computational resource requirements are further investigated; the complexity analysis shows that the present method consumes fewer resources than the classic matching pursuit method. Imaging runs on practical measured data also demonstrate the effectiveness of the algorithm developed in this paper.
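
    The higher-order SVD underlying the method decomposes a tensor into one orthonormal factor per mode plus a core tensor. A minimal numpy sketch (the random third-order tensor is illustrative; the paper's block-sparse dictionary step is omitted):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: rows indexed by the mode-n index."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Mode-n product T x_n M, where M acts on the mode-n axis."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hosvd(T):
    """Higher-order SVD: one orthonormal factor per mode (the left
    singular vectors of each unfolding) plus the core tensor."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0]
         for n in range(T.ndim)]
    S = T
    for n, Un in enumerate(U):
        S = mode_mult(S, Un.T, n)
    return S, U

rng = np.random.default_rng(8)
T = rng.normal(size=(4, 5, 6))
S, U = hosvd(T)

# Multiplying the core back by the factors reconstructs the tensor.
R = S
for n, Un in enumerate(U):
    R = mode_mult(R, Un, n)
print(np.linalg.norm(R - T))
```

    Truncating each factor to its leading columns yields the compressed multi-linear representation that sparse reconstruction schemes like the one above build on.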

  18. Generalized Gilat-Raubenheimer method for density-of-states calculation in photonic crystals

    NASA Astrophysics Data System (ADS)

    Liu, Boyuan; Johnson, Steven G.; Joannopoulos, John D.; Lu, Ling

    2018-04-01

    An efficient numerical algorithm is key for accurate evaluation of the density of states (DOS) in band theory. The Gilat-Raubenheimer (GR) method proposed in 1966 is an efficient linear extrapolation method, but it was limited to specific lattices. Here, using an affine transformation, we provide a new generalization of the original GR method to arbitrary Bravais lattices and show that it is superior to the tetrahedron method and the adaptive Gaussian broadening method. Finally, we apply our generalized GR method to compute the DOS of various gyroid photonic crystals with topological degeneracies.

  19. Estimation of suspended-sediment rating curves and mean suspended-sediment loads

    USGS Publications Warehouse

    Crawford, Charles G.

    1991-01-01

    A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; and (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load, calculated by the flow-duration, rating-curve method, that are more accurate and precise than those obtained for the non-linear model.
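
    A bias-corrected, transformed-linear rating curve can be sketched with Duan's smearing estimator, one common correction for the low bias of a naive back-transform from log space. The synthetic discharge-concentration data and parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic sediment data: log-linear rating curve with lognormal scatter.
Q = 10 ** rng.uniform(0, 3, 300)               # discharge
a_true, b_true = 0.05, 1.4
C = a_true * Q ** b_true * np.exp(0.5 * rng.normal(size=Q.size))

# Fit in log space (the transformed-linear model).
lnQ, lnC = np.log(Q), np.log(C)
b, ln_a = np.polyfit(lnQ, lnC, 1)
resid = lnC - (ln_a + b * lnQ)

# Naive back-transform is biased low; the smearing estimator corrects
# it with the mean of the exponentiated residuals.
smear = np.mean(np.exp(resid))

def predict(q):
    return smear * np.exp(ln_a) * q ** b

q = 100.0
print(np.exp(ln_a) * q ** b, predict(q))   # naive vs bias-corrected
```

    The correction factor exceeds 1 whenever the log-space residuals have non-zero variance, which is exactly the bias the abstract's "bias-corrected, transformed-linear" model addresses.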

  20. Robust and efficient pharmacokinetic parameter non-linear least squares estimation for dynamic contrast enhanced MRI of the prostate.

    PubMed

    Kargar, Soudabeh; Borisch, Eric A; Froemming, Adam T; Kawashima, Akira; Mynderse, Lance A; Stinson, Eric G; Trzasko, Joshua D; Riederer, Stephen J

    2018-05-01

    To describe an efficient numerical optimization technique using non-linear least squares to estimate perfusion parameters for the Tofts and extended Tofts models from dynamic contrast enhanced (DCE) MRI data, and to apply the technique to prostate cancer. Parameters were estimated by fitting the two Tofts-based perfusion models to the acquired data via non-linear least squares. We apply Variable Projection (VP) to convert the fitting problem from a multi-dimensional search to a one-dimensional line search, improving computational efficiency and robustness. Using simulation and DCE-MRI studies in twenty patients with suspected prostate cancer, the VP-based solver was compared against the traditional Levenberg-Marquardt (LM) strategy for accuracy, noise amplification, robustness of convergence, and computation time. The simulation demonstrated that VP and LM were both accurate, in that the medians closely matched the assumed values across typical signal-to-noise ratio (SNR) levels for both Tofts models, and the two showed similar noise sensitivity. Studies using the patient data showed that the VP method reliably converged and matched results from LM, with approximately 3× and 2× reductions in computation time for the standard (two-parameter) and extended (three-parameter) Tofts models. While LM failed to converge in 14% of the patient data, VP converged in the ideal 100%. The VP-based method for non-linear least squares estimation of perfusion parameters for prostate MRI is equivalent to the LM-based method in accuracy and robustness to noise, while being more reliably (100%) convergent and computationally about 3× (TM) and 2× (ETM) faster. Copyright © 2017 Elsevier Inc. All rights reserved.
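
    Variable Projection for the standard Tofts model can be sketched compactly: for fixed kep the model Ct(t) = Ktrans · (Cp * exp(−kep·t)) is linear in Ktrans, so the fit reduces to a 1-D search over kep with Ktrans eliminated analytically at each step. The arterial input function and sampling below are invented for illustration, not the study's protocol:

```python
import numpy as np
from scipy.optimize import minimize_scalar

dt = 1.0 / 60.0                               # 1 s sampling, in minutes
t = np.arange(0, 5, dt)
# Hypothetical gamma-variate-like arterial input function (AIF).
Cp = 5.0 * (t / 0.5) * np.exp(1 - t / 0.5)

def basis(kep):
    """Discrete approximation of the convolution Cp * exp(-kep t)."""
    return np.convolve(Cp, np.exp(-kep * t))[: t.size] * dt

def vp_fit(Ct):
    """Variable Projection: 1-D bounded search over kep; Ktrans is
    obtained by an analytic linear least-squares solve at each step."""
    def proj_residual(kep):
        b = basis(kep)
        ktrans = (b @ Ct) / (b @ b)
        return np.sum((Ct - ktrans * b) ** 2)
    res = minimize_scalar(proj_residual, bounds=(0.01, 5.0), method="bounded")
    kep = res.x
    b = basis(kep)
    return (b @ Ct) / (b @ b), kep            # Ktrans, kep

rng = np.random.default_rng(10)
ktrans_true, kep_true = 0.25, 0.6
Ct = ktrans_true * basis(kep_true) + 0.01 * rng.normal(size=t.size)
ktrans_est, kep_est = vp_fit(Ct)
print(ktrans_est, kep_est)
```

    Collapsing the search to one dimension is what makes the VP solver both faster and essentially immune to the convergence failures reported for LM above.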

  1. A pocket-sized metabolic analyzer for assessment of resting energy expenditure.

    PubMed

    Zhao, Di; Xian, Xiaojun; Terrera, Mirna; Krishnan, Ranganath; Miller, Dylan; Bridgeman, Devon; Tao, Kevin; Zhang, Lihua; Tsow, Francis; Forzani, Erica S; Tao, Nongjian

    2014-04-01

    The assessment of metabolic parameters related to energy expenditure has a proven value for weight management; however these measurements remain too difficult and costly for monitoring individuals at home. The objective of this study is to evaluate the accuracy of a new pocket-sized metabolic analyzer device for assessing energy expenditure at rest (REE) and during sedentary activities (EE). The new device performs indirect calorimetry by measuring an individual's oxygen consumption (VO2) and carbon dioxide production (VCO2) rates, which allows the determination of resting- and sedentary activity-related energy expenditure. VO2 and VCO2 values of 17 volunteer adult subjects were measured during resting and sedentary activities in order to compare the metabolic analyzer with the Douglas bag method. The Douglas bag method is considered the Gold Standard method for indirect calorimetry. Metabolic parameters of VO2, VCO2, and energy expenditure were compared using linear regression analysis, paired t-tests, and Bland-Altman plots. Linear regression analysis of measured VO2 and VCO2 values, as well as calculated energy expenditure assessed with the new analyzer and Douglas bag method, had the following linear regression parameters (linear regression slope LRS0, and R-squared coefficient, r(2)) with p = 0: LRS0 (SD) = 1.00 (0.01), r(2) = 0.9933 for VO2; LRS0 (SD) = 1.00 (0.01), r(2) = 0.9929 for VCO2; and LRS0 (SD) = 1.00 (0.01), r(2) = 0.9942 for energy expenditure. In addition, results from paired t-tests did not show statistical significant difference between the methods with a significance level of α = 0.05 for VO2, VCO2, REE, and EE. Furthermore, the Bland-Altman plot for REE showed good agreement between methods with 100% of the results within ±2SD, which was equivalent to ≤10% error. The findings demonstrate that the new pocket-sized metabolic analyzer device is accurate for determining VO2, VCO2, and energy expenditure. Copyright © 2013 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
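
    The Bland-Altman agreement analysis used above reduces to the mean difference (bias) and its 95% limits of agreement. A minimal sketch with invented paired REE readings (the real study data are not reproduced here):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics between two measurement
    methods: mean bias and 95% limits of agreement (bias ± 1.96 SD
    of the paired differences)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired REE measurements (kcal/day): device vs Douglas bag.
rng = np.random.default_rng(12)
ref = rng.uniform(1200, 2200, 17)
new = ref + rng.normal(5.0, 40.0, 17)        # small bias, modest scatter
bias, (lo, hi) = bland_altman(new, ref)
print(bias, lo, hi)
```

    Plotting the differences against the pairwise means, with horizontal lines at the bias and the two limits, gives the familiar Bland-Altman plot.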

  2. A simple linear regression method for quantitative trait loci linkage analysis with censored observations.

    PubMed

    Anderson, Carl A; McRae, Allan F; Visscher, Peter M

    2006-07-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.

  3. Precision of dehydroascorbic acid quantitation with the use of the subtraction method--validation of HPLC-DAD method for determination of total vitamin C in food.

    PubMed

    Mazurek, Artur; Jamroz, Jerzy

    2015-04-15

    In food analysis, a method for the determination of vitamin C should enable measurement of the total content of ascorbic acid (AA) and dehydroascorbic acid (DHAA), because both chemical forms exhibit biological activity. The aim of this work was to confirm the applicability of the HPLC-DAD method for analysis of the total content of vitamin C (TC) and of ascorbic acid in various types of food by determination of validation parameters such as selectivity, precision, accuracy, linearity and the limits of detection and quantitation. The results showed that the method applied for determination of TC and AA was selective, linear and precise. The precision of DHAA determination by the subtraction method was also evaluated. It was revealed that the results of DHAA determination obtained by the subtraction method were not precise, which follows directly from the assumption of this method and the principles of uncertainty propagation. The proposed chromatographic method can be recommended for routine determinations of total vitamin C in various foods. Copyright © 2014 Elsevier Ltd. All rights reserved.
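
    The propagation argument behind the imprecision of the subtraction method is short enough to show numerically: for DHAA = TC − AA, the absolute uncertainties add in quadrature, so a small difference inherits a large relative uncertainty. The concentrations and standard deviations below are hypothetical:

```python
import numpy as np

# Subtraction method: DHAA = TC - AA. Even with precise TC and AA
# assays, sd(DHAA) = sqrt(sd_TC**2 + sd_AA**2) is large *relative*
# to a small DHAA difference.
tc, sd_tc = 100.0, 2.0          # hypothetical totals (mg/100 g), 2% RSD
aa, sd_aa = 92.0, 2.0

dhaa = tc - aa
sd_dhaa = np.sqrt(sd_tc**2 + sd_aa**2)
rsd_dhaa = 100.0 * sd_dhaa / dhaa

print(dhaa, sd_dhaa, rsd_dhaa)  # 8.0, ~2.83, ~35% relative uncertainty
```

    Two individually precise assays (2% RSD each) thus yield a DHAA estimate with roughly 35% relative uncertainty, which is the effect reported above.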

  4. A Profilometry-Based Dentifrice Abrasion Method for V8 Brushing Machines Part III: Multi-Laboratory Validation Testing of RDA-PE.

    PubMed

    Schneiderman, Eva; Colón, Ellen L; White, Donald J; Schemehorn, Bruce; Ganovsky, Tara; Haider, Amir; Garcia-Godoy, Franklin; Morrow, Brian R; Srimaneepong, Viritpon; Chumprasert, Sujin

    2017-09-01

We have previously reported on progress toward the refinement of profilometry-based abrasivity testing of dentifrices using a V8 brushing machine and tactile or optical measurement of dentin wear. The general application of this technique may be advanced by demonstration of successful inter-laboratory confirmation of the method. The objective of this study was to explore the capability of different laboratories in the assessment of dentifrice abrasivity using a profilometry-based evaluation technique developed in our Mason laboratories. In addition, we wanted to assess the interchangeability of human and bovine specimens. Participating laboratories were instructed in methods associated with Radioactive Dentin Abrasivity-Profilometry Equivalent (RDA-PE) evaluation, including site visits to discuss critical elements of specimen preparation, masking, profilometry scanning, and procedures. Laboratories were likewise instructed on the requirement for demonstration of proportional linearity as a key condition for validation of the technique. Laboratories were provided with four test dentifrices, blinded for testing, spanning a broad range of abrasivity. In each laboratory, a calibration curve was developed for varying V8 brushing strokes (0, 4,000, and 10,000 strokes) with the ISO abrasive standard. Proportional linearity was determined as the ratio of standard abrasion mean depths created with 4,000 and 10,000 strokes (a 2.5-fold difference). The criterion for successful calibration within the method (established in our Mason laboratory) was set at proportional linearity = 2.5 ± 0.3. RDA-PE was compared to Radiotracer RDA for the four test dentifrices, with the latter obtained by averaging results from three independent Radiotracer RDA sites. Individual laboratories and their results were compared by 1) proportional linearity and 2) acquired RDA-PE values for test pastes. Five sites participated in the study. One site did not pass the proportional linearity objectives. Data for this site are not reported at the request of the researchers. Three of the remaining four sites tested human dentin, and all three met the proportional linearity objectives for human dentin. Three of the four sites tested bovine dentin, and all three met the proportional linearity objectives for bovine dentin. RDA-PE values for the test dentifrices were similar between sites. All four sites that met the proportional linearity requirement successfully identified the dentifrice formulated above the industry-standard 250 RDA (as RDA-PE). The profilometry method showed at least as good reproducibility and differentiation as Radiotracer assessments. It was demonstrated that human and bovine specimens could be used interchangeably. The standardized RDA-PE method was reproduced in multiple laboratories in this inter-laboratory study. The evidence supports this method as a suitable technique for ISO method 11609 Annex B.
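    The pass/fail calibration criterion described above is simple arithmetic; a minimal sketch in Python (function and variable names are illustrative, not from the study):

    ```python
    import numpy as np

    def proportional_linearity(depths_4k, depths_10k, target=2.5, tol=0.3):
        """Ratio of mean standard-abrasive depths at 10,000 vs 4,000 strokes.

        A laboratory passes calibration when the ratio falls within
        target +/- tol (2.5 +/- 0.3 in the study's Mason-laboratory criterion).
        """
        ratio = np.mean(depths_10k) / np.mean(depths_4k)
        return ratio, abs(ratio - target) <= tol
    ```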

  5. Probability distribution functions for unit hydrographs with optimization using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; H. Kashani, Mahsa; Atre, Atul Arvind; Asadi, Hakimeh

    2017-05-01

    A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely the two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull distributions, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method. The results show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability in predicting both the peak flow and the time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
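    As a rough illustration of fitting a pdf-shaped unit hydrograph by nonlinear least squares, the sketch below fits a two-parameter gamma pdf by exhaustive grid search, a simplistic stand-in for the Mathematica and genetic-algorithm optimizers used in the study (all names and grids are invented for the example):

    ```python
    import math
    import numpy as np

    def gamma_uh(t, k, theta):
        # Two-parameter gamma pdf used as a dimensionless unit hydrograph shape
        return t**(k - 1) * np.exp(-t / theta) / (math.gamma(k) * theta**k)

    def fit_gamma_uh(t, u, k_grid, theta_grid):
        # Nonlinear least squares by brute-force search over a parameter grid
        best = None
        for k in k_grid:
            for theta in theta_grid:
                sse = np.sum((u - gamma_uh(t, k, theta))**2)
                if best is None or sse < best[0]:
                    best = (sse, k, theta)
        return best[1], best[2]
    ```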

  6. Missing defects? A comparison of microscopic and macroscopic approaches to identifying linear enamel hypoplasia.

    PubMed

    Hassett, Brenna R

    2014-03-01

    Linear enamel hypoplasia (LEH), the presence of linear defects of dental enamel formed during periods of growth disruption, is frequently analyzed in physical anthropology as evidence for childhood health in the past. However, a wide variety of methods for identifying and interpreting these defects in archaeological remains exists, preventing easy cross-comparison of results from disparate studies. This article compares a standard approach to identifying LEH using the naked eye to the evidence of growth disruption observed microscopically from the enamel surface. This comparison demonstrates that what is interpreted as evidence of growth disruption microscopically is not uniformly identified with the naked eye, and provides a reference for the level of consistency between the number and timing of defects identified using microscopic versus macroscopic approaches. This is done for different tooth types using a large sample of unworn permanent teeth drawn from several post-medieval London burial assemblages. The resulting schematic diagrams showing where macroscopic methods achieve more or less similar results to microscopic methods are presented here and clearly demonstrate that "naked-eye" methods of identifying growth disruptions do not identify LEH as often as microscopic methods in areas where perikymata are more densely packed. Copyright © 2013 Wiley Periodicals, Inc.

  7. Image quality assessment using deep convolutional networks

    NASA Astrophysics Data System (ADS)

    Li, Yezhou; Ye, Xiang; Li, Yong

    2017-12-01

    This paper proposes a method of accurately assessing image quality without a reference image by using a deep convolutional neural network. Existing training-based methods usually utilize a compact set of linear filters for learning features of images captured by different sensors to assess their quality. These methods may not be able to learn the semantic features that are intimately related to the features used in human subjective assessment. Observing this drawback, this work proposes training a deep convolutional neural network (CNN) with labelled images for image quality assessment. The ReLU in the CNN allows non-linear transformations for extracting high-level image features, providing a more reliable assessment of image quality than linear filters. To enable the neural network to take images of arbitrary size as input, spatial pyramid pooling (SPP) is introduced to connect the top convolutional layer and the fully-connected layer. In addition, the SPP makes the CNN robust to object deformations to a certain extent. The proposed method takes an image as input, carries out an end-to-end learning process, and outputs the quality of the image. It is tested on public datasets. Experimental results show that it outperforms existing methods by a large margin and can accurately assess the quality of images of varying sizes taken by different sensors.
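    The role of the SPP layer, turning a feature map of any size into a fixed-length vector, can be sketched in plain NumPy (a single-channel toy version; real CNN implementations pool per channel inside the network):

    ```python
    import numpy as np

    def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
        """Max-pool an H x W feature map into a fixed-length vector:
        each pyramid level n contributes n*n bins, so the output length
        (1 + 4 + 16 = 21 here) is independent of the input size."""
        H, W = fmap.shape
        out = []
        for n in levels:
            hs = np.linspace(0, H, n + 1).astype(int)
            ws = np.linspace(0, W, n + 1).astype(int)
            for i in range(n):
                for j in range(n):
                    out.append(fmap[hs[i]:hs[i + 1], ws[j]:ws[j + 1]].max())
        return np.array(out)
    ```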

  8. Newton's method: A link between continuous and discrete solutions of nonlinear problems

    NASA Technical Reports Server (NTRS)

    Thurston, G. A.

    1980-01-01

    Newton's method for nonlinear mechanics problems replaces the governing nonlinear equations by an iterative sequence of linear equations. When the linear equations are linear differential equations, the equations are usually solved by numerical methods. The iterative sequence in Newton's method can exhibit poor convergence properties when the nonlinear problem has multiple solutions for a fixed set of parameters, unless the iterative sequences are aimed at solving for each solution separately. The theory of the linear differential operators is often a better guide for solution strategies in applying Newton's method than the theory of linear algebra associated with the numerical analogs of the differential operators. In fact, the theory for the differential operators can suggest the choice of numerical linear operators. In this paper the method of variation of parameters from the theory of linear ordinary differential equations is examined in detail in the context of Newton's method to demonstrate how it might be used as a guide for numerical solutions.
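    The linearization at the heart of Newton's method, and the way different starting points select among multiple solutions, can be shown with a scalar sketch (illustrative only; the paper's setting is differential operators, not scalar equations):

    ```python
    def newton(f, fprime, x0, tol=1e-12, max_iter=50):
        """Replace f(x) = 0 by the iterative sequence of linear problems
        f'(x_n) * dx = -f(x_n), x_{n+1} = x_n + dx."""
        x = x0
        for _ in range(max_iter):
            dx = -f(x) / fprime(x)
            x += dx
            if abs(dx) < tol:
                return x
        raise RuntimeError("no convergence")

    # f(x) = x**2 - 2 has two solutions for one set of parameters; the
    # iterates converge to the root the starting guess is "aimed" at.
    ```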

  9. Convergence Results on Iteration Algorithms to Linear Systems

    PubMed Central

    Wang, Zhuande; Yang, Chuansheng; Yuan, Yubo

    2014-01-01

    In order to solve large-scale linear systems, backward and Jacobi iteration algorithms are employed. Convergence is the most important issue. In this paper, a unified backward iterative matrix is proposed, and it is shown that some well-known iterative algorithms can be deduced from it. The most important contribution is that the convergence results are proved. Firstly, the spectral radius of the Jacobi iterative matrix is positive and that of the backward iterative matrix is strongly positive (larger than a positive constant). Secondly, the two iterations have the same convergence behavior (they converge or diverge simultaneously). Finally, numerical experiments show that the proposed algorithms are correct and have the merits of backward methods. PMID:24991640
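    A minimal NumPy sketch of the Jacobi iteration together with the spectral-radius check that governs convergence (this is textbook Jacobi, not the paper's unified backward iterative matrix):

    ```python
    import numpy as np

    def jacobi(A, b, tol=1e-10, max_iter=1000):
        """Jacobi iteration x_{k+1} = D^{-1} (b - R x_k) with A = D + R.
        Converges iff the spectral radius of the iteration matrix
        -D^{-1} R is below one."""
        D = np.diag(np.diag(A))
        R = A - D
        rho = max(abs(np.linalg.eigvals(-np.linalg.solve(D, R))))
        x = np.zeros_like(b, dtype=float)
        for _ in range(max_iter):
            x_new = np.linalg.solve(D, b - R @ x)
            if np.linalg.norm(x_new - x) < tol:
                return x_new, rho
            x = x_new
        return x, rho
    ```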

  10. A comparison of methods to handle skew distributed cost variables in the analysis of the resource consumption in schizophrenia treatment.

    PubMed

    Kilian, Reinhold; Matschinger, Herbert; Löeffler, Walter; Roick, Christiane; Angermeyer, Matthias C

    2002-03-01

    Transformation of the dependent cost variable is often used to solve the problems of heteroscedasticity and skewness in linear ordinary least squares regression of health service cost data. However, transformation may cause difficulties in the interpretation of regression coefficients and the retransformation of predicted values. This study compares the advantages and disadvantages of different methods of estimating regression-based cost functions using data on the annual costs of schizophrenia treatment. Annual costs of psychiatric service use and clinical and socio-demographic characteristics of the patients were assessed for a sample of 254 patients with a diagnosis of schizophrenia (ICD-10 F 20.0) living in Leipzig. The clinical characteristics of the participants were assessed by means of the BPRS 4.0, the GAF, and the CAN for service needs. Quality of life was measured by the WHOQOL-BREF. A linear OLS regression model with non-parametric standard errors, a log-transformed OLS model, and a generalized linear model (GLM) with a log link and a gamma distribution were used to estimate service costs. For the estimation of robust non-parametric standard errors, the variance estimator of White and a bootstrap estimator based on 2000 replications were employed. Models were evaluated by comparison of the R2 and the root mean squared error (RMSE). The RMSE of the log-transformed OLS model was computed with three different methods of bias correction. The 95% confidence intervals for the differences between the RMSE values were computed by means of bootstrapping. A split-sample cross-validation procedure was used to forecast the costs for one half of the sample on the basis of a regression equation computed for the other half. All three methods showed significant positive influences of psychiatric symptoms and met psychiatric service needs on service costs. Only the log-transformed OLS model showed a significant negative impact of age, and only the GLM showed significant negative influences of employment status and partnership on costs. All three models provided an R2 of about 0.31. The residuals of the linear OLS model revealed significant deviations from normality and homoscedasticity. The residuals of the log-transformed model were normally distributed but still heteroscedastic. The linear OLS model provided the lowest prediction error and the best forecast of the dependent cost variable. The log-transformed model provided the lowest RMSE when the heteroscedastic bias correction was used. The RMSE of the GLM with a log link and a gamma distribution was higher than those of the linear OLS model and the log-transformed OLS model. The difference between the RMSE of the linear OLS model and that of the log-transformed OLS model without bias correction was significant at the 95% level. In the cross-validation procedure, the linear OLS model provided the lowest RMSE, followed by the log-transformed OLS model with heteroscedastic bias correction. The GLM again showed the weakest model fit. None of the differences between the RMSE values resulting from the cross-validation procedure were found to be significant. The comparison of the fit indices of the different regression models revealed that the linear OLS model provided a better fit than the log-transformed model and the GLM, but the differences between the models' RMSE values were not significant. Due to the small number of cases in the study, the lack of significance does not sufficiently prove that the differences between the RMSE values for the different models are zero, and the superiority of the linear OLS model cannot be generalized. The lack of significant differences among the alternative estimators may reflect a sample size inadequate to detect important differences among the estimators employed. Further studies with larger case numbers are necessary to confirm the results. Specification of an adequate regression model requires a careful examination of the characteristics of the data. Estimation of standard errors and confidence intervals by non-parametric methods that are robust against deviations from normality and homoscedasticity of the residuals is a suitable alternative to transformation of the skewed dependent variable.
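    The retransformation difficulty mentioned for the log-transformed OLS model can be illustrated with Duan's smearing estimator, one common bias correction when predictions are mapped back from the log scale (a generic sketch, not the specific corrections used in the study):

    ```python
    import numpy as np

    def fit_log_ols(X, y):
        """OLS on log(y). Naive retransformation exp(X @ beta) is biased
        downward; Duan's smearing factor, the mean of exp(residuals),
        corrects the bias without assuming normal errors."""
        Xd = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(Xd, np.log(y), rcond=None)
        smear = np.mean(np.exp(np.log(y) - Xd @ beta))
        return beta, smear

    def predict_cost(beta, smear, X):
        Xd = np.column_stack([np.ones(len(X)), X])
        return smear * np.exp(Xd @ beta)
    ```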

  11. Finite-time H∞ control for linear continuous system with norm-bounded disturbance

    NASA Astrophysics Data System (ADS)

    Meng, Qingyi; Shen, Yanjun

    2009-04-01

    In this paper, the definition of finite-time H∞ control is presented. The system under consideration is subject to a time-varying norm-bounded exogenous disturbance. The main aim of this paper is to design a state feedback controller which ensures that the closed-loop system is finite-time bounded (FTB) and reduces the effect of the disturbance input on the controlled output to a prescribed level. A sufficient condition is presented for the solvability of this problem, which can be reduced to a feasibility problem involving linear matrix inequalities (LMIs). A detailed solving method is proposed for the restricted linear matrix inequalities. Finally, examples are given to show the validity of the methodology.

  12. Incentives for knowledge sharing: impact of organisational culture and information technology

    NASA Astrophysics Data System (ADS)

    Lyu, Hongbo; Zhang, Zuopeng Justin

    2017-10-01

    This research presents and examines an analytical model of knowledge management in which organisational culture dynamically improves with knowledge-sharing and learning activities within organisations. We investigate the effects of organisational incentives and the level of information technology on the motivation of knowledge sharing. We derive a linear incentive reward structure for knowledge sharing under both homogeneous and heterogeneous conditions. In addition, we show how the organisational culture and the optimum linear sharing reward change with several crucial factors, and summarise three sets of methods (strong IT support, congruent organisational culture, and effective employee assessment) to complement the best linear incentive. Our research provides valuable insights for practitioners in terms of implementing knowledge-management initiatives.

  13. Learning curve of single port laparoscopic cholecystectomy determined using the non-linear ordinary least squares method based on a non-linear regression model: An analysis of 150 consecutive patients.

    PubMed

    Han, Hyung Joon; Choi, Sae Byeol; Park, Man Sik; Lee, Jin Suk; Kim, Wan Bae; Song, Tae Jin; Choi, Sang Yong

    2011-07-01

    Single port laparoscopic surgery has come to the forefront of minimally invasive surgery. For those familiar with conventional techniques, however, this type of operation demands a different type of eye/hand coordination and involves unfamiliar working instruments. Herein, the authors describe the learning curve and the clinical outcomes of single port laparoscopic cholecystectomy for 150 consecutive patients with benign gallbladder disease. All patients underwent single port laparoscopic cholecystectomy using a homemade glove port by one of five operators with different levels of experience in laparoscopic surgery. The learning curve for each operator was fitted using the non-linear ordinary least squares method based on a non-linear regression model. Mean operating time was 77.6 ± 28.5 min. Fourteen patients (6.0%) were converted to conventional laparoscopic cholecystectomy. Complications occurred in 15 patients (10.0%), as follows: bile duct injury (n = 2), surgical site infection (n = 8), seroma (n = 2), and wound pain (n = 3). One operator achieved a learning curve plateau at 61.4 min per procedure after 8.5 cases, an improvement of 95.3 min over his initial operation time. Younger surgeons showed significant decreases in mean operation time and achieved stable mean operation times; in particular, their operation times decreased significantly after 20 cases. Experienced laparoscopic surgeons can safely perform single port laparoscopic cholecystectomy using conventional or angled laparoscopic instruments. The present study shows that an operator can overcome the single port laparoscopic cholecystectomy learning curve in about eight cases.
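    The shape of such a learning curve, an initial operating time decaying toward a plateau, can be fitted with a small nonlinear least-squares sketch; the exponential model and the grid-plus-linear-solve trick below are illustrative assumptions, not the regression model actually used in the paper:

    ```python
    import numpy as np

    def fit_learning_curve(n, t, c_grid):
        """Fit T(n) = plateau + drop * exp(-n / c). The model is nonlinear
        only in c, so search c over a grid and solve the remaining linear
        least-squares problem exactly for (plateau, drop)."""
        n = np.asarray(n, dtype=float)
        best = None
        for c in c_grid:
            X = np.column_stack([np.ones(len(n)), np.exp(-n / c)])
            coef, *_ = np.linalg.lstsq(X, t, rcond=None)
            sse = np.sum((t - X @ coef) ** 2)
            if best is None or sse < best[0]:
                best = (sse, coef[0], coef[1], c)
        return best[1], best[2], best[3]  # plateau, drop, time constant
    ```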

  14. High-contrast imaging with an arbitrary aperture: active correction of aperture discontinuities

    NASA Astrophysics Data System (ADS)

    Pueyo, Laurent; Norman, Colin; Soummer, Rémi; Perrin, Marshall; N'Diaye, Mamadou; Choquet, Elodie

    2013-09-01

    We present a new method to achieve high-contrast images using segmented and/or on-axis telescopes. Our approach relies on using two sequential Deformable Mirrors to compensate for the large amplitude excursions in the telescope aperture due to secondary support structures and/or segment gaps. In this configuration the parameter landscape of Deformable Mirror Surfaces that yield high contrast Point Spread Functions is not linear, and non-linear methods are needed to find the true minimum in the optimization topology. We solve the highly non-linear Monge-Ampere equation that is the fundamental equation describing the physics of phase induced amplitude modulation. We determine the optimum configuration for our two sequential Deformable Mirror system and show that high-throughput and high contrast solutions can be achieved using realistic surface deformations that are accessible using existing technologies. We name this process Active Compensation of Aperture Discontinuities (ACAD). We show that for geometries similar to JWST, ACAD can attain at least 10-7 in contrast and an order of magnitude higher for future Extremely Large Telescopes, even when the pupil features a missing segment" . We show that the converging non-linear mappings resulting from our Deformable Mirror shapes actually damp near-field diffraction artifacts in the vicinity of the discontinuities. Thus ACAD actually lowers the chromatic ringing due to diffraction by segment gaps and strut's while not amplifying the diffraction at the aperture edges beyond the Fresnel regime and illustrate the broadband properties of ACAD in the case of the pupil configuration corresponding to the Astrophysics Focused Telescope Assets. Since details about these telescopes are not yet available to the broader astronomical community, our test case is based on a geometry mimicking the actual one, to the best of our knowledge.

  15. Compressible or incompressible blend of interacting monodisperse star and linear polymers near a surface.

    PubMed

    Batman, Richard; Gujrati, P D

    2008-03-28

    We consider a lattice model of a mixture of repulsive, attractive, or neutral monodisperse star (species A) and linear (species B) polymers with a third monomeric species C, which may represent free volume. The mixture is next to a hard, infinite plate whose interactions with A and C can be attractive, repulsive, or neutral. These two interactions are the only parameters necessary to specify the effect of the surface on all three components. We numerically study monomer density profiles using the method of Gujrati and Chhajer that has already been previously applied to study polydisperse and monodisperse linear-linear blends next to surfaces. The resulting density profiles always show an enrichment of linear polymers in the immediate vicinity of the surface due to entropic repulsion of the star core. However, the integrated surface excess of star monomers is sometimes positive, indicating an overall enrichment of stars. This excess increases with the number of star arms only up to a certain critical number and decreases thereafter. The critical arm number increases with compressibility (bulk concentration of C). The method of Gujrati and Chhajer is computationally ultrafast and can be carried out on a personal computer (PC), even in the incompressible case, when simulations are unfeasible. Calculations of density profiles usually take less than 20 min on PCs.

  16. Experimental Study on Rebar Corrosion Using the Galvanic Sensor Combined with the Electronic Resistance Technique

    PubMed Central

    Xu, Yunze; Li, Kaiqiang; Liu, Liang; Yang, Lujia; Wang, Xiaona; Huang, Yi

    2016-01-01

    In this paper, a new kind of carbon steel (CS) and stainless steel (SS) galvanic sensor system was developed for the study of rebar corrosion in different pore solution conditions. Through the special design of the CS and SS electronic coupons, the electronic resistance (ER) method and zero resistance ammeter (ZRA) technique were used simultaneously for the measurement of both the galvanic current and the corrosion depth. The corrosion processes in different solution conditions were also studied by linear polarization resistance (LPR) and the measurement of polarization curves. The test results show that the galvanic current noise can provide detailed information about the corrosion processes. When localized corrosion occurs, the corrosion rate measured by the ER method is lower than the real corrosion rate. However, the value measured by the LPR method is higher than the real corrosion rate. The galvanic current and the corrosion current measured by the LPR method showed a linear correlation in chloride-containing saturated Ca(OH)2 solution. The relationship between the corrosion current differences measured by the CS electronic coupons and the galvanic current between the CS and SS electronic coupons can also be used to evaluate the localized corrosion in reinforced concrete. PMID:27618054

  17. Experimental Study on Rebar Corrosion Using the Galvanic Sensor Combined with the Electronic Resistance Technique.

    PubMed

    Xu, Yunze; Li, Kaiqiang; Liu, Liang; Yang, Lujia; Wang, Xiaona; Huang, Yi

    2016-09-08

    In this paper, a new kind of carbon steel (CS) and stainless steel (SS) galvanic sensor system was developed for the study of rebar corrosion in different pore solution conditions. Through the special design of the CS and SS electronic coupons, the electronic resistance (ER) method and zero resistance ammeter (ZRA) technique were used simultaneously for the measurement of both the galvanic current and the corrosion depth. The corrosion processes in different solution conditions were also studied by linear polarization resistance (LPR) and the measurement of polarization curves. The test results show that the galvanic current noise can provide detailed information about the corrosion processes. When localized corrosion occurs, the corrosion rate measured by the ER method is lower than the real corrosion rate. However, the value measured by the LPR method is higher than the real corrosion rate. The galvanic current and the corrosion current measured by the LPR method showed a linear correlation in chloride-containing saturated Ca(OH)₂ solution. The relationship between the corrosion current differences measured by the CS electronic coupons and the galvanic current between the CS and SS electronic coupons can also be used to evaluate the localized corrosion in reinforced concrete.
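    The LPR step rests on the Stern-Geary relation, which converts a measured polarization resistance into a corrosion current density; a sketch with illustrative Tafel slopes (the study's actual constants are not given in the abstract):

    ```python
    def lpr_corrosion_current(Rp, ba=0.12, bc=0.12):
        """Stern-Geary: i_corr = B / Rp, with B = ba*bc / (2.303*(ba + bc)).
        Rp is the polarization resistance; ba and bc are the anodic and
        cathodic Tafel slopes (V/decade), here purely illustrative values."""
        B = ba * bc / (2.303 * (ba + bc))
        return B / Rp
    ```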

  18. Ultrasonic guided wave sensing characteristics of large area thin piezo coating

    NASA Astrophysics Data System (ADS)

    Rathod, V. T.; Jeyaseelan, A. Antony; Dutta, Soma; Mahapatra, D. Roy

    2017-10-01

    This paper reports on the characterization method and performance enhancement of thin piezo coatings for ultrasonic guided wave sensing applications. We deposited the coatings by an in situ slurry coating method and studied their guided wave sensing properties on a one-dimensional metallic beam as a substrate waveguide. The developed piezo coatings show good sensitivity to the longitudinal and flexural modes of guided waves. The sensing voltage due to guided waves at various ultrasonic frequencies shows a linear dependence on the thickness of the coating. The coatings also exhibit a linear sensor output voltage with respect to the induced dynamic strain magnitude. The diameter/size of the piezo coatings strongly influences the voltage response in relation to the wavelength. The proposed method uses a characterization set-up involving coated sensors, reference transducers, and an analytical model to estimate the piezoelectric coefficient of the piezo coating. The method accurately eliminates the size-dependent effect on the piezo property and gives further insight for designing better sensors/filters with respect to the frequency/wavelength of interest. The developed coatings will have interesting applications in structural health monitoring (SHM) and the Internet of Things (IoT).

  19. Comparative analysis of linear and non-linear method of estimating the sorption isotherm parameters for malachite green onto activated carbon.

    PubMed

    Kumar, K Vasanth

    2006-08-21

    The experimental equilibrium data of malachite green onto activated carbon were fitted to the Freundlich, Langmuir, and Redlich-Peterson isotherms by the linear and non-linear methods. A comparison between the linear and non-linear methods of estimating the isotherm parameters was discussed. The four different linearized forms of the Langmuir isotherm were also discussed. The results confirmed that the non-linear method is a better way to obtain the isotherm parameters. The best-fitting isotherms were the Langmuir and Redlich-Peterson isotherms; Redlich-Peterson is a special case of Langmuir when the Redlich-Peterson isotherm constant g is unity.
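    The contrast between a linearized fit and a direct nonlinear fit of the Langmuir isotherm can be sketched as follows; "Langmuir-1" here denotes the common Ce/qe-versus-Ce linearization (parameter values in the example are invented):

    ```python
    import numpy as np

    def langmuir(Ce, qmax, KL):
        # Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)
        return qmax * KL * Ce / (1.0 + KL * Ce)

    def fit_langmuir_linear1(Ce, qe):
        """Langmuir-1 linearization: Ce/qe = Ce/qmax + 1/(KL*qmax).
        Regress Ce/qe on Ce, then back out (qmax, KL). Linearizing
        distorts the error structure, which is why direct nonlinear
        minimization of the residuals on qe is preferred above."""
        slope, intercept = np.polyfit(Ce, Ce / qe, 1)
        return 1.0 / slope, slope / intercept  # qmax, KL
    ```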

  20. A method for generating reduced-order combustion mechanisms that satisfy the differential entropy inequality

    NASA Astrophysics Data System (ADS)

    Ream, Allen E.; Slattery, John C.; Cizmas, Paul G. A.

    2018-04-01

    This paper presents a new method for determining the Arrhenius parameters of a reduced chemical mechanism such that it satisfies the second law of thermodynamics. The strategy is to approximate the progress of each reaction in the reduced mechanism from the species production rates of a detailed mechanism by using a linear least squares method. A series of non-linear least squares curve fittings are then carried out to find the optimal Arrhenius parameters for each reaction. At this step, the molar rates of production are written such that they comply with a theorem that provides the sufficient conditions for satisfying the second law of thermodynamics. This methodology was used to modify the Arrhenius parameters for the Westbrook and Dryer two-step mechanism and the Peters and Williams three-step mechanism for methane combustion. Both optimized mechanisms showed good agreement with the detailed mechanism for species mole fractions and production rates of most major species. Both optimized mechanisms showed significant improvement over previous mechanisms in minor species production rate prediction. Both optimized mechanisms produced no violations of the second law of thermodynamics.
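    The first, linear stage of such a fit, estimating Arrhenius parameters from ln k = ln A - Ea/(R T), can be sketched directly (the subsequent entropy-constrained nonlinear refinement described in the paper is not reproduced here):

    ```python
    import numpy as np

    R_GAS = 8.314  # J / (mol K)

    def fit_arrhenius(T, k):
        """Linear least squares on the Arrhenius form:
        ln k = ln A - (Ea / R) * (1 / T), so regressing ln k on 1/T
        gives slope = -Ea/R and intercept = ln A."""
        slope, intercept = np.polyfit(1.0 / np.asarray(T, float), np.log(k), 1)
        return np.exp(intercept), -slope * R_GAS  # pre-exponential A, Ea
    ```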

  1. Genomic prediction based on data from three layer lines using non-linear regression models.

    PubMed

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best an accuracy similar to that of linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. 
This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
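    One simple non-linear kernel method of the kind compared in the study is kernel ridge regression with an RBF kernel; the toy data and hyperparameters below are invented for illustration:

    ```python
    import numpy as np

    def rbf_kernel(X1, X2, gamma):
        # Gaussian (RBF) kernel matrix: exp(-gamma * ||x - x'||^2)
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def krr_fit_predict(X_tr, y_tr, X_te, gamma=1.0, lam=1e-4):
        """Kernel ridge regression: solve (K + lam*I) alpha = y, then
        predict with the cross-kernel between test and training points."""
        K = rbf_kernel(X_tr, X_tr, gamma)
        alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_tr)
        return rbf_kernel(X_te, X_tr, gamma) @ alpha
    ```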

  2. Prediction uncertainty and data worth assessment for groundwater transport times in an agricultural catchment

    NASA Astrophysics Data System (ADS)

    Zell, Wesley O.; Culver, Teresa B.; Sanford, Ward E.

    2018-06-01

    Uncertainties about the age of base-flow discharge can have serious implications for the management of degraded environmental systems where subsurface pathways, and the ongoing release of pollutants that accumulated in the subsurface during past decades, dominate the water quality signal. Numerical groundwater models may be used to estimate groundwater return times and base-flow ages and thus predict the time required for stakeholders to see the results of improved agricultural management practices. However, the uncertainty inherent in the relationship between (i) the observations of atmospherically-derived tracers that are required to calibrate such models and (ii) the predictions of system age that the observations inform has not been investigated. For example, few if any studies have assessed the uncertainty of numerically-simulated system ages or evaluated the uncertainty reductions that may result from the expense of collecting additional subsurface tracer data. In this study we combine numerical flow and transport modeling of atmospherically-derived tracers with prediction uncertainty methods to accomplish four objectives. First, we show the relative importance of head, discharge, and tracer information for characterizing response times in a uniquely data rich catchment that includes 266 age-tracer measurements (SF6, CFCs, and 3H) in addition to long term monitoring of water levels and stream discharge. Second, we calculate uncertainty intervals for model-simulated base-flow ages using both linear and non-linear methods, and find that the prediction sensitivity vector used by linear first-order second-moment methods results in much larger uncertainties than non-linear Monte Carlo methods operating on the same parameter uncertainty. 
Third, by combining prediction uncertainty analysis with multiple models of the system, we show that data-worth calculations and monitoring network design are sensitive to variations in the amount of water leaving the system via stream discharge and irrigation withdrawals. Finally, we demonstrate a novel model-averaged computation of potential data worth that can account for these uncertainties in model structure.
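    The two uncertainty-propagation routes compared above, a linear first-order second-moment (FOSM) estimate built from a sensitivity vector versus nonlinear Monte Carlo sampling, can be contrasted on a toy prediction (all quantities invented):

    ```python
    import numpy as np

    def fosm_variance(grad, cov):
        """Linear FOSM: Var[f] is approximated by g^T C g, with g the
        prediction sensitivity vector and C the parameter covariance."""
        g = np.asarray(grad, float)
        return float(g @ cov @ g)

    def mc_variance(f, mean, cov, n=100_000, seed=0):
        """Nonlinear Monte Carlo: propagate parameter samples through f."""
        rng = np.random.default_rng(seed)
        samples = rng.multivariate_normal(mean, cov, size=n)
        return float(np.var([f(p) for p in samples]))
    ```

    For a nonlinear prediction such as f(p) = p², the two estimates differ by the curvature terms the linear method ignores.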

  3. A theory of fine structure image models with an application to detection and classification of dementia.

    PubMed

    O'Neill, William; Penn, Richard; Werner, Michael; Thomas, Justin

    2015-06-01

    Estimation of stochastic process models from data is a common application of time series analysis methods. Such system identification processes are often cast as hypothesis testing exercises whose intent is to estimate model parameters and test them for statistical significance. Ordinary least squares (OLS) regression and the Levenberg-Marquardt algorithm (LMA) have proven invaluable computational tools for models described by non-homogeneous, linear, stationary, ordinary differential equations. In this paper we extend stochastic model identification to linear, stationary, partial differential equations in two independent variables (2D) and show that OLS and LMA apply equally well to these systems. The method employs an original nonparametric statistic as a test for the significance of estimated parameters. We show that gray-scale and color images are special cases of 2D systems satisfying a particular autoregressive partial difference equation which estimates an analogous partial differential equation. Several applications to medical image modeling and classification illustrate the method by correctly classifying demented and normal OLS models of axial magnetic resonance brain scans according to subject Mini Mental State Exam (MMSE) scores. Comparison with 13 image classifiers from the literature indicates that our classifier is at least 14 times faster than any of them and has a classification accuracy better than all but one. Our modeling method applies to any linear, stationary, partial differential equation and is readily extended to 3D whole-organ systems. Further, in addition to being a robust image classifier, estimated image models offer insights into which parameters carry the most diagnostic image information and thereby suggest that finer divisions could be made within a class. Image models can be estimated in milliseconds, which translates to whole-organ models in seconds; such runtimes could make real-time medicine and surgery modeling possible.
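    The core identification step, fitting an autoregressive partial difference equation to an image by OLS, can be sketched on synthetic data; the coefficients, neighborhood, and image size below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthesize an image from a known first-order autoregressive partial
# difference equation, then recover the coefficients by ordinary least
# squares: a toy version of the 2D system identification idea.
a_true, b_true, c_true = 0.5, 0.4, -0.2
n = 64
u = rng.normal(size=(n, n)) * 0.1        # innovation field
for i in range(1, n):
    for j in range(1, n):
        u[i, j] += a_true*u[i-1, j] + b_true*u[i, j-1] + c_true*u[i-1, j-1]

# Stack neighbor values into a regression design matrix
y = u[1:, 1:].ravel()
X = np.column_stack([u[:-1, 1:].ravel(),    # u[i-1, j]
                     u[1:, :-1].ravel(),    # u[i, j-1]
                     u[:-1, :-1].ravel()])  # u[i-1, j-1]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # approximately [0.5, 0.4, -0.2]
```

    The estimated coefficient vector is the kind of low-dimensional "fine structure" feature that could then be fed to a classifier.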

  4. An MCMC method for the evaluation of the Fisher information matrix for non-linear mixed effect models.

    PubMed

    Riviere, Marie-Karelle; Ueckert, Sebastian; Mentré, France

    2016-10-01

    Non-linear mixed effect models (NLMEMs) are widely used for the analysis of longitudinal data. To design these studies, optimal design based on the expected Fisher information matrix (FIM) can be used instead of performing time-consuming clinical trial simulations. In recent years, estimation algorithms for NLMEMs have transitioned from linearization toward more exact higher-order methods. Optimal design, on the other hand, has mainly relied on first-order (FO) linearization to calculate the FIM. Although efficient in general, FO cannot be applied to complex non-linear models and is difficult to apply in studies with discrete data. We propose an approach to evaluate the expected FIM in NLMEMs for both discrete and continuous outcomes. We used Markov Chain Monte Carlo (MCMC) to integrate the derivatives of the log-likelihood over the random effects, and Monte Carlo to evaluate its expectation with respect to the observations. Our method was implemented in R using Stan, which efficiently draws MCMC samples and calculates partial derivatives of the log-likelihood. Evaluated on several examples, our approach showed good performance with relative standard errors (RSEs) close to those obtained by simulations. We studied the influence of the number of MC and MCMC samples and computed the uncertainty of the FIM evaluation. We also compared our approach to Adaptive Gaussian Quadrature, Laplace approximation, and FO. Our method is available in the R package MIXFIM and can be used to evaluate the FIM, its determinant with confidence intervals (CIs), and RSEs with CIs.
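    The Monte Carlo expectation at the heart of this approach can be illustrated on a toy fixed-effects model (no random effects, so no MCMC layer is needed); the model, design times, and parameter values below are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo evaluation of the expected Fisher information for a toy
# one-parameter non-linear model y_i = exp(-theta*t_i) + eps, eps ~ N(0, s^2).
# Illustrative only; the paper handles full mixed-effect models via MCMC.
theta, sigma = 0.5, 0.1
t = np.array([0.5, 1.0, 2.0, 4.0])   # hypothetical sampling times

def score(y):
    f = np.exp(-theta * t)
    dfdtheta = -t * np.exp(-theta * t)
    return np.sum((y - f) / sigma**2 * dfdtheta)

# Expected FIM = E[score^2]; estimate the expectation over simulated data
ys = np.exp(-theta * t) + rng.normal(scale=sigma, size=(50000, 4))
fim_mc = np.mean([score(y)**2 for y in ys])

# Analytic value for this Gaussian model: sum (df/dtheta)^2 / sigma^2
fim_exact = np.sum((t * np.exp(-theta * t))**2) / sigma**2
print(fim_mc, fim_exact)
```

    The Monte Carlo estimate converges to the analytic information, and its sampling spread is exactly the kind of "uncertainty of the FIM evaluation" the abstract mentions.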

  5. A robust two-way semi-linear model for normalization of cDNA microarray data

    PubMed Central

    Wang, Deli; Huang, Jian; Xie, Hehuang; Manzella, Liliana; Soares, Marcelo Bento

    2005-01-01

    Background Normalization is a basic step in microarray data analysis. A proper normalization procedure ensures that the intensity ratios provide meaningful measures of relative expression values. Methods We propose a robust semiparametric method in a two-way semi-linear model (TW-SLM) for normalization of cDNA microarray data. This method does not make the usual assumptions underlying some of the existing methods. For example, it does not assume that: (i) the percentage of differentially expressed genes is small; or (ii) the numbers of up- and down-regulated genes are about the same, as required in the LOWESS normalization method. We conduct simulation studies to evaluate the proposed method and use a real data set from a specially designed microarray experiment to compare the performance of the proposed method with that of the LOWESS normalization approach. Results The simulation results show that the proposed method performs better than the LOWESS normalization method in terms of mean square errors for estimated gene effects. The results of analysis of the real data set also show that the proposed method yields more consistent results between the direct and the indirect comparisons and also can detect more differentially expressed genes than the LOWESS method. Conclusions Our simulation studies and the real data example indicate that the proposed robust TW-SLM method works at least as well as the LOWESS method and works better when the underlying assumptions for the LOWESS method are not satisfied. Therefore, it is a powerful alternative to the existing normalization methods. PMID:15663789

  6. Analysis of graphic representation ability in oscillation phenomena

    NASA Astrophysics Data System (ADS)

    Dewi, A. R. C.; Putra, N. M. D.; Susilo

    2018-03-01

    This study aims to investigate students' ability to represent graphs of linear and harmonic functions when describing oscillation phenomena. The research used mixed methods with a concurrent embedded design. The subjects were 35 students of class X MIA 3 SMA 1 Bae Kudus. Data were collected through essay tests and interviews addressing the ability to read and draw graphs for Hooke's law and oscillation characteristics. The results showed that most students had difficulty drawing graphs of the linear function and of the harmonic function of deviation versus time. Difficulties with the linear-function graph included analyzing the variable data needed to construct the graph, confusing the placement of variables on the coordinate axes, determining the scale interval on each axis, and connecting the data points to form the graph. Difficulties with the harmonic-function graph included determining the time interval of the sine function, locating the initial deviation point of the drawing, deriving the deviation equation from the oscillation characteristics, and distinguishing the maximum deviation (amplitude) from the spring extension caused by the load. Owing to the complexity of the characteristic attributes of oscillation graphs, students showed weaker graphical representation of harmonic functions than of linear functions.

  7. Bandgap engineering of GaN nanowires

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ming, Bang-Ming; Yan, Hui; Wang, Ru-Zhi, E-mail: wrz@bjut.edu.cn, E-mail: yamcy@csrc.ac.cn

    2016-05-15

    Bandgap engineering has been a powerful technique for manipulating the electronic and optical properties of semiconductors. In this work, a systematic investigation of the electronic properties of [0001] GaN nanowires was carried out using the density functional based tight-binding method (DFTB). We studied the effects of geometric structure and uniaxial strain on the electronic properties of GaN nanowires with diameters ranging from 0.8 to 10 nm. Our results show that the band gap of GaN nanowires depends linearly on both the surface-to-volume ratio (S/V) and tensile strain: the band gap increases linearly with S/V, while it decreases linearly with increasing tensile strain. These linear relationships provide an effective way of designing GaN nanowires for applications in novel nano-devices.

  8. Enhanced linear photonic nanojet generated by core-shell optical microfibers

    NASA Astrophysics Data System (ADS)

    Liu, Cheng-Yang; Yen, Tzu-Ping; Chen, Chien-Wen

    2017-05-01

    The generation of a linear photonic nanojet using a core-shell optical microfiber is demonstrated numerically and experimentally in the visible light region. The power flow patterns for the core-shell optical microfiber are calculated using the finite-difference time-domain method. The focusing properties of the linear photonic nanojet are evaluated in terms of length and width along the propagation and transversal directions. In the experiment, a silica optical fiber is chemically etched down to 6 μm diameter and coated with a metallic thin film by glancing angle deposition. We show that the linear photonic nanojet is clearly enhanced by the metallic shell due to surface plasmon polaritons. Large-area superresolution imaging can be performed using a core-shell optical microfiber in the far-field system. Potential applications of this core-shell optical microfiber include micro-fluidics and nano-structure measurements.

  9. Computation of output feedback gains for linear stochastic systems using the Zangwill-Powell Method

    NASA Technical Reports Server (NTRS)

    Kaufman, H.

    1975-01-01

    Because conventional optimal linear regulator theory results in a controller which requires the capability of measuring and/or estimating the entire state vector, it is of interest to consider procedures for computing controls which are restricted to be linear feedback functions of a lower-dimensional output vector and which take into account the presence of measurement noise and process uncertainty. To this end, a stochastic linear model has been developed that accounts for process parameter and initial uncertainty, measurement noise, and a restricted number of measurable outputs. Optimization with respect to the corresponding output feedback gains was then performed for both finite and infinite time performance indices, without gradient computation, by using Zangwill's modification of a procedure originally proposed by Powell. Results using a seventh-order process show the proposed procedures to be very effective.
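    A derivative-free search for an output feedback gain of this kind can be sketched as follows; SciPy's 'Powell' minimizer stands in for the Zangwill-Powell variant used in the paper, and the second-order system, cost weights, and horizon are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Derivative-free (Powell-type) search for a static output-feedback gain of
# a discrete-time linear system x+ = A x + B u, y = C x, u = -k y.
# Toy stand-in for the paper's stochastic seventh-order problem.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])   # only the first state is measurable

def cost(k, steps=200):
    x, J = np.array([1.0, 0.0]), 0.0   # finite-horizon quadratic cost
    for _ in range(steps):
        u = -k[0] * (C @ x)
        J += x @ x + 0.01 * float(u @ u)
        x = A @ x + (B * u).ravel()
    return J

res = minimize(cost, x0=[0.0], method="Powell")
print(res.x, res.fun)
```

    The cost is evaluated purely by simulation, so no gradient is ever formed, which is the point of using a Powell-type search.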

  10. New Galerkin operational matrices for solving Lane-Emden type equations

    NASA Astrophysics Data System (ADS)

    Abd-Elhameed, W. M.; Doha, E. H.; Saad, A. S.; Bassuony, M. A.

    2016-04-01

    Lane-Emden type equations model many phenomena in mathematical physics and astrophysics, such as thermal explosions. This paper is concerned with introducing third and fourth kind Chebyshev-Galerkin operational matrices in order to solve such problems. The principal idea behind the suggested algorithms is based on converting the linear or nonlinear Lane-Emden problem, through the application of suitable spectral methods, into a system of linear or nonlinear equations in the expansion coefficients, which can be efficiently solved. The main advantage of the proposed algorithm in the linear case is that the resulting linear systems are specially structured, and this of course reduces the computational effort required to solve such systems. As an application, we consider the solar model polytrope with n=3 to show that the suggested solutions in this paper are in good agreement with the numerical results.
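    For reference, the n=3 solar polytrope the paper uses as an application can be integrated directly (rather than spectrally) with a standard ODE solver; starting slightly off-center with the series expansion is a common device, assumed here, for handling the coordinate singularity at the origin.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Direct numerical integration of the Lane-Emden equation
#   theta'' + (2/xi) theta' + theta^n = 0,  theta(0)=1, theta'(0)=0,
# for the n=3 solar polytrope: a reference solution of the kind the
# spectral Chebyshev-Galerkin algorithms are validated against.
n = 3

def rhs(xi, y):
    theta, dtheta = y
    return [dtheta, -2.0/xi * dtheta - np.sign(theta) * abs(theta)**n]

surface = lambda xi, y: y[0]        # theta = 0 marks the stellar surface
surface.terminal = True

# Start slightly off-center using the series expansion theta ~ 1 - xi^2/6
xi0 = 1e-6
sol = solve_ivp(rhs, [xi0, 20.0], [1.0 - xi0**2/6, -xi0/3],
                events=surface, rtol=1e-10, atol=1e-12)
print(f"first zero xi_1 = {sol.t_events[0][0]:.4f}")  # known value ~6.8968
```

    Agreement with the tabulated first zero of the n=3 polytrope is a quick sanity check for any Lane-Emden solver.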

  11. Local Intrinsic Dimension Estimation by Generalized Linear Modeling.

    PubMed

    Hino, Hideitsu; Fujiki, Jun; Akaho, Shotaro; Murata, Noboru

    2017-07-01

    We propose a method for intrinsic dimension estimation. By fitting a regression model to the relationship between the distance from an inspection point and the number of samples contained in a ball of radius equal to that distance, we estimate the goodness of fit. Then, using the maximum likelihood method, we estimate the local intrinsic dimension around the inspection point. The proposed method is shown to be comparable to conventional methods in global intrinsic dimension estimation experiments. Furthermore, we experimentally show that the proposed method outperforms a conventional local dimension estimation method.
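    The idea of regressing neighbor counts on radius can be sketched with a plain least-squares fit (the paper's actual estimator is a generalized linear model with maximum likelihood); the embedded 2-D data set below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Local intrinsic dimension from the growth of neighbor counts with radius:
# N(r) ~ r^d, so the slope of log N vs log r estimates d.
# (A least-squares stand-in for the paper's generalized-linear-model fit.)
d_true = 2
X = rng.uniform(-1, 1, size=(20000, d_true))   # 2-D data
X = np.hstack([X, np.zeros((20000, 3))])       # embedded in 5-D space

center = np.zeros(5)                           # inspection point
dist = np.linalg.norm(X - center, axis=1)
radii = np.linspace(0.1, 0.5, 20)
counts = np.array([(dist <= r).sum() for r in radii])

slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
print(f"estimated local dimension: {slope:.2f}")  # close to 2
```

    The estimate recovers the manifold dimension (2), not the ambient dimension (5), which is the defining property of an intrinsic dimension estimator.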

  12. Term Cancellations in Computing Floating-Point Gröbner Bases

    NASA Astrophysics Data System (ADS)

    Sasaki, Tateaki; Kako, Fujio

    We discuss the term cancellation which makes floating-point Gröbner basis computation unstable, and show that error accumulation is never negligible in our previous method. We then present a new method, which removes accumulated errors as far as possible by applying Gaussian elimination to matrices constructed from coefficient vectors. The method reveals the amount of term cancellation caused by the existence of approximately linearly dependent relations among the input polynomials.

  13. Novel inter-crystal scattering event identification method for PET detectors

    NASA Astrophysics Data System (ADS)

    Lee, Min Sun; Kang, Seung Kwan; Lee, Jae Sung

    2018-06-01

    Here, we propose a novel method to identify inter-crystal scattering (ICS) events from a PET detector that is even applicable to light-sharing designs. In the proposed method, the detector observation was considered as a linear problem and ICS events were identified by solving this problem. Two ICS identification methods were suggested for solving the linear problem: pseudoinverse matrix calculation and convex constrained optimization. The proposed method was evaluated based on simulation and experimental studies. For the simulation study, an 8 × 8 photosensor was coupled to 8 × 8, 10 × 10 and 12 × 12 crystal arrays to simulate one one-to-one coupling detector and two light-sharing detectors. The identification rate (the rate at which the identified ICS events correctly include the true first interaction position) and the energy linearity were evaluated for the proposed ICS identification methods. For the experimental study, a digital silicon photomultiplier was coupled with 8 × 8 and 10 × 10 arrays of 3 × 3 × 20 mm³ LGSO crystals to construct the one-to-one coupling and light-sharing detectors, respectively. Intrinsic spatial resolutions were measured for the two detector types. The proposed ICS identification methods were implemented, and intrinsic resolutions were compared with and without ICS recovery. As a result, the simulation study showed that the proposed convex optimization method yielded robust energy estimation and high ICS identification rates of 0.93 and 0.87 for the one-to-one and light-sharing detectors, respectively. The experimental study showed a resolution improvement after recovering the identified ICS events into the first interaction position. The average intrinsic spatial resolutions for the one-to-one and light-sharing detector were 1.95 and 2.25 mm in the FWHM without ICS recovery, respectively. These values improved to 1.72 and 1.83 mm after ICS recovery, respectively.
In conclusion, our proposed method showed good ICS identification in both one-to-one coupling and light-sharing detectors. We experimentally validated that the ICS recovery based on the proposed identification method led to an improved resolution.
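    The pseudoinverse variant of casting the detector observation as a linear problem can be sketched on a toy detector; the light-spread matrix, crystal count, and deposited energies below are invented, not the paper's geometry.

```python
import numpy as np

rng = np.random.default_rng(4)

# Treat the photosensor reading as a linear mixture s = L @ e of per-crystal
# energy deposits e, and recover e with a pseudoinverse (a toy version of
# the paper's ICS identification; the light-spread model is invented).
n_crystals, n_sensors = 16, 16
L = np.eye(n_sensors) + 0.2 * rng.random((n_sensors, n_crystals))  # light sharing

e_true = np.zeros(n_crystals)
e_true[[3, 7]] = [340.0, 171.0]       # an ICS event split over two crystals

s = L @ e_true + rng.normal(scale=1.0, size=n_sensors)  # noisy reading
e_hat = np.linalg.pinv(L) @ s

top2 = np.argsort(e_hat)[-2:]         # crystals with the largest recovered energy
print(np.sort(top2))                  # indices of the two crystals sharing the event
```

    Recovering both deposit locations (instead of assigning the whole event to one crystal) is what enables the resolution gain reported after ICS recovery.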

  14. Development and Application of an MSALL-Based Approach for the Quantitative Analysis of Linear Polyethylene Glycols in Rat Plasma by Liquid Chromatography Triple-Quadrupole/Time-of-Flight Mass Spectrometry.

    PubMed

    Zhou, Xiaotong; Meng, Xiangjun; Cheng, Longmei; Su, Chong; Sun, Yantong; Sun, Lingxia; Tang, Zhaohui; Fawcett, John Paul; Yang, Yan; Gu, Jingkai

    2017-05-16

    Polyethylene glycols (PEGs) are synthetic polymers composed of repeating ethylene oxide subunits. They display excellent biocompatibility and are widely used as pharmaceutical excipients. To fully understand the biological fate of PEGs requires accurate and sensitive analytical methods for their quantitation. Application of conventional liquid chromatography-tandem mass spectrometry (LC-MS/MS) is difficult because PEGs have polydisperse molecular weights (MWs) and tend to produce multicharged ions in-source, resulting in innumerable precursor ions. As a result, multiple reaction monitoring (MRM) fails to scan all ion pairs, so that information on the fate of unselected ions is missed. This article addresses this problem by application of liquid chromatography-triple-quadrupole/time-of-flight mass spectrometry (LC-Q-TOF MS) based on the MSALL technique. This technique performs information-independent acquisition by allowing all PEG precursor ions to enter the collision cell (Q2). In-quadrupole collision-induced dissociation (CID) in Q2 then effectively generates several fragments from all PEGs due to the high collision energy (CE). A particular PEG product ion (m/z 133.08592) was found to be common to all linear PEGs and allowed their total quantitation in rat plasma with high sensitivity, excellent linearity and reproducibility. Assay validation showed the method was linear for all linear PEGs over the concentration range 0.05-5.0 μg/mL. The assay was successfully applied to a pharmacokinetic study in rats involving intravenous administration of linear PEG 600, PEG 4000, and PEG 20000. It is anticipated that the method will have wide-ranging applications and stimulate the development of assays for other pharmaceutical polymers in the future.

  15. Comparison of immunomagnetic separation/adenosine triphosphate rapid method to traditional culture-based method for E. coli and enterococci enumeration in wastewater

    USGS Publications Warehouse

    Bushon, R.N.; Likirdopulos, C.A.; Brady, A.M.G.

    2009-01-01

    Untreated wastewater samples from California, North Carolina, and Ohio were analyzed by the immunomagnetic separation/adenosine triphosphate (IMS/ATP) method and the traditional culture-based method for E. coli and enterococci concentrations. The IMS/ATP method concentrates target bacteria by immunomagnetic separation and then quantifies captured bacteria by measuring bioluminescence induced by release of ATP from the bacterial cells. Results from this method are available within 1 h from the start of sample processing. Significant linear correlations were found between the IMS/ATP results and results from traditional culture-based methods for E. coli and enterococci enumeration for one location in California, two locations in North Carolina, and one location in Ohio (r values ranged from 0.87 to 0.97). No significant linear relation was found for a second location in California that treats a complex mixture of residential and industrial wastewater. With the exception of one location, IMS/ATP showed promise as a rapid method for the quantification of faecal-indicator organisms in wastewater.
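    The correlation analysis behind the reported r values can be reproduced in miniature; the paired counts below are invented stand-ins, not the study's data, and log-transforming counts before correlating is an assumption of this sketch.

```python
import numpy as np
from scipy.stats import pearsonr

# Linear correlation between a rapid assay and a culture-based reference,
# mimicking the reported r-value comparison (paired counts are invented).
culture = np.array([5.2e4, 8.1e4, 1.3e5, 2.4e5, 3.9e5, 6.4e5])   # CFU/100 mL
ims_atp = np.array([1.1e3, 1.9e3, 2.6e3, 5.5e3, 8.3e3, 1.4e4])   # relative light units

r, p = pearsonr(np.log10(culture), np.log10(ims_atp))
print(f"r = {r:.2f}")
```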

  16. Nonlinear flap-lag axial equations of a rotating beam

    NASA Technical Reports Server (NTRS)

    Kaza, K. R. V.; Kvaternik, R. G.

    1977-01-01

    It is possible to identify essentially four approaches by which analysts have established either the linear or nonlinear governing equations of motion for a particular problem related to the dynamics of rotating elastic bodies. The approaches include the effective applied load artifice in combination with a variational principle and the use of Newton's second law, written as D'Alembert's principle, applied to the deformed configuration. A third approach is a variational method in which nonlinear strain-displacement relations and a first-degree displacement field are used. The method introduced by Vigneron (1975) for deriving the linear flap-lag equations of a rotating beam constitutes the fourth approach. The reported investigation shows that all four approaches make use of the geometric nonlinear theory of elasticity. An alternative method for deriving the nonlinear coupled flap-lag-axial equations of motion is also discussed.

  17. Reaction wheel low-speed compensation using a dither signal

    NASA Astrophysics Data System (ADS)

    Stetson, John B., Jr.

    1993-08-01

    A method for improving low-speed reaction wheel performance on a three-axis controlled spacecraft is presented. The method combines a constant amplitude offset with an unbiased, oscillating dither to harmonically linearize rolling solid friction dynamics. The complete, nonlinear rolling solid friction dynamics, using an analytic modification to the experimentally verified Dahl solid friction model, were analyzed using the dual-input describing function method to assess the benefits of dither compensation. The modified analytic solid friction model was experimentally verified with a small dc servomotor-actuated reaction wheel assembly. With dither compensation, abrupt static friction disturbances are eliminated and near-linear behavior through zero rate can be achieved. Simulated vehicle response to a wheel rate reversal shows that when the dither and offset compensation is used, elastic modes are not significantly excited, and the uncompensated attitude error is reduced by 34:1.

  18. Imaging resolution and properties analysis of super resolution microscopy with parallel detection under different noise, detector and image restoration conditions

    NASA Astrophysics Data System (ADS)

    Yu, Zhongzhi; Liu, Shaocong; Sun, Shiyi; Kuang, Cuifang; Liu, Xu

    2018-06-01

    Parallel detection, which can use the additional information of a pinhole-plane image taken at every excitation scan position, is an efficient method for enhancing the resolution of a confocal laser scanning microscope. In this paper, we discuss images obtained under different conditions and using different image restoration methods with parallel detection, to quantitatively compare the imaging quality. The conditions include different noise levels and different detector array settings. The image restoration methods include linear deconvolution and pixel reassignment with Richardson-Lucy deconvolution and with maximum-likelihood estimation deconvolution. The results show that linear deconvolution offers high efficiency and the best performance under all tested conditions, and is therefore expected to be of use for future biomedical routine research.
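    One of the compared restoration methods, Richardson-Lucy deconvolution, can be sketched in 1-D; the Gaussian PSF and point sources below are invented test data.

```python
import numpy as np

# Minimal 1-D Richardson-Lucy deconvolution, the iterative restoration the
# paper compares against linear deconvolution (PSF and object are toy data).
def richardson_lucy(blurred, psf, n_iter=50):
    est = np.full_like(blurred, blurred.mean())     # flat positive start
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)   # avoid divide-by-zero
        est *= np.convolve(ratio, psf_flip, mode="same")
    return est

x = np.zeros(64); x[[20, 40]] = [1.0, 0.6]          # two point sources
psf = np.exp(-0.5 * (np.arange(-5, 6) / 1.5)**2); psf /= psf.sum()
blurred = np.convolve(x, psf, mode="same")

restored = richardson_lucy(blurred, psf)
print(restored.argmax())  # brightest restored pixel returns to source at 20
```

    The multiplicative update keeps the estimate nonnegative, one reason Richardson-Lucy is popular for photon-limited microscopy data.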

  19. Method for hue plane preserving color correction.

    PubMed

    Mackiewicz, Michal; Andersen, Casper F; Finlayson, Graham

    2016-11-01

    Hue plane preserving color correction (HPPCC), introduced by Andersen and Hardeberg [Proceedings of the 13th Color and Imaging Conference (CIC) (2005), pp. 141-146], maps device-dependent color values (RGB) to colorimetric color values (XYZ) using a set of linear transforms, realized by white point preserving 3×3 matrices, where each transform is learned and applied in a subregion of color space, defined by two adjacent hue planes. The hue plane delimited subregions of camera RGB values are mapped to corresponding hue plane delimited subregions of estimated colorimetric XYZ values. Hue planes are geometrical half-planes, where each is defined by the neutral axis and a chromatic color in a linear color space. The key advantage of the HPPCC method is that, while offering an estimation accuracy of higher order methods, it maintains the linear colorimetric relations of colors in hue planes. As a significant result, it therefore also renders the colorimetric estimates invariant to exposure and shading of object reflection. In this paper, we present a new flexible and robust version of HPPCC using constrained least squares in the optimization, where the subregions can be chosen freely in number and position in order to optimize the results while constraining transform continuity at the subregion boundaries. The method is compared to a selection of other state-of-the-art characterization methods, and the results show that it outperforms the original HPPCC method.
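    The white-point-preserving building block of HPPCC can be sketched as an equality-constrained least-squares fit solved row by row via its KKT system; the training colors and the "true" matrix below are synthetic, and a real characterization would use measured patches.

```python
import numpy as np

rng = np.random.default_rng(6)

# White-point-preserving 3x3 color correction fitted by equality-constrained
# least squares (KKT system), the building block of HPPCC-style methods.
M_true = np.array([[0.9, 0.2, -0.1],
                   [0.3, 0.8,  0.1],
                   [0.0, 0.1,  1.0]])
rgb = rng.random((50, 3))
xyz = rgb @ M_true.T + rng.normal(scale=0.005, size=(50, 3))  # noisy "measurements"

w_rgb = np.ones(3)            # camera white
w_xyz = M_true @ w_rgb        # its required colorimetric image

M = np.zeros((3, 3))
for row in range(3):
    # minimize ||rgb @ m - xyz[:, row]||^2  subject to  m . w_rgb = w_xyz[row]
    A = rgb.T @ rgb
    kkt = np.block([[2 * A, w_rgb[:, None]],
                    [w_rgb[None, :], np.zeros((1, 1))]])
    rhs = np.append(2 * rgb.T @ xyz[:, row], w_xyz[row])
    M[row] = np.linalg.solve(kkt, rhs)[:3]

print(np.round(M, 2))   # close to M_true, with the white point mapped exactly
```

    In full HPPCC one such constrained matrix is fitted per hue-plane-delimited subregion, with continuity constraints tying the matrices together at the boundaries.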

  20. Validation of a 3D CT method for measurement of linear wear of acetabular cups.

    PubMed

    Jedenmalm, Anneli; Nilsson, Fritjof; Noz, Marilyn E; Green, Douglas D; Gedde, Ulf W; Clarke, Ian C; Stark, Andreas; Maguire, Gerald Q; Zeleznik, Michael P; Olivecrona, Henrik

    2011-02-01

    We evaluated the accuracy and repeatability of a 3D method for polyethylene acetabular cup wear measurements using computed tomography (CT). We propose that the method be used for clinical in vivo assessment of wear in acetabular cups. Ultra-high molecular weight polyethylene cups with a titanium mesh molded on the outside were subjected to wear using a hip simulator. Before and after wear, they were (1) imaged with a CT scanner using a phantom model device, (2) measured using a coordinate measurement machine (CMM), and (3) weighed. CMM was used as the reference method for measurement of femoral head penetration into the cup and for comparison with CT, and gravimetric measurements were used as a reference for both CT and CMM. Femoral head penetration and wear vector angle were studied. The head diameters were also measured with both CMM and CT. The repeatability of the method proposed was evaluated with two repeated measurements using different positions of the phantom in the CT scanner. The accuracy of the 3D CT method for evaluation of linear wear was 0.51 mm and the repeatability was 0.39 mm. Repeatability for wear vector angle was 17°. This study of metal-meshed hip-simulated acetabular cups shows that CT has the capacity for reliable measurement of linear wear of acetabular cups at a clinically relevant level of accuracy.

  1. Order-constrained linear optimization.

    PubMed

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
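    The two-stage idea, maximizing ordinal fit first and then fixing the metric scale, can be sketched for two predictors; this grid search over weight directions is a simplification of the published OCLO algorithm, and the fat-tailed data are simulated.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(7)

# Two-stage sketch of order-constrained fitting: first choose the weight
# direction that maximizes Kendall's tau between predictions and outcomes,
# then fix scale and intercept by least squares. (The published OCLO
# algorithm is more general; this is a 2-predictor illustration.)
n = 200
X = rng.normal(size=(n, 2))
y = 1.0 + 2.0*X[:, 0] + 1.0*X[:, 1] + rng.standard_t(df=3, size=n)  # fat tails

angles = np.linspace(0, np.pi, 180, endpoint=False)
taus = [kendalltau(X @ [np.cos(a), np.sin(a)], y)[0] for a in angles]
best = angles[int(np.argmax(taus))]
w = np.array([np.cos(best), np.sin(best)])

# Least-squares scale and intercept conditional on the ordinal fit
z = X @ w
beta, alpha = np.polyfit(z, y, 1)
print(f"direction {w.round(2)}, scale {beta:.2f}, intercept {alpha:.2f}")
```

    Because Kendall's tau depends only on rank order, the direction estimate is insensitive to the extreme scores that distort ordinary least squares.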

  2. GHM method for obtaining rational solutions of nonlinear differential equations.

    PubMed

    Vazquez-Leal, Hector; Sarmiento-Reyes, Arturo

    2015-01-01

    In this paper, we propose the application of the general homotopy method (GHM) to obtain rational solutions of nonlinear differential equations. It delivers a high-precision representation of the nonlinear differential equation using a few linear algebraic terms. In order to assess the benefits of this proposal, three nonlinear problems are solved and compared against other semi-analytic or numerical methods. The obtained results show that GHM is a powerful tool, capable of generating highly accurate rational solutions. AMS subject classification 34L30.

  3. A method for the analysis of nonlinearities in aircraft dynamic response to atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Sidwell, K.

    1976-01-01

    An analytical method is developed which combines the equivalent linearization technique for the analysis of the response of nonlinear dynamic systems with the amplitude modulated random process (Press model) for atmospheric turbulence. The method is initially applied to a bilinear spring system. The analysis of the response shows good agreement with exact results obtained by the Fokker-Planck equation. The method is then applied to an example of control-surface displacement limiting in an aircraft with a pitch-hold autopilot.
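    The equivalent linearization step can be sketched for a bilinear spring: the nonlinear restoring force is replaced by the linear gain that minimizes mean-square error under an assumed Gaussian response. All parameter values below are invented.

```python
import numpy as np

rng = np.random.default_rng(8)

# Statistical (equivalent) linearization of a bilinear spring: replace the
# nonlinear restoring force f(x) by k_eq * x, with k_eq chosen to minimize
# the mean-square error E[(f(X) - k_eq X)^2] for Gaussian response X,
# giving k_eq = E[f(X) X] / E[X^2].
k1, k2, x_y = 1.0, 0.3, 1.0      # initial stiffness, post-yield stiffness, knee

def f(x):
    return np.where(np.abs(x) <= x_y, k1 * x,
                    np.sign(x) * (k1*x_y + k2*(np.abs(x) - x_y)))

sigma = 1.5                       # assumed RMS response level
x = rng.normal(scale=sigma, size=200000)
k_eq = np.mean(f(x) * x) / np.mean(x**2)
print(f"equivalent stiffness: {k_eq:.3f}")  # between k2 and k1
```

    Because k_eq depends on the response level sigma, the linearized system and the response statistics must in general be iterated to consistency, which is how such schemes couple to a turbulence excitation model.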

  4. A purely Lagrangian method for computing linearly-perturbed flows in spherical geometry

    NASA Astrophysics Data System (ADS)

    Jaouen, Stéphane

    2007-07-01

    In many physical applications, one wishes to control the development of multi-dimensional instabilities around a one-dimensional (1D) complex flow. For predicting the growth rates of these perturbations, a viable general numerical approach consists in solving simultaneously the one-dimensional equations and their linearized form for three-dimensional perturbations. In Clarisse et al. [J.-M. Clarisse, S. Jaouen, P.-A. Raviart, A Godunov-type method in Lagrangian coordinates for computing linearly-perturbed planar-symmetric flows of gas dynamics, J. Comp. Phys. 198 (2004) 80-105], a class of Godunov-type schemes for planar-symmetric flows of gas dynamics was proposed. Pursuing this effort, we extend these results to spherically symmetric flows. A new method to derive the Lagrangian perturbation equations, based on the canonical form of systems of conservation laws with zero entropy flux [B. Després, Lagrangian systems of conservation laws. Invariance properties of Lagrangian systems of conservation laws, approximate Riemann solvers and the entropy condition, Numer. Math. 89 (2001) 99-134; B. Després, C. Mazeran, Lagrangian gas dynamics in two dimensions and Lagrangian systems, Arch. Rational Mech. Anal. 178 (2005) 327-372], is also described. This formulation has several advantages. First, many physical problems of interest enter this formalism (gas dynamics, two-temperature plasma equations, ideal magnetohydrodynamics, etc.) whatever the geometry. Second, a class of numerical entropic schemes is available for the basic flow [11]. Last, linearizing and devising numerical schemes for the perturbed flow is straightforward.
The numerical capabilities of these methods are illustrated on three test cases of increasing difficulties and we show that - due to its simplicity and its low computational cost - the Linear Perturbations Code (LPC) is a powerful tool to understand and predict the development of hydrodynamic instabilities in the linear regime.

  5. Chosen interval methods for solving linear interval systems with special type of matrix

    NASA Astrophysics Data System (ADS)

    Szyszka, Barbara

    2013-10-01

    The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix: a band matrix with a parameter, obtained from a finite difference problem. Such linear systems occur when solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) by the central difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; the presented linear interval systems therefore contain elements that determine the errors of the difference method. The chosen direct algorithms have been applied for solving the linear systems because they introduce no method error of their own. All calculations were performed in floating-point interval arithmetic.
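    Direct interval elimination of the kind described can be sketched with a hand-rolled interval arithmetic on a 2×2 system; the interval entries below are invented rather than taken from a difference scheme.

```python
# Interval Gaussian elimination on a tiny 2x2 system, illustrating how a
# direct interval method encloses the solution of a linear interval system.
# Intervals are (lower, upper) tuples; entries are invented for illustration.
def i_add(a, b): return (a[0] + b[0], a[1] + b[1])
def i_sub(a, b): return (a[0] - b[1], a[1] - b[0])
def i_mul(a, b):
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))
def i_div(a, b):
    assert not (b[0] <= 0.0 <= b[1]), "division by an interval containing 0"
    return i_mul(a, (1.0 / b[1], 1.0 / b[0]))

A = [[(2.0, 2.1), (0.9, 1.0)],
     [(1.0, 1.0), (3.0, 3.1)]]
b = [(4.0, 4.2), (5.0, 5.1)]

m   = i_div(A[1][0], A[0][0])                    # elimination multiplier
a22 = i_sub(A[1][1], i_mul(m, A[0][1]))          # eliminated pivot row
b2  = i_sub(b[1],    i_mul(m, b[0]))
x2  = i_div(b2, a22)                             # back substitution
x1  = i_div(i_sub(b[0], i_mul(A[0][1], x2)), A[0][0])
print(x1, x2)  # enclosing intervals for the two unknowns
```

    Every rounding and coefficient uncertainty widens the result intervals, so the computed enclosure is guaranteed to contain the exact solution of every point system inside the interval data.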

  6. Development and Validation of RP-HPLC Method for the Estimation of Ivabradine Hydrochloride in Tablets

    PubMed Central

    Seerapu, Sunitha; Srinivasan, B. P.

    2010-01-01

    A simple, sensitive, precise and robust reverse-phase high-performance liquid chromatographic method for the analysis of ivabradine hydrochloride in pharmaceutical formulations was developed and validated as per ICH guidelines. The separation was performed on an SS Wakosil C18AR, 250×4.6 mm, 5 μm column with methanol:25 mM phosphate buffer (60:40 v/v), adjusted to pH 6.5 with orthophosphoric acid added dropwise, as the mobile phase. A well-defined chromatographic peak of ivabradine hydrochloride was exhibited, with a retention time of 6.55±0.05 min and a tailing factor of 1.14, at a flow rate of 0.8 ml/min and ambient temperature when monitored at 285 nm. The linear regression analysis data for the calibration plots showed a good linear relationship with R=0.9998 in the concentration range of 30-210 μg/ml. The method was validated for precision, recovery and robustness. Intra- and inter-day precision (% relative standard deviation) were always less than 2%. The method showed mean recoveries of 99.00 and 98.55% for Ivabrad and Inapure tablets, respectively. The proposed method has been successfully applied to the commercial tablets without any interference from excipients. PMID:21695008
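
    The calibration-plot step described above can be sketched as follows. The peak-area numbers are hypothetical, invented for illustration; only the 30-210 μg/ml range comes from the abstract.

```python
import numpy as np

# Hypothetical calibration data over the reported 30-210 µg/ml range;
# the peak areas are illustrative, not the paper's measurements.
conc = np.array([30, 60, 90, 120, 150, 180, 210], dtype=float)   # µg/ml
area = np.array([152, 301, 455, 598, 752, 901, 1049], dtype=float)

# Ordinary least-squares calibration line: area = slope*conc + intercept
slope, intercept = np.polyfit(conc, area, 1)
r = np.corrcoef(conc, area)[0, 1]          # correlation coefficient R

# Back-calculate an unknown sample concentration from its peak area
unknown_area = 500.0
unknown_conc = (unknown_area - intercept) / slope
```

    The same back-calculation underlies the recovery figures: a spiked sample's measured area is converted to concentration and divided by the nominal amount.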

  7. Acceleration of Linear Finite-Difference Poisson-Boltzmann Methods on Graphics Processing Units.

    PubMed

    Qi, Ruxi; Botello-Smith, Wesley M; Luo, Ray

    2017-07-11

    Electrostatic interactions play crucial roles in biophysical processes such as protein folding and molecular recognition. Poisson-Boltzmann equation (PBE)-based models have emerged as widely used tools for modeling these important processes. Though great effort has been put into developing efficient PBE numerical models, challenges remain due to the high dimensionality of typical biomolecular systems. In this study, we implemented and analyzed commonly used linear PBE solvers on ever-improving graphics processing units (GPUs) for biomolecular simulations, including both standard and preconditioned conjugate gradient (CG) solvers with several alternative preconditioners. Our implementation utilizes the standard Nvidia CUDA libraries cuSPARSE, cuBLAS, and CUSP. Extensive tests show that good numerical accuracy can be achieved even though single precision is often used for numerical applications on GPU platforms. The optimal GPU performance was observed with the Jacobi-preconditioned CG solver, with a significant speedup over the standard CG solver on CPU in our diversified test cases. Our analysis further shows that different matrix storage formats also considerably affect the efficiency of the different linear PBE solvers on GPU, with the diagonal format best suited to our standard finite-difference linear systems. Further efficiency may be possible with matrix-free operations and an integrated grid-stencil setup specifically tailored for the banded matrices in PBE-specific linear systems.
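
    As a CPU-side sketch of the Jacobi-preconditioned CG scheme the abstract singles out, the following NumPy implementation solves a small symmetric positive-definite finite-difference-style system; the test matrix is an illustrative 1-D Laplacian with a diagonal shift, not one of the paper's biomolecular systems.

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-8, max_iter=1000):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner for SPD A;
    a NumPy sketch of the scheme the GPU solvers implement."""
    x = np.zeros_like(b)
    r = b - A @ x
    Minv = 1.0 / np.diag(A)        # Jacobi preconditioner: M = diag(A)
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# A small SPD finite-difference-style system (1-D Laplacian plus diagonal shift)
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + 0.1 * np.eye(n)
b = np.ones(n)
x = jacobi_pcg(A, b)
```

    On a GPU the same iteration would map `A @ p` to a cuSPARSE sparse matrix-vector product and the dot products to cuBLAS reductions; the diagonal storage format favoured in the study matches the banded structure of `A`.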

  8. Retrieval of aerosol optical depth from surface solar radiation measurements using machine learning algorithms, non-linear regression and a radiative transfer-based look-up table

    NASA Astrophysics Data System (ADS)

    Huttunen, Jani; Kokkola, Harri; Mielonen, Tero; Esa Juhani Mononen, Mika; Lipponen, Antti; Reunanen, Juha; Vilhelm Lindfors, Anders; Mikkonen, Santtu; Erkki Juhani Lehtinen, Kari; Kouremeti, Natalia; Bais, Alkiviadis; Niska, Harri; Arola, Antti

    2016-07-01

    In order to have a good estimate of the current forcing by anthropogenic aerosols, knowledge on past aerosol levels is needed. Aerosol optical depth (AOD) is a good measure for aerosol loading. However, dedicated measurements of AOD are only available from the 1990s onward. One option to lengthen the AOD time series beyond the 1990s is to retrieve AOD from surface solar radiation (SSR) measurements taken with pyranometers. In this work, we have evaluated several inversion methods designed for this task. We compared a look-up table method based on radiative transfer modelling, a non-linear regression method and four machine learning methods (Gaussian process, neural network, random forest and support vector machine) with AOD observations carried out with a sun photometer at an Aerosol Robotic Network (AERONET) site in Thessaloniki, Greece. Our results show that most of the machine learning methods produce AOD estimates comparable to the look-up table and non-linear regression methods. All of the applied methods produced AOD values that corresponded well to the AERONET observations with the lowest correlation coefficient value being 0.87 for the random forest method. While many of the methods tended to slightly overestimate low AODs and underestimate high AODs, neural network and support vector machine showed overall better correspondence for the whole AOD range. The differences in producing both ends of the AOD range seem to be caused by differences in the aerosol composition. High AODs were in most cases those with high water vapour content which might affect the aerosol single scattering albedo (SSA) through uptake of water into aerosols. Our study indicates that machine learning methods benefit from the fact that they do not constrain the aerosol SSA in the retrieval, whereas the LUT method assumes a constant value for it. 
This would also mean that machine learning methods could have potential in reproducing AOD from SSR even though SSA would have changed during the observation period.

  9. A new modified conjugate gradient coefficient for solving system of linear equations

    NASA Astrophysics Data System (ADS)

    Hajar, N.; ‘Aini, N.; Shapiee, N.; Abidin, Z. Z.; Khadijah, W.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is an evolving computational method for solving unconstrained optimization problems. This approach is easy to implement due to its simplicity and has been proven effective in solving real-life applications. Although this field has received a copious amount of attention in recent years, some of the new variants of the CG algorithm cannot surpass the efficiency of the previous versions. Therefore, in this paper, a new CG coefficient which retains the sufficient descent and global convergence properties of the original CG methods is proposed. This new CG method is tested on a set of test functions under exact line search. Its performance is then compared to that of some of the well-known previous CG methods based on number of iterations and CPU time. The results show that the new CG algorithm has the best efficiency amongst all the methods tested. This paper also includes an application of the new CG algorithm for solving large systems of linear equations.
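
    The abstract does not reproduce the proposed coefficient, so the sketch below uses the classical Fletcher-Reeves coefficient under exact line search on a quadratic to illustrate the structure such CG methods share; all numerical values are illustrative.

```python
import numpy as np

def fr_cg_quadratic(A, b, x0, tol=1e-8, max_iter=500):
    """Minimise f(x) = 0.5 x^T A x - b^T x (SPD A) by nonlinear CG with the
    classical Fletcher-Reeves coefficient and exact line search; the paper's
    new coefficient is not given in the abstract, so FR stands in."""
    x = x0.astype(float)
    g = A @ x - b                          # gradient of the quadratic
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ (A @ d))   # exact line search step
        x = x + alpha * d
        g_new = A @ x - b
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = fr_cg_quadratic(A, b, np.zeros(2))
```

    Minimising this quadratic is equivalent to solving the linear system A x = b, which is exactly the application mentioned at the end of the abstract.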

  10. Quantum mechanical/molecular mechanical/continuum style solvation model: linear response theory, variational treatment, and nuclear gradients.

    PubMed

    Li, Hui

    2009-11-14

    Linear response and variational treatments are formulated for Hartree-Fock (HF) and Kohn-Sham density functional theory (DFT) methods combined with discrete-continuum solvation models that incorporate self-consistently induced dipoles and charges. Due to the variational treatment, analytic nuclear gradients can be evaluated efficiently for these discrete and continuum solvation models. The forces and torques on the induced point dipoles and point charges can be evaluated using simple electrostatic formulas, as for permanent point dipoles and point charges, in accordance with the electrostatic nature of these methods. Implementation and tests using the effective fragment potential (EFP, a polarizable force field) method and the conductor-like polarizable continuum model (CPCM) show that the nuclear gradients are as accurate as those in the gas-phase HF and DFT methods. Using B3LYP/EFP/CPCM and time-dependent-B3LYP/EFP/CPCM methods, the S(0)-->S(1) excitation of acetone in aqueous solution is studied. The results are close to those from full B3LYP/CPCM calculations.

  11. [Shock shape representation of sinus heart rate based on cloud model].

    PubMed

    Yin, Wenfeng; Zhao, Jie; Chen, Tiantian; Zhang, Junjian; Zhang, Chunyou; Li, Dapeng; An, Baijing

    2014-04-01

    The aim of the present paper is to analyze the trend of the sinus RR-interval sequence after a single ventricular premature beat and to compare it with the two heart rate turbulence parameters, turbulence onset (TO) and turbulence slope (TS). After acquiring sinus-rhythm shock samples, we use a piecewise linearization method to extract their linear characteristics and then describe the shock form in natural language through a cloud model. During acquisition, the exponential smoothing method is used to forecast the position where the QRS wave may appear, assisting QRS wave detection, and a template is used to judge whether the current beat is sinus rhythm. Signals from the MIT-BIH Arrhythmia Database were used in Matlab to test whether the algorithm is effective. The results show that our method can correctly detect the changing trend of the sinus heart rate. The proposed method achieves real-time detection of sinus-rhythm shocks; it is simple and easily implemented, and is effective as a supplementary method.

  12. Nonlinear programming extensions to rational function approximation methods for unsteady aerodynamic forces

    NASA Technical Reports Server (NTRS)

    Tiffany, Sherwood H.; Adams, William M., Jr.

    1988-01-01

    The approximation of unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft is discussed. Two methods of formulating these approximations are extended to include the same flexibility in constraining the approximations and the same methodology in optimizing nonlinear parameters as another currently used extended least-squares method. Optimal selection of nonlinear parameters is made in each of the three methods by use of the same nonlinear, nongradient optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free linear parameters are determined using the least-squares matrix techniques of a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented that show comparative evaluations from application of each of the extended methods to a numerical example.

  13. Spectrophotometric techniques to determine tranexamic acid: Kinetic studies using ninhydrin and direct measuring using ferric chloride

    NASA Astrophysics Data System (ADS)

    Arayne, M. Saeed; Sultana, Najma; Siddiqui, Farhan Ahmed; Mirza, Agha Zeeshan; Zuberi, M. Hashim

    2008-11-01

    Two simple and sensitive spectrophotometric methods, in the ultraviolet and visible regions, are described for the determination of tranexamic acid in pure form and in pharmaceutical preparations. The first method is based on the reaction of the drug with ninhydrin at boiling temperature and measurement of the increase in absorbance at 575 nm as a function of time. The initial-rate, rate-constant and fixed-time (120 min) procedures were used to construct the calibration graphs for determining the concentration of the drug, which showed a linear response over the concentration range 16-37 μg/ml with correlation coefficients r of 0.9997, 0.996 and 0.9999, LOQ of 6.968, 7.138 and 2.462 μg/ml, and LOD of 2.090, 2.141 and 0.739 μg/ml, respectively. In the second method, tranexamic acid was reacted with ferric chloride solution; the yellowish-orange chromogen showed λmax at 375 nm, with linearity in the concentration range 50-800 μg/ml, correlation coefficient r of 0.9997, LOQ of 6.227 μg/ml and LOD of 1.868 μg/ml. The variables affecting the development of the color were optimized, and the developed methods were validated statistically and through recovery studies. These results were also verified by IR and NMR spectroscopy. The proposed methods have been successfully applied to the determination of tranexamic acid in a commercial pharmaceutical formulation.
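
    LOD and LOQ figures such as those quoted above are typically obtained from the calibration line; a sketch using the common ICH formulas LOD = 3.3σ/S and LOQ = 10σ/S (σ = residual standard deviation, S = slope) is given below with invented absorbance data, since the abstract does not state which estimation procedure the authors used.

```python
import numpy as np

# Illustrative calibration data (absorbance vs. µg/ml); not the paper's values.
conc = np.array([16.0, 20.0, 24.0, 28.0, 32.0, 37.0])
absorbance = np.array([0.210, 0.262, 0.318, 0.371, 0.422, 0.489])

slope, intercept = np.polyfit(conc, absorbance, 1)
residuals = absorbance - (slope * conc + intercept)
sigma = residuals.std(ddof=2)     # sd of regression residuals (2 fitted params)

# Common ICH Q2 estimates; an assumption, as the abstract omits the procedure:
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
```

    By construction LOQ/LOD = 10/3.3 ≈ 3.03, which is roughly the ratio seen in the reported figures.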

  14. Tip-tilt disturbance model identification based on non-linear least squares fitting for Linear Quadratic Gaussian control

    NASA Astrophysics Data System (ADS)

    Yang, Kangjian; Yang, Ping; Wang, Shuai; Dong, Lizhi; Xu, Bing

    2018-05-01

    We propose a method to identify a tip-tilt disturbance model for Linear Quadratic Gaussian (LQG) control. The identification method, based on the Levenberg-Marquardt algorithm, requires little prior information and no auxiliary system, and it is convenient for identifying the tip-tilt disturbance model on-line for real-time control. It thus allows LQG control to run efficiently in different adaptive optics systems for vibration mitigation. The validity of LQG control combined with this tip-tilt disturbance model identification method is verified with experimental data, replayed in simulation.
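
    A minimal Levenberg-Marquardt loop of the kind such an identification relies on can be sketched for a hypothetical single-frequency vibration model y = a·sin(wt + phi); the model, data and starting point are all invented for illustration and are not the paper's disturbance model.

```python
import numpy as np

def levenberg_marquardt(t, y, p0, n_iter=100):
    """Minimal Levenberg-Marquardt fit of y ~ a*sin(w*t + phi); a generic
    sketch, not the paper's identification procedure."""
    p = np.array(p0, dtype=float)
    lam = 1e-3
    def residual(p):
        a, w, phi = p
        return y - a * np.sin(w * t + phi)
    def jacobian(p):
        a, w, phi = p
        s, c = np.sin(w * t + phi), np.cos(w * t + phi)
        return np.column_stack([-s, -a * t * c, -a * c])
    r = residual(p)
    cost = r @ r
    for _ in range(n_iter):
        J = jacobian(p)
        JtJ = J.T @ J
        g = J.T @ r
        # Damped normal equations: (J^T J + lam*diag(J^T J)) step = -J^T r
        step = np.linalg.solve(JtJ + lam * np.diag(np.diag(JtJ)), -g)
        p_try = p + step
        r_try = residual(p_try)
        cost_try = r_try @ r_try
        if cost_try < cost:              # accept: move toward Gauss-Newton
            p, r, cost, lam = p_try, r_try, cost_try, lam * 0.5
        else:                            # reject: move toward gradient descent
            lam *= 10.0
    return p

t = np.linspace(0.0, 5.0, 200)
true = (2.0, 3.0, 0.5)                   # amplitude, frequency, phase
y = true[0] * np.sin(true[1] * t + true[2])
a, w, phi = levenberg_marquardt(t, y, p0=(1.8, 2.9, 0.4))
```

    The adaptive damping term is what lets the method run with only a rough starting guess, which is why little prior information is needed.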

  15. On spurious detection of linear response and misuse of the fluctuation-dissipation theorem in finite time series

    NASA Astrophysics Data System (ADS)

    Gottwald, Georg A.; Wormell, J. P.; Wouters, Jeroen

    2016-09-01

    Using a sensitive statistical test we determine whether or not one can detect the breakdown of linear response given observations of deterministic dynamical systems. A goodness-of-fit statistic is developed for a linear statistical model of the observations, based on results on central limit theorems for deterministic dynamical systems, and is used to detect linear response breakdown. We apply the method to discrete maps which do not obey linear response and show that successful detection of the breakdown depends on the length of the time series, the magnitude of the perturbation and the choice of the observable. We find that, in order to reliably reject the assumption of linear response for typical observables, sufficiently large data sets are needed. Even for simple systems such as the logistic map, one needs on the order of 10^6 observations to reliably detect the breakdown with a confidence level of 95%; if fewer observations are available one may be falsely led to conclude that linear response theory is valid. The smaller the applied perturbation, the more data are required. For judiciously chosen observables the necessary amount of data can be drastically reduced, but this requires detailed a priori knowledge about the invariant measure which is typically not available for complex dynamical systems. Furthermore we explore the use of the fluctuation-dissipation theorem (FDT) in cases with limited data length or coarse-graining of observations. The FDT, if applied naively to a system without linear response, is shown to be very sensitive to the details of the sampling method, resulting in erroneous predictions of the response.

  16. A calibration method of infrared LVF-based spectroradiometer

    NASA Astrophysics Data System (ADS)

    Liu, Jiaqing; Han, Shunli; Liu, Lei; Hu, Dexin

    2017-10-01

    In this paper, a calibration method for an LVF-based spectroradiometer is summarized, covering both spectral and radiometric calibration. The spectral calibration proceeds as follows: first, the relationship between the stepping motor's step number and the transmitted wavelength is derived by theoretical calculation, including a non-linearity correction of the LVF; second, a line-to-line method is used to correct the theoretical wavelength; finally, 3.39 μm and 10.69 μm lasers are used to validate the spectral calibration, showing that the sought accuracy of 0.1% or better is achieved. A new sub-region multi-point calibration method is used for the radiometric calibration to improve accuracy; results show that the sought accuracy of 1% or better is achieved.

  17. Design of Linear Control System for Wind Turbine Blade Fatigue Testing

    NASA Astrophysics Data System (ADS)

    Toft, Anders; Roe-Poulsen, Bjarke; Christiansen, Rasmus; Knudsen, Torben

    2016-09-01

    This paper proposes a linear method for wind turbine blade fatigue testing at Siemens Wind Power. The setup consists of a blade, an actuator (motor and load mass) that acts on the blade with a sinusoidal moment, and a distribution of strain gauges to measure the blade flexure. Based on the frequency of the sinusoidal input, the blade will start oscillating with a given gain, hence the objective of the fatigue test is to make the blade oscillate with a controlled amplitude. The system currently in use is based on frequency control, which involves some non-linearities that make the system difficult to control. To make a linear controller, a different approach has been chosen, namely a controller which regulates not the input frequency but the input amplitude. A non-linear mechanical model for the blade and the motor has been constructed. This model has been simplified based on the desired output, namely the amplitude of the blade. Furthermore, the model has been linearised to make it suitable for linear analysis and control design methods. The controller is designed based on the simplified and linearised model, and its gain parameter is determined using pole placement. The model variants have been simulated in the MATLAB toolbox Simulink, which shows that the controller design based on the simple model performs adequately with the non-linear model. Moreover, the developed controller solves the robustness issue found in the existing solution and also reduces the energy needed for actuation, as it always operates at the blade eigenfrequency.

  18. Comparison of the occlusal contact area of virtual models and actual models: a comparative in vitro study on Class I and Class II malocclusion models.

    PubMed

    Lee, Hyemin; Cha, Jooly; Chun, Youn-Sic; Kim, Minji

    2018-06-19

    The occlusal registration of virtual models taken by intraoral scanners sometimes shows patterns which seem much different from the patients' occlusion. Therefore, this study aims to evaluate the accuracy of virtual occlusion by comparing the virtual occlusal contact area with the actual occlusal contact area on plaster models in vitro. Plaster dental models, 24 sets of Class I models and 20 sets of Class II models, were divided into molar, premolar, and anterior groups. The occlusal contact areas calculated by the Prescale method and by the virtual occlusion of the scanning method were compared, and the ratios of the molar and incisor areas were compared in order to find any particular tendencies. There was no significant difference between the Prescale results and the scanner results in both the molar and premolar groups (p = 0.083 and 0.053, respectively). On the other hand, there was a significant difference between the Prescale and the scanner results in the anterior group, with the scanner results overestimating the occlusal contact points (p < 0.05). In the molar group, regression analysis showed that the two variables have a linear correlation, with a regression slope of 0.917 and R² of 0.930. The premolar and anterior groups showed a weaker linear relationship and greater dispersion. The difference between actual and virtual occlusion appeared in the anterior portion, where overestimation was observed in the virtual model obtained by the scanning method. Nevertheless, the molar and premolar areas showed relatively accurate occlusal contact areas in the virtual model.

  19. [Quantitative evaluation of Gd-EOB-DTPA uptake in phantom study for liver MRI].

    PubMed

    Hayashi, Norio; Miyati, Tosiaki; Koda, Wataru; Suzuki, Masayuki; Sanada, Shigeru; Ohno, Naoki; Hamaguchi, Takashi; Matsuura, Yukihiro; Kawahara, Kazuhiro; Yamamoto, Tomoyuki; Matsui, Osamu

    2010-05-20

    Gd-EOB-DTPA is a new liver-specific MRI contrast medium. In the hepatobiliary phase, the contrast medium is taken up by normal liver tissue, so normal liver shows high intensity, tumor/liver contrast becomes high, and diagnostic ability improves. To indicate the degree of uptake of the contrast medium, the enhancement ratio (ER) is calculated as (signal intensity (SI) after injection - SI before injection) / SI before injection. However, because there is no linearity between contrast medium concentration and SI, the ER is not correctly estimated by this method. We discuss a method of measuring the ER based on SI and T(1) values using a phantom. We used a column phantom with an internal diameter of 3 cm, filled with diluted Gd-EOB-DTPA solution; the T(1) value was also measured by the IR method. The ER measuring technique consists of the following three components: 1) measurement of ER based on differences in 1/T(1) values using the variable flip angle (FA) method, 2) measurement of differences in SI, and 3) measurement of differences in 1/T(1) values using the IR method. The ER values calculated by these three methods were compared. In measurements made using the variable FA method and the IR method, linearity was found between contrast medium concentration and ER. On the other hand, linearity was not found between contrast medium concentration and SI. For calculation of the ER with Gd-EOB-DTPA, a more correct ER is obtained by measuring the T(1) value using the variable FA method.
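
    The two ER definitions compared in the study can be written down directly: the SI-based formula is the one quoted in the abstract, while the R1-based form uses differences in 1/T(1). All numbers below are illustrative, not the phantom measurements.

```python
# Enhancement ratio from signal intensity vs. from relaxation rate 1/T1.
# The values are invented for illustration, not the paper's phantom data.
si_pre, si_post = 100.0, 180.0   # signal intensity before/after contrast
t1_pre, t1_post = 0.8, 0.4       # T1 in seconds before/after contrast

er_si = (si_post - si_pre) / si_pre            # ER from signal intensity
er_r1 = (1/t1_post - 1/t1_pre) / (1/t1_pre)    # ER from 1/T1 (R1) values
```

    Because 1/T1 is (approximately) linear in contrast-medium concentration while SI is not, the two definitions diverge as uptake increases, which is the discrepancy the phantom study quantifies.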

  20. A comparison of methods for estimating the random effects distribution of a linear mixed model.

    PubMed

    Ghidey, Wendimagegn; Lesaffre, Emmanuel; Verbeke, Geert

    2010-12-01

    This article reviews various recently suggested approaches to estimating the random effects distribution in a linear mixed model: (1) the smoothing-by-roughening approach of Shen and Louis, (2) the semi-non-parametric approach of Zhang and Davidian, (3) the heterogeneity model of Verbeke and Lesaffre, and (4) the flexible approach of Ghidey et al. These four approaches are compared via an extensive simulation study. We conclude that, for the considered cases, the approach of Ghidey et al. often has the smallest integrated mean squared error for estimating the random effects distribution. An analysis of a longitudinal dental data set illustrates the performance of the methods in a practical example.

  1. Real-time imaging of human brain function by near-infrared spectroscopy using an adaptive general linear model

    PubMed Central

    Abdelnour, A. Farras; Huppert, Theodore

    2009-01-01

    Near-infrared spectroscopy is a non-invasive neuroimaging method which uses light to measure changes in cerebral blood oxygenation associated with brain activity. In this work, we demonstrate the ability to record and analyze images of brain activity in real time using a 16-channel continuous-wave optical NIRS system. We propose a novel real-time analysis framework using an adaptive Kalman filter and a state-space model based on a canonical general linear model of brain activity. We show that our adaptive model has the ability to estimate single-trial brain activity events as we apply this method to track and classify experimental data acquired during an alternating bilateral self-paced finger tapping task. PMID:19457389
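
    A scalar sketch of the adaptive idea follows: a Kalman filter tracks a time-varying coefficient of a single regressor under a random-walk state model. The regressor, noise levels and coefficient jump are invented, and the paper's actual state-space general linear model has more structure than this one-channel toy.

```python
import numpy as np

def kalman_regression(x, y, q=1e-4, r=2.5e-3):
    """Track a slowly varying coefficient beta_t in y_t = beta_t * x_t + noise
    with a random-walk state model; q and r are assumed process/measurement
    noise variances, not values from the paper."""
    beta, P = 0.0, 1.0
    estimates = []
    for xt, yt in zip(x, y):
        P = P + q                        # predict (random-walk state)
        K = P * xt / (xt * xt * P + r)   # Kalman gain (observation matrix = xt)
        beta = beta + K * (yt - xt * beta)
        P = (1.0 - K * xt) * P
        estimates.append(beta)
    return np.array(estimates)

rng = np.random.default_rng(0)
n = 400
x = rng.normal(1.0, 0.3, n)              # regressor (e.g. a modelled response)
true_beta = np.where(np.arange(n) < 200, 0.5, 1.5)   # activity switches on
y = true_beta * x + rng.normal(0.0, 0.05, n)
beta_hat = kalman_regression(x, y)
```

    Each new sample updates the estimate in O(1), which is what makes this kind of filter suitable for single-trial, real-time analysis.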

  2. Acridine-1,8-diones - A new class of thermally stable NLOphores: Photophysical, (hyper)polarizability and TD-DFT studies

    NASA Astrophysics Data System (ADS)

    Thorat, Kishor G.; Tayade, Rajratna P.; Sekar, Nagaiyan

    2016-12-01

    Linear and non-linear optical properties of a series of new acridine-1,8-dione derivatives are investigated in different solvents using solvatochromic and computational methods. The values of the first-order hyperpolarizabilities (βCT or β0) obtained by the solvatochromic and computational methods are compared with the reported values for urea and 3-aminoxanthone. The new materials under study show first-hyperpolarizability values 2.3 to 5.6 times larger than that of urea and 2 to 15.6 times larger than that of 3-aminoxanthone. The dyes possess very high thermal stabilities. The dyes were prepared using a one-pot multicomponent reaction between dimedone, various aromatic aldehydes and amino acids, and were characterized by spectroscopic techniques.

  3. Research on Fault Rate Prediction Method of T/R Component

    NASA Astrophysics Data System (ADS)

    Hou, Xiaodong; Yang, Jiangping; Bi, Zengjun; Zhang, Yu

    2017-07-01

    The T/R component is an important part of a large phased-array radar antenna; because of the large number of components and their high fault rate, fault prediction for them is of considerable significance. To address the problems of the traditional grey model GM(1,1) in practical operation, a discrete grey model is established based on the original model, an optimization factor is introduced to optimize the background value, and a linear term is added to the prediction model, yielding an improved linear-regression discrete grey model. Finally, an example is simulated and compared with other models. The results show that the method proposed in this paper has higher accuracy, is simple to solve, and has a wider scope of application.

  4. NLT and extrapolated DLT: 3-D cinematography alternatives for enlarging the volume of calibration.

    PubMed

    Hinrichs, R N; McLean, S P

    1995-10-01

    This study investigated the accuracy of the direct linear transformation (DLT) and non-linear transformation (NLT) methods of 3-D cinematography/videography. A comparison of standard DLT, extrapolated DLT, and NLT calibrations showed the standard (non-extrapolated) DLT to be the most accurate, especially when a large number of control points (40-60) were used. The NLT was more accurate than the extrapolated DLT when the level of extrapolation exceeded 100%. The results indicated that, when possible, one should use the DLT with a control object sufficiently large to encompass the entire activity being studied. However, in situations where the activity volume exceeds the size of one's DLT control object, the NLT method should be considered.
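
    The standard DLT calibration step referred to above reduces to a linear least-squares problem in the 11 DLT parameters. The sketch below recovers known parameters from synthetic, noise-free control points; the parameter values and point layout are invented for illustration.

```python
import numpy as np

def dlt_calibrate(XYZ, uv):
    """Solve the 11 DLT parameters from >= 6 non-coplanar control points by
    linear least squares; a textbook sketch of the standard DLT step."""
    rows, rhs = [], []
    for (X, Y, Z), (u, v) in zip(XYZ, uv):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z]); rhs.append(u)
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z]); rhs.append(v)
    L, *_ = np.linalg.lstsq(np.array(rows, float), np.array(rhs, float),
                            rcond=None)
    return L

def dlt_project(L, XYZ):
    """Map a 3-D point to image coordinates with DLT parameters L[0..10]."""
    X, Y, Z = XYZ
    d = L[8]*X + L[9]*Y + L[10]*Z + 1.0
    return np.array([(L[0]*X + L[1]*Y + L[2]*Z + L[3]) / d,
                     (L[4]*X + L[5]*Y + L[6]*Z + L[7]) / d])

# Synthetic check: project control points with known parameters, recalibrate.
L_true = np.array([1.2, 0.1, -0.3, 50, 0.05, 1.1, 0.2, 40, 1e-3, -2e-3, 5e-4])
pts = [(x, y, z) for x in (0, 1, 2) for y in (0, 1, 2) for z in (0, 1, 2)]
obs = [dlt_project(L_true, p) for p in pts]
L_est = dlt_calibrate(pts, obs)
```

    Extrapolated DLT corresponds to applying `L_est` to points outside the control volume spanned by `pts`, which is where the study found accuracy degrades.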

  5. Fault detection for discrete-time LPV systems using interval observers

    NASA Astrophysics Data System (ADS)

    Zhang, Zhi-Hui; Yang, Guang-Hong

    2017-10-01

    This paper is concerned with the fault detection (FD) problem for discrete-time linear parameter-varying systems subject to bounded disturbances. A parameter-dependent FD interval observer is designed based on parameter-dependent Lyapunov and slack matrices. The design method is presented by translating the parameter-dependent linear matrix inequalities (LMIs) into finite ones. In contrast to the existing results based on parameter-independent and diagonal Lyapunov matrices, the derived disturbance attenuation, fault sensitivity and nonnegative conditions lead to less conservative LMI characterisations. Furthermore, without the need to design the residual evaluation functions and thresholds, the residual intervals generated by the interval observers are used directly for FD decision. Finally, simulation results are presented for showing the effectiveness and superiority of the proposed method.

  6. Determination of albumin in bronchoalveolar lavage fluid by flow-injection fluorometry using chromazurol S.

    PubMed

    Sato, Takaji; Saito, Yoshihiro; Chikuma, Masahiko; Saito, Yutaka; Nagai, Sonoko

    2008-03-01

    A highly sensitive flow-injection fluorometric method for the determination of albumin was developed and applied to the determination of albumin in human bronchoalveolar lavage fluid (BALF). The method is based on the binding of chromazurol S (CAS) to albumin. The calibration curve was linear in the range of 5-200 microg/ml of albumin. A highly linear correlation (r=0.986) was observed between the albumin level in BALF samples (n=25) determined by the proposed method and by a conventional fluorometric method using CAS (the CAS manual method). The IgG interference was lower in the CAS flow-injection method than in the CAS manual method. The albumin level in BALF collected from healthy volunteers (n=10) was 58.5+/-13.1 microg/ml. The albumin levels in BALF samples obtained from patients with sarcoidosis and idiopathic pulmonary fibrosis were increased. These findings show that the determination of albumin levels in BALF samples is useful for investigating lung diseases and that the CAS flow-injection method is promising for the determination of trace albumin in BALF samples, because it is sensitive and precise.

  7. Robust design optimization using the price of robustness, robust least squares and regularization methods

    NASA Astrophysics Data System (ADS)

    Bukhari, Hassan J.

    2017-12-01

    In this paper a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning it is minimally sensitive to any perturbations in the parameters. The first method uses the price-of-robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting how many parameters may perturb. The second method uses the robust least squares method to determine the optimal parameters when the data itself, rather than the parameters, is subject to perturbations. The last method manages uncertainty by restricting the perturbation on parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems, one linear and the other non-linear. This methodology is compared with a prior method using multiple Monte Carlo simulation runs, which shows that the approach presented in this paper results in better performance.

  8. Estimation of Gravity Parameters Related to Simple Geometrical Structures by Developing an Approach Based on Deconvolution and Linear Optimization Techniques

    NASA Astrophysics Data System (ADS)

    Asfahani, J.; Tlas, M.

    2015-10-01

    An easy and practical method for interpreting residual gravity anomalies due to simple geometrically shaped models such as cylinders and spheres has been proposed in this paper. This proposed method is based on both the deconvolution technique and the simplex algorithm for linear optimization to most effectively estimate the model parameters, e.g., the depth from the surface to the center of a buried structure (sphere or horizontal cylinder) or the depth from the surface to the top of a buried object (vertical cylinder), and the amplitude coefficient from the residual gravity anomaly profile. The method was tested on synthetic data sets corrupted by different white Gaussian random noise levels to demonstrate the capability and reliability of the method. The results acquired show that the estimated parameter values derived by this proposed method are close to the assumed true parameter values. The validity of this method is also demonstrated using real field residual gravity anomalies from Cuba and Sweden. Comparable and acceptable agreement is shown between the results derived by this method and those derived from real field data.

  9. Sensitivity of control-augmented structure obtained by a system decomposition method

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Bloebaum, Christina L.; Hajela, Prabhat

    1988-01-01

    The verification of a method for computing sensitivity derivatives of a coupled system is presented. The method deals with a system whose analysis can be partitioned into subsets that correspond to disciplines and/or physical subsystems that exchange input-output data with each other. The method uses the partial sensitivity derivatives of the output with respect to input obtained for each subset separately to assemble a set of linear, simultaneous, algebraic equations that are solved for the derivatives of the coupled system response. This sensitivity analysis is verified using an example of a cantilever beam augmented with an active control system to limit the beam's dynamic displacements under an excitation force. The verification shows good agreement of the method with reference data obtained by a finite difference technique involving entire system analysis. The usefulness of a system sensitivity method in optimization applications by employing a piecewise-linear approach to the same numerical example is demonstrated. The method's principal merits are its intrinsically superior accuracy in comparison with the finite difference technique, and its compatibility with the traditional division of work in complex engineering tasks among specialty groups.

  10. Broad-band simulation of M7.2 earthquake on the North Tehran fault, considering non-linear soil effects

    NASA Astrophysics Data System (ADS)

    Majidinejad, A.; Zafarani, H.; Vahdani, S.

    2018-05-01

    The North Tehran fault (NTF) is known to be one of the most drastic sources of seismic hazard for the city of Tehran. In this study, we provide broad-band (0-10 Hz) ground motions for the city as a consequence of a probable M7.2 earthquake on the NTF. Low-frequency motions (0-2 Hz) are provided from spectral element dynamic simulation of 17 scenario models. High-frequency (2-10 Hz) motions are calculated with a physics-based method based on S-to-S backscattering theory. Broad-band ground motions at the bedrock level show amplifications, both at low and high frequencies, due to the existence of the deep Tehran basin in the vicinity of the NTF. By employing soil profiles obtained from regional studies, the effect of shallow soil layers on broad-band ground motions is investigated by both linear and non-linear analyses. While the linear soil response overestimates ground motion prediction equations, the non-linear response predicts plausible results within one standard deviation of empirical relationships. Average Peak Ground Accelerations (PGAs) at the northern, central and southern parts of the city are estimated at about 0.93, 0.59 and 0.4 g, respectively. Increased damping caused by non-linear soil behaviour reduces the linear soil responses considerably, in particular at frequencies above 3 Hz. Non-linear deamplification reduces linear spectral accelerations by up to 63 per cent at stations above soft thick sediments. By performing more general analyses, which exclude source-to-site effects on stations, a correction function is proposed for typical site classes of Tehran. Parameters for the function, which reduces the linear soil response to take non-linear soil deamplification into account, are provided for various frequencies in the range of engineering interest. In addition to the fully non-linear analyses, equivalent-linear calculations were also conducted; their comparison shows that the equivalent-linear method is adequate for large peaks and low frequencies, but falls short for small to medium peaks and for motions above 3 Hz.

  11. A sequential linear optimization approach for controller design

    NASA Technical Reports Server (NTRS)

    Horta, L. G.; Juang, J.-N.; Junkins, J. L.

    1985-01-01

    A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
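The first-order eigenvalue sensitivity that drives this linearization can be sketched directly: for A(p) with right eigenvector u_k and left eigenvector w_k (normalized so that w_k·u_k = 1), dλ_k/dp = w_k·(dA/dp)·u_k. The 2x2 matrix below is an illustrative stand-in for a structural model, not from the paper:

```python
import numpy as np

def A(p):
    """Toy parameter-dependent system matrix."""
    return np.array([[2.0 + p, 1.0],
                     [1.0,     3.0]])

dA = np.array([[1.0, 0.0],      # exact dA/dp for this parametrization
               [0.0, 0.0]])

p = 2.0
lam, U = np.linalg.eig(A(p))
W = np.linalg.inv(U)            # rows are left eigenvectors with W @ U = I
k = int(np.argmax(lam.real))    # track the largest eigenvalue
s = float((W[k] @ dA @ U[:, k]).real)   # first-order sensitivity d(lambda)/dp

# Finite-difference check through full re-analysis (the costlier alternative)
h = 1e-6
fd = (np.max(np.linalg.eigvalsh(A(p + h))) - np.max(np.linalg.eigvalsh(A(p)))) / h
print(s, fd)
```

The analytic sensitivity matches the finite difference but needs no re-analysis, which is the accuracy/cost advantage the abstract points to.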

  12. Applications of Space-Filling-Curves to Cartesian Methods for CFD

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Berger, Marsha J.; Murman, Scott M.

    2003-01-01

    The proposed paper presents a variety of novel uses of Space-Filling-Curves (SFCs) for Cartesian mesh methods in CFD. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, most are applicable on general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single O(N log N) SFC-based reordering to produce single-pass (O(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The inter-mesh interpolation operator has many practical applications, including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 512 CPUs even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 10% of ideal even with only around 50,000 cells in each subdomain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with O(max(M,N)) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for finite-difference-based gradient design methods.
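The reorder-then-partition idea can be sketched with a Z-order (Morton) curve: sort cells once by their interleaved-bit key, then cut the curve into equal contiguous segments in a single O(N) pass. The 2-D toy mesh and bit widths below are illustrative choices, not details from the paper:

```python
def interleave_bits(i, j, bits=16):
    """Morton (Z-order) key for 2-D integer cell coordinates (i, j)."""
    key = 0
    for b in range(bits):
        key |= ((i >> b) & 1) << (2 * b) | ((j >> b) & 1) << (2 * b + 1)
    return key

def sfc_partition(cells, nparts):
    """O(N log N) sort along the curve, then one O(N) pass to cut it."""
    order = sorted(cells, key=lambda c: interleave_bits(*c))
    n = len(order)
    return [order[p * n // nparts:(p + 1) * n // nparts] for p in range(nparts)]

cells = [(i, j) for i in range(8) for j in range(8)]   # 8x8 Cartesian mesh
parts = sfc_partition(cells, 4)
print([len(p) for p in parts])
```

Because the Z-order curve visits one quadrant before moving to the next, the equal-size cuts are spatially compact, which is what makes the resulting partitions close to ideal.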

  13. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    PubMed

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  14. Optimal preview control for a linear continuous-time stochastic control system in finite-time horizon

    NASA Astrophysics Data System (ADS)

    Wu, Jiang; Liao, Fucheng; Tomizuka, Masayoshi

    2017-01-01

    This paper discusses the design of the optimal preview controller for a linear continuous-time stochastic control system in a finite-time horizon, using the method of the augmented error system. First, an assistant system is introduced for state shifting. Then, to overcome the difficulty that the state equation of the stochastic control system cannot be differentiated because of Brownian motion, an integrator is introduced. Thus, the augmented error system, which contains the integrator vector, control input, reference signal, error vector and state of the system, is reconstructed. This transforms the tracking problem of the optimal preview control of the linear stochastic control system into the optimal output tracking problem of the augmented error system. With the method of dynamic programming from the theory of stochastic control, the optimal controller with previewable signals for the augmented error system, which is equivalent to the controller of the original system, is obtained. Finally, numerical simulations show the effectiveness of the controller.

  15. Ultraprecision XY stage using a hybrid bolt-clamped Langevin-type ultrasonic linear motor for continuous motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Dong-Jin; Lee, Sun-Kyu, E-mail: skyee@gist.ac.kr

    2015-01-15

    This paper presents a design and control system for an XY stage driven by an ultrasonic linear motor. In this study, a hybrid bolt-clamped Langevin-type ultrasonic linear motor was manufactured and then operated at the resonance frequency of the third longitudinal and the sixth lateral modes. These two modes were matched through the preload adjustment and precisely tuned by the frequency matching method based on the impedance matching method with consideration of the different moving weights. The XY stage was evaluated in terms of position and circular motion. To achieve both fine and stable motion, the controller consisted of a nominal characteristics trajectory following (NCTF) control for continuous motion, dead zone compensation, and a switching controller based on the different NCTFs for the macro- and micro-dynamics regimes. The experimental results showed that the developed stage enables positioning and continuous motion with nanometer-level accuracy.

  16. A Kernel Embedding-Based Approach for Nonstationary Causal Model Inference.

    PubMed

    Hu, Shoubo; Chen, Zhitang; Chan, Laiwan

    2018-05-01

    Although nonstationary data are more common in the real world, most existing causal discovery methods do not take nonstationarity into consideration. In this letter, we propose a kernel embedding-based approach, ENCI, for nonstationary causal model inference where data are collected from multiple domains with varying distributions. In ENCI, we transform the complicated relation of a cause-effect pair into a linear model of variables whose observations correspond to the kernel embeddings of the cause and effect distributions in different domains. In this way, we are able to estimate the causal direction by exploiting the causal asymmetry of the transformed linear model. Furthermore, we extend ENCI to causal graph discovery for multiple variables by transforming the relations among them into a linear non-Gaussian acyclic model. We show that by exploiting the nonstationarity of distributions, both cause-effect pairs and two kinds of causal graphs are identifiable under mild conditions. Experiments on synthetic and real-world data are conducted to justify the efficacy of ENCI over major existing methods.

  17. Novel permanent magnet linear motor with isolated movers: analytical, numerical and experimental study.

    PubMed

    Yan, Liang; Peng, Juanjuan; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming

    2014-10-01

    This paper proposes a novel permanent magnet linear motor possessing two movers and one stator. The two movers are isolated and can interact with the stator poles to generate independent forces and motions. Compared with conventional multiple-motor driving systems, it helps to increase the system compactness, and thus improve the power density and working efficiency. The magnetic field distribution is obtained by using the equivalent magnetic circuit method. Following that, the formulation of force output considering armature reaction is carried out. Then the inductances are analyzed with the finite element method to investigate the relationships between the two movers. It is found that the mutual inductances are nearly equal to zero, and thus the interaction between the two movers is negligible. A research prototype of the linear motor and a thrust force measurement apparatus have been developed. Both numerical computation and experimental measurement are conducted to validate the analytical model of thrust force. Comparison shows that the analytical model matches the numerical and experimental results well.

  18. Fitting a Point Cloud to a 3d Polyhedral Surface

    NASA Astrophysics Data System (ADS)

    Popov, E. V.; Rotkov, S. I.

    2017-05-01

    The ability to measure parameters of large-scale objects in a contactless fashion has tremendous potential in a number of industrial applications. However, this problem is usually associated with the ambiguous task of comparing two data sets specified in two different co-ordinate systems. This paper deals with the study of fitting a set of unorganized points to a polyhedral surface. The developed approach uses Principal Component Analysis (PCA) and the Stretched Grid Method (SGM) to replace the solution of a non-linear problem with several linear steps. The squared distance (SD) is the general criterion used to control the convergence of the set of points to the target surface. The described numerical experiment concerns the remote measurement of a large-scale aerial in the form of a frame with a parabolic shape. The experiment shows that the fitting process of a point cloud to a target surface converges in several linear steps. The method is applicable to the remote geometry measurement of large-scale objects in a contactless fashion.
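The PCA step, which removes the coordinate-system ambiguity with a purely linear transformation, can be sketched as follows. The elongated synthetic cloud, its rotation angle, and its offset are all invented for illustration:

```python
import numpy as np

# Align an unorganized point cloud with its principal axes: center the
# cloud, take the SVD, and rotate into the principal frame.  This replaces
# one non-linear registration step with a linear transformation.

rng = np.random.default_rng(1)
local = rng.normal(size=(500, 3)) * np.array([10.0, 2.0, 0.5])  # elongated cloud
angle = np.deg2rad(30.0)
R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
              [np.sin(angle),  np.cos(angle), 0.0],
              [0.0,            0.0,           1.0]])
cloud = local @ R.T + np.array([5.0, -3.0, 2.0])                # rotate + shift

centered = cloud - cloud.mean(axis=0)
_, s, Vt = np.linalg.svd(centered, full_matrices=False)
aligned = centered @ Vt.T        # coordinates in the principal frame

# After alignment the covariance is (near) diagonal: the axes decouple
cov = np.cov(aligned.T)
print(np.round(cov, 3))
```

With the axes decoupled, a subsequent surface-fitting step (SGM in the paper) can proceed coordinate by coordinate.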

  19. On the Local Convergence of Pattern Search

    NASA Technical Reports Server (NTRS)

    Dolan, Elizabeth D.; Lewis, Robert Michael; Torczon, Virginia; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    We examine the local convergence properties of pattern search methods, complementing the previously established global convergence properties for this class of algorithms. We show that the step-length control parameter which appears in the definition of pattern search algorithms provides a reliable asymptotic measure of first-order stationarity. This gives an analytical justification for a traditional stopping criterion for pattern search methods. Using this measure of first-order stationarity, we analyze the behavior of pattern search in the neighborhood of an isolated local minimizer. We show that a recognizable subsequence converges r-linearly to the minimizer.
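A minimal compass-search sketch makes the role of the step-length parameter concrete: the step delta shrinks only after a failed poll, and, as the analysis above suggests, its final size serves as the traditional stopping criterion. The quadratic test function and all tolerances are illustrative choices:

```python
import numpy as np

def compass_search(f, x0, delta=1.0, tol=1e-8, max_iter=10_000):
    """Poll +/- coordinate directions; halve the step after a failed poll."""
    x, n = np.asarray(x0, float), len(x0)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):
            ft = f(x + delta * d)
            if ft < fx:
                x, fx, improved = x + delta * d, ft, True
                break
        if not improved:
            delta *= 0.5            # failed poll: contract the step length
            if delta < tol:         # small delta ~ near-stationary point
                break
    return x, delta

f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
x_opt, delta_final = compass_search(f, [5.0, 5.0])
print(x_opt, delta_final)
```

On this smooth convex function the iterates land on the minimizer and delta then contracts geometrically, consistent with the r-linear local convergence result.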

  20. Quantitative sensing of corroded steel rebar embedded in cement mortar specimens using ultrasonic testing

    NASA Astrophysics Data System (ADS)

    Owusu Twumasi, Jones; Le, Viet; Tang, Qixiang; Yu, Tzuyang

    2016-04-01

    Corrosion of steel reinforcing bars (rebars) is the primary cause of the deterioration of reinforced concrete structures. Traditional corrosion monitoring methods such as half-cell potential and linear polarization resistance can only detect the presence of corrosion but cannot quantify it. This study presents an experimental investigation of quantifying the degree of corrosion of steel rebar inside cement mortar specimens using ultrasonic testing (UT). A UT device with two 54 kHz transducers was used to measure the ultrasonic pulse velocity (UPV) of cement mortar, uncorroded and corroded reinforced cement mortar specimens, utilizing the direct transmission method. The results obtained from the study show that UPV decreases linearly with increasing degree of corrosion and corrosion-induced cracks (surface cracks). To quantify the degree of corrosion, a model was developed by simultaneously fitting UPV and surface crack width measurements to a two-parameter linear model. The proposed model can be used for predicting the degree of corrosion of steel rebar embedded in cement mortar under conditions similar to those used in this study, up to 3.03%. Furthermore, the modeling approach can be applied to corroded reinforced concrete specimens with additional modification. The findings from this study show that UT has the potential of quantifying the degree of corrosion inside reinforced cement mortar specimens.

  1. The validity of using an electrocutaneous device for pain assessment in patients with cervical radiculopathy.

    PubMed

    Abbott, Allan; Ghasemi-Kafash, Elaheh; Dedering, Åsa

    2014-10-01

    The purpose of this study was to evaluate the validity and preference for assessing pain magnitude with electrocutaneous testing (ECT) compared to the visual analogue scale (VAS) and Borg CR10 scale in men and women with cervical radiculopathy of varying sensory phenotypes. An additional purpose was to investigate ECT sensory and pain thresholds in men and women with cervical radiculopathy of varying sensory phenotypes. This is a cross-sectional study of 34 patients with cervical radiculopathy. Scatterplots and linear regression were used to investigate bivariate relationships between ECT, VAS and Borg CR10 methods of pain magnitude measurement as well as ECT sensory and pain thresholds. The use of the ECT pain magnitude matching paradigm for patients with cervical radiculopathy with normal sensory phenotype shows good linear association with arm pain VAS (R(2) = 0.39), neck pain VAS (R(2) = 0.38), arm pain Borg CR10 scale (R(2) = 0.50) and neck pain Borg CR10 scale (R(2) = 0.49), suggesting acceptable validity of the procedure. For patients with hypoesthesia and hyperesthesia sensory phenotypes, the ECT pain magnitude matching paradigm does not show adequate linear association with rating scale methods, rendering the validity of the procedure doubtful. ECT for sensory and pain threshold investigation, however, provides a method to objectively assess global sensory function in conjunction with sensory receptor specific bedside examination measures.

  2. The preconditioned Gauss-Seidel method faster than the SOR method

    NASA Astrophysics Data System (ADS)

    Niki, Hiroshi; Kohno, Toshiyuki; Morimoto, Munenori

    2008-09-01

    In recent years, a number of preconditioners have been applied to linear systems [A.D. Gunawardena, S.K. Jain, L. Snyder, Modified iterative methods for consistent linear systems, Linear Algebra Appl. 154-156 (1991) 123-143; T. Kohno, H. Kotakemori, H. Niki, M. Usui, Improving modified Gauss-Seidel method for Z-matrices, Linear Algebra Appl. 267 (1997) 113-123; H. Kotakemori, K. Harada, M. Morimoto, H. Niki, A comparison theorem for the iterative method with the preconditioner (I+Smax), J. Comput. Appl. Math. 145 (2002) 373-378; H. Kotakemori, H. Niki, N. Okamoto, Accelerated iteration method for Z-matrices, J. Comput. Appl. Math. 75 (1996) 87-97; M. Usui, H. Niki, T. Kohno, Adaptive Gauss-Seidel method for linear systems, Internat. J. Comput. Math. 51 (1994) 119-125 [10
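The (I+S)-type preconditioning studied in the works cited above can be sketched as follows: left-multiply Ax = b by (I + S), where S cancels the first superdiagonal of A, then run plain Gauss-Seidel on the modified system. The tridiagonal Z-matrix below is an illustrative test problem, not one from the paper:

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=10_000):
    """Plain Gauss-Seidel sweeps; returns the solution and iteration count."""
    n = len(b)
    x = np.zeros(n)
    for it in range(1, max_iter + 1):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        if np.linalg.norm(A @ x - b) < tol:
            return x, it
    return x, max_iter

n = 50
A = np.eye(n) - 0.45 * np.eye(n, k=1) - 0.45 * np.eye(n, k=-1)  # Z-matrix
b = np.ones(n)

S = np.zeros((n, n))
idx = np.arange(n - 1)
S[idx, idx + 1] = -A[idx, idx + 1]      # S cancels the superdiagonal of A
P = np.eye(n) + S

x1, iters_plain = gauss_seidel(A, b)
x2, iters_pre = gauss_seidel(P @ A, P @ b)
print(iters_plain, iters_pre)
```

Both runs reach the same solution, but the preconditioned system converges in noticeably fewer sweeps, which is the effect the comparison theorems formalize.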

  3. An efficient analytical method for determination of S-phenylmercapturic acid in urine by HPLC fluorimetric detector to assessing benzene exposure.

    PubMed

    Mendes, Michele P Rocha; Silveira, Josianne Nicácio; Andre, Leiliane Coelho

    2017-09-15

    Benzene is an important occupational and environmental contaminant, naturally present in petroleum and as a by-product in the steel industry. Toxicological studies showed pronounced myelotoxic action, causing leukemia and other blood cell disorders. Assessment of benzene exposure is performed with biomarkers such as trans,trans-muconic acid (AttM) and S-phenylmercapturic acid (S-PMA) in urine. Due to the specificity of S-PMA, this biomarker has been proposed to assess lower levels of benzene in air. The aim of this study was to validate an analytical method for the quantification of S-PMA by High-Performance Liquid Chromatography with a fluorimetric detector. The analytical method for S-PMA in urine was developed using solid phase extraction (SPE) with a C-18 phase. The eluates were subjected to a water bath at 75°C under nitrogen to concentrate the analyte, followed by alkaline hydrolysis and derivatization with monobromobimane. The chromatography conditions were: reverse phase C-18 column (240 mm, 4 mm and 5 μm) at 35°C; acetonitrile and 0.5% acetic acid (50:50) as mobile phase with a flow of 0.8 mL/min. The limits of detection and quantification were 0.22 μg/L and 0.68 μg/L, respectively. The linearity was verified by simple linear regression, and the method exhibited good linearity in the range of 10-100 μg/L. There was no matrix effect for S-PMA at concentrations of 40, 60, 80 and 100 μg/L. The intra- and interassay precision showed coefficients of variation of less than 10%, and the recovery ranged from 83.4 to 102.8% with an average of 94.4%. The stability of S-PMA in urine stored at -20°C was seven weeks. In conclusion, the method presents satisfactory figures of merit. This proposed method for determining urinary S-PMA showed adequate sensitivity for assessment of occupational and environmental exposure to benzene using S-PMA as a biomarker of exposure. Copyright © 2017 Elsevier B.V. All rights reserved.
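The linearity and detection-limit arithmetic behind a validation like this can be sketched with a simple calibration fit. The peak-area responses below are synthetic, and the LOD/LOQ formulas use the common 3.3·σ/slope and 10·σ/slope conventions, which may differ from the criteria the authors actually applied:

```python
import numpy as np

conc = np.array([10.0, 25.0, 50.0, 75.0, 100.0])   # ug/L standards
area = np.array([5.1, 12.4, 25.2, 37.4, 50.3])     # detector response (synthetic)

# Simple linear regression over the working range
slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
ss_res = np.sum((area - pred) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                         # goodness of the linear fit

sigma = np.sqrt(ss_res / (len(conc) - 2))          # residual standard deviation
lod = 3.3 * sigma / slope                          # limit of detection
loq = 10.0 * sigma / slope                         # limit of quantification
print(round(r2, 4), round(lod, 2), round(loq, 2))
```

An r² close to 1 over the 10-100 ug/L range is what "good linearity" quantifies; LOD and LOQ then follow from the residual scatter and the calibration slope.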

  4. Linear nicking endonuclease-mediated strand-displacement DNA amplification.

    PubMed

    Joneja, Aric; Huang, Xiaohua

    2011-07-01

    We describe a method for linear isothermal DNA amplification using nicking endonuclease-mediated strand displacement by a DNA polymerase. The nicking of one strand of a DNA target by the endonuclease produces a primer for the polymerase to initiate synthesis. As the polymerization proceeds, the downstream strand is displaced into a single-stranded form while the nicking site is also regenerated. The combined continuous repetitive action of nicking by the endonuclease and strand-displacement synthesis by the polymerase results in linear amplification of one strand of the DNA molecule. We demonstrate that DNA templates up to 5000 nucleotides can be linearly amplified using a nicking endonuclease with a 7-bp recognition sequence and Sequenase version 2.0 in the presence of single-stranded DNA binding proteins. We also show that a mixture of three templates of 500, 1000, and 5000 nucleotides in length is linearly amplified with the original molar ratios of the templates preserved. Moreover, we demonstrate that a complex library of hydrodynamically sheared genomic DNA from bacteriophage lambda can be amplified linearly. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Linear nicking endonuclease-mediated strand displacement DNA amplification

    PubMed Central

    Joneja, Aric; Huang, Xiaohua

    2011-01-01

    We describe a method for linear isothermal DNA amplification using nicking endonuclease-mediated strand displacement by a DNA polymerase. The nicking of one strand of a DNA target by the endonuclease produces a primer for the polymerase to initiate synthesis. As the polymerization proceeds, the downstream strand is displaced into a single-stranded form while the nicking site is also regenerated. The combined continuous repetitive action of nicking by the endonuclease and strand displacement synthesis by the polymerase results in linear amplification of one strand of the DNA molecule. We demonstrate that DNA templates up to five thousand nucleotides can be linearly amplified using a nicking endonuclease with a seven base-pair recognition sequence and Sequenase version 2.0 in the presence of single-stranded DNA binding proteins. We also show that a mixture of three templates of 500, 1000, and 5000 nucleotides in length is linearly amplified with the original molar ratios of the templates preserved. Moreover, we demonstrate that a complex library of hydrodynamically sheared genomic DNA from bacteriophage lambda can be amplified linearly. PMID:21342654
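The arithmetic contrast between this linear regime and exponential PCR can be sketched in a few lines; the cycle counts and template numbers below are illustrative, and the model is deliberately idealized (one displaced strand per template per round, perfect doubling in PCR):

```python
def linear_amplification(templates, rounds):
    """Nick/extend/displace: the template count never grows, so copies
    accumulate linearly, preserving the molar ratios of a template mix."""
    copies = 0
    for _ in range(rounds):
        copies += templates      # one displaced strand per template per round
    return copies

def pcr(templates, cycles):
    """PCR: every strand is copied each cycle, so growth is exponential."""
    for _ in range(cycles):
        templates *= 2
    return templates

print(linear_amplification(10, 30), pcr(10, 30))
```

Because the linear scheme multiplies every template by the same round count, a 500:1000:5000-nt mixture keeps its original molar ratios, which is the property the abstract highlights.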

  6. Incompressible boundary-layer stability analysis of LFC experimental data for sub-critical Mach numbers. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Berry, S. A.

    1986-01-01

    An incompressible boundary-layer stability analysis of Laminar Flow Control (LFC) experimental data was completed and the results are presented. This analysis was undertaken for three reasons: to study laminar boundary-layer stability on a modern swept LFC airfoil; to calculate incompressible design limits of linear stability theory as applied to a modern airfoil at high subsonic speeds; and to verify the use of linear stability theory as a design tool. The experimental data were taken from the slotted LFC experiment recently completed in the NASA Langley 8-Foot Transonic Pressure Tunnel. Linear stability theory was applied and the results were compared with transition data to arrive at correlated n-factors. Results of the analysis showed that for the configuration and cases studied, Tollmien-Schlichting (TS) amplification was the dominating disturbance influencing transition. For these cases, incompressible linear stability theory correlated with an n-factor for TS waves of approximately 10 at transition. The n-factor method correlated rather consistently to this value despite a number of non-ideal conditions which indicates the method is useful as a design tool for advanced laminar flow airfoils.
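The n-factor bookkeeping behind such a correlation can be sketched as an integral of the spatial amplification rate of the dominant TS wave, with transition flagged where N first reaches the correlated value of about 10. The amplification-rate curve below is an invented illustration, not data from the experiment:

```python
import numpy as np

# N(x) = integral of -alpha_i dx, where alpha_i < 0 means the disturbance
# amplifies; the disturbance amplitude grows as A/A0 = exp(N).

x = np.linspace(0.0, 1.0, 501)          # normalized chordwise coordinate
alpha_i = -40.0 * np.sin(np.pi * x)     # assumed (illustrative) growth rate

# Cumulative trapezoidal integration of -alpha_i
N = np.concatenate([[0.0],
                    np.cumsum(-(alpha_i[1:] + alpha_i[:-1]) / 2 * np.diff(x))])

# Predicted transition location: first station where N reaches 10
x_transition = x[np.searchsorted(N, 10.0)]
print(round(x_transition, 3))
```

In design use, the envelope of N over all candidate TS frequencies is compared against the correlated n-factor, so the single assumed mode here is a simplification.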

  7. Time and frequency domain characteristics of detrending-operation-based scaling analysis: Exact DFA and DMA frequency responses

    NASA Astrophysics Data System (ADS)

    Kiyono, Ken; Tsujimoto, Yutaka

    2016-07-01

    We develop a general framework to study the time and frequency domain characteristics of detrending-operation-based scaling analysis methods, such as detrended fluctuation analysis (DFA) and detrending moving average (DMA) analysis. In this framework, using either the time or frequency domain approach, the frequency responses of detrending operations are calculated analytically. Although the frequency domain approach based on conventional linear analysis techniques is only applicable to linear detrending operations, the time domain approach presented here is applicable to both linear and nonlinear detrending operations. Furthermore, using the relationship between the time and frequency domain representations of the frequency responses, the frequency domain characteristics of nonlinear detrending operations can be obtained. Based on the calculated frequency responses, it is possible to establish a direct connection between the root-mean-square deviation of the detrending-operation-based scaling analysis and the power spectrum for linear stochastic processes. Here, by applying our methods to DFA and DMA, including higher-order cases, exact frequency responses are calculated. In addition, we analytically investigate the cutoff frequencies of DFA and DMA detrending operations and show that these frequencies are not optimally adjusted to coincide with the corresponding time scale.

  8. Time and frequency domain characteristics of detrending-operation-based scaling analysis: Exact DFA and DMA frequency responses.

    PubMed

    Kiyono, Ken; Tsujimoto, Yutaka

    2016-07-01

    We develop a general framework to study the time and frequency domain characteristics of detrending-operation-based scaling analysis methods, such as detrended fluctuation analysis (DFA) and detrending moving average (DMA) analysis. In this framework, using either the time or frequency domain approach, the frequency responses of detrending operations are calculated analytically. Although the frequency domain approach based on conventional linear analysis techniques is only applicable to linear detrending operations, the time domain approach presented here is applicable to both linear and nonlinear detrending operations. Furthermore, using the relationship between the time and frequency domain representations of the frequency responses, the frequency domain characteristics of nonlinear detrending operations can be obtained. Based on the calculated frequency responses, it is possible to establish a direct connection between the root-mean-square deviation of the detrending-operation-based scaling analysis and the power spectrum for linear stochastic processes. Here, by applying our methods to DFA and DMA, including higher-order cases, exact frequency responses are calculated. In addition, we analytically investigate the cutoff frequencies of DFA and DMA detrending operations and show that these frequencies are not optimally adjusted to coincide with the corresponding time scale.
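The linear detrending operation whose frequency response the paper analyzes can be made concrete with a minimal first-order DFA (DFA1) implementation; for white noise the fluctuation function F(n) should scale roughly as n^0.5 (scaling exponent alpha near 0.5). The signal length and scale choices below are illustrative:

```python
import numpy as np

def dfa(x, scales):
    """First-order DFA: integrate, split into windows of size n, remove a
    linear trend per window, and return the RMS fluctuation F(n)."""
    y = np.cumsum(x - np.mean(x))           # integrated profile
    F = []
    for n in scales:
        nseg = len(y) // n
        segs = y[:nseg * n].reshape(nseg, n)
        t = np.arange(n)
        rms2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)    # linear detrending per window
            rms2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms2)))
    return np.array(F)

rng = np.random.default_rng(2)
x = rng.normal(size=2 ** 14)                # white noise test signal
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(round(alpha, 2))
```

The paper's frequency-domain analysis explains exactly how this windowed linear detrend acts as a filter, and why the effective cutoff of that filter is not perfectly aligned with the window size n.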

  9. An evaluation of bias in propensity score-adjusted non-linear regression models.

    PubMed

    Wan, Fei; Mitra, Nandita

    2018-03-01

    Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not for the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.
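The non-collapsibility of the odds ratio, the property the framework above ties to this bias, can be shown with a closed-form sketch: even with a randomized (unconfounded) treatment, the marginal odds ratio differs from the conditional one. The logistic-model coefficients below are illustrative:

```python
import numpy as np

expit = lambda z: 1.0 / (1.0 + np.exp(-z))

# logit P(Y=1) = b0 + bT*T + bX*X, with X ~ Bernoulli(0.5) independent of T
b0, bT, bX = -1.0, 1.5, 3.0      # intercept, treatment, covariate effects

def marginal_prob(t):
    """P(Y=1 | T=t) averaged over the covariate distribution."""
    return 0.5 * expit(b0 + bT * t) + 0.5 * expit(b0 + bT * t + bX)

p0, p1 = marginal_prob(0), marginal_prob(1)
or_marginal = (p1 / (1 - p1)) / (p0 / (1 - p0))
or_conditional = np.exp(bT)      # identical within every X stratum

print(round(or_conditional, 3), round(or_marginal, 3))
```

Omitting the covariate (or, per the paper, the remainder term left over after adjusting on the propensity score) pulls the estimated odds ratio toward the smaller marginal value; the rate ratio, being collapsible, shows no such gap.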

  10. Study on the characters of control valve for ammonia injection in selective catalytic reduction (SCR) system of coal-fired power plant

    NASA Astrophysics Data System (ADS)

    Yao, Che; Li, Tao; Zhang, Hong; Zhou, Yanming

    2017-08-01

    In this paper, the characteristics of two control valves used for ammonia injection in the SCR system are discussed. The linear/quadratic relationships between pressure drop/outlet flow rate and valve opening/inlet dynamic pressure are investigated using computational fluid dynamics (CFD) and response surface analysis (RSA) methods. The results show that the linearity of the brake valve is significantly better than that of the butterfly valve, which means that the brake valve is more suitable for ammonia injection adjustment than the butterfly valve.

  11. Partner symmetries and non-invariant solutions of four-dimensional heavenly equations

    NASA Astrophysics Data System (ADS)

    Malykh, A. A.; Nutku, Y.; Sheftel, M. B.

    2004-07-01

    We extend our method of partner symmetries to the hyperbolic complex Monge-Ampère equation and the second heavenly equation of Plebański. We show the existence of partner symmetries and derive the relations between them. For certain simple choices of partner symmetries the resulting differential constraints together with the original heavenly equations are transformed to systems of linear equations by an appropriate Legendre transformation. The solutions of these linear equations are generically non-invariant. As a consequence we obtain explicitly new classes of heavenly metrics without Killing vectors.

  12. Some comments on Anderson and Pospahala's correction of bias in line transect sampling

    USGS Publications Warehouse

    Anderson, D.R.; Burnham, K.P.; Chain, B.R.

    1980-01-01

    ANDERSON and POSPAHALA (1970) investigated the estimation of wildlife population size using the belt or line transect sampling method and devised a correction for bias, thus leading to an estimator with interesting characteristics. This work was given a uniform mathematical framework in BURNHAM and ANDERSON (1976). In this paper we show that the ANDERSON-POSPAHALA estimator is optimal in the sense of being the (unique) best linear unbiased estimator within the class of estimators which are linear combinations of cell frequencies, provided certain assumptions are met.

  13. Probabilistic and Possibilistic Analyses of the Strength of a Bonded Joint

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Krishnamurthy, T.; Smith, Steven A.

    2001-01-01

    The effects of uncertainties on the strength of a single lap shear joint are examined. Probabilistic and possibilistic methods are used to account for uncertainties. Linear and geometrically nonlinear finite element analyses are used in the studies. To evaluate the strength of the joint, fracture in the adhesive and material strength failure in the strap are considered. The study shows that linear analyses yield conservative predictions for failure loads. The possibilistic approach for treating uncertainties appears to be viable for preliminary design, but with several qualifications.

  14. A fast linearized conservative finite element method for the strongly coupled nonlinear fractional Schrödinger equations

    NASA Astrophysics Data System (ADS)

    Li, Meng; Gu, Xian-Ming; Huang, Chengming; Fei, Mingfa; Zhang, Guoyu

    2018-04-01

    In this paper, a fast linearized conservative finite element method is studied for solving the strongly coupled nonlinear fractional Schrödinger equations. We prove that the scheme preserves both the mass and energy, which are defined by virtue of some recursion relationships. Using the Sobolev inequalities and then employing mathematical induction, the discrete scheme is proved to be unconditionally convergent in the sense of the L2-norm and the H^(α/2)-norm, which means that there are no constraints on the grid ratios. Then, a priori bounds for the discrete solution in the L2-norm and L∞-norm are also obtained. Moreover, we propose an iterative algorithm, by which the coefficient matrix is independent of the time level, and thus it leads to Toeplitz-like linear systems that can be efficiently solved by Krylov subspace solvers with circulant preconditioners. This method reduces the memory requirement of the proposed linearized finite element scheme from O(M²) to O(M) and the computational complexity from O(M³) to O(M log M) in each iterative step, where M is the number of grid nodes. Finally, numerical results are presented to verify the correctness of the theoretical analysis, simulate the collision of two solitary waves, and show the utility of the fast numerical solution techniques.
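    The Toeplitz-plus-circulant-preconditioner idea can be sketched as follows; the matrix entries below are illustrative stand-ins rather than the paper's fractional discretisation, and a Strang-type circulant preconditioner (inverted in O(M log M) with the FFT) is supplied to SciPy's GMRES:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, gmres

M = 64
# First column of a symmetric Toeplitz matrix with slowly decaying
# off-diagonals (illustrative values, not the paper's actual scheme).
c = 2.0 / (1.0 + np.arange(M)) ** 1.5
c[0] = 4.0
A = toeplitz(c)

# Strang circulant preconditioner: keep the central diagonals of A and
# wrap them around, so the circulant C is diagonalised by the FFT.
s = np.array([c[k] if k <= M // 2 else c[M - k] for k in range(M)])
lam = np.fft.fft(s)  # eigenvalues of the circulant C

def apply_C_inverse(x):
    # Solve C z = x in O(M log M) via the FFT.
    return np.real(np.fft.ifft(np.fft.fft(x) / lam))

precond = LinearOperator((M, M), matvec=apply_C_inverse)
b = np.ones(M)
x, info = gmres(A, b, M=precond)
```

The preconditioner is applied matrix-free through a `LinearOperator`, so no O(M²) storage is needed for it.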

  15. An optimally weighted estimator of the linear power spectrum disentangling the growth of density perturbations across galaxy surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorini, D., E-mail: sorini@mpia-hd.mpg.de

    2017-04-01

    Measuring the clustering of galaxies from surveys allows us to estimate the power spectrum of matter density fluctuations, thus constraining cosmological models. This requires careful modelling of observational effects to avoid misinterpretation of data. In particular, signals coming from different distances encode information from different epochs. This is known as the ''light-cone effect'' and it will have a greater impact as upcoming galaxy surveys probe larger redshift ranges. Generalising the method of Feldman, Kaiser and Peacock (1994) [1], I define a minimum-variance estimator of the linear power spectrum at a fixed time, properly taking into account the light-cone effect. An analytic expression for the estimator is provided, and it is consistent with the findings of previous works in the literature. I test the method within the context of the Halofit model, assuming Planck 2014 cosmological parameters [2]. I show that the estimator presented recovers the fiducial linear power spectrum at present time within 5% accuracy up to k ∼ 0.80 h Mpc⁻¹ and within 10% up to k ∼ 0.94 h Mpc⁻¹, well into the non-linear regime of the growth of density perturbations. As such, the method could be useful in the analysis of the data from future large-scale surveys, like Euclid.

  16. Weibull Modulus Estimated by the Non-linear Least Squares Method: A Solution to Deviation Occurring in Traditional Weibull Estimation

    NASA Astrophysics Data System (ADS)

    Li, T.; Griffiths, W. D.; Chen, J.

    2017-11-01

    The Maximum Likelihood method and the Linear Least Squares (LLS) method have been widely used to estimate Weibull parameters for the reliability of brittle and metal materials. In the last 30 years, many researchers have focused on the bias of Weibull modulus estimation, and some improvements have been achieved, especially in the case of the LLS method. However, these methods fall short for a specific type of data, where the lower tail deviates dramatically from the well-known linear fit in a classic LLS Weibull analysis. This deviation is commonly found in measured material properties, and previous applications of the LLS method to this kind of dataset produce an unreliable linear regression. The deviation was previously attributed to physical flaws (i.e., defects) contained in materials. However, this paper demonstrates that it can also be caused by the linear transformation of the Weibull function that occurs in the traditional LLS method. Accordingly, it may not be appropriate to carry out a Weibull analysis using the linearized Weibull function, and the Non-linear Least Squares method (Non-LS) is instead recommended for the Weibull modulus estimation of casting properties.
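    The contrast between the linearized and non-linear fits can be sketched on synthetic strength data; the modulus, scale, sample size and median-rank probability estimator below are assumed for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
m_true, s_true = 8.0, 300.0            # assumed Weibull modulus and scale
strengths = np.sort(s_true * rng.weibull(m_true, 50))

# Empirical failure probabilities via the median-rank estimator.
n = strengths.size
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)

# 1) Traditional linearized LLS fit: ln(-ln(1 - F)) = m ln(x) - m ln(s).
m_lls, _ = np.polyfit(np.log(strengths), np.log(-np.log(1.0 - F)), 1)

# 2) Non-linear least squares directly on the (untransformed) Weibull CDF.
def weibull_cdf(x, m, s):
    return 1.0 - np.exp(-(x / s) ** m)

(m_nls, s_nls), _ = curve_fit(weibull_cdf, strengths, F,
                              p0=(5.0, float(np.median(strengths))))
```

The log-log transformation in step 1 amplifies the influence of the lowest-strength points, which is the distortion the abstract describes; step 2 weights all points on the original scale.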

  17. Non-linear principal component analysis applied to Lorenz models and to North Atlantic SLP

    NASA Astrophysics Data System (ADS)

    Russo, A.; Trigo, R. M.

    2003-04-01

    A non-linear generalisation of Principal Component Analysis (PCA), denoted Non-Linear Principal Component Analysis (NLPCA), is introduced and applied to the analysis of three data sets. Non-Linear Principal Component Analysis allows for the detection and characterisation of low-dimensional non-linear structure in multivariate data sets. This method is implemented using a 5-layer feed-forward neural network introduced originally in the chemical engineering literature (Kramer, 1991). The method is described and details of its implementation are addressed. Non-Linear Principal Component Analysis is first applied to a data set sampled from the Lorenz attractor (1963). It is found that the NLPCA approximations are more representative of the data than are the corresponding PCA approximations. The same methodology was applied to the lesser-known Lorenz attractor (1984). However, the results obtained were not as good as those attained with the famous 'Butterfly' attractor. Further work with this model is underway in order to assess whether NLPCA techniques can be more representative of the data characteristics than are the corresponding PCA approximations. The application of NLPCA to relatively 'simple' dynamical systems, such as those proposed by Lorenz, is well understood. However, the application of NLPCA to a large climatic data set is much more challenging. Here, we have applied NLPCA to the sea level pressure (SLP) field for the entire North Atlantic area, and the results show a slight increase in the explained variance. Finally, directions for future work are presented.
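    A minimal stand-in for the bottleneck-network idea, using scikit-learn's MLPRegressor as a shallow autoencoder rather than Kramer's exact 5-layer architecture; the curved toy data set and network sizes are assumptions for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Points on a curved (parabolic) 1-D manifold in 2-D, plus small noise:
# a case where a single linear principal component fits poorly.
t = rng.uniform(-1.0, 1.0, 400)
X = np.column_stack([t, t ** 2]) + 0.02 * rng.normal(size=(400, 2))

# Linear baseline: reconstruction from one principal component.
pca = PCA(n_components=1).fit(X)
err_pca = np.mean((X - pca.inverse_transform(pca.transform(X))) ** 2)

# NLPCA sketch: a network trained to reproduce its input through a
# one-unit bottleneck (a shallow stand-in for Kramer's 5-layer network).
ae = MLPRegressor(hidden_layer_sizes=(16, 1, 16), activation="tanh",
                  max_iter=2000, random_state=0)
ae.fit(X, X)
err_nlpca = np.mean((X - ae.predict(X)) ** 2)
```

The bottleneck activation plays the role of the non-linear principal component; comparing the two reconstruction errors mirrors the PCA-versus-NLPCA comparison in the abstract.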

  18. The Use of Sparse Direct Solver in Vector Finite Element Modeling for Calculating Two Dimensional (2-D) Magnetotelluric Responses in Transverse Electric (TE) Mode

    NASA Astrophysics Data System (ADS)

    Yihaa Roodhiyah, Lisa’; Tjong, Tiffany; Nurhasan; Sutarno, D.

    2018-04-01

    In previous research, the linear matrices arising in vector finite element modelling of two-dimensional (2-D) magnetotelluric (MT) responses were solved with a non-sparse direct solver in TE mode. Nevertheless, that approach has weaknesses that need to be addressed, notably insufficient accuracy at low frequencies (10⁻³ Hz-10⁻⁵ Hz) and a high computational cost on dense meshes. In this work, a sparse direct solver is used instead of a non-sparse direct solver to overcome these weaknesses. A sparse direct solver is advantageous for the linear matrices of the vector finite element method because the matrices are symmetric and sparse. The sparse direct solver was validated for a homogeneous half-space model and a vertical contact model against analytical solutions. The validation results show that the sparse direct solver is more stable than the non-sparse direct solver in computing the linear problem of the vector finite element method, especially at low frequencies. As a result, accurate 2-D MT responses at low frequencies (10⁻³ Hz-10⁻⁵ Hz) were obtained with efficient memory allocation and reduced computational time.
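    The advantage of a sparse factorisation can be sketched on a banded stand-in system (illustrative, not the paper's vector FEM matrices):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

# Symmetric, sparse, banded system of the kind produced by 1-D finite
# element assembly (a stand-in for the paper's vector FEM matrices).
n = 2000
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# A sparse LU factorisation exploits the sparsity pattern: storage and
# work stay near O(n) here, versus O(n^2) memory for a dense factor.
lu = splu(A)
x = lu.solve(b)
```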

  19. Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment

    NASA Astrophysics Data System (ADS)

    Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty

    2017-12-01

    Support vector machine (SVM) is a popular classification method known to have strong generalization capabilities. SVM can address both classification and regression problems, with linear or nonlinear kernels. However, SVM also has a weakness: it is difficult to determine the optimal parameter values. SVM computes the best linear separator in the input feature space according to the training data. To classify data which are not linearly separable, SVM uses a kernel trick to transform the data into a linearly separable form in a higher-dimensional feature space. The kernel trick uses various kinds of kernel functions, such as linear, polynomial, radial basis function (RBF) and sigmoid kernels. Each function has parameters which affect the accuracy of SVM classification. To solve this problem, genetic algorithms are proposed as the search algorithm for the optimal parameter values, thus increasing the best classification accuracy of SVM. Data were taken from the UCI repository of machine learning databases: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms have been shown to be effective in systematically finding optimal kernel parameters for SVM, instead of randomly selected kernel parameters. The best accuracies were improved over the baseline values of 85.12% (linear kernel), 81.76% (polynomial), 77.22% (RBF) and 78.70% (sigmoid). However, for larger data sets this method is not practical because it takes a lot of time.
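    A toy version of the GA-over-kernel-parameters loop might look like the sketch below; the data set is a synthetic stand-in for the Australian Credit Approval data, and the GA operators (truncation selection, blend crossover, Gaussian mutation) and population sizes are illustrative choices:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Synthetic stand-in; the paper uses the UCI Australian Credit Approval set.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

def fitness(genes):
    # Genes encode log10(C) and log10(gamma) for the RBF kernel.
    C, gamma = 10.0 ** genes
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

# Minimal genetic algorithm: truncation selection, blend crossover,
# Gaussian mutation (a toy version of the paper's GA).
pop = rng.uniform([-2.0, -4.0], [3.0, 1.0], size=(12, 2))
for generation in range(8):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-6:]]            # keep the best half
    children = [0.5 * (parents[i] + parents[j]) + rng.normal(0.0, 0.3, 2)
                for i, j in rng.integers(6, size=(6, 2))]
    pop = np.vstack([parents, children])

scores = np.array([fitness(g) for g in pop])
best_C, best_gamma = 10.0 ** pop[np.argmax(scores)]
best_accuracy = float(scores.max())
```

Searching on a log scale is the usual choice for C and gamma, since useful values span several orders of magnitude.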

  20. Three-dimensional inversion of multisource array electromagnetic data

    NASA Astrophysics Data System (ADS)

    Tartaras, Efthimios

    Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this effect, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications. I have developed such an inversion scheme using the scalar and diagonal LQL approximation. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these problems is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. 
I have also applied the method to helicopter-borne EM data collected by INCO Exploration over the Voisey's Bay area in Labrador, Canada. The results of the 3-D inversion successfully delineate the shallow massive sulfides and show that the method can produce reasonable results even in areas of complex geology and large resistivity contrasts.

  1. A fast method to compute Three-Dimensional Infrared Radiative Transfer in non scattering medium

    NASA Astrophysics Data System (ADS)

    Makke, Laurent; Musson-Genon, Luc; Carissimo, Bertrand

    2014-05-01

    The field of atmospheric radiation has seen the development of ever more accurate and faster methods to take absorption in participating media into account. Radiative fog appears under clear-sky conditions due to significant cooling during the night, so scattering is left out. Fog formation modelling requires a sufficiently accurate method to compute cooling rates. Thanks to High Performance Computing, a multi-spectral approach to solving the Radiative Transfer Equation (RTE) is most often used. Nevertheless, coupling three-dimensional radiative transfer with fluid dynamics is very detrimental to the computational cost. To reduce the time spent in radiation calculations, the following method uses the analytical absorption functions fitted by Sasamori (1968) to Yamamoto's charts (Yamamoto, 1956) to compute a local linear absorption coefficient. By averaging radiative properties, this method eliminates the spectral integration. For an isothermal atmosphere, analytical calculations lead to an explicit formula relating the emissivity functions to the linear absorption coefficient. In the cooling-to-space approximation, this analytical expression gives very accurate results compared to a correlated k-distribution. For non-homogeneous paths, we propose a two-step algorithm: one-dimensional radiative quantities and the linear absorption coefficient are computed by a two-flux method, and then the three-dimensional RTE under the grey medium assumption is solved with the DOM. Comparisons with measurements of radiative quantities during the ParisFOG field campaign (2006) show the capability of this method to handle strong vertical variations of pressure, temperature and gas concentrations.

  2. Validation of a 3D CT method for measurement of linear wear of acetabular cups

    PubMed Central

    2011-01-01

    Background We evaluated the accuracy and repeatability of a 3D method for polyethylene acetabular cup wear measurements using computed tomography (CT). We propose that the method be used for clinical in vivo assessment of wear in acetabular cups. Material and methods Ultra-high molecular weight polyethylene cups with a titanium mesh molded on the outside were subjected to wear using a hip simulator. Before and after wear, they were (1) imaged with a CT scanner using a phantom model device, (2) measured using a coordinate measurement machine (CMM), and (3) weighed. CMM was used as the reference method for measurement of femoral head penetration into the cup and for comparison with CT, and gravimetric measurements were used as a reference for both CT and CMM. Femoral head penetration and wear vector angle were studied. The head diameters were also measured with both CMM and CT. The repeatability of the method proposed was evaluated with two repeated measurements using different positions of the phantom in the CT scanner. Results The accuracy of the 3D CT method for evaluation of linear wear was 0.51 mm and the repeatability was 0.39 mm. Repeatability for wear vector angle was 17°. Interpretation This study of metal-meshed hip-simulated acetabular cups shows that CT has the capacity for reliable measurement of linear wear of acetabular cups at a clinically relevant level of accuracy. PMID:21281259

  3. A Multiphysics Finite Element and Peridynamics Model of Dielectric Breakdown

    DTIC Science & Technology

    2017-09-01

    A method for simulating dielectric breakdown in solid materials is presented that couples electro-quasi-statics, the adiabatic heat equation, and...temperatures or high strains. The Kelvin force computation used in the method is verified against a 1-D solution and the linearization scheme used to treat the...plane problems, a 2-D composite capacitor with a conductive flaw, and a 3-D point–plane problem. The results show that the method is capable of

  4. Communication: Analysing kinetic transition networks for rare events.

    PubMed

    Stevenson, Jacob D; Wales, David J

    2014-07-28

    The graph transformation approach is a recently proposed method for computing mean first passage times, rates, and committor probabilities for kinetic transition networks. Here we compare the performance to existing linear algebra methods, focusing on large, sparse networks. We show that graph transformation provides a much more robust framework, succeeding when numerical precision issues cause the other methods to fail completely. These are precisely the situations that correspond to rare event dynamics for which the graph transformation was introduced.
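    The linear algebra route that graph transformation is compared against can be sketched for a tiny network: mean first passage times to a target state follow from one linear solve (the transition matrix below is hypothetical):

```python
import numpy as np

# Transition matrix of a small discrete-time kinetic network; state 3 is
# the absorbing "product" state (illustrative numbers).
P = np.array([[0.90, 0.08, 0.02, 0.00],
              [0.05, 0.90, 0.04, 0.01],
              [0.01, 0.05, 0.90, 0.04],
              [0.00, 0.00, 0.00, 1.00]])

# Mean first passage times from the transient states solve the linear
# system (I - Q) t = 1, with Q the transient-to-transient block of P.
Q = P[:3, :3]
t_mfpt = np.linalg.solve(np.eye(3) - Q, np.ones(3))
```

For rare events the diagonal of Q approaches 1, so I - Q becomes nearly singular; this is the numerical-precision failure mode that graph transformation avoids.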

  5. Functional Techniques for Data Analysis

    NASA Technical Reports Server (NTRS)

    Tomlinson, John R.

    1997-01-01

    This dissertation develops a new general method of solving Prony's problem. Two special cases of this new method have been developed previously: the Matrix Pencil and Osculatory Interpolation methods. The dissertation shows that they are instances of a more general solution type which allows a wide-ranging class of linear functionals to be used in the solution of the problem. This class provides a continuum of functionals, yielding new methods that can be used to solve Prony's problem.
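    Classical Prony fitting, the point of departure for these generalisations, can be sketched in a few lines; the sampled two-exponential signal below is assumed for illustration:

```python
import numpy as np

# Samples of f(t) = 2 e^{-0.5 t} + e^{-2 t} on a uniform grid (exact data).
dt = 0.1
N = 20
tgrid = dt * np.arange(N)
f = 2.0 * np.exp(-0.5 * tgrid) + np.exp(-2.0 * tgrid)

p = 2  # number of exponential terms sought
# Step 1: linear prediction -- f[n] = c0 f[n-1] + c1 f[n-2], least squares.
A = np.column_stack([f[p - 1 - k: N - 1 - k] for k in range(p)])
c = np.linalg.lstsq(A, f[p:], rcond=None)[0]

# Step 2: roots of the characteristic polynomial give the exponents.
z = np.roots(np.concatenate(([1.0], -c)))
rates = np.log(z) / dt

# Step 3: amplitudes from a Vandermonde least-squares fit.
V = z[None, :] ** np.arange(N)[:, None]
amps = np.linalg.lstsq(V, f, rcond=None)[0]
```

On noise-free data this recovers the rates -0.5 and -2.0 and the amplitudes 2 and 1 to machine precision; the matrix pencil variant replaces steps 1-2 with a generalized eigenvalue problem.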

  6. Modeling vibration response and damping of cables and cabled structures

    NASA Astrophysics Data System (ADS)

    Spak, Kaitlin S.; Agnes, Gregory S.; Inman, Daniel J.

    2015-02-01

    In an effort to model the vibration response of cabled structures, the distributed transfer function method is developed to model cables and a simple cabled structure. The model includes shear effects, tension, and hysteretic damping for modeling of helical stranded cables, and includes a method for modeling cable attachment points using both linear and rotational damping and stiffness. The damped cable model shows agreement with experimental data for four types of stranded cables, and the damped cabled beam model shows agreement with experimental data for the cables attached to a beam structure, as well as improvement over the distributed mass method for cabled structure modeling.

  7. Biotransformation of lignan glycoside to its aglycone by Woodfordia fruticosa flowers: quantification of compounds using a validated HPTLC method.

    PubMed

    Mishra, Shikha; Aeri, Vidhu

    2017-12-01

    Saraca asoca Linn. (Caesalpiniaceae) is an important traditional remedy for gynaecological disorders and it contains lyoniside, an aryl tetralin lignan glycoside. The aglycone of lyoniside, lyoniresinol, possesses structural similarity to enterolignan precursors, which are established phytoestrogens. This work illustrates the biotransformation of lyoniside to lyoniresinol using Woodfordia fruticosa Kurz. (Lythraceae) flowers and the simultaneous quantification of lyoniside and lyoniresinol using a validated HPTLC method. The aqueous extract prepared from S. asoca bark was fermented using W. fruticosa flowers. The substrate and the fermented product were both analyzed simultaneously using the solvent system toluene:ethyl acetate:formic acid (4:3:0.4) at 254 nm. The method was validated for specificity, accuracy, precision, linearity, sensitivity and robustness as per ICH guidelines. The substrate showed the presence of lyoniside; however, it decreased as the fermentation proceeded. On the 3rd day, lyoniresinol started appearing in the medium, and within 8 days most of the lyoniside had been converted to lyoniresinol. The developed method was specific for lyoniside and lyoniresinol. Lyoniside and lyoniresinol showed linearity in the ranges of 250-3000 and 500-2500 ng, respectively. The method was accurate, with recoveries of 99.84% and 99.83%, respectively, for lyoniside and lyoniresinol. The aryl tetralin lignan glycoside lyoniside was successfully transformed into lyoniresinol using W. fruticosa flowers, and their contents were simultaneously analyzed using the developed, validated HPTLC method.

  8. Parallel-vector computation for linear structural analysis and non-linear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.

    1991-01-01

    Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.

  9. A spectral measurement method for determining white OLED average junction temperatures

    NASA Astrophysics Data System (ADS)

    Zhu, Yiting; Narendran, Nadarajah

    2016-09-01

    The objective of this study was to investigate an indirect method of measuring the average junction temperature of a white organic light-emitting diode (OLED) based on temperature sensitivity differences in the radiant power emitted by individual emitter materials (i.e., "blue," "green," and "red"). The measured spectral power distributions (SPDs) of the white OLED as a function of temperature showed amplitude decrease as a function of temperature in the different spectral bands, red, green, and blue. Analyzed data showed a good linear correlation between the integrated radiance for each spectral band and the OLED panel temperature, measured at a reference point on the back surface of the panel. The integrated radiance ratio of the spectral band green compared to red, (G/R), correlates linearly with panel temperature. Assuming that the panel reference point temperature is proportional to the average junction temperature of the OLED panel, the G/R ratio can be used for estimating the average junction temperature of an OLED panel.
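    The inversion of a G/R-versus-temperature calibration can be sketched as follows; all numbers are assumed for illustration, not measured values from the study:

```python
import numpy as np

# Hypothetical calibration data: panel temperature (deg C) versus the
# green/red integrated-radiance ratio G/R (all values assumed).
temp = np.array([25.0, 35.0, 45.0, 55.0, 65.0])
g_over_r = np.array([1.20, 1.15, 1.10, 1.05, 1.00])

# Linear calibration G/R = a * T + b, then inverted to estimate the
# average junction temperature from a measured band ratio.
a, b = np.polyfit(temp, g_over_r, 1)

def temperature_from_ratio(ratio):
    return (ratio - b) / a

t_estimate = temperature_from_ratio(1.10)
```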

  10. Detection of coffee flavour ageing by solid-phase microextraction/surface acoustic wave sensor array technique (SPME/SAW).

    PubMed

    Barié, Nicole; Bücking, Mark; Stahl, Ullrich; Rapp, Michael

    2015-06-01

    The use of polymer coated surface acoustic wave (SAW) sensor arrays is a very promising technique for highly sensitive and selective detection of volatile organic compounds (VOCs). We present new developments to achieve a low cost sensor setup with a sampling method enabling the highly reproducible detection of volatiles even in the ppb range. Since the VOCs of coffee are well known from gas chromatography (GC) research studies, the new sensor array was tested for an easily assessable objective: coffee ageing during storage. As a reference method these changes were traced with a standard GC/FID set-up, accompanied by sensory panellists. The evaluation of the GC data showed a non-linear characteristic for single compound concentrations as well as for total peak area values, preventing prediction of the coffee age. In contrast, the new SAW sensor array demonstrates a linear dependency, i.e. it is capable of showing a relationship between volatile concentration and storage time. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Comparing performance of standard and iterative linear unmixing methods for hyperspectral signatures

    NASA Astrophysics Data System (ADS)

    Gault, Travis R.; Jansen, Melissa E.; DeCoster, Mallory E.; Jansing, E. David; Rodriguez, Benjamin M.

    2016-05-01

    Linear unmixing is a method of decomposing a mixed signature to determine the component materials that are present in a sensor's field of view, along with the abundances at which they occur. Linear unmixing assumes that energy from the materials in the field of view is mixed in a linear fashion across the spectrum of interest. Traditional unmixing methods can take advantage of adjacent pixels in the decomposition algorithm, but this is not the case for point sensors. This paper explores several iterative and non-iterative methods for linear unmixing, and examines their effectiveness at identifying the individual signatures that make up simulated single-pixel mixed signatures, along with their corresponding abundances. The major hurdle addressed in the proposed method is that no neighboring pixel information is available for the spectral signature of interest. Testing is performed using two collections of spectral signatures from the Johns Hopkins University Applied Physics Laboratory's Signatures Database software (SigDB): a hand-selected small dataset of 25 distinct signatures, and a larger dataset of approximately 1600 pure visible/near-infrared/short-wave-infrared (VIS/NIR/SWIR) spectra. Simulated spectra are created with three- and four-material mixtures randomly drawn from a dataset originating from SigDB, where the abundance of one material is swept in 10% increments from 10% to 90% with the abundances of the other materials equally divided amongst the remainder. For the smaller dataset of 25 signatures, all combinations of three or four materials are used to create simulated spectra, from which the accuracy of materials returned, as well as the correctness of the abundances, is compared to the inputs. The experiment is expanded to include the signatures from the larger dataset of almost 1600 signatures evaluated using a Monte Carlo scheme with 5000 draws of three or four materials to create the simulated mixed signatures. 
The spectral similarity of the inputs to the output component signatures is calculated using the spectral angle mapper. Results show that iterative methods significantly outperform the traditional methods under the given test conditions.
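    A common non-iterative baseline for this kind of problem is non-negative least squares; a sketch with a random stand-in endmember library (not SigDB data) and an assumed 60/30/10 mixture:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
# Hypothetical endmember library: 6 pure spectra over 50 bands (random
# stand-ins for SigDB signatures).
E = np.abs(rng.normal(size=(50, 6)))

# Simulated mixed pixel: 60/30/10 mixture of materials 0, 2 and 4,
# plus a small noise term.
true_abundances = np.array([0.6, 0.0, 0.3, 0.0, 0.1, 0.0])
mixed = E @ true_abundances + 0.001 * rng.normal(size=50)

# Non-negative least squares enforces physically meaningful abundances;
# the sum-to-one constraint is imposed afterwards by renormalising.
abundances, _ = nnls(E, mixed)
abundances = abundances / abundances.sum()
```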

  12. The use of experimental design in the development of an HPLC-ECD method for the analysis of captopril.

    PubMed

    Khamanga, Sandile M; Walker, Roderick B

    2011-01-15

    An accurate, sensitive and specific high performance liquid chromatography-electrochemical detection (HPLC-ECD) method that was developed and validated for captopril (CPT) is presented. Separation was achieved using a Phenomenex(®) Luna 5 μm (C(18)) column and a mobile phase comprising phosphate buffer (adjusted to pH 3.0) and acetonitrile in a ratio of 70:30 (v/v). Detection was accomplished using a full scan multi-channel ESA Coulometric detector in the "oxidative-screen" mode with the upstream electrode (E(1)) set at +600 mV and the downstream (analytical) electrode (E(2)) set at +950 mV, while the potential of the guard cell was maintained at +1050 mV. The detector gain was set at 300. Experimental design using central composite design (CCD) was used to facilitate method development. Mobile phase pH, molarity and concentration of acetonitrile (ACN) were considered the critical factors to be studied to establish the retention times of CPT and cyclizine (CYC), which was used as the internal standard. Twenty experiments including centre points were undertaken and a quadratic model was derived for the retention time of CPT using the experimental data. The method was validated for linearity, accuracy, precision, and limits of quantitation and detection, as per the ICH guidelines. The system was found to produce sharp and well-resolved peaks for CPT and CYC with retention times of 3.08 and 7.56 min, respectively. Linear regression analysis for the calibration curve showed a good linear relationship with a regression coefficient of 0.978 in the concentration range of 2-70 μg/mL. The linear regression equation was y=0.0131x+0.0275. The limits of quantitation (LOQ) and detection (LOD) were found to be 2.27 and 0.6 μg/mL, respectively. The method was used to analyze CPT in tablets. 
The wide range for linearity, accuracy, sensitivity, short retention time and composition of the mobile phase indicated that this method is better for the quantification of CPT than the pharmacopoeial methods. Copyright © 2010 Elsevier B.V. All rights reserved.
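    Limit calculations of this kind can be sketched from a calibration line; the responses below are simulated around the reported fit y=0.0131x+0.0275 with an assumed noise level, and the factors 3.3 and 10 follow the ICH Q2 residual-standard-deviation convention:

```python
import numpy as np

# Simulated responses around the reported calibration y = 0.0131x + 0.0275
# (x in ug/mL); the noise level is assumed for illustration.
rng = np.random.default_rng(4)
conc = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 70.0])
resp = 0.0131 * conc + 0.0275 + rng.normal(0.0, 0.003, conc.size)

slope, intercept = np.polyfit(conc, resp, 1)
resid_sd = np.std(resp - (slope * conc + intercept), ddof=2)

# ICH Q2-style estimates based on residual standard deviation and slope.
lod = 3.3 * resid_sd / slope
loq = 10.0 * resid_sd / slope
```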

  13. Linear Mechanisms and Pressure Fluctuations in Wall Turbulence

    NASA Astrophysics Data System (ADS)

    Septham, Kamthon; Morrison, Jonathan

    2014-11-01

    Full-domain, linear feedback control of turbulent channel flow at Reτ ≤ 400 via vU' at low wavenumbers is an effective method to attenuate turbulent channel flow such that it is relaminarised. The passivity-based control approach is adopted and explained by the conservative characteristics of the nonlinear terms contributing to the Reynolds-Orr equation (Sharma et al., Phys. Fluids, 2011). The linear forcing acts on the wall-normal velocity field and thus the pressure field via the linear (rapid) source term of the Poisson equation for pressure fluctuations, 2U'∂v/∂x. The minimum required spanwise wavelength resolution without losing control is constant at λz+ = 125, based on the wall friction velocity at t = 0. The result shows that the maximum forcing is located at y+ ~ 20, corresponding to the location of the maximum in the mean-square pressure gradient. The effectiveness of linear control is qualitatively explained by Landahl's theory for timescales, in that the control proceeds via the shear interaction timescale, which is much shorter than both the nonlinear and viscous timescales. The response of the rapid (linear) and slow (nonlinear) pressure fluctuations to the linear control is examined and discussed.

  14. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.

    PubMed

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-03-15

    Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models and written under the R language. This semiparametric model is indeed flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models, as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.
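    Cox model fitting is normally done with dedicated packages (e.g. R's coxph or Python's lifelines); as a self-contained sketch, the Cox partial likelihood for a single covariate can be maximised directly. The simulated durations, the true coefficient and the no-censoring/no-ties simplifications below are all assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
n = 300
x = rng.normal(size=n)                   # one covariate per subject
beta_true = 0.7                          # assumed true log-hazard ratio
# Exponential durations whose hazard is scaled by exp(beta * x);
# no censoring, to keep the sketch short.
durations = rng.exponential(1.0 / np.exp(beta_true * x))

order = np.argsort(durations)
durations, x = durations[order], x[order]

def neg_partial_loglik(beta):
    # Cox partial likelihood (no ties): sum over events of
    # beta*x_i - log(sum of exp(beta*x_j) over subjects still at risk).
    risk = np.exp(beta * x)
    risk_set_sums = np.cumsum(risk[::-1])[::-1]
    return -np.sum(beta * x - np.log(risk_set_sums))

beta_hat = minimize_scalar(neg_partial_loglik, bounds=(-3.0, 3.0),
                           method="bounded").x
```

Note that no baseline hazard is estimated: this semiparametric trait is what makes the Cox model flexible enough for skewed, nonnegative duration data.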

  15. Kinetic water-bag model of global collisional drift waves and ion temperature gradient instabilities in cylindrical geometry

    NASA Astrophysics Data System (ADS)

    Gravier, E.; Plaut, E.

    2013-04-01

    Collisional drift waves and ion temperature gradient (ITG) instabilities are studied using a linear water-bag kinetic model [P. Morel et al., Phys. Plasmas 14, 112109 (2007)]. An efficient spectral method, already validated in the case of drift-wave instabilities [E. Gravier et al., Eur. Phys. J. D 67, 7 (2013)], allows fast solution of the global linear problem in cylindrical geometry. The comparison between the linear ITG instability properties thus computed and those measured in the COLUMBIA experiment [R. G. Greaves et al., Plasma Phys. Controlled Fusion 34, 1253 (1992)] shows a qualitative agreement. Moreover, the transition between collisional drift waves and ITG instabilities is studied theoretically as a function of the ion temperature profile.

  16. Nonlinear robust control of hypersonic aircrafts with interactions between flight dynamics and propulsion systems.

    PubMed

    Li, Zhaoying; Zhou, Wenjie; Liu, Hao

    2016-09-01

    This paper addresses the nonlinear robust tracking controller design problem for hypersonic vehicles. This problem is challenging due to strong coupling between the aerodynamics and the propulsion system, and the uncertainties involved in the vehicle dynamics, including parametric uncertainties, unmodeled dynamics, and external disturbances. By utilizing the feedback linearization technique, a linear tracking error system is established with prescribed references. For the linear model, a robust controller is proposed based on the signal compensation theory to guarantee that the tracking error dynamics is robustly stable. Numerical simulation results are given to show the advantages of the proposed nonlinear robust control method, compared to the robust loop-shaping control approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
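
    The feedback-linearization step can be illustrated on a toy scalar plant (a hedged sketch under assumed dynamics and gains, not the paper's hypersonic-vehicle model): for x' = x² + u, the control u = -x² + r' - k(x - r) cancels the nonlinearity, leaving linear error dynamics e' = -k e.

```python
# Feedback linearization on the scalar plant x' = x**2 + u, tracking
# r(t) = sin(t). The control cancels the x**2 nonlinearity and injects
# linear error feedback, so the tracking error decays exponentially.
import math

k = 5.0       # error-feedback gain (assumed value for the sketch)
dt = 1e-3     # forward-Euler integration step
x = 2.0       # initial state (initial tracking error = 2)

def r(t):
    return math.sin(t)

def r_dot(t):
    return math.cos(t)

for step in range(int(10.0 / dt)):
    t = step * dt
    u = -x**2 + r_dot(t) - k * (x - r(t))  # feedback-linearizing control law
    x += dt * (x**2 + u)                    # plant dynamics x' = x^2 + u

final_error = abs(x - r(10.0))
print(f"tracking error after 10 s: {final_error:.2e}")
```

    In the paper this cancellation is performed for the full coupled flight/propulsion dynamics, and the signal-compensation robust controller then handles the uncertainty left over by imperfect cancellation.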

  17. Life cycle cost optimization of biofuel supply chains under uncertainties based on interval linear programming.

    PubMed

    Ren, Jingzheng; Dong, Liang; Sun, Lu; Goodsite, Michael Evan; Tan, Shiyu; Dong, Lichun

    2015-01-01

    The aim of this work was to develop a model for optimizing the life cycle cost of a biofuel supply chain under uncertainties. Multiple agriculture zones, multiple transportation modes for the transport of grain and biofuel, multiple biofuel plants, and multiple market centers were considered in this model, and the price of the resources, the yield of grain and the market demands were regarded as interval numbers instead of constants. An interval linear programming model was developed, and a method for solving interval linear programming was presented. An illustrative case was studied by the proposed model, and the results showed that the proposed model is feasible for designing biofuel supply chains under uncertainties. Copyright © 2015 Elsevier Ltd. All rights reserved.
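
    The interval idea can be shown in miniature (a hedged sketch with hypothetical numbers, not the paper's full supply-chain model): when unit costs are intervals [lo, hi] rather than constants, the cost of a fixed plan is itself an interval, computed by interval arithmetic.

```python
# Interval arithmetic on a fixed plan: with interval-valued grain price
# and transport cost, the total cost becomes an interval whose endpoints
# bound the best and worst cases.

def interval_mul(a, b):
    """Product of two intervals, assuming all bounds are non-negative."""
    return (a[0] * b[0], a[1] * b[1])

def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

# Hypothetical data: grain price ($/t), transport cost ($/t), quantity (t).
grain_price = (180.0, 220.0)   # interval price of grain
transport = (12.0, 18.0)       # interval unit transport cost
quantity = (1000.0, 1000.0)    # fixed quantity as a degenerate interval

cost = interval_add(interval_mul(grain_price, quantity),
                    interval_mul(transport, quantity))
print(cost)  # (lower bound, upper bound) on total cost
```

    Interval linear programming extends this to the constraints and objective of a full LP, typically by solving paired best-case/worst-case sub-problems.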

  18. The relative degree enhancement problem for MIMO nonlinear systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoenwald, D.A.; Oezguener, Ue.

    1995-07-01

    The authors present a result for linearizing a nonlinear MIMO system by employing partial feedback - feedback at all but one input-output channel such that the SISO feedback linearization problem is solvable at the remaining input-output channel. The partial feedback effectively enhances the relative degree at the open input-output channel, provided the feedback functions are chosen to satisfy relative degree requirements. The method is useful for nonlinear systems that are not feedback linearizable in a MIMO sense. Several examples are presented to show how these feedback functions can be computed. This strategy can be combined with decentralized observers for a completely decentralized feedback linearization result for at least one input-output channel.
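
    For readers unfamiliar with the term, the relative degree of a SISO channel with dynamics x' = f(x) + g(x)u and output y = h(x) is defined via Lie derivatives; this is the standard textbook definition, sketched here for context rather than quoted from the report:

```latex
L_g L_f^{k} h(x) = 0 \quad \text{for } k = 0, \dots, r-2,
\qquad
L_g L_f^{r-1} h(x) \neq 0,
```

    where L_f h = (∂h/∂x) f. The relative degree r is thus the number of times the output must be differentiated before the input appears, and "enhancing" it at the open channel means choosing the partial feedback so that the input surfaces only after more differentiations.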

  19. A comparison of linear and non-linear data assimilation methods using the NEMO ocean model

    NASA Astrophysics Data System (ADS)

    Kirchgessner, Paul; Tödter, Julian; Nerger, Lars

    2015-04-01

    The assimilation behavior of the widely used LETKF is compared with the Equivalent Weights Particle Filter (EWPF) in a data assimilation application with an idealized configuration of the NEMO ocean model. The experiments show how the different filter methods behave when they are applied to a realistic ocean test case. The LETKF is an ensemble-based Kalman filter, which assumes Gaussian error distributions and hence implicitly requires model linearity. In contrast, the EWPF is a fully nonlinear data assimilation method that does not rely on a particular error distribution. The EWPF has been demonstrated to work well in highly nonlinear situations, like in a model solving a barotropic vorticity equation, but it is still unknown how its assimilation performance compares to ensemble Kalman filters in realistic situations. Twin assimilation experiments are performed with a square-basin configuration of the NEMO model. The configuration simulates a double gyre, which exhibits significant nonlinearity. The LETKF and EWPF are both implemented in PDAF (Parallel Data Assimilation Framework, http://pdaf.awi.de), which ensures identical experimental conditions for both filters. To account for the nonlinearity, the assimilation skill of the two methods is assessed using different statistical metrics, such as the continuous ranked probability score (CRPS) and rank histograms.
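
    The Gaussian/linear update at the heart of ensemble Kalman filters such as the LETKF can be sketched for a scalar state (an illustrative stochastic-EnKF update with assumed numbers, not the PDAF or LETKF implementation):

```python
# Scalar stochastic ensemble Kalman analysis step: each forecast member
# is pulled toward a perturbed copy of the observation by the Kalman
# gain K = P / (P + R). This is the Gaussian update that particle
# filters like the EWPF avoid assuming.
import random
import statistics

random.seed(1)

ens = [random.gauss(0.0, 1.0) for _ in range(200)]  # forecast ensemble
obs, obs_var = 2.0, 0.5                              # observation and error variance R

fvar = statistics.pvariance(ens)                     # forecast error variance P
gain = fvar / (fvar + obs_var)                       # Kalman gain

analysis = [x + gain * (obs + random.gauss(0.0, obs_var ** 0.5) - x)
            for x in ens]

print(statistics.mean(analysis), statistics.pvariance(analysis))
```

    The analysis ensemble has a mean shifted toward the observation and a reduced variance, exactly as linear-Gaussian theory predicts; when the true error distribution is non-Gaussian, this update is only an approximation.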

  20. On the equivalence of case-crossover and time series methods in environmental epidemiology.

    PubMed

    Lu, Yun; Zeger, Scott L

    2007-04-01

    The case-crossover design was introduced in epidemiology 15 years ago as a method for studying the effects of a risk factor on a health event using only cases. The idea is to compare a case's exposure immediately prior to or during the case-defining event with that same person's exposure at otherwise similar "reference" times. An alternative approach to the analysis of daily exposure and case-only data is time series analysis. Here, log-linear regression models express the expected total number of events on each day as a function of the exposure level and potential confounding variables. In time series analyses of air pollution, smooth functions of time and weather are the main confounders. Time series and case-crossover methods are often viewed as competing methods. In this paper, we show that case-crossover using conditional logistic regression is a special case of time series analysis when there is a common exposure such as in air pollution studies. This equivalence provides computational convenience for case-crossover analyses and a better understanding of time series models. Time series log-linear regression accounts for overdispersion of the Poisson variance, while case-crossover analyses typically do not. This equivalence also permits model checking for case-crossover data using standard log-linear model diagnostics.
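
    The equivalence hinges on the conditional-logistic likelihood contribution for a single case: given a shared exposure series, the probability that the event fell on the case day rather than a reference day is a softmax over those days, the same term that arises in a stratified log-linear (Poisson) time-series likelihood. A stdlib sketch with hypothetical exposure values:

```python
# Conditional-logistic likelihood contribution for one case in a
# case-crossover design: exp(beta * x_case) / sum_j exp(beta * x_j),
# where the sum runs over the case day and its reference days.
import math

def case_crossover_contribution(beta, exposures, case_index):
    """Probability the event occurred on `case_index` given it occurred
    on exactly one of the listed days, under a log-linear exposure effect."""
    weights = [math.exp(beta * x) for x in exposures]
    return weights[case_index] / sum(weights)

# Hypothetical daily pollution exposures: case day plus three reference days.
exposures = [40.0, 30.0, 35.0, 25.0]
p = case_crossover_contribution(0.05, exposures, case_index=0)
print(round(p, 3))
```

    Because the contributions over all candidate days sum to one, each case forms a self-contained stratum; summing such strata over days recovers the stratified Poisson time-series likelihood discussed in the abstract.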
