Wickering, Ellis; Gaspard, Nicolas; Zafar, Sahar; Moura, Valdery J; Biswal, Siddharth; Bechek, Sophia; O'Connor, Kathryn; Rosenthal, Eric S; Westover, M Brandon
2016-06-01
The purpose of this study is to evaluate automated implementations of continuous EEG monitoring-based detection of delayed cerebral ischemia, using methods drawn from classical retrospective studies. We studied 95 patients with Fisher grade 3 or Hunt-Hess grade 4 to 5 aneurysmal subarachnoid hemorrhage who were admitted to the Neurosciences ICU and underwent continuous EEG monitoring. We implemented several variations of two classical algorithms for automated detection of delayed cerebral ischemia based on decreases in the alpha-delta ratio and relative alpha variability. Of the 95 patients, 43 (45%) developed delayed cerebral ischemia. Our automated implementation of the classical alpha-delta ratio-based trending method resulted in a sensitivity and specificity (Se,Sp) of (80,27)%, compared with the (100,76)% reported in the classic study using similar methods in a nonautomated fashion. Our automated implementation of the classical relative alpha variability-based trending method yielded (Se,Sp) values of (65,43)%, compared with the (100,46)% reported in the classic study using nonautomated analysis. Our findings suggest that improved methods for detecting decreases in the alpha-delta ratio and relative alpha variability are needed before an automated EEG-based early detection system for delayed cerebral ischemia is ready for clinical use.
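A minimal sketch of the kind of trending such an automated implementation performs (not the authors' code): the alpha-delta ratio of one EEG channel is computed per epoch with SciPy, and a drop below a fraction of the patient's own baseline is flagged. The band edges (delta 1-4 Hz, alpha 8-13 Hz), epoch length, baseline window, and 50% alarm threshold are all assumptions.

import numpy as np
from scipy.signal import welch

def alpha_delta_ratio(eeg, fs):
    """Alpha/delta band-power ratio of a 1-D EEG segment."""
    freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)
    delta = np.trapz(psd[(freqs >= 1) & (freqs < 4)], freqs[(freqs >= 1) & (freqs < 4)])
    alpha = np.trapz(psd[(freqs >= 8) & (freqs < 13)], freqs[(freqs >= 8) & (freqs < 13)])
    return alpha / delta

fs = 200                                    # sampling rate (Hz)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 3600)        # stand-in for one hour of one channel
epochs = eeg.reshape(-1, fs * 10)           # 10-s epochs
adr = np.array([alpha_delta_ratio(e, fs) for e in epochs])
baseline = np.median(adr[:60])              # first 10 min as the patient baseline
alarm = adr < 0.5 * baseline                # hypothetical 50%-drop criterion
print(alarm.sum(), "suspicious epochs")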
Estimating tree height-diameter models with the Bayesian method.
Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei
2014-01-01
Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinctive advantage over the classical method in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were each used to estimate the six height-diameter models. Both the classical and Bayesian methods showed that the Weibull model was the "best" model for data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improved prediction accuracy of the Bayesian method led to narrower confidence bands for the predicted values than those of the classical method, and the credible bands of the parameters with informative priors were also narrower than those with uninformative priors and the classical method. The posterior distributions estimated for the parameters can be set as new priors when estimating the parameters with data2.
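A minimal sketch of a Bayesian fit of the kind described above (not the paper's code): the Weibull height-diameter model H = 1.3 + a(1 - exp(-b*D^c)) with Gaussian errors, sampled by random-walk Metropolis. The priors, proposal steps, and simulated data are assumptions.

import numpy as np

rng = np.random.default_rng(0)
D = rng.uniform(5, 40, 200)                           # diameters (cm), simulated
H = 1.3 + 25 * (1 - np.exp(-0.05 * D**1.1)) + rng.normal(0, 1.0, D.size)

def log_post(theta):
    a, b, c, sigma = theta
    if min(a, b, c, sigma) <= 0:
        return -np.inf
    mu = 1.3 + a * (1 - np.exp(-b * D**c))
    loglik = -0.5 * np.sum(((H - mu) / sigma) ** 2) - D.size * np.log(sigma)
    logprior = -0.5 * np.sum(np.log(theta) ** 2)      # weak log-normal priors (assumed)
    return loglik + logprior

theta = np.array([20.0, 0.1, 1.0, 1.0])               # a, b, c, sigma
step = np.array([0.5, 0.005, 0.02, 0.05])             # proposal scales (tuned by hand)
lp, samples = log_post(theta), []
for _ in range(20000):
    prop = theta + step * rng.normal(size=4)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:          # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[5000:])                       # discard burn-in
print("posterior means (a, b, c, sigma):", post.mean(axis=0).round(3))

Using the posterior from one data set as the prior for the next, as the abstract suggests with data1 and data2, amounts to replacing logprior with a density fitted to these samples.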
NASA Astrophysics Data System (ADS)
Austin, Rickey W.
In Einstein's theory of Special Relativity (SR), one method to derive relativistic kinetic energy is to apply the classical work-energy theorem to relativistic momentum. This approach starts with the classical work-energy theorem and applies SR's momentum in the derivation. One outcome of this derivation is the relativistic kinetic energy. From this derivation, it is rather straightforward to form a kinetic energy-based time dilation function. In the derivation of General Relativity, a common approach is to bypass classical laws as a starting point. Instead, a rigorous development of differential geometry and Riemannian space is constructed, from which classically based laws are derived. This is in contrast to SR's approach of starting with classical laws and applying the consequences of the universal speed of light for all observers. A possible method to derive time dilation due to Newtonian gravitational potential energy (NGPE) is to apply SR's approach to deriving relativistic kinetic energy. It will be shown that this method gives first-order accuracy compared with Schwarzschild's metric. The SR kinetic energy and the newly derived NGPE are combined to form a Riemannian metric based on these two energies. A geodesic is derived and calculations are compared to Schwarzschild's geodesic for a test mass orbiting a central, non-rotating, non-charged massive body. The new metric results in highly accurate calculations when compared with General Relativity's predictions. The new method provides a candidate approach for starting with classical laws and deriving General Relativity effects. This approach mimics SR's method of starting with classical mechanics when deriving relativistic equations. As a complement to introducing General Relativity, it provides a plausible scaffolding method from classical physics when teaching introductory General Relativity. A straightforward path from classical laws to General Relativity is derived, providing a minimum of first-order accuracy relative to Schwarzschild's solution of Einstein's field equations.
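For reference, the special-relativistic step described above is the standard textbook computation; a compact version follows (the gravitational analogue is indicated only schematically, as an assumption about the paper's route):

\begin{align*}
  T = \int F\,dx = \int_0^v v\, d(\gamma m v)
    = \gamma m v^{2} - m c^{2}\left(1 - \tfrac{1}{\gamma}\right)
    = (\gamma - 1)\, m c^{2},
  \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.
\end{align*}

A kinetic time-dilation factor follows from dt/dτ = γ = 1 + T/(mc²). Replacing T by the Newtonian potential energy U = GMm/r gives, to first order, the factor 1 + GM/(rc²), which matches the weak-field expansion of the Schwarzschild metric, dt/dτ ≈ 1 + GM/(rc²).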
Classical versus Computer Algebra Methods in Elementary Geometry
ERIC Educational Resources Information Center
Pech, Pavel
2005-01-01
Computer algebra methods based on results of commutative algebra, like Groebner bases of ideals and elimination of variables, make it possible to solve complex, elementary and non-elementary problems of geometry which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…
Comparison of adaptive critic-based and classical wide-area controllers for power systems.
Ray, Swakshar; Venayagamoorthy, Ganesh Kumar; Chaudhuri, Balarko; Majumder, Rajat
2008-08-01
An adaptive critic design (ACD)-based damping controller is developed for a thyristor-controlled series capacitor (TCSC) installed in a power system with multiple poorly damped interarea modes. The performance of this ACD computational intelligence-based method is compared with two classical techniques, which are observer-based state-feedback (SF) control and linear matrix inequality (LMI)-based H∞ robust control. Remote measurements are used as feedback signals to the wide-area damping controller for modulating the compensation of the TCSC. The classical methods use a linearized model of the system whereas the ACD method is purely measurement-based, leading to a nonlinear controller with fixed parameters. A comparative analysis of the controllers' performances is carried out under different disturbance scenarios. The ACD-based design has shown promising performance with very little knowledge of the system compared to classical model-based controllers. This paper also discusses the advantages and disadvantages of ACDs, SF, and LMI-H∞.
A strategy for quantum algorithm design assisted by machine learning
NASA Astrophysics Data System (ADS)
Bang, Jeongho; Ryu, Junghee; Yoo, Seokwon; Pawłowski, Marcin; Lee, Jinhyoung
2014-07-01
We propose a method for quantum algorithm design assisted by machine learning. The method uses a quantum-classical hybrid simulator, where a ‘quantum student’ is taught by a ‘classical teacher’. In other words, in our method, the learning system is supposed to evolve into a quantum algorithm for a given problem, assisted by a classical main-feedback system. Our method is applicable to designing quantum oracle-based algorithms. As a case study, we chose an oracle decision problem, the Deutsch-Jozsa problem. We showed by Monte Carlo simulations that our simulator can faithfully learn a quantum algorithm for solving the problem for a given oracle. Remarkably, the learning time is proportional to the square root of the total number of parameters, rather than showing the exponential dependence found in the classical machine-learning-based method.
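For reference, a minimal statevector check of the textbook Deutsch-Jozsa construction that such a learner is expected to rediscover (this is the standard circuit with a phase oracle |x⟩ → (−1)^f(x)|x⟩, not the authors' learned algorithm):

import numpy as np

n = 3
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H1
for _ in range(n - 1):
    Hn = np.kron(Hn, H1)                     # n-qubit Hadamard

def dj_says_constant(f):
    """True iff the Deutsch-Jozsa circuit reports f constant."""
    state = Hn @ np.eye(2**n)[0]             # H^n applied to |0...0>
    state = np.array([(-1) ** f(x) for x in range(2**n)]) * state  # phase oracle
    state = Hn @ state                       # H^n again
    return abs(state[0]) ** 2 > 0.5          # all weight on |0...0> iff f constant

print(dj_says_constant(lambda x: 0))                       # constant -> True
print(dj_says_constant(lambda x: bin(x).count("1") % 2))   # balanced -> False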
ERIC Educational Resources Information Center
Zhong, Zhenshan; Sun, Mengyao
2018-01-01
The power of general education curriculum comes from the enduring classics. The authors apply research methods such as questionnaire survey, interview, and observation to investigate the state of general education curriculum implementation at N University and analyze problems faced by incorporating classics. Based on this, the authors propose that…
Petruševska, Marija; Urleb, Uroš; Peternel, Luka
2013-11-01
Excipient-mediated precipitation inhibition is classically determined by quantifying the dissolved compound in solution. In this study, two alternative approaches were evaluated: a light scattering (nephelometer) and a turbidity (plate reader) microtiter plate-based method, both based on quantification of the compound precipitate. Following optimization of the nephelometer settings (beam focus, laser gain) and the experimental conditions, 23 excipients were screened for inhibition of the precipitation of the poorly soluble fenofibrate and dipyridamole. The light scattering method resulted in excellent correlation (r>0.91) between the calculated precipitation inhibitor parameters (PIPs) and the precipitation inhibition index (PI(classical)) obtained by the classical approach for fenofibrate and dipyridamole. Among the evaluated PIPs, AUC100 (nephelometer) resulted in only four false positives and no false negatives. For the turbidity-based method, a good correlation with PI(classical) was obtained for the PIP maximal optical density (OD(max), r=0.91), but only for fenofibrate. For OD(max) (plate reader), five false positives and two false negatives were identified. In conclusion, the light scattering-based method outperformed the turbidity-based one and could be reliably used for the identification of novel precipitation inhibitors. Copyright © 2013 Elsevier B.V. All rights reserved.
Thermodynamic integration from classical to quantum mechanics.
Habershon, Scott; Manolopoulos, David E
2011-12-14
We present a new method for calculating quantum mechanical corrections to classical free energies, based on thermodynamic integration from classical to quantum mechanics. In contrast to previous methods, our method is numerically stable even in the presence of strong quantum delocalization. We first illustrate the method and its relationship to a well-established method with an analysis of a one-dimensional harmonic oscillator. We then show that our method can be used to calculate the quantum mechanical contributions to the free energies of ice and water for a flexible water model, a problem for which the established method is unstable. © 2011 American Institute of Physics.
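The underlying identity is the standard thermodynamic-integration formula; schematically (the paper's specific integration path between the classical and quantum systems may differ from any particular parameterization suggested here):

\begin{equation*}
  F_{\mathrm{qm}} - F_{\mathrm{cl}}
  = \int_{0}^{1} d\lambda\,
    \left\langle \frac{\partial H(\lambda)}{\partial \lambda} \right\rangle_{\lambda},
\end{equation*}

where H(λ) interpolates between the classical (λ = 0) and fully quantum (λ = 1) systems and ⟨·⟩_λ denotes an ensemble average in the intermediate system at coupling λ.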
Extraction of decision rules via imprecise probabilities
NASA Astrophysics Data System (ADS)
Abellán, Joaquín; López, Griselda; Garach, Laura; Castellano, Javier G.
2017-05-01
Data analysis techniques can be applied to discover important relations among features. This is the main objective of the Information Root Node Variation (IRNV) technique, a new method to extract knowledge from data via decision trees. The decision trees used by the original method were built using classic split criteria. The performance of new split criteria based on imprecise probabilities and uncertainty measures, called credal split criteria, differs significantly from the performance obtained using the classic criteria. This paper extends the IRNV method using two credal split criteria: one based on a mathematical parametric model, and the other on a non-parametric model. The performance of the method is analyzed using a case study of traffic accident data to identify patterns related to the severity of an accident. We found that a larger number of rules is generated, significantly supplementing the information obtained using the classic split criteria.
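A sketch of one standard credal split criterion of this family (hedged: this uses the imprecise Dirichlet model with s = 1 and the maximum-entropy distribution of the credal set, a common choice; the paper's two criteria may differ in detail). The water-filling step levels the smallest counts, which yields the maximum-entropy member of the set.

import numpy as np

def max_entropy_idm(counts, s=1.0):
    """Max-entropy probabilities of the IDM credal set p_i in [n_i/(N+s), (n_i+s)/(N+s)]."""
    m = np.asarray(counts, dtype=float)
    mass = s
    while mass > 1e-12:
        lo = m.min()
        idx = np.where(np.isclose(m, lo))[0]
        higher = m[m > lo + 1e-12]
        gap = higher.min() - lo if higher.size else np.inf
        add = min(mass / idx.size, gap)      # raise all current minima together
        m[idx] += add
        mass -= add * idx.size
    return m / m.sum()                       # m.sum() == N + s once mass is spent

def upper_entropy(counts, s=1.0):
    p = max_entropy_idm(counts, s)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Credal "information gain" of a candidate split: parent upper entropy
# minus the count-weighted upper entropy of the children (toy numbers).
parent, children = [30, 10], [[25, 2], [5, 8]]
n = sum(parent)
gain = upper_entropy(parent) - sum(sum(c) / n * upper_entropy(c) for c in children)
print(round(gain, 4))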
Simulation of wave packet tunneling of interacting identical particles
NASA Astrophysics Data System (ADS)
Lozovik, Yu. E.; Filinov, A. V.; Arkhipov, A. S.
2003-02-01
We demonstrate a method for simulating nonstationary quantum processes, considering the tunneling of two interacting identical particles represented by wave packets. The quantum molecular dynamics method used (Wigner molecular dynamics, WMD) is based on the Wigner representation of quantum mechanics. In this method, ensembles of classical trajectories are used to solve the quantum Wigner-Liouville equation. These classical trajectories obey Hamiltonian-like equations, where the effective potential consists of the usual classical term and a quantum term that depends on the Wigner function and its derivatives. The quantum term is calculated using the local distribution of trajectories in phase space; therefore, the classical trajectories are not independent, in contrast to classical molecular dynamics. The developed WMD method takes into account the influence of exchange and interaction between particles. The role of direct and exchange interactions in tunneling is analyzed. The tunneling times for interacting particles are calculated.
NASA Astrophysics Data System (ADS)
Ignatyev, A. V.; Ignatyev, V. A.; Onischenko, E. V.
2017-11-01
This article continues the authors' work on the development of algorithms that implement the finite element method in the form of a classical mixed method for the analysis of geometrically nonlinear bar systems [1-3]. The paper describes an improved algorithm for forming the system of nonlinear governing equations for flexible plane frames and bars with large nodal displacements, based on the finite element method in classical mixed form and the use of a step-by-step loading procedure. An example of the analysis is given.
Wang, Hongkai; Zhou, Zongwei; Li, Yingci; Chen, Zhonghua; Lu, Peiou; Wang, Wenzhi; Liu, Wanyu; Yu, Lijuan
2017-12-01
This study aimed to compare one state-of-the-art deep learning method and four classical machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer (NSCLC) from 18F-FDG PET/CT images. Another objective was to compare the discriminative power of the recently popular PET/CT texture features with the widely used diagnostic features such as tumor size, CT value, SUV, image contrast, and intensity standard deviation. The four classical machine learning methods were random forests, support vector machines, adaptive boosting, and artificial neural networks. The deep learning method was a convolutional neural network (CNN). The five methods were evaluated using 1397 lymph nodes collected from PET/CT images of 168 patients, with the corresponding pathology analysis results as the gold standard. The comparison was conducted using 10 times 10-fold cross-validation based on the criteria of sensitivity, specificity, accuracy (ACC), and area under the ROC curve (AUC). For each classical method, different input features were compared to select the optimal feature set. Based on the optimal feature set, the classical methods were compared with the CNN, as well as with human doctors from our institute. For the classical methods, the diagnostic features resulted in 81-85% ACC and 0.87-0.92 AUC, significantly higher than the results of the texture features. The CNN's sensitivity, specificity, ACC, and AUC were 84%, 88%, 86%, and 0.91, respectively. There was no significant difference between the results of the CNN and the best classical method. The sensitivity, specificity, and ACC of the human doctors were 73%, 90%, and 82%, respectively. All five machine learning methods had higher sensitivities but lower specificities than the human doctors. The present study shows that the performance of the CNN is not significantly different from that of the best classical methods and human doctors for classifying mediastinal lymph node metastasis of NSCLC from PET/CT images. Because the CNN does not need tumor segmentation or feature calculation, it is more convenient and more objective than the classical methods. However, the CNN does not make use of the important diagnostic features, which have proved more discriminative than the texture features for classifying small-sized lymph nodes. Therefore, incorporating the diagnostic features into the CNN is a promising direction for future research.
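A hedged sketch of the evaluation protocol with scikit-learn (a single 10-fold pass rather than the study's 10 x 10-fold; X stands in for the diagnostic features such as size, CT value and SUV, and y for the pathology labels, both placeholders rather than the study's data):

import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1397, 5))               # stand-in for 1397 lymph nodes
y = rng.integers(0, 2, size=1397)            # stand-in benign/metastatic labels

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf", probability=True, random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")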
Metal Ion Modeling Using Classical Mechanics
2017-01-01
Metal ions play significant roles in numerous fields including chemistry, geochemistry, biochemistry, and materials science. With computational tools increasingly becoming important in chemical research, methods have emerged to effectively face the challenge of modeling metal ions in the gas, aqueous, and solid phases. Herein, we review both quantum and classical modeling strategies for metal ion-containing systems that have been developed over the past few decades. This Review focuses on classical metal ion modeling based on unpolarized models (including the nonbonded, bonded, cationic dummy atom, and combined models), polarizable models (e.g., the fluctuating charge, Drude oscillator, and the induced dipole models), the angular overlap model, and valence bond-based models. Quantum mechanical studies of metal ion-containing systems at the semiempirical, ab initio, and density functional levels of theory are reviewed as well with a particular focus on how these methods inform classical modeling efforts. Finally, conclusions and future prospects and directions are offered that will further enhance the classical modeling of metal ion-containing systems. PMID:28045509
Off-diagonal expansion quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Albash, Tameem; Wagenbreth, Gene; Hen, Itay
2017-12-01
We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.
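Schematically, the expansion works as follows (a condensed form of the construction, shown here as an aid to the abstract; details follow the authors' papers). Split H = H_diag + H_offdiag in a chosen basis {|z⟩} and write

\begin{equation*}
  Z \;=\; \sum_{z} \sum_{q=0}^{\infty}
          \sum_{(S_{i_1},\dots,S_{i_q})}
          \langle z |\, S_{i_q} \cdots S_{i_1} | z \rangle\;
          e^{-\beta [E_{z_0}, \dots, E_{z_q}]},
\end{equation*}

where the S_i are the off-diagonal operators, E_{z_k} are the classical (diagonal) energies of the states visited along the operator string, and e^{-β[E_{z_0},…,E_{z_q}]} denotes the divided difference of e^{-βE} over those energies. The q = 0 term is exactly the classical partition function, and Monte Carlo updates sample the configurations (z, operator string).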
H. T. Schreuder; M. S. Williams
2000-01-01
In simulation sampling from forest populations using sample sizes of 20, 40, and 60 plots, confidence intervals based on the bootstrap (accelerated, percentile, and t-distribution based) were calculated and compared with the classical t confidence intervals for mapped populations and subdomains within those populations. A 68.1 ha mapped...
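An illustrative comparison of the interval types named above (not the authors' simulation; the skewed stand-in data and the simplified bootstrap-t construction are assumptions):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.lognormal(mean=3.0, sigma=0.8, size=40)    # skewed, forest-like plot data
B = 5000
boot_means = np.array([rng.choice(x, x.size).mean() for _ in range(B)])

perc = np.percentile(boot_means, [2.5, 97.5])      # percentile bootstrap interval

se_boot = boot_means.std(ddof=1)                   # bootstrap SE with t critical values
boot_t = x.mean() + stats.t.ppf([0.025, 0.975], df=x.size - 1) * se_boot

se = x.std(ddof=1) / np.sqrt(x.size)               # classical t interval
classical = x.mean() + stats.t.ppf([0.025, 0.975], df=x.size - 1) * se

print("percentile :", perc.round(1))
print("bootstrap-t:", boot_t.round(1))
print("classical t:", classical.round(1))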
An Experimental and Theoretical Study of Nitrogen-Broadened Acetylene Lines
NASA Technical Reports Server (NTRS)
Thibault, Franck; Martinez, Raul Z.; Bermejo, Dionisio; Ivanov, Sergey V.; Buzykin, Oleg G.; Ma, Qiancheng
2014-01-01
We present experimental nitrogen-broadening coefficients derived from Voigt profiles of isotropic Raman Q-lines measured in the ν2 band of acetylene (C2H2) at 150 K and 298 K, and compare them to theoretical values obtained through calculations carried out specifically for this work. Namely, full classical calculations based on Gordon's approach, two kinds of semi-classical calculations based on the Robert-Bonamy method, as well as full quantum dynamical calculations were performed. All the computations employed exactly the same ab initio potential energy surface for the C2H2-N2 system, which is, to our knowledge, the most realistic, accurate and up-to-date one. The resulting calculated collisional half-widths are in good agreement with the experimental ones only for the full classical and quantum dynamical methods. In addition, we have performed similar calculations for IR absorption lines and compared the results to bibliographic values. Results obtained with the full classical method are again in good agreement with the available room temperature experimental data. The quantum dynamical close-coupling calculations are too time consuming to provide a complete set of values and have therefore been performed only for the R(0) line of C2H2. The broadening coefficient obtained for this line at 173 K and 297 K also compares quite well with the available experimental data. The traditional Robert-Bonamy semi-classical formalism, however, strongly overestimates the half-widths for both Q- and R-lines. The refined semi-classical Robert-Bonamy method, first proposed for the calculation of pressure broadening coefficients of isotropic Raman lines, is also used for the IR lines. With this improved model, which takes line-coupling effects into account, the calculated semi-classical widths are significantly reduced and closer to the measured ones.
Introducing Hurst exponent in pair trading
NASA Astrophysics Data System (ADS)
Ramos-Requena, J. P.; Trinidad-Segovia, J. E.; Sánchez-Granero, M. A.
2017-12-01
In this paper we introduce a new methodology for pair trading, based on the calculation of the Hurst exponent of a pair. Our approach is inspired by the classical concepts of co-integration and mean reversion, joined under a single strategy. We show that the Hurst approach produces better results than the classical Distance Method and Correlation strategies in different scenarios. The results obtained show that this new methodology is consistent and suitable, reducing the drawdown of trading relative to the classical methods and thereby achieving better performance.
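A sketch of the core computation (assumptions: log-price spread of a cointegrated pair and a simple aggregated-dispersion estimator of the Hurst exponent; the paper's estimator may differ). A spread with H well below 0.5 is mean-reverting, which is the property the strategy trades on.

import numpy as np

def hurst(series, max_lag=50):
    """Estimate H from the scaling of lagged-difference dispersions."""
    lags = np.arange(2, max_lag)
    tau = [np.std(series[lag:] - series[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(tau), 1)
    return slope

rng = np.random.default_rng(0)
common = np.cumsum(rng.normal(size=2000))         # shared random-walk factor
a = common + rng.normal(scale=0.5, size=2000)     # stand-in log prices of stock A
b = common + rng.normal(scale=0.5, size=2000)     # stand-in log prices of stock B
spread = a - b                                    # pair spread

print("H(spread)      =", round(hurst(spread), 2))   # well below 0.5: tradeable pair
print("H(random walk) =", round(hurst(common), 2))   # near 0.5: no mean reversion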
ERIC Educational Resources Information Center
Huddleston, Gregory H.
1993-01-01
Describes one teacher's methods for introducing to secondary English students the concepts of Classicism and Romanticism in relation to pictures of gardens, architecture, music, and literary works. Outlines how the unit leads to a writing assignment based on collected responses over time. (HB)
Methods for Multiloop Identification of Visual and Neuromuscular Pilot Responses.
Olivari, Mario; Nieuwenhuizen, Frank M; Venrooij, Joost; Bülthoff, Heinrich H; Pollini, Lorenzo
2015-12-01
In this paper, identification methods are proposed to estimate the neuromuscular and visual responses of a multiloop pilot model. A conventional and widely used technique for simultaneous identification of the neuromuscular and visual systems makes use of cross-spectral density estimates. This paper shows that this technique requires a specific noninterference hypothesis, often implicitly assumed, that may be difficult to meet during actual experimental designs. A mathematical justification of the necessity of the noninterference hypothesis is given. Furthermore, two methods are proposed that do not have the same limitations. The first method is based on autoregressive models with exogenous inputs, whereas the second one combines cross-spectral estimators with interpolation in the frequency domain. The two identification methods are validated by offline simulations and contrasted to the classic method. The results reveal that the classic method fails when the noninterference hypothesis is not fulfilled; on the contrary, the two proposed techniques give reliable estimates. Finally, the three identification methods are applied to experimental data from a closed-loop control task with pilots. The two proposed techniques give comparable estimates, different from those obtained by the classic method. The differences match those found with the simulations. Thus, the two identification methods provide a good alternative to the classic method and make it possible to simultaneously estimate human's neuromuscular and visual responses in cases where the classic method fails.
A new procedure for calculating contact stresses in gear teeth
NASA Technical Reports Server (NTRS)
Somprakit, Paisan; Huston, Ronald L.
1991-01-01
A numerical procedure for evaluating and monitoring contact stresses in meshing gear teeth is discussed. The procedure is intended to extend the range of applicability and to improve the accuracy of gear contact stress analysis. The procedure is based upon fundamental solutions from the theory of elasticity. It is an iterative numerical procedure. The method is believed to have distinct advantages over the classical Hertz method, the finite-element method, and existing approaches with the boundary element method. Unlike many classical contact stress analyses, friction effects and sliding are included. Slipping and sticking in the contact region are studied. Several examples are discussed. The results are in agreement with classical results. Applications are presented for spur gears.
Contact stresses in gear teeth: A new method of analysis
NASA Technical Reports Server (NTRS)
Somprakit, Paisan; Huston, Ronald L.; Oswald, Fred B.
1991-01-01
A new, innovative procedure called point load superposition is presented for determining the contact stresses in mating gear teeth. It is believed that this procedure will greatly extend both the range of applicability and the accuracy of gear contact stress analysis. Point load superposition is based upon fundamental solutions from the theory of elasticity. It is an iterative numerical procedure which has distinct advantages over the classical Hertz method, the finite element method, and existing applications of the boundary element method. Specifically, friction and sliding effects, which are either excluded from or difficult to study with the classical methods, are routinely handled with the new procedure. Presented here are the basic theory and the algorithms. Several examples are given. Results are consistent with those of the classical theories. Applications to spur gears are discussed.
De la Flor-Martínez, Maria; Galindo-Moreno, Pablo; Sánchez-Fernández, Elena; Piattelli, Adriano; Cobo, Manuel Jesus; Herrera-Viedma, Enrique
2016-10-01
The study of classic papers permits analysis of the past, present, and future of a specific area of knowledge. This type of analysis is becoming more frequent and more sophisticated. Our objective was to use the H-classics method, based on the h-index, to analyze classic papers in Implant Dentistry, Periodontics, and Oral Surgery (ID, P, and OS). First, an electronic search of documents related to ID, P, and OS was conducted in journals indexed in Journal Citation Reports (JCR) 2014 within the category 'Dentistry, Oral Surgery & Medicine'. Second, Web of Knowledge databases were searched using MeSH terms related to ID, P, and OS. Finally, the H-classics method was applied to select the classic articles in these disciplines, collecting data on associated research areas, document type, country, institutions, and authors. Of 267,611 documents related to ID, P, and OS retrieved from JCR journals (2014), 248 were selected as H-classics. They were published in 35 journals between 1953 and 2009, most frequently in the Journal of Clinical Periodontology (18.95%), the Journal of Periodontology (18.54%), the International Journal of Oral and Maxillofacial Implants (9.27%), and Clinical Oral Implants Research (6.04%). These classic articles derived from the USA in 49.59% of cases and from Europe in 47.58%, while the most frequent host institution was the University of Gothenburg (17.74%) and the most frequent authors were J. Lindhe (10.48%) and S. Socransky (8.06%). The H-classics approach offers an objective method to identify the core knowledge of clinical disciplines such as ID, P, and OS. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
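The selection rule itself is compact; a minimal sketch (the citation counts are placeholders): compute the h-index of the retrieved document set and keep every paper cited at least h times.

def h_index(citations):
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

papers = {"A": 620, "B": 310, "C": 95, "D": 40, "E": 3}   # toy citation counts
h = h_index(papers.values())
h_classics = [p for p, c in papers.items() if c >= h]
print(h, h_classics)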
Methods for the calculation of axial wave numbers in lined ducts with mean flow
NASA Technical Reports Server (NTRS)
Eversman, W.
1981-01-01
A survey is made of the methods available for the calculation of axial wave numbers in lined ducts. Rectangular and circular ducts with both uniform and non-uniform flow are considered as are ducts with peripherally varying liners. A historical perspective is provided by a discussion of the classical methods for computing attenuation when no mean flow is present. When flow is present these techniques become either impractical or impossible. A number of direct eigenvalue determination schemes which have been used when flow is present are discussed. Methods described are extensions of the classical no-flow technique, perturbation methods based on the no-flow technique, direct integration methods for solution of the eigenvalue equation, an integration-iteration method based on the governing differential equation for acoustic transmission, Galerkin methods, finite difference methods, and finite element methods.
Use of FTA® classic cards for epigenetic analysis of sperm DNA.
Serra, Olga; Frazzi, Raffaele; Perotti, Alessio; Barusi, Lorenzo; Buschini, Annamaria
2018-02-01
FTA® technologies provide the most reliable method for DNA extraction. Although FTA technologies have been widely used for genetic analysis, there is no literature on their use for epigenetic analysis yet. We present for the first time, a simple method for quantitative methylation assessment based on sperm cells stored on Whatman FTA classic cards. Specifically, elution of seminal DNA from FTA classic cards was successfully tested with an elution buffer and an incubation step in a thermocycler. The eluted DNA was bisulfite converted, amplified by PCR, and a region of interest was pyrosequenced.
A new method of measuring gravitational acceleration in an undergraduate laboratory program
NASA Astrophysics Data System (ADS)
Wang, Qiaochu; Wang, Chang; Xiao, Yunhuan; Schulte, Jurgen; Shi, Qingfan
2018-01-01
This paper presents a high-accuracy method for measuring gravitational acceleration in an undergraduate laboratory program. The experiment is based on water in a cylindrical vessel rotating about its vertical axis at a constant speed. The water surface forms a paraboloid whose focal length is related to the rotational period and the gravitational acceleration. This experimental setup avoids the classical sources of error in determining the local value of gravitational acceleration that are so prevalent in the common simple pendulum and inclined plane experiments. The presented method combines multiple physics concepts such as kinematics, classical mechanics and geometric optics, offering the opportunity for lateral as well as project-based learning.
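The relation the setup exploits is the standard rotating-fluid result: in the co-rotating frame the free surface is the paraboloid z(r) = ω²r²/(2g), and comparison with r² = 4fz gives the focal length

\begin{equation*}
  f = \frac{g}{2\omega^{2}}
  \quad\Longrightarrow\quad
  g = 2 f \omega^{2} = \frac{8\pi^{2} f}{T^{2}},
\end{equation*}

so measuring the focal length f optically and the rotation period T yields g. As a worked example (values assumed, not from the paper): f = 0.50 m and T = 2.00 s give g = 8π² × 0.50/2.00² ≈ 9.87 m/s².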
SU-D-BRB-05: Quantum Learning for Knowledge-Based Response-Adaptive Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
El Naqa, I; Ten, R
Purpose: There is tremendous excitement in radiotherapy about applying data-driven methods to develop personalized clinical decisions for real-time response-based adaptation. However, classical statistical learning methods are limited in efficiency and in their ability to predict outcomes under conditions of uncertainty and incomplete information. Therefore, we are investigating physics-inspired machine learning approaches that utilize quantum principles to develop a robust framework for dynamically adapting treatments to individual patients' characteristics and optimizing outcomes. Methods: We studied 88 liver SBRT patients, 35 on non-adaptive and 53 on adaptive protocols. Adaptation was based on liver function, using a split course of 3+2 fractions with a month break. The radiotherapy environment was modeled as a Markov decision process (MDP) with states at baseline and one month into treatment. The patient environment was modeled by a 5-variable state composed of the patient's clinical and dosimetric covariates. For the comparison of classical and quantum learning methods, the decision whether to adapt at one month was considered. The MDP objective was defined by the complication-free tumor control (P+ = TCP × (1 − NTCP)). A simple regression model represented the state-action mapping. A single bit in the classical MDP, and a qubit of two superimposed states in the quantum MDP, represented the decision actions. Classical decision selection was performed using reinforcement Q-learning, and quantum searching was performed using Grover's algorithm, which applies a uniform superposition over possible states and yields a quadratic speed-up. Results: Classical and quantum MDPs suggested adaptation (probability amplitude ≥0.5) 79% of the time for split courses and 100% for continuous courses. However, the classical MDP had an average adaptation probability of 0.5±0.22, while the quantum algorithm reached 0.76±0.28. In cases where adaptation failed, the classical MDP yielded an average amplitude of 0.31±0.26, while the quantum approach averaged a more optimistic 0.57±0.4, but with high phase fluctuations. Conclusion: Our results demonstrate that quantum machine learning approaches provide a feasible and promising framework for real-time and sequential clinical decision-making in adaptive radiotherapy.
Tree biomass estimation of Chinese fir (Cunninghamia lanceolata) based on Bayesian method.
Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo
2013-01-01
Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) is the most important conifer species for timber production, with a huge distribution area in southern China. Accurate estimation of biomass is required for accounting and monitoring Chinese forest carbon stocking. In this study, the allometric equation W = a(D^2 H)^b was used to analyze tree biomass of Chinese fir. The common methods for estimating the allometric model have taken the classical approach based on the frequency interpretation of probability. However, many different biotic and abiotic factors introduce variability into the Chinese fir biomass model, suggesting that the parameters of the biomass model are better represented by probability distributions than by the fixed values of the classical method. To deal with this problem, the Bayesian method was used to estimate the Chinese fir biomass model. In the Bayesian framework, two kinds of priors were introduced: non-informative and informative. For the informative priors, 32 biomass equations of Chinese fir were collected from the published literature, and the parameter distributions from the literature were regarded as prior distributions in the Bayesian model for estimating Chinese fir biomass. The Bayesian method with informative priors was better than the method with non-informative priors and the classical method, providing a reasonable approach for estimating Chinese fir biomass.
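The model and its Bayesian reading, written out (the multiplicative error structure shown is an assumption, consistent with the usual log-linearization):

\begin{equation*}
  W_i = a\,(D_i^{2} H_i)^{b}\,\varepsilon_i
  \;\Longleftrightarrow\;
  \ln W_i = \ln a + b\,\ln(D_i^{2} H_i) + e_i,
  \qquad e_i \sim \mathcal{N}(0, \sigma^{2}),
\end{equation*}

with the prior p(a, b, σ) either non-informative (flat) or centred on the 32 published equations (informative); the posterior is then p(a, b, σ | data) ∝ likelihood × prior.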
A Method of DTM Construction Based on Quadrangular Irregular Networks and Related Error Analysis
Kang, Mengjun
2015-01-01
A new method of DTM construction based on quadrangular irregular networks (QINs) that considers all the original data points and has a topological matrix is presented. A numerical test and a real-world example are used to comparatively analyse the accuracy of QINs against classical interpolation methods and other DTM representation methods, including SPLINE, KRIGING and triangulated irregular networks (TINs). The numerical test finds that the QIN method is the second-most accurate of the four methods. In the real-world example, DTMs are constructed using QINs and the three classical interpolation methods. The results indicate that the QIN method is the most accurate method tested. The difference in accuracy rank seems to be caused by the locations of the data points sampled. Although the QIN method has drawbacks, it is an alternative method for DTM construction. PMID:25996691
Mioni, Roberto; Marega, Alessandra; Lo Cicero, Marco; Montanaro, Domenico
2016-11-01
The approach to acid-base chemistry in medicine includes several methods. Currently, the two most popular procedures are derived from Stewart's studies and from the bicarbonate/BE-based classical formulation. Another method, unfortunately little known, follows the Kildeberg theory applied to acid-base titration. Using the data produced by Dana Atchley in 1933 on electrolytes and blood gas analysis in diabetes, we compared the three aforementioned methods in order to highlight their strengths and weaknesses. The results obtained by reprocessing Atchley's data show that Kildeberg's approach, unlike the other two methods, is consistent, rational and complete for describing the organ-physiological behavior of hydrogen ion turnover in the human organism. In contrast, the data obtained using the Stewart approach and the bicarbonate-based classical formulation are misleading and fail to specify which organs or systems are involved in causing or maintaining diabetic acidosis. Stewart's approach, despite being considered 'quantitative', does not in any way propose the concept of 'an amount of acid' and becomes even more confusing because it is unclear how to distinguish between 'strong' and 'weak' ions. Like Stewart's approach, the classical method makes no distinction between hydrogen ions managed by intermediate metabolism and hydroxyl ions handled by the kidney, but it is at least based on the concept of titration (base-excess) and indirectly defines the concept of 'an amount of acid'. In conclusion, only Kildeberg's approach offers a complete understanding of the causes of, and remedies against, any type of acid-base disturbance.
Young's moduli of carbon materials investigated by various classical molecular dynamics schemes
NASA Astrophysics Data System (ADS)
Gayk, Florian; Ehrens, Julian; Heitmann, Tjark; Vorndamme, Patrick; Mrugalla, Andreas; Schnack, Jürgen
2018-05-01
For many applications, classical carbon potentials together with classical molecular dynamics are employed to calculate structures and physical properties of carbon-based materials where quantum mechanical methods fail, due either to excessive system size, irregular structure, or long-time dynamics. Although such potentials, as implemented for instance in LAMMPS, yield reasonably accurate bond lengths and angles for several carbon materials such as graphene, it is not clear how accurate they are in terms of mechanical properties such as Young's moduli. We performed large-scale classical molecular dynamics investigations of three carbon-based materials using the various potentials implemented in LAMMPS as well as the EDIP potential of Marks. We show how the Young's moduli vary with the classical potential used and compare the results to experiment. Since classical descriptions of carbon are bound to be approximations, it is not astonishing that different realizations yield differing results. One should therefore carefully check for which observables a certain potential is suited. Our aim is to contribute to such a clarification.
Virtual reality as a method for evaluation and therapy after traumatic hand surgery.
Nica, Adriana Sarah; Brailescu, Consuela Monica; Scarlet, Rodica Gabriela
2013-01-01
In the last decade, Virtual Reality has seen continuous development for medical purposes, and many devices based on the classic "cyberglove" concept are used as new therapeutic methods for upper limb pathology, especially neurologic problems [1;2;3]. One such VR device is Pablo (Tyromotion), which has very sensitive sensors that can measure hand grip strength and pinch force, as well as the ROM (range of motion) of all the joints of the upper limb (shoulder, elbow, wrist), and offers interactive games based on the Virtual Reality concept with application in occupational therapy programs. We used Pablo in our study on patients after hand surgery, as an objective assessment tool and as a therapeutic method additional to the classic rehabilitation program [4;5]. The results of the study showed that Pablo represents a modern option for the evaluation of hand deficits and dysfunctions, replacing classic goniometry and dynamometry with objective measurement, providing a computerized patient database with monitoring of parameters during the recovery program, and giving better muscular and neuro-cognitive feedback during the interactive therapeutic modules.
Optimal and adaptive methods of processing hydroacoustic signals (review)
NASA Astrophysics Data System (ADS)
Malyshkin, G. S.; Sidel'nikov, G. B.
2014-09-01
Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and of "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed for the automatic detection of weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background via median filtering or the method of bilateral spatial contrast.
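For reference, the two classical spatial spectra named above have the standard forms, for a steering vector a(θ) and sample covariance matrix R̂:

\begin{equation*}
  P_{\mathrm{Capon}}(\theta)
    = \frac{1}{a^{H}(\theta)\, \hat{R}^{-1} a(\theta)},
  \qquad
  P_{\mathrm{MUSIC}}(\theta)
    = \frac{1}{a^{H}(\theta)\, E_{n} E_{n}^{H} a(\theta)},
\end{equation*}

where E_n holds the noise-subspace eigenvectors of R̂. The adaptivity, and the sensitivity to strong multipath arrivals discussed above, enter through R̂, which is estimated from the received data.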
Wen, Tingxi; Zhang, Zhongnan; Qiu, Ming; Zeng, Ming; Luo, Weizhen
2017-01-01
The computer mouse is an important human-computer interaction device, but patients with physical finger disabilities are unable to operate it. Surface EMG (sEMG) can be monitored by electrodes on the skin surface and is a reflection of neuromuscular activity. We can therefore control auxiliary limb equipment by means of sEMG classification, helping physically disabled patients to operate the mouse. The aim was to develop a new method to extract the sEMG generated by finger motion and to apply novel features to classify it. A window-based data acquisition method was presented to extract signal samples from the sEMG electrodes. Afterwards, a feature extraction method based on two-dimensional matrix images, which differs from the classical methods based on the time or frequency domain, was employed to transform signal samples into feature maps used for classification. In the experiments, sEMG data samples produced by the index and middle fingers at the click of a mouse button were acquired separately. Then, characteristics of the samples were analyzed to generate a feature map for each sample. Finally, machine learning classification algorithms (SVM, KNN, RBF-NN) were employed to classify these feature maps on a GPU. The study demonstrated that all classifiers can identify and classify the sEMG samples effectively. In particular, the accuracy of the SVM classifier reached up to 100%. The signal separation method is a convenient, efficient and quick method which can effectively extract the sEMG samples produced by the fingers. In addition, unlike the classical methods, the new method enables feature extraction by appropriately enlarging the energy of the sample signals. The classical machine learning classifiers all performed well using these features.
Device-Independent Tests of Entropy
NASA Astrophysics Data System (ADS)
Chaves, Rafael; Brask, Jonatan Bohr; Brunner, Nicolas
2015-09-01
We show that the entropy of a message can be tested in a device-independent way. Specifically, we consider a prepare-and-measure scenario with classical or quantum communication, and develop two different methods for placing lower bounds on the communication entropy, given observable data. The first method is based on the framework of causal inference networks. The second technique, based on convex optimization, shows that quantum communication provides an advantage over classical communication, in the sense of requiring a lower entropy to reproduce given data. These ideas may serve as a basis for novel applications in device-independent quantum information processing.
NASA Astrophysics Data System (ADS)
La Cour, Brian R.; Ostrove, Corey I.
2017-01-01
This paper describes a novel approach to solving unstructured search problems using a classical, signal-based emulation of a quantum computer. The classical nature of the representation allows one to perform subspace projections in addition to the usual unitary gate operations. Although bandwidth requirements will limit the scale of problems that can be solved by this method, it can nevertheless provide a significant computational advantage for problems of limited size. In particular, we find that, for the same number of noisy oracle calls, the proposed subspace projection method provides a higher probability of success for finding a solution than does a single application of Grover's algorithm on the same device.
Unbiased estimators for spatial distribution functions of classical fluids
NASA Astrophysics Data System (ADS)
Adib, Artur B.; Jarzynski, Christopher
2005-01-01
We use a statistical-mechanical identity closely related to the familiar virial theorem, to derive unbiased estimators for spatial distribution functions of classical fluids. In particular, we obtain estimators for both the fluid density ρ(r) in the vicinity of a fixed solute and the pair correlation g(r) of a homogeneous classical fluid. We illustrate the utility of our estimators with numerical examples, which reveal advantages over traditional histogram-based methods of computing such distributions.
Zonal wavefront reconstruction in quadrilateral geometry for phase measuring deflectometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lei; Xue, Junpeng; Gao, Bo
2017-06-14
There are wide applications for zonal reconstruction methods in slope-based metrology due to their good capability of reconstructing local details of the surface profile. It has been noticed in the literature that large reconstruction errors occur when zonal reconstruction methods designed for rectangular geometry are used to process slopes in a quadrilateral geometry, which is the more general geometry in phase measuring deflectometry. In this paper, we present a new idea for zonal methods in quadrilateral geometry. Instead of employing the intermediate slopes to set up height-slope equations, we consider the height increment as a more general connector to establish the height-slope relations for least-squares regression. The classical zonal methods and interpolation-assisted zonal methods are compared with our proposal. Results of both simulation and experiment demonstrate the effectiveness of the proposed idea. In implementation, the modification of the classical zonal methods is addressed. The new methods preserve many good aspects of the classical ones, such as the ability to handle a large incomplete slope dataset in an arbitrary aperture, and a low computational complexity comparable to that of the classical zonal method. Moreover, the accuracy of the new methods is much higher when integrating slopes in quadrilateral geometry.
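A toy sketch of the height-increment idea on a 1-D slope profile (an illustration under assumptions; the paper's formulation is 2-D over a quadrilateral grid, but the least-squares structure is the same). Each neighbouring pair of slope samples contributes one equation h[i+1] - h[i] = (s[i] + s[i+1])Δx/2.

import numpy as np

dx = 0.1
x = np.arange(0, 5, dx)
h_true = np.sin(x)                        # reference profile
s = np.cos(x)                             # measured slopes (noise-free here)

n = x.size
A = np.zeros((n - 1, n))
rhs = np.zeros(n - 1)
for i in range(n - 1):
    A[i, i], A[i, i + 1] = -1.0, 1.0      # height increment h[i+1] - h[i]
    rhs[i] = 0.5 * (s[i] + s[i + 1]) * dx # trapezoidal average of the two slopes

h, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # least-squares integration
h += h_true[0] - h[0]                     # fix the undetermined offset (piston)
print(f"max error: {np.abs(h - h_true).max():.1e}")  # small discretization error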
A heuristic statistical stopping rule for iterative reconstruction in emission tomography.
Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D
2013-01-01
We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that, compared with the classical method, our technique yielded a significant improvement in the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimate of the optimal stopping point for iterative reconstruction. It should thus be of practical interest, as it produces images with similar or better quality than classical post-filtered iterative reconstruction, with a mastered computation time.
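For reference, the iteration whose stopping point is being estimated is the standard MLEM update, with system matrix A = (a_ij), measured data y, and image estimate λ:

\begin{equation*}
  \lambda_j^{(k+1)}
  = \frac{\lambda_j^{(k)}}{\sum_i a_{ij}}
    \sum_i a_{ij}\, \frac{y_i}{\sum_{j'} a_{ij'}\, \lambda_{j'}^{(k)}} .
\end{equation*}

Early iterations recover the low-frequency content of the object while noise builds up as k grows, which is why a stopping rule (or post-filtering) is needed.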
Controlling lightwave in Riemann space by merging geometrical optics with transformation optics.
Liu, Yichao; Sun, Fei; He, Sailing
2018-01-11
In geometrical optical design, we only need to choose a suitable combination of lenses, prisms, and mirrors to design an optical path. It is a simple and classic method for engineers. However, fantastical optical devices such as invisibility cloaks, optical wormholes, etc. cannot be designed by geometrical optics. Transformation optics has paved the way for these complicated designs. However, controlling the propagation of light by transformation optics is not a direct design process like geometrical optics. In this study, a novel mixed method for optical design is proposed which has both the simplicity of classic geometrical optics and the flexibility of transformation optics. This mixed method overcomes the limitations of classic optical design and at the same time gives intuitive guidance for optical design by transformation optics. Three novel optical devices with fantastic functions have been designed using this mixed method, featuring asymmetrical transmission, bidirectional focusing, and bidirectional cloaking. These optical devices cannot be implemented by classic optics alone and are also too complicated to be designed by pure transformation optics. Numerical simulations based on both the ray tracing method and the full-wave simulation method are carried out to verify the performance of these three optical devices.
NASA Astrophysics Data System (ADS)
Ivanov, Sergey V.; Buzykin, Oleg G.
2016-12-01
A classical approach is applied to calculate pressure broadening coefficients of CO2 vibration-rotational spectral lines perturbed by Ar. Three types of spectra are examined: electric dipole (infrared) absorption, and isotropic and anisotropic Raman Q branches. Simple and explicit formulae of the classical impact theory are used along with exact 3D Hamilton equations for the CO2-Ar molecular motion. The calculations utilize the vibrationally independent, most accurate ab initio potential energy surface (PES) of Hutson et al., expanded in a Legendre polynomial series up to lmax = 24. A new, improved algorithm for classical rotational frequency selection is applied. The dependences of CO2 half-widths on rotational quantum number J up to J = 100 are computed for temperatures between 77 and 765 K and compared with available experimental data as well as with the results of fully quantum dynamical calculations performed on the same PES. To complete the picture, the predictions of two independent variants of the semi-classical Robert-Bonamy formalism for dipole absorption lines are included. This method, however, demonstrated poor accuracy at almost all temperatures. On the contrary, classical broadening coefficients are in excellent agreement both with measurements and with quantum results at all temperatures. The classical impact theory in its present variant can quickly and accurately produce the pressure broadening coefficients of spectral lines of linear molecules for any J value (including high Js) using a full-dimensional ab initio-based PES in cases where other computational methods are either extremely time consuming (like the quantum close coupling method) or give erroneous results (like semi-classical methods).
NASA Astrophysics Data System (ADS)
Kreis, Karsten; Kremer, Kurt; Potestio, Raffaello; Tuckerman, Mark E.
2017-12-01
Path integral-based methodologies play a crucial role for the investigation of nuclear quantum effects by means of computer simulations. However, these techniques are significantly more demanding than corresponding classical simulations. To reduce this numerical effort, we recently proposed a method, based on a rigorous Hamiltonian formulation, which restricts the quantum modeling to a small but relevant spatial region within a larger reservoir where particles are treated classically. In this work, we extend this idea and show how it can be implemented along with state-of-the-art path integral simulation techniques, including path-integral molecular dynamics, which allows for the calculation of quantum statistical properties, and ring-polymer and centroid molecular dynamics, which allow the calculation of approximate quantum dynamical properties. To this end, we derive a new integration algorithm that also makes use of multiple time-stepping. The scheme is validated via adaptive classical-path-integral simulations of liquid water. Potential applications of the proposed multiresolution method are diverse and include efficient quantum simulations of interfaces as well as complex biomolecular systems such as membranes and proteins.
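For context, the object such path-integral simulations sample is the standard ring-polymer Hamiltonian, shown here for one particle in one dimension (a textbook expression; the paper's adaptive-resolution Hamiltonian builds on, but is not identical to, this form):

\begin{equation*}
  H_P = \sum_{k=1}^{P}
  \left[ \frac{p_k^{2}}{2m}
       + \frac{1}{2}\, m\,\omega_P^{2} (q_k - q_{k+1})^{2}
       + V(q_k) \right],
  \qquad \omega_P = \frac{P}{\beta\hbar}, \quad q_{P+1} \equiv q_1,
\end{equation*}

sampled at the scaled temperature β_P = β/P. Classical sampling of H_P reproduces quantum statistics as the number of beads P grows, so the cost is roughly P times that of a classical simulation; this overhead is what the quantum/classical partitioning described above aims to reduce.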
Construction of the Second Quito Astrolabe Catalogue
NASA Astrophysics Data System (ADS)
Kolesnik, Y. B.
1994-03-01
A method for astrolabe catalogue construction is presented. It is based on classical concepts, but the model of conditional equations for the group reduction is modified, with additional parameters introduced in the stepwise regressions. The chain adjustment is neglected, and the advantages of this approach are discussed. The method has been applied to the data obtained with the astrolabe of the Quito Astronomical Observatory from 1964 to 1983. Various characteristics of the catalogue produced with this method are compared with those from the rigorous classical method. Some improvement in both systematic and random errors is outlined.
A Laplacian based image filtering using switching noise detector.
Ranjbaran, Ali; Hassan, Anwar Hasni Abu; Jafarpour, Mahboobe; Ranjbaran, Bahar
2015-01-01
This paper presents a Laplacian-based image filtering method. Using a local noise estimator function in an energy-functional minimization scheme, we show that the Laplacian, well known as an edge detection operator, can also be used for noise removal. The algorithm can be implemented on a 3x3 window and is easily tuned by the number of iterations. Image denoising reduces to adjusting each pixel's value by its Laplacian weighted by the local noise estimator. The only parameter controlling smoothness is the number of iterations. The noise reduction quality of the introduced method is evaluated and compared with classic algorithms such as Wiener and Total Variation based filters for Gaussian noise, and the method is also compared with the state-of-the-art BM3D method on several images. The algorithm proves to be simple, fast, and comparable with many classic denoising algorithms for Gaussian noise.
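A minimal sketch of the iteration described above, with each pixel adjusted by its 3x3-window Laplacian weighted by a local noise estimator; the inverse-gradient estimator and the diffusion-style sign convention are our assumptions, not the paper's exact choices:

```python
import numpy as np
from scipy.ndimage import convolve

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)  # 3x3 Laplacian window

def laplacian_denoise(img, n_iter=20, alpha=0.2, eps=1e-3):
    """Iteratively adjust each pixel by its weighted Laplacian.

    The local noise estimator used here (inverse gradient magnitude, small
    near edges) is an illustrative choice, not the one from the paper."""
    out = img.astype(float).copy()
    for _ in range(n_iter):  # smoothness is controlled by the iteration count
        lap = convolve(out, LAPLACIAN, mode='nearest')
        gy, gx = np.gradient(out)
        weight = 1.0 / (1.0 + (gx ** 2 + gy ** 2) / eps)
        out += alpha * weight * lap  # diffusion-like update (assumed sign)
    return out
```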
Sriwastava, Brijesh Kumar; Basu, Subhadip; Maulik, Ujjwal
2015-10-01
Protein-protein interaction (PPI) site prediction helps to ascertain the interface residues that participate in interaction processes. A fuzzy support vector machine (F-SVM) is proposed as an effective method to solve this problem, and we show that the performance of the classical SVM can be enhanced with the help of an interaction-affinity based fuzzy membership function. The performances of both SVM and F-SVM are evaluated on PPI databases of the Homo sapiens and E. coli organisms, and the statistical significance of the developed method over the classical SVM and other fuzzy membership-based SVM methods available in the literature is estimated. Our membership function uses the residue-level interaction affinity scores for each pair of positive and negative sequence fragments. The average AUC scores in the 10-fold cross-validation experiments are measured as 79.94% and 80.48% for the Homo sapiens and E. coli organisms, respectively. On the independent test datasets, AUC scores of 76.59% and 80.17%, respectively, are obtained for the two organisms. In almost all cases, the developed F-SVM method improves on the performances obtained by the corresponding classical SVM and the other classifiers available in the literature.
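The fuzzy-membership idea can be approximated with per-sample weights in an off-the-shelf SVM; the sketch below uses scikit-learn's sample_weight as a stand-in for the interaction-affinity memberships (features, labels, and memberships are random placeholders, not the paper's data):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))      # placeholder sequence-fragment features
y = rng.integers(0, 2, size=200)    # placeholder interface/non-interface labels

# Fuzzy memberships in (0, 1]; in the paper these come from residue-level
# interaction-affinity scores -- here random values stand in.
membership = rng.uniform(0.2, 1.0, size=200)

clf = SVC(kernel='rbf', C=1.0, gamma='scale')
clf.fit(X, y, sample_weight=membership)  # weighted hinge loss ~ F-SVM idea
print(clf.score(X, y))
```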
Liu, Bingqi; Wei, Shihui; Su, Guohua; Wang, Jiping; Lu, Jiazhen
2018-01-01
The navigation accuracy of the inertial navigation system (INS) can be greatly improved when inertial measurement unit (IMU) errors, such as gyro drifts and accelerometer biases, are effectively calibrated and compensated. To reduce the requirement for turntable precision in the classical calibration method, a continuous dynamic self-calibration method based on a three-axis rotating frame for the hybrid inertial navigation system is presented. First, by selecting a suitable IMU frame, the error models of the accelerometers and gyros are established. Then, by taking the navigation errors during rolling as the observations, the overall twenty-one error parameters of the hybrid inertial navigation system (HINS) are identified based on the calculation of the intermediate parameter. An actual experiment verifies that the method can identify all error parameters of HINS and that it has accuracy equivalent to classical calibration on a high-precision turntable. In addition, the method is rapid, simple and feasible. PMID:29695041
Liu, Bingqi; Wei, Shihui; Su, Guohua; Wang, Jiping; Lu, Jiazhen
2018-04-24
The navigation accuracy of the inertial navigation system (INS) can be greatly improved when inertial measurement unit (IMU) errors, such as gyro drifts and accelerometer biases, are effectively calibrated and compensated. To reduce the requirement for turntable precision in the classical calibration method, a continuous dynamic self-calibration method based on a three-axis rotating frame for the hybrid inertial navigation system is presented. First, by selecting a suitable IMU frame, the error models of the accelerometers and gyros are established. Then, by taking the navigation errors during rolling as the observations, the overall twenty-one error parameters of the hybrid inertial navigation system (HINS) are identified based on the calculation of the intermediate parameter. An actual experiment verifies that the method can identify all error parameters of HINS and that it has accuracy equivalent to classical calibration on a high-precision turntable. In addition, the method is rapid, simple and feasible.
Stability analysis of spacecraft power systems
NASA Technical Reports Server (NTRS)
Halpin, S. M.; Grigsby, L. L.; Sheble, G. B.; Nelms, R. M.
1990-01-01
The problems in applying standard electric utility models, analyses, and algorithms to the study of the stability of spacecraft power conditioning and distribution systems are discussed. Both single-phase and three-phase systems are considered. Of particular concern are the load and generator models that are used in terrestrial power system studies, as well as the standard assumptions of load and topological balance that lead to the use of the positive sequence network. The standard assumptions regarding relative speeds of subsystem dynamic responses that are made in the classical transient stability algorithm, which forms the backbone of utility-based studies, are examined. The applicability of these assumptions to a spacecraft power system stability study is discussed in detail. In addition to the classical indirect method, the applicability of Liapunov's direct methods to the stability determination of spacecraft power systems is discussed. It is pointed out that while the proposed method uses a solution process similar to the classical algorithm, the models used for the sources, loads, and networks are, in general, more accurate. Some preliminary results are given for a linear-graph, state-variable-based modeling approach to the study of the stability of space-based power distribution networks.
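For a linearized model, Liapunov's direct method reduces to solving a Lyapunov equation and checking positive definiteness; a minimal sketch with an illustrative 2x2 state matrix (not a model of an actual spacecraft power system):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative linearized state matrix of a small power-conditioning model.
A = np.array([[-1.0,  2.0],
              [-3.0, -4.0]])

# Liapunov's direct method for linear systems: solve A^T P + P A = -Q, Q > 0.
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)  # solves A^T P + P A = -Q

# The origin is asymptotically stable iff P is positive definite.
print(np.all(np.linalg.eigvalsh(P) > 0))
```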
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shalashilin, Dmitrii V.; Burghardt, Irene
2008-08-28
In this article, two coherent-state based methods of quantum propagation, namely, coupled coherent states (CCS) and Gaussian-based multiconfiguration time-dependent Hartree (G-MCTDH), are put on the same formal footing, using a derivation from a variational principle in Lagrangian form. By this approach, oscillations of the classical-like Gaussian parameters and oscillations of the quantum amplitudes are formally treated in an identical fashion. We also suggest a new approach denoted here as coupled coherent states trajectories (CCST), which completes the family of Gaussian-based methods. Using the same formalism for all related techniques allows their systematization and a straightforward comparison of their mathematical structure and cost.
Integrability in AdS/CFT correspondence: quasi-classical analysis
NASA Astrophysics Data System (ADS)
Gromov, Nikolay
2009-06-01
In this review, we consider a quasi-classical method applicable to integrable field theories which is based on a classical integrable structure—the algebraic curve. We apply it to the Green-Schwarz superstring on the AdS5 × S5 space. We show that the proposed method reproduces perfectly the earlier results obtained by expanding the string action for some simple classical solutions. The construction is explicitly covariant and is not based on a particular parameterization of the fields and as a result is free from ambiguities. On the other hand, the finite size corrections in some particularly important scaling limit are studied in this paper for a system of Bethe equations. For the general superalgebra su(N|K), the result for the 1/L corrections is obtained. We find an integral equation which describes these corrections in a closed form. As an application, we consider the conjectured Beisert-Staudacher (BS) equations with the Hernandez-Lopez dressing factor where the finite size corrections should reproduce quasi-classical results around a general classical solution. Indeed, we show that our integral equation can be interpreted as a sum of all physical fluctuations and thus prove the complete one-loop consistency of the BS equations. We demonstrate that any local conserved charge (including the AdS energy) computed from the BS equations is indeed given at one loop by the sum of the charges of fluctuations with an exponential precision for large S5 angular momentum of the string. As an independent result, the BS equations in an su(2) sub-sector were derived from Zamolodchikov's S-matrix. The paper is based on the author's PhD thesis.
NASA Astrophysics Data System (ADS)
Xu, Yang; Song, Kai; Shi, Qiang
2018-03-01
The hydride transfer reaction catalyzed by dihydrofolate reductase is studied using a recently developed mixed quantum-classical method to investigate the nuclear quantum effects on the reaction. Molecular dynamics simulation is first performed based on a two-state empirical valence bond potential to map the atomistic model to an effective double-well potential coupled to a harmonic bath. In the mixed quantum-classical simulation, the hydride degree of freedom is quantized, and the effective harmonic oscillator modes are treated classically. It is shown that the hydride transfer reaction rate using the mapped effective double-well/harmonic-bath model is dominated by the contribution from the ground vibrational state. Further comparison with the adiabatic reaction rate constant based on the Kramers theory confirms that the reaction is primarily vibrationally adiabatic, which agrees well with the high transmission coefficients found in previous theoretical studies. The calculated kinetic isotope effect is also consistent with the experimental and recent theoretical results.
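For context, the Kramers adiabatic rate referred to above takes a simple closed form in the high-friction (Smoluchowski) limit; the sketch below evaluates it with illustrative parameters, not the DHFR values from the paper:

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def kramers_rate_high_friction(omega0, omegab, gamma, barrier, T):
    """Kramers escape rate in the high-friction (Smoluchowski) limit:
       k = (omega0 * omegab) / (2 * pi * gamma) * exp(-barrier / (kB * T)).
    omega0: well frequency, omegab: barrier frequency, gamma: friction (1/s)."""
    return omega0 * omegab / (2 * np.pi * gamma) * np.exp(-barrier / (kB * T))

# Illustrative numbers only: a 10 kBT barrier at room temperature.
print(kramers_rate_high_friction(1e13, 5e12, 1e14, 10 * kB * 300, 300.0))
```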
NASA Astrophysics Data System (ADS)
Phinyomark, A.; Hu, H.; Phukpattaranont, P.; Limsakul, C.
2012-01-01
The classification of upper-limb movements based on surface electromyography (EMG) signals is an important issue in the control of assistive devices and rehabilitation systems. Increasing the number of EMG channels and features in order to increase the number of control commands can yield a high dimensional feature vector. To cope with the accuracy and computation problems associated with high dimensionality, it is commonplace to apply a processing step that transforms the data to a space of significantly lower dimensions with only a limited loss of useful information. Linear discriminant analysis (LDA) has been successfully applied as an EMG feature projection method. Recently, a number of extended LDA-based algorithms have been proposed, which are more competitive than classical LDA in terms of both classification accuracy and computational cost/time. This paper presents the findings of a comparative study of classical LDA and five extended LDA methods. From a quantitative comparison based on seven multi-feature sets, three extended LDA-based algorithms, consisting of uncorrelated LDA, orthogonal LDA and orthogonal fuzzy neighborhood discriminant analysis, produce better class separability when compared with a baseline system (without feature projection), principal component analysis (PCA), and classical LDA. Based on 7-dimensional time-domain and time-scale feature vectors, these methods achieved 95.2% and 93.2% classification accuracy, respectively, using a linear discriminant classifier.
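A minimal sketch of the baseline pipeline described above, classical LDA as a feature projection followed by a linear discriminant classifier, using scikit-learn with random placeholder EMG features and eight movement classes:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 28))    # placeholder multi-channel EMG features
y = rng.integers(0, 8, size=600)  # placeholder movement classes

# Project to at most (n_classes - 1) = 7 dimensions, then classify with a
# linear discriminant classifier, mirroring the baseline pipeline above.
pipe = make_pipeline(LinearDiscriminantAnalysis(n_components=7),
                     LinearDiscriminantAnalysis())
print(cross_val_score(pipe, X, y, cv=5).mean())
```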
Cerruela García, G; García-Pedrajas, N; Luque Ruiz, I; Gómez-Nieto, M Á
2018-03-01
This paper proposes a method for molecular activity prediction in QSAR studies using ensembles of classifiers constructed by means of two supervised subspace projection methods, namely nonparametric discriminant analysis (NDA) and hybrid discriminant analysis (HDA). We studied the performance of the proposed ensembles compared to classical ensemble methods using four molecular datasets and eight different models for the representation of the molecular structure. Using several measures and statistical tests for classifier comparison, we observe that our proposal improves the classification results with respect to classical ensemble methods. Therefore, we show that ensembles constructed using supervised subspace projections offer an effective way of creating classifiers in cheminformatics.
Anesthetic level prediction using a QCM based E-nose.
Saraoğlu, H M; Ozmen, A; Ebeoğlu, M A
2008-06-01
Anesthetic level measurement is a real-time process. This paper presents a new method to measure the anesthesia level in hospital surgery rooms using a QCM-based E-Nose. The E-Nose system contains an array of eight differently coated QCM sensors. In this work, the best linearly reacting sensor is selected from the array and used in the experiments. The sensor response time observed with the classic method was about 15 min, which is impractical for on-line anesthetic level detection during surgery. The sensor transient data were therefore analyzed to reach a decision earlier than the classical method allows. As a result, it was found that the slope of the transient data gives valuable information for predicting the anesthetic level. With this new method, we were able to find the correct anesthetic levels within 100 s.
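A sketch of the slope-based early decision: fit a least-squares slope to the first 100 s of the sensor transient and map it to a level; the synthetic exponential transient and the linear slope-to-level calibration are our assumptions, not the paper's data:

```python
import numpy as np

def early_slope(t, f, window=100.0):
    """Least-squares slope of the sensor transient over the first `window` s."""
    m = t <= window
    return np.polyfit(t[m], f[m], 1)[0]

# Synthetic transient: exponential approach to steady state (tau ~ 5 min),
# standing in for the real QCM frequency-shift response.
t = np.linspace(0, 900, 901)
f = -120.0 * (1 - np.exp(-t / 300.0)) \
    + np.random.default_rng(2).normal(0, 0.5, t.size)

slope = early_slope(t, f)
level = -2.5 * slope  # hypothetical linear calibration from slope to level
print(slope, level)
```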
A new physical method to assess handle properties of fabrics made from wood-based fibers
NASA Astrophysics Data System (ADS)
Abu-Rous, M.; Liftinger, E.; Innerlohinger, J.; Malengier, B.; Vasile, S.
2017-10-01
In this work, the handfeel of fabrics made of wood-based fibers such as viscose, modal and Lyocell was investigated in relation to cotton fabrics, applying the Tissue Softness Analyzer (TSA) method in comparison with other classical methods. Two different groups of textile constructions were investigated, and the validity of TSA in assessing the textile softness of these constructions was tested. TSA results were compared to human hand evaluation as well as to classical physical measurements such as the drape coefficient, ring pull-through and Handle-o-meter, and to a newer device, the Fabric Touch Tester (FTT). The physical methods and the human hand assessments mostly agreed on the softest and smoothest range, but gave different rankings for the harder and rougher fabrics. The TSA rankings of softness and smoothness corresponded to the rankings by the other physical methods as well as to the human hand feel for the basic textile constructions.
Cho, Pyo Yun; Na, Byoung-Kuk; Mi Choi, Kyung; Kim, Jin Su; Cho, Shin-Hyeong; Lee, Won-Ja; Lim, Sung-Bin; Cha, Seok Ho; Park, Yun-Kyu; Pak, Jhang Ho; Lee, Hyeong-Woo; Hong, Sung-Jong; Kim, Tong-Soo
2013-01-01
Microscopic examination of eggs of parasitic helminths in stool samples has been the most widely used classical diagnostic method for infections, but the small size and low numbers of eggs in stool samples often hamper the diagnosis of helminthic infections by classical microscopic examination. Moreover, it is also difficult to differentiate parasite eggs by the classical method if they have similar morphological characteristics. In this study, we developed a rapid and sensitive polymerase chain reaction (PCR)-based molecular diagnostic method for detection of Clonorchis sinensis eggs in stool samples. Nine primers were designed based on the long-terminal repeat (LTR) of the C. sinensis retrotransposon1 (CsRn1) gene, and seven PCR primer sets were paired. Polymerase chain reaction with each primer pair produced specific amplicons for C. sinensis, but not for other trematodes including Metagonimus yokogawai and Paragonimus westermani. In particular, three primer sets were able to detect 10 C. sinensis eggs and were applicable for amplifying specific amplicons from DNA samples purified from the stool of C. sinensis-infected patients. This PCR method could be useful for the diagnosis of C. sinensis infections in human stool samples with a high level of specificity and sensitivity. PMID:23916334
Zhang, Yong; Shi, Chaojun; Brennecke, Joan F; Maginn, Edward J
2014-06-12
A combined classical molecular dynamics (MD) and ab initio MD (AIMD) method was developed for the calculation of electrochemical windows (ECWs) of ionic liquids. In the method, the liquid phase of ionic liquid is explicitly sampled using classical MD. The electrochemical window, estimated by the energy difference between the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO), is calculated at the density functional theory (DFT) level based on snapshots obtained from classical MD trajectories. The snapshots were relaxed using AIMD and quenched to their local energy minima, which assures that the HOMO/LUMO calculations are based on stable configurations on the same potential energy surface. The new procedure was applied to a group of ionic liquids for which the ECWs were also experimentally measured in a self-consistent manner. It was found that the predicted ECWs not only agree with the experimental trend very well but also the values are quantitatively accurate. The proposed method provides an efficient way to compare ECWs of ionic liquids in the same context, which has been difficult in experiments or simulation due to the fact that ECW values sensitively depend on experimental setup and conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nimbalkar, Sachin U.; Wenning, Thomas J.; Guo, Wei
In the United States, manufacturing facilities accounted for about 32% of total domestic energy consumption in 2014. Robust energy tracking methodologies are critical to understanding energy performance in manufacturing facilities. Due to its simplicity and intuitiveness, the classic energy intensity method (i.e. the ratio of total energy use over total production) is the most widely adopted. However, the classic energy intensity method does not take into account the variation of other relevant parameters (i.e. product type, feedstock type, weather, etc.). Furthermore, the energy intensity method assumes that the facilities' base energy consumption (energy use at zero production) is zero, which rarely holds true. Therefore, it is commonly recommended to utilize regression models rather than the energy intensity approach for tracking improvements at the facility level. Unfortunately, many energy managers have difficulties understanding why regression models are statistically better than the classic energy intensity method. While anecdotes and qualitative information may convince some, many have major reservations about the accuracy of regression models and whether it is worth the time and effort to gather data and build quality regression models. This paper will explain why regression models are theoretically and quantitatively more accurate for tracking energy performance improvements. Based on the analysis of data from 114 manufacturing plants over 12 years, this paper will present quantitative results on the importance of utilizing regression models over the energy intensity methodology. This paper will also document scenarios where regression models do not have significant relevance over the energy intensity method.
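The core argument, that the intensity ratio forces a zero intercept while a regression model estimates the base load, can be seen in a few lines; the monthly data and the assumed base load of 400 units below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
production = rng.uniform(50, 150, size=36)  # 36 months of production
base_load = 400.0                           # energy use at zero production
energy = base_load + 2.5 * production + rng.normal(0, 20, 36)

# Classic energy intensity assumes zero base load: E = intensity * P.
intensity = energy.sum() / production.sum()

# Regression model allows a nonzero intercept: E = b0 + b1 * P.
b1, b0 = np.polyfit(production, energy, 1)

for p in (60.0, 140.0):
    # The intensity model mis-predicts at both production extremes.
    print(p, intensity * p, b0 + b1 * p)
```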
Topological and Orthomodular Modeling of Context in Behavioral Science
NASA Astrophysics Data System (ADS)
Narens, Louis
2017-02-01
Two non-boolean methods are discussed for modeling context in behavioral data and theory. The first is based on intuitionistic logic, which is similar to classical logic except that not every event has a complement. Its probability theory is also similar to classical probability theory except that the definition of probability function needs to be generalized to unions of events instead of applying only to unions of disjoint events. The generalization is needed, because intuitionistic event spaces may not contain enough disjoint events for the classical definition to be effective. The second method develops a version of quantum logic for its underlying probability theory. It differs from the Hilbert space logic used in quantum mechanics as a foundation for quantum probability theory in a variety of ways. John von Neumann and others have commented about the lack of a relative frequency approach and a rational foundation for this probability theory. This article argues that its version of quantum probability theory does not have such issues. The method based on intuitionistic logic is useful for modeling cognitive interpretations that vary with context, for example, the mood of the decision maker, the context produced by the influence of other items in a choice experiment, etc. The method based on this article's quantum logic is useful for modeling probabilities across contexts, for example, how probabilities of events from different experiments are related.
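One standard way to state the generalization alluded to above is to replace additivity over disjoint events with modularity over arbitrary pairs of events; this reading of the intended axiom is our assumption:

```latex
% Classical additivity applies only to disjoint events:
%   A \cap B = \varnothing \implies P(A \cup B) = P(A) + P(B).
% On a non-boolean event lattice one instead imposes modularity for all
% pairs of events (our reading of the generalization described above):
\[
  P(A \vee B) + P(A \wedge B) \;=\; P(A) + P(B)
  \qquad \text{for all events } A, B.
\]
```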
[Thought and method of classic formulae in treatment of chronic cough].
Su, Ke-Lei; Zhang, Ye-Qing
2018-06-01
Chronic cough is a common clinical disease with complex etiology, which is easily misdiagnosed and mistreated. A chronic cough guideline has been developed based on the modern anatomical etiology classification, and it may improve the level of diagnosis and treatment. Common causes of chronic cough are as follows: cough variant asthma, upper airway cough syndrome, eosinophilic bronchitis, gastroesophageal reflux-related cough, post-infectious cough, etc. There is a long history and rich experience in the treatment of cough in traditional Chinese medicine, which is characterized by syndrome differentiation. The four elements of pathogenesis for chronic cough are wind, phlegm, fire, and deficiency. Classic formulae are widely used in the treatment of chronic cough, and the focus is on prescriptions corresponding to syndromes. This article attempts to explore the thought and method of classic formulae in the treatment of chronic cough from three perspectives: differentiation of etiology, pathogenesis, and formula-syndrome correspondence. Three medical cases are presented at the end to support the approach. Copyright© by the Chinese Pharmaceutical Association.
Frequent statistics of link-layer bit stream data based on AC-IM algorithm
NASA Astrophysics Data System (ADS)
Cao, Chenghong; Lei, Yingke; Xu, Yiming
2017-08-01
At present, there is much research on data processing using classical pattern matching and its improved algorithms, but little on frequent statistics of link-layer bit stream data. This paper adopts a frequent-statistics method for link-layer bit stream data based on the AC-IM algorithm, since classical multi-pattern matching algorithms such as the AC algorithm have high computational complexity and low efficiency and cannot be applied directly to binary bit stream data. Without missing any matches, the method's maximum jump distance in the pattern tree is the length of the shortest pattern string plus 3. Theoretical analysis of the algorithm's construction is given first, and experimental results then show that the algorithm adapts to the binary bit stream environment and extracts frequent sequences accurately, with an obvious effect. Meanwhile, compared with the classical AC algorithm and other improved algorithms, the AC-IM algorithm has a greater maximum jump distance and lower time consumption.
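For reference, a compact implementation of the classical Aho-Corasick (AC) baseline that AC-IM improves upon, applied here to 0/1 strings; the AC-IM jump optimization itself is not reproduced:

```python
from collections import deque

def build_ac(patterns):
    """Classical Aho-Corasick automaton: goto trie, failure links, outputs."""
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:                  # build the goto (pattern) trie
        s = 0
        for ch in p:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(p)
    q = deque(goto[0].values())         # depth-1 states fail to the root
    while q:                            # BFS to set failure links
        r = q.popleft()
        for ch, s in goto[r].items():
            q.append(s)
            f = fail[r]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[s] = goto[f].get(ch, 0)
            out[s] |= out[fail[s]]      # inherit matches ending at the suffix
    return goto, fail, out

def count_matches(text, patterns):
    """Count occurrences of each pattern in a 0/1 text string."""
    goto, fail, out = build_ac(patterns)
    counts = {p: 0 for p in patterns}
    s = 0
    for ch in text:
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for p in out[s]:
            counts[p] += 1
    return counts

print(count_matches("0110101101", ["011", "1101", "0"]))
# -> {'011': 2, '1101': 2, '0': 4}
```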
A robust interpolation method for constructing digital elevation models from remote sensing data
NASA Astrophysics Data System (ADS)
Chen, Chuanfa; Liu, Fengying; Li, Yanyan; Yan, Changqing; Liu, Guolin
2016-09-01
A digital elevation model (DEM) derived from remote sensing data often suffers from outliers due to various reasons such as the physical limitations of sensors and the low contrast of terrain textures. In order to reduce the effect of outliers on DEM construction, a robust algorithm of multiquadric (MQ) methodology based on M-estimators (MQ-M) is proposed. MQ-M adopts an adaptive weight function with three parts: the weight is zero for large errors, one for small errors, and quadratic in between. A mathematical surface was employed to comparatively analyze the robustness of MQ-M, and its performance was compared with those of the classical MQ and a recently developed robust MQ method based on least absolute deviation (MQ-L). Numerical tests show that MQ-M is comparable to the classical MQ and superior to MQ-L when sample points follow normal and Laplace distributions, and in the presence of outliers the former is more accurate than the latter. A real-world example of DEM construction using stereo images indicates that, compared with classical interpolation methods such as natural neighbor (NN), ordinary kriging (OK), ANUDEM, MQ-L and MQ, MQ-M has a better ability to preserve subtle terrain features. MQ-M replaces the thin plate spline (TPS) for reference DEM construction to assess its contribution to our recently developed multiresolution hierarchical classification method (MHC). Classifying the 15 groups of benchmark datasets provided by the ISPRS Commission demonstrates that MQ-M-based MHC is more accurate than MQ-L-based and TPS-based MHCs. MQ-M has high potential for DEM construction.
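A sketch of the three-part adaptive weight described above: one for small residuals, a quadratic taper for moderate ones, zero for outliers; the thresholds and the exact taper shape are our assumptions:

```python
import numpy as np

def m_weight(r, a=1.0, b=2.5):
    """Three-part M-estimator weight: 1 for |r| <= a, a quadratic taper for
    a < |r| <= b, and 0 beyond b (thresholds a, b are assumed values)."""
    r = np.abs(r)
    return np.where(r <= a, 1.0,
                    np.where(r <= b, ((b - r) / (b - a)) ** 2, 0.0))

print(m_weight(np.array([0.2, 1.5, 4.0])))  # -> [1.0, 0.444..., 0.0]
```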
Classical and neural methods of image sequence interpolation
NASA Astrophysics Data System (ADS)
Skoneczny, Slawomir; Szostakowski, Jaroslaw
2001-08-01
An image interpolation problem is encountered in many areas. Examples include interpolation in the coding/decoding process for transmission purposes, reconstruction of a full frame from two interlaced sub-frames in normal TV or HDTV, and reconstruction of missing frames in old, damaged cinematic sequences. In this paper an overview of interframe interpolation methods is presented. Both direct and motion-compensated interpolation techniques are illustrated with examples. The methodology can be either classical or based on neural networks, depending on the demands of the specific interpolation problem.
Application of Classical and Lie Transform Methods to Zonal Perturbation in the Artificial Satellite
NASA Astrophysics Data System (ADS)
San-Juan, J. F.; San-Martin, M.; Perez, I.; Lopez-Ochoa, L. M.
2013-08-01
A scalable second-order analytical orbit propagator program is being developed. This analytical orbit propagator combines modern perturbation methods, based on the canonical frame of the Lie transform, with classical perturbation methods, chosen according to the orbit type or the requirements of a space mission, such as catalog maintenance operations, long period evolution, and so on. As a first step in the validation of part of our orbit propagator, in this work we only consider the perturbation produced by the zonal harmonic coefficients of the Earth's gravity potential, so that it is possible to analyze the behaviour of the perturbation methods involved in the corresponding analytical theories.
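As a small worked example of the zonal perturbation being modeled, the sketch below evaluates the standard first-order secular rates of the node and perigee due to J2; this is textbook material offered for orientation, not the propagator's actual theory:

```python
import numpy as np

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
RE = 6378137.0       # Earth's equatorial radius, m
J2 = 1.08262668e-3   # second zonal harmonic

def j2_secular_rates(a, e, i):
    """First-order secular rates (rad/s) of RAAN and argument of perigee
    caused by the J2 zonal harmonic, for semi-major axis a, eccentricity e,
    and inclination i (rad)."""
    n = np.sqrt(MU / a ** 3)  # mean motion
    p = a * (1 - e ** 2)      # semi-latus rectum
    k = 1.5 * n * J2 * (RE / p) ** 2
    raan_dot = -k * np.cos(i)
    argp_dot = 0.5 * k * (5 * np.cos(i) ** 2 - 1)
    return raan_dot, argp_dot

# Sun-synchronous-like example: 800 km circular orbit, i = 98.6 deg;
# the node should precess by roughly +1 deg/day.
rates = j2_secular_rates(RE + 800e3, 0.001, np.radians(98.6))
print(np.degrees(rates) * 86400)  # deg/day
```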
de Bock, Élodie; Hardouin, Jean-Benoit; Blanchin, Myriam; Le Neel, Tanguy; Kubis, Gildas; Bonnaud-Antignac, Angélique; Dantan, Étienne; Sébille, Véronique
2016-10-01
The objective was to compare classical test theory and Rasch-family models derived from item response theory for the analysis of longitudinal patient-reported outcomes data with possibly informative intermittent missing items. A simulation study was performed in order to assess and compare the performance of classical test theory and the Rasch model in terms of bias, control of the type I error, and power of the test of the time effect. The type I error was controlled for both classical test theory and the Rasch model, whether data were complete or some items were missing. Both methods were unbiased and displayed similar power with complete data. When items were missing, the Rasch model remained unbiased and displayed higher power than classical test theory. The Rasch model performed better than the classical test theory approach for the analysis of longitudinal patient-reported outcomes with possibly informative intermittent missing items, mainly in terms of power. This study highlights the interest of Rasch-based models in clinical research and epidemiology for the analysis of incomplete patient-reported outcomes data. © The Author(s) 2013.
Quantum-classical correspondence in the vicinity of periodic orbits
NASA Astrophysics Data System (ADS)
Kumari, Meenu; Ghose, Shohini
2018-05-01
Quantum-classical correspondence in chaotic systems is a long-standing problem. We describe a method to quantify Bohr's correspondence principle and calculate the size of quantum numbers for which we can expect to observe quantum-classical correspondence near periodic orbits of Floquet systems. Our method shows how the stability of classical periodic orbits affects quantum dynamics. We demonstrate our method by analyzing quantum-classical correspondence in the quantum kicked top (QKT), which exhibits both regular and chaotic behavior. We use our correspondence conditions to identify signatures of classical bifurcations even in a deep quantum regime. Our method can be used to explain the breakdown of quantum-classical correspondence in chaotic systems.
Monteiro, C A
1991-01-01
Two methods for estimating the prevalence of growth retardation in a population are evaluated: the classical method, which is based on the proportion of children whose height is more than 2 standard deviations below the expected mean of a reference population; and a new method recently proposed by Mora, which is based on the whole height distribution of the observed and reference populations. Application of the classical method to several simulated populations leads to the conclusion that in most situations in developing countries the prevalence of growth retardation is grossly underestimated, and reflects only the presence of severe growth deficits. A second constraint of this method is a marked reduction of the relative differentials between more and less exposed strata. Application of Mora's method to the same simulated populations reduced but did not eliminate these constraints. A novel method for estimating the prevalence of growth retardation, also based on the whole height distribution of the observed and reference populations, is then described and evaluated. This method produces better estimates of the true prevalence of growth retardation with no reduction in relative differentials.
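A sketch contrasting the classical cut-off estimator with a naive whole-distribution-flavoured correction on a simulated population; the correction shown is illustrative only and is neither Mora's nor the author's published estimator. With 30% of children shifted down by 1.8 SD (true prevalence 30%), the classical estimate comes out near 14%, illustrating the underestimation discussed above:

```python
import numpy as np
from scipy.stats import norm

def classical_prevalence(z):
    """Classical method: share of children below -2 SD of the reference."""
    return np.mean(z < -2.0)

def excess_below_cutoff(z):
    """Observed share below -2 SD minus the ~2.28% expected in a
    well-nourished reference population (illustrative correction only)."""
    return max(0.0, np.mean(z < -2.0) - norm.cdf(-2.0))

# Simulated population: 30% of children shifted down by 1.8 SD.
rng = np.random.default_rng(4)
z = np.where(rng.uniform(size=10000) < 0.30,
             rng.normal(-1.8, 1.0, 10000), rng.normal(0.0, 1.0, 10000))
print(classical_prevalence(z), excess_below_cutoff(z))
```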
Modification of Classical SPM for Slightly Rough Surface Scattering with Low Grazing Angle Incidence
NASA Astrophysics Data System (ADS)
Guo, Li-Xin; Wei, Guo-Hui; Kim, Cheyoung; Wu, Zhen-Sen
2005-11-01
Based on the impedance/admittance rough boundaries, the reflection coefficients and the scattering cross section with low grazing angle incidence are obtained for both VV and HH polarizations. The error of the classical perturbation method at grazing angle is overcome for the vertical polarization at a rough Neumann boundary of infinite extent. The derivation of the formulae and the numerical results show that the backscattering cross section depends on the grazing angle to the fourth power for both Neumann and Dirichlet boundary conditions with low grazing angle incidence. Our results reduce to those of the classical small perturbation method when the Neumann and Dirichlet boundary conditions are neglected. The project was supported by the National Natural Science Foundation of China under Grant No. 60101001 and the National Defense Foundation of China.
Classical methods and modern analysis for studying fungal diversity
John Paul Schmit
2005-01-01
In this chapter, we examine the use of classical methods to study fungal diversity. Classical methods rely on the direct observation of fungi, rather than sampling fungal DNA. We summarize a wide variety of classical methods, including direct sampling of fungal fruiting bodies, incubation of substrata in moist chambers, culturing of endophytes, and particle plating. We...
Classical Methods and Modern Analysis for Studying Fungal Diversity
J. P. Schmit; D. J. Lodge
2005-01-01
In this chapter, we examine the use of classical methods to study fungal diversity. Classical methods rely on the direct observation of fungi, rather than sampling fungal DNA. We summarize a wide variety of classical methods, including direct sampling of fungal fruiting bodies, incubation of substrata in moist chambers, culturing of endophytes, and particle plating. We...
Depeursinge, Adrien; Chin, Anne S.; Leung, Ann N.; Terrone, Donato; Bristow, Michael; Rosen, Glenn; Rubin, Daniel L.
2014-01-01
Objectives: We propose a novel computational approach for the automated classification of classic versus atypical usual interstitial pneumonia (UIP). Materials and Methods: 33 patients with UIP were enrolled in this study. They were classified as classic versus atypical UIP by a consensus of two thoracic radiologists with more than 15 years of experience, using the American Thoracic Society evidence-based guidelines for CT diagnosis of UIP. Two cardiothoracic fellows with one year of subspecialty training provided independent readings. The system is based on regional characterization of the morphological tissue properties of lung using volumetric texture analysis of multiple detector CT images. A simple digital atlas with 36 lung subregions is used to locate texture properties, from which the responses of multi-directional Riesz wavelets are obtained. Machine learning is used to aggregate and to map the regional texture attributes to a simple score that can be used to stratify patients with UIP into classic and atypical subtypes. Results: We compared the predictions based on regional volumetric texture analysis with the ground truth established by expert consensus. The area under the receiver operating characteristic curve of the proposed score was estimated to be 0.81 using a leave-one-patient-out cross-validation, with high specificity for classic UIP. The performance of our automated method was found to be similar to that of the two fellows and to the agreement between experienced chest radiologists reported in the literature. However, the errors of our method and the fellows occurred on different cases, which suggests that combining human and computerized evaluations may be synergistic. Conclusions: Our results are encouraging and suggest that an automated system may be useful in routine clinical practice as a diagnostic aid for identifying patients with complex lung disease such as classic UIP, obviating the need for invasive surgical lung biopsy and its associated risks. PMID:25551822
Classical and Bayesian Seismic Yield Estimation: The 1998 Indian and Pakistani Tests
NASA Astrophysics Data System (ADS)
Shumway, R. H.
2001-10-01
- The nuclear tests in May, 1998, in India and Pakistan have stimulated a renewed interest in yield estimation, based on limited data from uncalibrated test sites. We study here the problem of estimating yields using classical and Bayesian methods developed by Shumway (1992), utilizing calibration data from the Semipalatinsk test site and measured magnitudes for the 1998 Indian and Pakistani tests given by Murphy (1998). Calibration is done using multivariate classical or Bayesian linear regression, depending on the availability of measured magnitude-yield data and prior information. Confidence intervals for the classical approach are derived applying an extension of Fieller's method suggested by Brown (1982). In the case where prior information is available, the posterior predictive magnitude densities are inverted to give posterior intervals for yield. Intervals obtained using the joint distribution of magnitudes are comparable to the single-magnitude estimates produced by Murphy (1998) and reinforce the conclusion that the announced yields of the Indian and Pakistani tests were too high.
Classical and Bayesian Seismic Yield Estimation: The 1998 Indian and Pakistani Tests
NASA Astrophysics Data System (ADS)
Shumway, R. H.
The nuclear tests in May, 1998, in India and Pakistan have stimulated a renewed interest in yield estimation, based on limited data from uncalibrated test sites. We study here the problem of estimating yields using classical and Bayesian methods developed by Shumway (1992), utilizing calibration data from the Semipalatinsk test site and measured magnitudes for the 1998 Indian and Pakistani tests given by Murphy (1998). Calibration is done using multivariate classical or Bayesian linear regression, depending on the availability of measured magnitude-yield data and prior information. Confidence intervals for the classical approach are derived applying an extension of Fieller's method suggested by Brown (1982). In the case where prior information is available, the posterior predictive magnitude densities are inverted to give posterior intervals for yield. Intervals obtained using the joint distribution of magnitudes are comparable to the single-magnitude estimates produced by Murphy (1998) and reinforce the conclusion that the announced yields of the Indian and Pakistani tests were too high.
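The classical calibration step can be sketched as fitting a magnitude-yield law on calibration events and inverting it for a new observation; the data and the mb-yield coefficients below are synthetic stand-ins, not Semipalatinsk values:

```python
import numpy as np

# Synthetic calibration data standing in for magnitude-yield pairs.
rng = np.random.default_rng(5)
log_yield = rng.uniform(0.0, 2.5, 30)                  # log10(yield in kt)
mb = 4.45 + 0.75 * log_yield + rng.normal(0, 0.1, 30)  # assumed mb-yield law

b, a = np.polyfit(log_yield, mb, 1)  # calibrate: mb = a + b * log10(Y)

def invert_yield(mb_obs):
    """Classical point estimate of yield (kt) from an observed magnitude."""
    return 10 ** ((mb_obs - a) / b)

print(invert_yield(5.2))  # yield estimate for a hypothetical mb = 5.2
```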
NASA Astrophysics Data System (ADS)
Zhou, Chi-Chun; Dai, Wu-Sheng
2018-02-01
In statistical mechanics, the thermodynamic quantities of a system with a fixed number of particles, e.g. a finite-size system, must strictly speaking be calculated in the canonical ensemble. Nevertheless, the calculation of the canonical partition function is difficult. In this paper, based on the mathematical theory of symmetric functions, we suggest a method for the calculation of the canonical partition function of ideal quantum gases, including ideal Bose, Fermi, and Gentile gases. Moreover, we express the canonical partition functions of interacting classical and quantum gases, given by the classical and quantum cluster expansion methods, in terms of the Bell polynomial in mathematics. The virial coefficients of ideal Bose, Fermi, and Gentile gases are calculated from the exact canonical partition function. The virial coefficients of interacting classical and quantum gases are calculated from the canonical partition function by using the expansion of the Bell polynomial, rather than from the grand canonical potential.
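For concreteness, one standard statement of the cluster-expansion/Bell-polynomial connection, in our notation (b_l the cluster integrals, V the volume); the paper's exact conventions may differ:

```latex
% Grand partition function from the cluster expansion:
%   \Xi(z, V, T) = \exp\Big( V \sum_{l \ge 1} b_l z^l \Big)
%               = \sum_{N \ge 0} Z_N(V, T)\, z^N .
% Extracting the coefficient of z^N with the exponential formula gives the
% canonical partition function as a complete Bell polynomial B_N:
\[
  Z_N(V,T) \;=\; \frac{1}{N!}\,
  B_N\!\big(1!\,V b_1,\; 2!\,V b_2,\; \dots,\; N!\,V b_N\big).
\]
```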
Cisgenesis strongly improves introgression breeding and induced translocation breeding of plants.
Jacobsen, Evert; Schouten, Henk J
2007-05-01
There are two ways for genetic improvement in classical plant breeding: crossing and mutation. Plant varieties can also be improved through genetic modification; however, the present GMO regulations are based on risk assessments with the transgenes coming from non-crossable species. Nowadays, DNA sequence information of crop plants facilitates the isolation of cisgenes, which are genes from crop plants themselves or from crossable species. The increasing number of these isolated genes, and the development of transformation protocols that do not leave marker genes behind, provide an opportunity to improve plant breeding while remaining within the gene pool of the classical breeder. Compared with induced translocation and introgression breeding, cisgenesis is an improvement for gene transfer from crossable plants: it is a one-step gene transfer without linkage drag of other genes, whereas induced translocation and introgression breeding are multiple step gene transfer methods with linkage drag. The similarity of the genes used in cisgenesis compared with classical breeding is a compelling argument to treat cisgenic plants as classically bred plants. In the case of the classical breeding method induced translocation breeding, the insertion site of the genes is a priori unknown, as it is in cisgenesis. This provides another argument to treat cisgenic plants as classically bred plants, by exempting cisgenesis of plants from the GMO legislations.
A novel word spotting method based on recurrent neural networks.
Frinken, Volkmar; Fischer, Andreas; Manmatha, R; Bunke, Horst
2012-02-01
Keyword spotting refers to the process of retrieving all instances of a given keyword from a document. In the present paper, a novel keyword spotting method for handwritten documents is described. It is derived from a neural network-based system for unconstrained handwriting recognition. As such it performs template-free spotting, i.e., it is not necessary for a keyword to appear in the training set. The keyword spotting is done using a modification of the CTC Token Passing algorithm in conjunction with a recurrent neural network. We demonstrate that the proposed systems outperform not only a classical dynamic time warping-based approach but also a modern keyword spotting system, based on hidden Markov models. Furthermore, we analyze the performance of the underlying neural networks when using them in a recognition task followed by keyword spotting on the produced transcription. We point out the advantages of keyword spotting when compared to classic text line recognition.
NASA Astrophysics Data System (ADS)
Rakovic, D.; Dugic, M.
2005-05-01
Quantum bases of consciousness are considered, together with the psychosomatic implications of three front lines of psychosomatic medicine (hesychastic spirituality, holistic Eastern medicine, and symptomatic Western medicine) and the cognitive implications of two modes of individual consciousness (quantum-coherent transitional and altered states, and classically-reduced normal states), along with the conditions for the transformation of one mode into the other (considering the quantum-coherent/classically-decoherent interaction between the acupuncture system and the nervous system, direct and reverse, with and without threshold limits, respectively). The analysis uses the theoretical methods of associative neural networks and quantum neural holography combined with quantum decoherence theory.
Quantum theory of multiscale coarse-graining.
Han, Yining; Jin, Jaehyeok; Wagner, Jacob W; Voth, Gregory A
2018-03-14
Coarse-grained (CG) models serve as a powerful tool to simulate molecular systems at much longer temporal and spatial scales. Previously, CG models and methods have been built upon classical statistical mechanics. The present paper develops a theory and numerical methodology for coarse-graining in quantum statistical mechanics, by generalizing the multiscale coarse-graining (MS-CG) method to quantum Boltzmann statistics. A rigorous derivation of the sufficient thermodynamic consistency condition is first presented via imaginary time Feynman path integrals. It identifies the optimal choice of CG action functional and effective quantum CG (qCG) force field to generate a quantum MS-CG (qMS-CG) description of the equilibrium system that is consistent with the quantum fine-grained model projected onto the CG variables. A variational principle then provides a class of algorithms for optimally approximating the qMS-CG force fields. Specifically, a variational method based on force matching, which was also adopted in the classical MS-CG theory, is generalized to quantum Boltzmann statistics. The qMS-CG numerical algorithms and practical issues in implementing this variational minimization procedure are also discussed. Then, two numerical examples are presented to demonstrate the method. Finally, as an alternative strategy, a quasi-classical approximation for the thermal density matrix expressed in the CG variables is derived. This approach provides an interesting physical picture for coarse-graining in quantum Boltzmann statistical mechanics in which the consistency with the quantum particle delocalization is obviously manifest, and it opens up an avenue for using path integral centroid-based effective classical force fields in a coarse-graining methodology.
Equilibrium Solutions of the Logarithmic Hamiltonian Leapfrog for the N-body Problem
NASA Astrophysics Data System (ADS)
Minesaki, Yukitaka
2018-04-01
We prove that a second-order logarithmic Hamiltonian leapfrog for the classical general N-body problem (CGNBP) designed by Mikkola and Tanikawa and some higher-order logarithmic Hamiltonian methods based on symmetric multicompositions of the logarithmic algorithm exactly reproduce the orbits of elliptic relative equilibrium solutions in the original CGNBP. These methods are explicit symplectic methods. Before this proof, only some implicit discrete-time CGNBPs proposed by Minesaki had been analytically shown to trace the orbits of elliptic relative equilibrium solutions. The proof is therefore the first existence proof for explicit symplectic methods. Such logarithmic Hamiltonian methods with a variable time step can also precisely retain periodic orbits in the classical general three-body problem, which generic numerical methods with a constant time step cannot do.
Semiclassical evaluation of quantum fidelity
NASA Astrophysics Data System (ADS)
Vanicek, Jiri
2004-03-01
We present a numerically feasible semiclassical method to evaluate quantum fidelity (Loschmidt echo) in a classically chaotic system. It was thought that such evaluation would be intractable, but instead we show that a uniform semiclassical expression not only is tractable but it gives remarkably accurate numerical results for the standard map in both the Fermi-golden-rule and Lyapunov regimes. Because it allows a Monte-Carlo evaluation, this uniform expression is accurate at times where there are 10^70 semiclassical contributions. Remarkably, the method also explicitly contains the "building blocks" of analytical theories of recent literature, and thus permits a direct test of approximations made by other authors in these regimes, rather than an a posteriori comparison with numerical results. We explain in more detail the extended validity of the classical perturbation approximation and thus provide a "defense" of the linear response theory from the famous Van Kampen objection. We point out the potential use of our uniform expression in other areas because it gives a most direct link between the quantum Feynman propagator based on the path integral and the semiclassical Van Vleck propagator based on the sum over classical trajectories. Finally, we test the applicability of our method in integrable and mixed systems.
Classical Dynamics of Fullerenes
NASA Astrophysics Data System (ADS)
Sławianowski, Jan J.; Kotowski, Romuald K.
2017-06-01
The classical mechanics of large molecules and fullerenes is studied. The approach is based on the model of collective motion of these objects. The mixed Lagrangian (material) and Eulerian (space) description of motion is used. In particular, the Green and Cauchy deformation tensors are geometrically defined. The important issue is the group-theoretical approach to describing the affine deformations of the body. The Hamiltonian description of motion based on the Poisson brackets methodology is used. The Lagrange and Hamilton approaches allow us to formulate the mechanics in the canonical form. The method of discretization in analytical continuum theory and in classical dynamics of large molecules and fullerenes enable us to formulate their dynamics in terms of the polynomial expansions of configurations. Another approach is based on the theory of analytical functions and on their approximations by finite-order polynomials. We concentrate on the extremely simplified model of affine deformations or on their higher-order polynomial perturbations.
NASA Astrophysics Data System (ADS)
Holloway, Stephen
1997-03-01
When performing molecular dynamical simulations on light systems at low energies, there is always the risk of producing data that bear no similarity to experiment. Indeed, John Barker himself was particularly anxious about treating Ar scattering from surfaces using classical mechanics where it had been shown experimentally in his own lab that diffraction occurs. In such cases, the correct procedure is probably to play the trump card "... well of course, quantum effects will modify this so that....." and retire gracefully. For our particular interests, the tables are turned in that we are interested in gas-surface dynamical studies for highly quantized systems, but would be interested to know when it is possible to use classical mechanics in order that a greater dimensionality might be treated. For molecular dissociation and scattering, it has been oft quoted that the greater the number of degrees of freedom, the more appropriate is classical mechanics, primarily because of the mass averaging over the quantized dimensions. Is this true? We have been investigating the dissociation of hydrogen molecules at surfaces and in this talk I will present quantum results for dissociation and scattering, along with a novel method for their interpretation based upon adiabatic potential energy surfaces. Comparison with classical calculations will be made and conclusions drawn.
Shearlet Features for Registration of Remotely Sensed Multitemporal Images
NASA Technical Reports Server (NTRS)
Murphy, James M.; Le Moigne, Jacqueline
2015-01-01
We investigate the role of anisotropic feature extraction methods for automatic image registration of remotely sensed multitemporal images. Building on the classical use of wavelets in image registration, we develop an algorithm based on shearlets, a mathematical generalization of wavelets that offers increased directional sensitivity. Initial experimental results on LANDSAT images are presented, which indicate superior performance of the shearlet algorithm when compared to classical wavelet algorithms.
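As a concrete registration baseline (simpler than the wavelet and shearlet feature methods discussed above), FFT phase correlation recovers a translational offset between two images; a minimal sketch with a synthetic shift:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation between two images from the peak of
    the inverse FFT of the normalized cross-power spectrum."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    r = A * np.conj(B)
    r /= np.abs(r) + 1e-12  # keep only the phase
    peak = np.unravel_index(np.argmax(np.fft.ifft2(r).real), a.shape)
    # Map peaks in the upper half-range to negative shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, a.shape))

img = np.random.default_rng(6).normal(size=(128, 128))
shifted = np.roll(img, (5, -9), axis=(0, 1))
print(phase_correlation_shift(shifted, img))  # expect (5, -9)
```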
The Use of Problem-Based Learning Model to Improve Quality Learning Students Morals
ERIC Educational Resources Information Center
Nurzaman
2017-01-01
Model of moral cultivation in MTsN Bangunharja done using three methods, classical cultivation methods, extra-curricular activities in the form of religious activities, scouting, sports, and Islamic art, and habituation of morals. Problem base learning models in MTsN Bangunharja applied using the following steps: find the problem, define the…
Fully adaptive propagation of the quantum-classical Liouville equation
NASA Astrophysics Data System (ADS)
Horenko, Illia; Weiser, Martin; Schmidt, Burkhard; Schütte, Christof
2004-05-01
In mixed quantum-classical molecular dynamics few but important degrees of freedom of a dynamical system are modeled quantum-mechanically while the remaining ones are treated within the classical approximation. Rothe methods established in the theory of partial differential equations are used to control both temporal and spatial discretization errors on grounds of a global tolerance criterion. The TRAIL (trapezoidal rule for adaptive integration of Liouville dynamics) scheme [I. Horenko and M. Weiser, J. Comput. Chem. 24, 1921 (2003)] has been extended to account for nonadiabatic effects in molecular dynamics described by the quantum-classical Liouville equation. In the context of particle methods, the quality of the spatial approximation of the phase-space distributions is maximized while the numerical condition of the least-squares problem for the parameters of particles is minimized. The resulting dynamical scheme is based on a simultaneous propagation of moving particles (Gaussian and Dirac deltalike trajectories) in phase space employing a fully adaptive strategy to upgrade Dirac to Gaussian particles and, vice versa, downgrading Gaussians to Dirac-type trajectories. This allows for the combination of Monte-Carlo-based strategies for the sampling of densities and coherences in multidimensional problems with deterministic treatment of nonadiabatic effects. Numerical examples demonstrate the application of the method to spin-boson systems in different dimensionality. Nonadiabatic effects occurring at conical intersections are treated in the diabatic representation. By decreasing the global tolerance, the numerical solution obtained from the TRAIL scheme are shown to converge towards exact results.
Fully adaptive propagation of the quantum-classical Liouville equation.
Horenko, Illia; Weiser, Martin; Schmidt, Burkhard; Schütte, Christof
2004-05-15
In mixed quantum-classical molecular dynamics few but important degrees of freedom of a dynamical system are modeled quantum-mechanically while the remaining ones are treated within the classical approximation. Rothe methods established in the theory of partial differential equations are used to control both temporal and spatial discretization errors on grounds of a global tolerance criterion. The TRAIL (trapezoidal rule for adaptive integration of Liouville dynamics) scheme [I. Horenko and M. Weiser, J. Comput. Chem. 24, 1921 (2003)] has been extended to account for nonadiabatic effects in molecular dynamics described by the quantum-classical Liouville equation. In the context of particle methods, the quality of the spatial approximation of the phase-space distributions is maximized while the numerical condition of the least-squares problem for the parameters of particles is minimized. The resulting dynamical scheme is based on a simultaneous propagation of moving particles (Gaussian and Dirac deltalike trajectories) in phase space employing a fully adaptive strategy to upgrade Dirac to Gaussian particles and, vice versa, downgrading Gaussians to Dirac-type trajectories. This allows for the combination of Monte-Carlo-based strategies for the sampling of densities and coherences in multidimensional problems with deterministic treatment of nonadiabatic effects. Numerical examples demonstrate the application of the method to spin-boson systems in different dimensionality. Nonadiabatic effects occurring at conical intersections are treated in the diabatic representation. By decreasing the global tolerance, the numerical solution obtained from the TRAIL scheme are shown to converge towards exact results.
Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette
2018-05-01
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results than the use of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on Bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
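The basic method-of-moments attenuation correction for purely classical error in a single exposure can be sketched as follows; this is the textbook special case, far simpler than the mixed-error mixed-model setting handled in the paper, and the error variance is assumed known:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
x_true = rng.normal(0, 1, n)            # true exposure (e.g., UFP level)
x_obs = x_true + rng.normal(0, 0.7, n)  # classical measurement error
y = 0.5 * x_true + rng.normal(0, 1, n)  # outcome (e.g., heart rate change)

beta_naive = np.polyfit(x_obs, y, 1)[0]

# Method-of-moments correction: divide by the reliability ratio
# lambda = var(X_true) / var(X_obs), with the error variance assumed known.
var_u = 0.7 ** 2
lam = (np.var(x_obs) - var_u) / np.var(x_obs)
print(beta_naive, beta_naive / lam)  # corrected estimate near the true 0.5
```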
An advanced analysis method of initial orbit determination with too short arc data
NASA Astrophysics Data System (ADS)
Li, Binzhe; Fang, Li
2018-02-01
This paper studies initial orbit determination (IOD) based on space-based angle measurements. Commonly, these space-based observations have short durations, and as a result classical initial orbit determination algorithms, such as the Laplace and Gauss methods, give poor results. In this paper, an advanced analysis method for initial orbit determination is developed for space-based observations. The admissible region and triangulation are introduced in the method, and a genetic algorithm is used to impose constraints on the parameters. Simulation results show that the algorithm can successfully complete the initial orbit determination.
Effectiveness of the Stewart Method in the Evaluation of Blood Gas Parameters.
Gezer, Mustafa; Bulucu, Fatih; Ozturk, Kadir; Kilic, Selim; Kaldirim, Umit; Eyi, Yusuf Emrah
2015-03-01
In 1981, Peter A. Stewart published a paper describing his concept of employing the strong ion difference. In this study we compared the HCO3 levels and anion gap (AG) calculated using the classic method and the Stewart method. Four hundred nine (409) arterial blood gases from 90 patients were collected retrospectively. Some were obtained from the same patients at different times and under different conditions. All blood samples were evaluated using the same device (ABL 800 Blood Gas Analyzer). HCO3 levels, AG, and strong ion difference (SID) were calculated using the Stewart method via the website AcidBase.org, incorporating parameters such as age, serum lactate, glucose, sodium, and pH. According to the classic method, the levels of HCO3 and AG were 22.4±7.2 mEq/L and 20.1±4.1 mEq/L, respectively. According to the Stewart method, the levels of HCO3 and AG were 22.6±7.4 and 19.9±4.5 mEq/L, respectively. There was a strong correlation between the classic method and the Stewart method for calculating HCO3 and AG. The Stewart method may be more effective in the evaluation of complex metabolic acidosis.
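For orientation, a minimal calculator for the classic anion gap and an apparent strong ion difference is sketched below; the formulas follow common textbook conventions and are not necessarily the exact expressions used by AcidBase.org.

```python
def anion_gap(na, cl, hco3, k=None):
    """Classic anion gap in mEq/L; potassium is included only in some conventions."""
    return (na + (k or 0.0)) - (cl + hco3)

def apparent_sid(na, k, ca, mg, cl, lactate):
    """Apparent strong ion difference (one common convention), all in mEq/L."""
    return (na + k + ca + mg) - (cl + lactate)

# Example with normal-ish electrolytes:
print(anion_gap(na=140, cl=104, hco3=24))                             # ~12 mEq/L
print(apparent_sid(na=140, k=4, ca=2.5, mg=1.5, cl=104, lactate=1))   # ~43 mEq/L
```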
Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah
2016-01-01
The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of migration strategy of species to derive algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome the weakness of classical BBO algorithm to solve QAP, by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method is able to obtain the best known solutions for 57 of them. PMID:26819585
Mixed QM/MM molecular electrostatic potentials.
Hernández, B; Luque, F J; Orozco, M
2000-05-01
A new method is presented for the calculation of the Molecular Electrostatic Potential (MEP) in large systems. Based on the mixed Quantum Mechanics/Molecular Mechanics (QM/MM) approach, the method assumes both a quantum and classical description for the molecule, and the calculation of the MEP in the space surrounding the molecule is made using this dual treatment. The MEP at points close to the molecule is computed using a full QM formalism, while a pure classical evaluation of the MEP is used for points located at large distances from the molecule. The algorithm allows the user to select the desired level of accuracy in the MEP, so that the definition of the regions where the MEP is computed at the classical or QM levels is adjusted automatically. The potential use of this QM/MM MEP in molecular modeling studies is discussed.
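The dual evaluation idea can be sketched as a distance-based switch between a full QM evaluation near the molecule and a classical point-charge sum far from it. The cutoff, function names, and the QM placeholder below are illustrative assumptions, not the paper's algorithm, which adjusts the regions automatically from an accuracy criterion.

```python
import numpy as np

def classical_mep(point, coords, charges):
    """Point-charge approximation to the MEP at `point` (atomic units)."""
    r = np.linalg.norm(coords - point, axis=1)
    return np.sum(charges / r)

def qm_mep(point):
    """Placeholder for a full QM evaluation (electron density + nuclei)."""
    raise NotImplementedError

def dual_mep(point, coords, charges, cutoff=5.0):
    """Use QM close to the molecule, the classical sum far away (cutoff assumed)."""
    if np.min(np.linalg.norm(coords - point, axis=1)) < cutoff:
        return qm_mep(point)
    return classical_mep(point, coords, charges)
```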
Region growing using superpixels with learned shape prior
NASA Astrophysics Data System (ADS)
Borovec, Jiří; Kybic, Jan; Sugimoto, Akihiro
2017-11-01
Region growing is a classical image segmentation method based on hierarchical region aggregation using local similarity rules. Our proposed method differs from classical region growing in three important aspects. First, it works on the level of superpixels instead of pixels, which leads to a substantial speed-up. Second, our method uses learned statistical shape properties that encourage plausible shapes. In particular, we use ray features to describe the object boundary. Third, our method can segment multiple objects and ensure that the segmentations do not overlap. The problem is represented as an energy minimization and is solved either greedily or iteratively using graph cuts. We demonstrate the performance of the proposed method and compare it with alternative approaches on the task of segmenting individual eggs in microscopy images of Drosophila ovaries.
Rotolo, Federico; Paoletti, Xavier; Michiels, Stefan
2018-03-01
Surrogate endpoints are attractive for use in clinical trials instead of well-established endpoints because of practical convenience. To validate a surrogate endpoint, two important measures can be estimated in a meta-analytic context when individual patient data are available: the R^2_indiv or Kendall's τ at the individual level, and the R^2_trial at the trial level. We aimed to provide an R implementation of classical, well-established as well as more recent statistical methods for surrogacy assessment with failure time endpoints. We also intended to incorporate utilities for model checking and visualization and the data-generating methods described in the literature to date. In the case of failure time endpoints, the classical approach is based on two steps. First, Kendall's τ is estimated as a measure of individual-level surrogacy using a copula model. Then, the R^2_trial is computed via a linear regression of the estimated treatment effects; at this second step, the estimation uncertainty can be accounted for via a measurement-error model or via weights. In addition to the classical approach, we recently developed an approach based on bivariate auxiliary Poisson models with individual random effects to measure Kendall's τ and treatment-by-trial interactions to measure the R^2_trial. The most common data simulation models described in the literature are based on copula models, mixed proportional hazard models, and mixtures of half-normal and exponential random variables. The R package surrosurv implements the classical two-step method with Clayton, Plackett, and Hougaard copulas. It also allows the second-step linear regression to be optionally adjusted for measurement error. The mixed Poisson approach is implemented with different reduced models in addition to the full model. We present the package functions for estimating the surrogacy models, for checking their convergence, for performing leave-one-trial-out cross-validation, and for plotting the results. We illustrate their use in practice on individual patient data from a meta-analysis of 4069 patients with advanced gastric cancer from 20 trials of chemotherapy. The surrosurv package provides an R implementation of classical and recent statistical methods for surrogacy assessment of failure time endpoints. Flexible simulation functions are available to generate data according to the methods described in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Yi; Xu, Yan Long
2018-05-01
When the dependence of a function on uncertain variables is non-monotonic over an interval, the function interval obtained by the classic interval extension based on the first-order Taylor series exhibits significant errors. To reduce these errors, an improved form of the first-order Taylor interval extension is developed here that accounts for the monotonicity of the function. Two typical mathematical examples are given to illustrate this methodology. The vibration of a beam with lumped masses is studied to demonstrate the usefulness of the method in a practical application; the only input data needed are the function value at the central point of the interval and the sensitivity and deviation of the function. The results of the above examples show that the function intervals from the proposed method are more accurate than those obtained by the classic method.
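A minimal sketch of the idea: the classic first-order Taylor extension bounds the range from the midpoint value and derivative, while a monotonicity-aware variant evaluates the endpoints when the derivative keeps its sign. The endpoint-derivative check below is an illustrative stand-in for the paper's sensitivity-based criterion.

```python
import numpy as np

def taylor_interval(f, df, lo, hi):
    """Classic first-order Taylor interval extension around the midpoint."""
    xc, half = 0.5 * (lo + hi), 0.5 * (hi - lo)
    r = abs(df(xc)) * half
    return f(xc) - r, f(xc) + r

def monotone_interval(f, df, lo, hi):
    """If the derivative keeps its sign on [lo, hi], the range is at the endpoints."""
    if df(lo) * df(hi) > 0:                    # monotonicity check (assumed criterion)
        a, b = f(lo), f(hi)
        return min(a, b), max(a, b)
    return taylor_interval(f, df, lo, hi)      # fall back to the classic extension

f, df = np.sin, np.cos
print(taylor_interval(f, df, 0.0, 1.0))    # wider first-order bound
print(monotone_interval(f, df, 0.0, 1.0))  # tight: [sin 0, sin 1]
```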
Grandchamp, Romain; Delorme, Arnaud
2011-01-01
In electroencephalography, the classical event-related potential model often proves to be a limited method to study complex brain dynamics. For this reason, spectral techniques adapted from signal processing, such as event-related spectral perturbation (ERSP) and its variants event-related synchronization and event-related desynchronization, have been used over the past 20 years. They represent average spectral changes in response to a stimulus. There is no strong consensus on how these spectral methods should compare pre- and post-stimulus activity. When computing ERSP, pre-stimulus baseline removal is usually performed after averaging the spectral estimates of multiple trials. Correcting the baseline of each single trial prior to averaging the spectral estimates is an alternative baseline correction method. However, we show that this approach leads to positively skewed post-stimulus ERSP values. We then present new single-trial-based ERSP baseline correction methods that perform trial normalization or centering prior to applying classical baseline correction methods. We show that single-trial correction methods minimize the contribution of artifactual data trials with high-amplitude spectral estimates and are robust to outliers when performing statistical inference testing. We then characterize these methods in terms of their time-frequency responses and behavior compared to classical ERSP methods. PMID:21994498
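The two baseline schemes contrasted in the abstract can be sketched as follows; the array shapes and the toy gamma-distributed power values are assumptions.

```python
import numpy as np

def ersp_classical(power, baseline_idx):
    """power: (trials, freqs, times). Average over trials first, then baseline-correct (dB)."""
    mean_p = power.mean(axis=0)
    base = mean_p[:, baseline_idx].mean(axis=1, keepdims=True)
    return 10 * np.log10(mean_p / base)

def ersp_single_trial(power, baseline_idx):
    """Correct each trial by its own pre-stimulus baseline, then average (dB)."""
    base = power[:, :, baseline_idx].mean(axis=2, keepdims=True)
    return 10 * np.log10(power / base).mean(axis=0)

rng = np.random.default_rng(1)
power = rng.gamma(2.0, 1.0, size=(100, 30, 200))    # toy single-trial spectral power
print(ersp_classical(power, slice(0, 50)).shape)    # (30, 200)
print(ersp_single_trial(power, slice(0, 50)).shape) # (30, 200)
```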
Chiang, H-S; Huang, R-Y; Weng, P-W; Mau, L-P; Tsai, Y-W C; Chung, M-P; Chung, C-H; Yeh, H-W; Shieh, Y-S; Cheng, W-C
2018-03-01
Bibliometric analyses of evolving research trends in implantology across different time periods using the H-classics method are considerably limited. The purpose of this study was to identify the classic articles in implantology and to analyse their bibliometric characteristics and associated factors over the past four decades. H-classics in implantology were identified within four time periods between 1977 and 2016, based on the h-index from the Scopus® database. For each article, the principal bibliometric parameters (authorship; geographic, country, and institute origin; collaboration; centralisation; article type; and scope of study) and other associated factors were analysed for the four time periods. A significant increase in the mean number of authors per H-classic was found across time. Europe and North America were the most productive regions and steadily dominated the field in each time period. International and inter-institutional collaboration increased significantly over time. A significant decentralisation in authorships, institutes, and journals was noted over the past four decades. The journal Clinical Oral Implants Research has grown in importance over almost 30 years (1987-2016). Research on complications and peri-implant infection/pathology/therapy increased in output throughout each period. This is the first study to evaluate research trends in implantology over the past 40 years using the H-classics method; by analysing the principal bibliometric characteristics, it offers a historical perspective on the evolutionary mainstream of the field. The prominence of research on complications may forecast innovative advances in the future. © 2018 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Antoine, Xavier; Levitt, Antoine; Tang, Qinglin
2017-08-01
We propose a preconditioned nonlinear conjugate gradient method coupled with a spectral spatial discretization scheme for computing the ground states (GS) of rotating Bose-Einstein condensates (BEC), modeled by the Gross-Pitaevskii equation (GPE). We start by reviewing the classical gradient flow (also known as imaginary time (IMT)) method, which considers the problem from the PDE standpoint and leads to numerically solving a dissipative equation. Based on this IMT equation, we analyze the forward Euler (FE), Crank-Nicolson (CN), and classical backward Euler (BE) schemes for linear problems and recognize classical power iterations, allowing us to derive convergence rates. By considering the alternative point of view of minimization problems, we propose preconditioned steepest descent (PSD) and conjugate gradient (PCG) methods for the GS computation of the GPE. We investigate the choice of the preconditioner, which plays a key role in the acceleration of the convergence process. The performance of the new algorithms is tested in 1D, 2D, and 3D. We conclude that the PCG method outperforms all the previous methods, particularly for 2D and 3D fast rotating BECs, while being simple to implement.
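For intuition, a minimal normalized-gradient-flow (IMT-style) sketch for a 1D GPE ground state is given below; it is plain unpreconditioned gradient flow, not the paper's PCG scheme, and the grid, step size, interaction strength, and periodic finite-difference Laplacian are illustrative choices.

```python
import numpy as np

n, L, tau, beta = 256, 16.0, 1e-3, 100.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
v = 0.5 * x**2                                  # harmonic trap
psi = np.exp(-x**2)                             # initial guess
psi /= np.sqrt(np.sum(psi**2) * dx)

for _ in range(20000):
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
    h_psi = -0.5 * lap + (v + beta * psi**2) * psi
    psi = psi - tau * h_psi                     # imaginary-time (gradient) step
    psi /= np.sqrt(np.sum(psi**2) * dx)         # project back to unit norm

lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
mu = np.sum(psi * (-0.5 * lap + (v + beta * psi**2) * psi)) * dx
print("chemical potential ~", mu)
```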
Improvement of Quench Factor Analysis in Phase and Hardness Prediction of a Quenched Steel
NASA Astrophysics Data System (ADS)
Kianezhad, M.; Sajjadi, S. A.
2013-05-01
The accurate prediction of alloy properties produced by heat treatment has been considered by many researchers. The advantages of such predictions are a reduction in test trials and material consumption as well as savings in time and energy. One of the most important methods to predict hardness in quenched steel parts is Quench Factor Analysis (QFA). Classical QFA is based on the Johnson-Mehl-Avrami-Kolmogorov (JMAK) equation. In this study, a modified form of the QFA based on the work by Rometsch et al. is compared with the classical QFA, and both are applied to the prediction of steel hardness. For this purpose, samples of CK60 steel were utilized as raw material. They were austenitized at 1103 K (830 °C). After quenching in different environments, they were cut and their hardness was determined. In addition, the hardness values of the samples were fitted using the classical and modified equations for quench factor analysis and the results were compared. The results showed a significant improvement in the fitted hardness values and demonstrated the higher efficiency of the new method.
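A hedged sketch of classical QFA: integrate the quench factor Q along the cooling curve and map it to hardness through a JMAK-type expression. The critical-time curve C_T, the choice k1 = ln 0.995, and all numeric values below are illustrative stand-ins, not the paper's fitted TTP parameters.

```python
import numpy as np

def quench_factor(times, temps, c_t):
    """Classical quench factor: Q = sum(dt / C_T(T)) along the cooling curve."""
    dt = np.diff(times)
    t_mid = 0.5 * (temps[1:] + temps[:-1])
    return np.sum(dt / c_t(t_mid))

def predicted_hardness(q, h_min, h_max, k1=np.log(0.995)):
    """JMAK-based QFA hardness prediction (k1 = ln 0.995 is one common choice)."""
    return h_min + (h_max - h_min) * np.exp(k1 * q)

c_t = lambda T: 1.0 + 1e-3 * (T - 600.0)**2    # toy stand-in for a fitted TTP C-curve
times = np.linspace(0.0, 30.0, 301)            # s
temps = 830.0 * np.exp(-times / 10.0) + 25.0   # toy cooling curve, degrees C
q = quench_factor(times, temps, c_t)
print(q, predicted_hardness(q, h_min=20.0, h_max=65.0))
```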
del Moral, F; Vázquez, J A; Ferrero, J J; Willisch, P; Ramírez, R D; Teijeiro, A; López Medina, A; Andrade, B; Vázquez, J; Salvador, F; Medal, D; Salgado, M; Muñoz, V
2009-09-01
Modern radiotherapy uses complex treatments that necessitate more complex quality assurance procedures. As a continuous medium, GafChromic EBT films offer suitable features for such verification. However, their sensitometric curve is not fully understood in terms of classical theoretical models. In fact, measured optical densities and those predicted by the classical models differ significantly. This difference increases systematically with wider dose ranges. Thus, achieving the accuracy required for intensity-modulated radiotherapy (IMRT) by classical methods is not possible, precluding their use. As a result, experimental parametrizations, such as polynomial fits, are replacing phenomenological expressions in modern investigations. This article focuses on identifying new theoretical ways to describe sensitometric curves and on evaluating the quality of fit for experimental data based on four proposed models. A complete mathematical formalism starting from a geometrical version of the classical theory is used to develop new expressions for the sensitometric curves. General results from percolation theory are also used. A flat-bed-scanner-based method was chosen for the film analysis. Different tests were performed, such as checking the consistency of the numerical results for the proposed models and double examination using data from independent researchers. Results show that the percolation-theory-based model provides the best theoretical explanation for the sensitometric behavior of GafChromic films. The different sizes of active centers or monomer crystals of the film are the basis of this model, allowing information about the internal structure of the films to be obtained. Values for the mean size of the active centers were obtained in accordance with technical specifications. In this model, the dynamics of the interaction between the active centers of GafChromic film and radiation is also characterized by means of its interaction cross-section value. The percolation model fulfills the accuracy requirements for quality-control procedures when large dose ranges are used and offers a physical explanation for the film response.
Harmonic oscillators and resonance series generated by a periodic unstable classical orbit
NASA Technical Reports Server (NTRS)
Kazansky, A. K.; Ostrovsky, Valentin N.
1995-01-01
The presence of an unstable periodic classical orbit allows one to introduce the decay time as a purely classical quantity: the inverse of the Lyapunov exponent that characterizes the orbit instability. The uncertainty relation gives the corresponding resonance width, which is proportional to the Planck constant. A more elaborate analysis is based on the parabolic equation method, where the problem is effectively reduced to a multidimensional harmonic oscillator with time-dependent frequency. The resonances form series in the complex energy plane that are equidistant in the direction perpendicular to the real axis. Applications of the general approach to various problems in atomic physics are briefly outlined.
Georges, Patrick
2017-01-01
This paper proposes a statistical analysis that captures similarities and differences between classical music composers, with the eventual aim of understanding why particular composers 'sound' different even if their 'lineages' (influence networks) are similar, or why they 'sound' alike if their 'lineages' are different. To do this we use statistical methods and measures of association or similarity (based on the presence or absence of traits such as specific 'ecological' characteristics and personal musical influences) that have been developed in biosystematics, scientometrics, and bibliographic coupling. This paper also represents a first step towards the more ambitious goal of developing an evolutionary model of Western classical music.
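Presence/absence similarity coefficients of the kind used in biosystematics can be illustrated with the Jaccard index; the composer trait sets below are hypothetical and only demonstrate the mechanics.

```python
def jaccard(a, b):
    """Similarity of two composers from binary presence/absence trait sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical trait/influence sets, purely for illustration:
haydn = {"classical_era", "vienna", "symphony", "influences:CPE_Bach"}
mozart = {"classical_era", "vienna", "symphony", "opera", "influences:JC_Bach"}
print(jaccard(haydn, mozart))  # 3 shared traits / 6 total = 0.5
```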
From transistor to trapped-ion computers for quantum chemistry.
Yung, M-H; Casanova, J; Mezzacapo, A; McClean, J; Lamata, L; Aspuru-Guzik, A; Solano, E
2014-01-07
Over the last few decades, quantum chemistry has progressed through the development of computational methods based on modern digital computers. However, these methods can hardly fulfill the exponentially-growing resource requirements when applied to large quantum systems. As pointed out by Feynman, this restriction is intrinsic to all computational models based on classical physics. Recently, the rapid advancement of trapped-ion technologies has opened new possibilities for quantum control and quantum simulations. Here, we present an efficient toolkit that exploits both the internal and motional degrees of freedom of trapped ions for solving problems in quantum chemistry, including molecular electronic structure, molecular dynamics, and vibronic coupling. We focus on applications that go beyond the capacity of classical computers, but may be realizable on state-of-the-art trapped-ion systems. These results allow us to envision a new paradigm of quantum chemistry that shifts from the current transistor to a near-future trapped-ion-based technology.
Gradient-based Optimization for Poroelastic and Viscoelastic MR Elastography
Tan, Likun; McGarry, Matthew D.J.; Van Houten, Elijah E.W.; Ji, Ming; Solamen, Ligin; Weaver, John B.
2017-01-01
We describe an efficient gradient computation for solving inverse problems arising in magnetic resonance elastography (MRE). The algorithm can be considered as a generalized ‘adjoint method’ based on a Lagrangian formulation. One requirement for the classic adjoint method is assurance of the self-adjoint property of the stiffness matrix in the elasticity problem. In this paper, we show this property is no longer a necessary condition in our algorithm, but the computational performance can be as efficient as the classic method, which involves only two forward solutions and is independent of the number of parameters to be estimated. The algorithm is developed and implemented in material property reconstructions using poroelastic and viscoelastic modeling. Various gradient- and Hessian-based optimization techniques have been tested on simulation, phantom and in vivo brain data. The numerical results show the feasibility and the efficiency of the proposed scheme for gradient calculation. PMID:27608454
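A generic version of the Lagrangian adjoint gradient described above can be sketched for a discrete problem K(theta) u = f with misfit J = 0.5*||M u - d||^2: one forward solve and one adjoint solve (with K^T, so self-adjointness is not required) give the whole gradient, independent of the number of parameters. The names and the toy parametrization are assumptions, not the paper's poroelastic or viscoelastic operators.

```python
import numpy as np

def adjoint_gradient(K, dK_dtheta, f, M, d):
    """Gradient of J = 0.5*||M u - d||^2 subject to K u = f, via two solves."""
    u = np.linalg.solve(K, f)                      # forward solve
    lam = np.linalg.solve(K.T, M.T @ (M @ u - d))  # adjoint solve (K^T, not K)
    # dJ/dtheta_i = -lam^T (dK/dtheta_i) u
    return np.array([-lam @ (dKi @ u) for dKi in dK_dtheta])

# Tiny illustration with K(theta) = K0 + theta * K1 (all values made up):
rng = np.random.default_rng(2)
n = 5
K0 = rng.normal(size=(n, n)) + 5 * np.eye(n)
K1 = rng.normal(size=(n, n))
theta, f, M, d = 0.3, rng.normal(size=n), np.eye(n), rng.normal(size=n)
print(adjoint_gradient(K0 + theta * K1, [K1], f, M, d))
```

The cost is two linear solves regardless of how many parameters appear in dK_dtheta, which is the efficiency property the abstract emphasizes.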
Pinto, Joana; Silva, Vera L M; Silva, Ana M G; Silva, Artur M S
2015-06-22
A low cost, safe, clean and environmentally benign base-catalyzed cyclodehydration of appropriate β-diketones affording (E)-2-styrylchromones and flavones in good yields is disclosed. Water was used as solvent and the reactions were heated using classical and microwave heating methods, under open and closed vessel conditions. β-Diketones having electron-donating and withdrawing substituents were used to evaluate the reaction scope. The reaction products were isolated in high purity by simple filtration and recrystallization from ethanol, when using 800 mg of the starting diketone under classical reflux heating conditions.
Learning, Realizability and Games in Classical Arithmetic
NASA Astrophysics Data System (ADS)
Aschieri, Federico
2010-12-01
In this dissertation we provide mathematical evidence that the concept of learning can be used to give a new and intuitive computational semantics of classical proofs in various fragments of Predicative Arithmetic. First, we extend Kreisel's modified realizability to a classical fragment of first-order Arithmetic, Heyting Arithmetic plus EM1 (the excluded middle axiom restricted to Sigma^0_1 formulas). We introduce a new realizability semantics we call "Interactive Learning-Based Realizability". Our realizers are self-correcting programs, which learn from their errors and evolve through time. Second, we extend the class of learning-based realizers to a classical version PCFclass of PCF and then compare the resulting notion of realizability with Coquand's game semantics, proving a full soundness and completeness result. In particular, we show there is a one-to-one correspondence between realizers and recursive winning strategies in the 1-backtracking version of Tarski games. Third, we provide a complete and fully detailed constructive analysis of learning as it arises in learning-based realizability for HA+EM1, in Avigad's update procedures, and in the epsilon substitution method for Peano Arithmetic (PA). We present new constructive techniques to bound the length of learning processes and apply them to reprove, by means of our theory, the classic result of Gödel that the provably total functions of PA can be represented in Gödel's system T. Finally, we give an axiomatization of the kind of learning that is needed to computationally interpret Predicative classical second-order Arithmetic. Our work is an extension of Avigad's and generalizes the concept of update procedure to the transfinite case. Transfinite update procedures have to learn values of transfinite sequences of non-computable functions in order to extract witnesses from classical proofs.
The ReaxFF reactive force-field: Development, applications, and future directions
Senftle, Thomas; Hong, Sungwook; Islam, Md Mahbubul; ...
2016-03-04
The reactive force-field (ReaxFF) interatomic potential is a powerful computational tool for exploring, developing and optimizing material properties. Methods based on the principles of quantum mechanics (QM), while offering valuable theoretical guidance at the electronic level, are often too computationally intense for simulations that consider the full dynamic evolution of a system. Alternatively, empirical interatomic potentials that are based on classical principles require significantly fewer computational resources, which enables simulations to better describe dynamic processes over longer timeframes and on larger scales. Such methods, however, typically require a predefined connectivity between atoms, precluding simulations that involve reactive events. The ReaxFF method was developed to help bridge this gap. Approaching the gap from the classical side, ReaxFF casts the empirical interatomic potential within a bond-order formalism, thus implicitly describing chemical bonding without expensive QM calculations. This article provides an overview of the development, application, and future directions of the ReaxFF method.
Yang, James J; Li, Jia; Williams, L Keoki; Buu, Anne
2016-01-05
In genome-wide association studies (GWAS) for complex diseases, the association between a SNP and each phenotype is usually weak. Combining multiple related phenotypic traits can increase the power of gene search and thus is a practically important area that requires methodological work. This study provides a comprehensive review of existing methods for conducting GWAS on complex diseases with multiple phenotypes, including multivariate analysis of variance (MANOVA), principal component analysis (PCA), generalized estimating equations (GEE), the trait-based association test involving the extended Simes procedure (TATES), and the classical Fisher combination test. We propose a new method that relaxes the unrealistic independence assumption of the classical Fisher combination test and is computationally efficient. To demonstrate applications of the proposed method, we also present the results of statistical analysis on the Study of Addiction: Genetics and Environment (SAGE) data. Our simulation study shows that the proposed method has higher power than existing methods while controlling the type I error rate. The GEE and the classical Fisher combination test, on the other hand, do not control the type I error rate and thus are not recommended. In general, the power of the competing methods decreases as the correlation between phenotypes increases. All the methods tend to have lower power when the multivariate phenotypes come from long-tailed distributions. The real data analysis also demonstrates that the proposed method allows us to compare the marginal results with the multivariate results and to specify which SNPs are specific to a particular phenotype or contribute to the common construct. The proposed method outperforms existing methods in most settings and also has great applications in GWAS on complex diseases with multiple phenotypes such as the substance abuse disorders.
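The classical Fisher combination test, and one standard way to relax its independence assumption (a Brown-style scaled chi-square, used here as an assumed stand-in rather than the paper's exact construction), can be sketched as follows.

```python
import numpy as np
from scipy import stats

def fisher_combination(pvals):
    """Classical Fisher test: assumes independent p-values."""
    stat = -2.0 * np.sum(np.log(pvals))
    return stats.chi2.sf(stat, df=2 * len(pvals))

def fisher_brown(pvals, var_stat):
    """Brown-style scaled chi-square: var_stat is the variance of
    -2*sum(log p) estimated under the phenotype correlation
    (equals 4k when p-values are independent)."""
    k = len(pvals)
    mean_stat = 2.0 * k
    c = var_stat / (2.0 * mean_stat)        # scale factor
    df = 2.0 * mean_stat**2 / var_stat      # adjusted degrees of freedom
    stat = -2.0 * np.sum(np.log(pvals))
    return stats.chi2.sf(stat / c, df=df)

p = np.array([0.01, 0.20, 0.03])
print(fisher_combination(p))
print(fisher_brown(p, var_stat=4 * len(p) * 1.5))  # toy inflated variance
```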
A technology mapping based on graph of excitations and outputs for finite state machines
NASA Astrophysics Data System (ADS)
Kania, Dariusz; Kulisz, Józef
2017-11-01
A new, efficient technology mapping method for FSMs, dedicated to PAL-based PLDs, is proposed. The essence of the method is a search for the minimal set of PAL-based logic blocks that covers a set of multiple-output implicants describing the transition and output functions of an FSM. The method is based on a new graph concept: the Graph of Excitations and Outputs. The proposed algorithm was tested using FSM benchmarks, and the obtained results were compared with the classical technology mapping of FSMs.
Robust Stability Analysis of the Space Launch System Control Design: A Singular Value Approach
NASA Technical Reports Server (NTRS)
Pei, Jing; Newsome, Jerry R.
2015-01-01
Classical stability analysis consists of breaking the feedback loops one at a time and determining separately how much gain or phase variations would destabilize the stable nominal feedback system. For typical launch vehicle control design, classical control techniques are generally employed. In addition to stability margins, frequency domain Monte Carlo methods are used to evaluate the robustness of the design. However, such techniques were developed for Single-Input-Single-Output (SISO) systems and do not take into consideration the off-diagonal terms in the transfer function matrix of Multi-Input-Multi-Output (MIMO) systems. Robust stability analysis techniques such as H(sub infinity) and mu are applicable to MIMO systems but have not been adopted as standard practices within the launch vehicle controls community. This paper took advantage of a simple singular-value-based MIMO stability margin evaluation method based on work done by Mukhopadhyay and Newsom and applied it to the SLS high-fidelity dynamics model. The method computes a simultaneous multi-loop gain and phase margin that could be related back to classical margins. The results presented in this paper suggest that for the SLS system, traditional SISO stability margins are similar to the MIMO margins. This additional level of verification provides confidence in the robustness of the control design.
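A minimal sketch of the singular-value idea: scan the minimum singular value of the return difference I + L(jw) over frequency and convert it to guaranteed simultaneous gain and phase margins using the standard formulas; the 2x2 loop below is a made-up toy, not the SLS model.

```python
import numpy as np

def min_return_difference(L_of_jw, omegas):
    """alpha = min over frequency of the smallest singular value of I + L(jw)."""
    return min(np.linalg.svd(np.eye(L_of_jw(w).shape[0]) + L_of_jw(w),
                             compute_uv=False)[-1] for w in omegas)

def L_of_jw(w):
    """Toy 2x2 loop transfer matrix with off-diagonal coupling."""
    s = 1j * w
    return np.array([[10 / (s * (s + 2)), 1 / (s + 5)],
                     [0.5 / (s + 3), 8 / (s * (s + 4))]])

alpha = min_return_difference(L_of_jw, np.logspace(-2, 2, 400))
# Guaranteed simultaneous multiloop margins (standard return-difference bounds):
gm = (1 / (1 + alpha), 1 / (1 - alpha) if alpha < 1 else np.inf)
pm = 2 * np.degrees(np.arcsin(alpha / 2))
print("alpha:", alpha, "gain margin range:", gm, "phase margin (deg): +/-", pm)
```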
Recommender engine for continuous-time quantum Monte Carlo methods
NASA Astrophysics Data System (ADS)
Huang, Li; Yang, Yi-feng; Wang, Lei
2017-03-01
Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.
Discriminative components of data.
Peltonen, Jaakko; Kaski, Samuel
2005-01-01
A simple probabilistic model is introduced to generalize classical linear discriminant analysis (LDA) in finding components that are informative of or relevant for data classes. The components maximize the predictability of the class distribution, which is asymptotically equivalent to 1) maximizing mutual information with the classes, and 2) finding principal components in the so-called learning or Fisher metrics. The Fisher metric measures only distances that are relevant to the classes, that is, distances that cause changes in the class distribution. The components have applications in data exploration, visualization, and dimensionality reduction. In empirical experiments, the method outperformed more classical methods as well as a Rényi-entropy-based alternative, at essentially equivalent computational cost.
NASA Astrophysics Data System (ADS)
Abdel-Halim, Lamia M.; Abd-El Rahman, Mohamed K.; Ramadan, Nesrin K.; EL Sanabary, Hoda F. A.; Salem, Maissa Y.
2016-04-01
A comparative study was carried out between two classical spectrophotometric methods (the dual wavelength method and Vierordt's method) and two recent methods manipulating ratio spectra (the ratio difference method and the first derivative of ratio spectra method) for the simultaneous determination of Antazoline hydrochloride (AN) and Tetryzoline hydrochloride (TZ) in their combined pharmaceutical formulation and in the presence of benzalkonium chloride as a preservative, without preliminary separation. The dual wavelength method depends on choosing two wavelengths for each drug such that the difference in absorbance between them is zero for the other drug. Vierordt's method is based on measuring the absorbance and absorptivity values of the two drugs at their λmax (248.0 and 219.0 nm for AN and TZ, respectively), followed by substitution into the corresponding Vierordt's equations. The recent methods manipulating ratio spectra depend on either measuring the difference in amplitudes of the ratio spectra between 255.5 and 269.5 nm for AN and between 220.0 and 273.0 nm for TZ, in the case of the ratio difference method, or computing the first derivative of the ratio spectra for each drug and then measuring the peak amplitude at 250.0 nm for AN and at 224.0 nm for TZ, in the case of first-derivative ratio spectrophotometry. The specificity of the developed methods was investigated by analyzing different laboratory-prepared mixtures of the two drugs. All methods were applied successfully to the determination of the selected drugs in their combined dosage form, showing that classical spectrophotometric methods, which require minimal data manipulation, can still be used successfully for the analysis of binary mixtures, whereas the recent methods require relatively more steps. Furthermore, validation of the proposed methods was performed according to ICH guidelines; accuracy, precision and repeatability were found to be within acceptable limits. Statistical studies showed that the methods can be applied competitively in quality control laboratories.
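Vierordt's step reduces to solving two simultaneous Beer-Lambert equations; the absorptivity and absorbance numbers below are invented purely to show the mechanics.

```python
import numpy as np

# At each wavelength, A(l) = a_AN(l)*c_AN + a_TZ(l)*c_TZ (unit path length).
E = np.array([[0.052, 0.013],    # a_AN, a_TZ at 248 nm (made-up values)
              [0.021, 0.060]])   # a_AN, a_TZ at 219 nm (made-up values)
A = np.array([0.415, 0.530])     # measured mixture absorbances (made-up)

c = np.linalg.solve(E, A)        # concentrations, e.g. in ug/mL
print("AN, TZ:", c)
```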
Wu, Meiping; Cao, Juliang; Zhang, Kaidong; Cai, Shaokun; Yu, Ruihang
2018-01-01
Quality assessment is an important part of strapdown airborne gravimetry. The root mean square error (RMSE) evaluation method is a classical way to evaluate gravimetry quality, but classical evaluation methods require extra flights or reference data. Thus, a method that largely removes the preconditions of classical quality assessment methods and can be used on a single survey line is developed in this paper. Based on theoretical analysis, the method chooses the stability of the two horizontal attitude angles, the horizontal specific force, and the vertical specific force as the determinants of the quality assessment. Actual data collected by the SGA-WZ02 from 21 lines in 13 flights of a survey were used to build the model and elaborate the method. To substantiate the performance of the quality assessment model, the model was applied to extra repeat-line flights from two surveys. Compared with the internal RMSE, the standard deviations of the assessment residuals were 0.23 mGal and 0.16 mGal in the two surveys, showing that the quality assessment method is reliable and stricter. No extra flights need to be specially arranged. The method, developed from SGA-WZ02 data, is a feasible approach for assessing gravimetry quality using single-line data and is also suitable for other strapdown gravimeters. PMID:29373535
NASA Astrophysics Data System (ADS)
Wang, Dong; Zhao, Yang; Yang, Fangfang; Tsui, Kwok-Leung
2017-09-01
Brownian motion with adaptive drift has attracted much attention in prognostics because its first hitting time is highly relevant to remaining useful life prediction and it follows the inverse Gaussian distribution. Besides linear degradation modeling, nonlinear-drifted Brownian motion has been developed to model nonlinear degradation. Moreover, the first hitting time distribution of the nonlinear-drifted Brownian motion has been approximated by time-space transformation. In the previous studies, the drift coefficient is the only hidden state used in state space modeling of the nonlinear-drifted Brownian motion. Besides the drift coefficient, parameters of a nonlinear function used in the nonlinear-drifted Brownian motion should be treated as additional hidden states of state space modeling to make the nonlinear-drifted Brownian motion more flexible. In this paper, a prognostic method based on nonlinear-drifted Brownian motion with multiple hidden states is proposed and then it is applied to predict remaining useful life of rechargeable batteries. 26 sets of rechargeable battery degradation samples are analyzed to validate the effectiveness of the proposed prognostic method. Moreover, some comparisons with a standard particle filter based prognostic method, a spherical cubature particle filter based prognostic method and two classic Bayesian prognostic methods are conducted to highlight the superiority of the proposed prognostic method. Results show that the proposed prognostic method has lower average prediction errors than the particle filter based prognostic methods and the classic Bayesian prognostic methods for battery remaining useful life prediction.
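A Monte Carlo sketch of the first-hitting-time idea for a nonlinear-drifted Brownian motion follows; the power-law drift, threshold, and parameter values are illustrative assumptions, and the paper's method additionally tracks the drift and nonlinearity parameters as hidden states via filtering.

```python
import numpy as np

# dX = lam * g(t) dt + sigma dB with g(t) = b * t**(b-1), so the drift
# integrates to lam * t**b; wf is the failure threshold (all values toy).
rng = np.random.default_rng(3)
lam, b, sigma, wf = 0.08, 1.3, 0.05, 1.0
dt, horizon, n_paths = 0.05, 50.0, 2000
n_steps = int(horizon / dt)

t = np.arange(1, n_steps + 1) * dt
drift = lam * b * t ** (b - 1) * dt
increments = drift + sigma * np.sqrt(dt) * rng.normal(size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)

hit = (paths >= wf).argmax(axis=1)          # first index at/above the threshold
valid = paths.max(axis=1) >= wf             # paths that actually hit it
rul = t[hit[valid]]                         # empirical first-hitting times
print("mean hitting time:", rul.mean(), "hit fraction:", valid.mean())
```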
Darwish, Hany W; Bakheit, Ahmed H; Abdelhameed, Ali S
2016-03-01
Simultaneous spectrophotometric analysis of a multi-component dosage form of olmesartan, amlodipine and hydrochlorothiazide used for the treatment of hypertension has been carried out using various chemometric methods. Multivariate calibration methods include classical least squares (CLS) executed by net analyte processing (NAP-CLS), orthogonal signal correction (OSC-CLS) and direct orthogonal signal correction (DOSC-CLS) in addition to multivariate curve resolution-alternating least squares (MCR-ALS). Results demonstrated the efficiency of the proposed methods as quantitative tools of analysis as well as their qualitative capability. The three analytes were determined precisely using the aforementioned methods in an external data set and in a dosage form after optimization of experimental conditions. Finally, the efficiency of the models was validated via comparison with the partial least squares (PLS) method in terms of accuracy and precision.
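The CLS backbone shared by the NAP/OSC/DOSC variants can be sketched as a two-step least-squares problem; the shapes and spectra below are synthetic, and the orthogonal-correction preprocessing itself is omitted.

```python
import numpy as np

# CLS model: calibration spectra A = C @ K + E, where K holds the
# pure-component spectra at unit concentration.
rng = np.random.default_rng(4)
n_samples, n_channels, n_comp = 20, 100, 3

K_true = np.abs(rng.normal(size=(n_comp, n_channels)))    # pure spectra (synthetic)
C_cal = rng.uniform(0.1, 1.0, size=(n_samples, n_comp))   # known concentrations
A_cal = C_cal @ K_true + 0.01 * rng.normal(size=(n_samples, n_channels))

K_hat, *_ = np.linalg.lstsq(C_cal, A_cal, rcond=None)     # calibration step

a_unknown = np.array([0.5, 0.2, 0.8]) @ K_true            # new mixture spectrum
c_hat, *_ = np.linalg.lstsq(K_hat.T, a_unknown, rcond=None)  # prediction step
print(c_hat)   # ~ [0.5, 0.2, 0.8]
```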
Teaching Biochemistry at a Medical Faculty with a Problem-Based Learning System.
ERIC Educational Resources Information Center
Rosing, Jan
1997-01-01
Highlights the differences between classical teaching methods and problem-based learning. Describes the curriculum and problem-based approach of the Faculty of Medicine at the Maastricht University and gives an overview of the implementation of biochemistry in the medical curriculum. Discusses the procedure for student assessment and presents…
Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al.
Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang
2015-09-21
A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.
Re'class'ification of 'quant'ified classical simulated annealing
NASA Astrophysics Data System (ADS)
Tanaka, Toshiyuki
2009-12-01
We discuss a classical reinterpretation of the quantum-mechanics-based analysis of classical Markov chains with detailed balance that builds on the quantum-classical correspondence. The classical reinterpretation is then used to demonstrate that it successfully reproduces a sufficient condition on the cooling schedule in classical simulated annealing, which has the inverse-logarithmic scaling.
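For reference, the classical sufficient condition alluded to is commonly stated as an inverse-logarithmic cooling schedule, with the constant c problem-dependent (e.g. tied to the largest barrier depth in Hajek's formulation); this is a standard textbook result rather than a statement taken from the paper itself:

```latex
% Inverse-logarithmic cooling schedule sufficient for convergence
% of classical simulated annealing (c problem-dependent):
T(t) \ge \frac{c}{\log(t + 2)}, \qquad t = 0, 1, 2, \dots
```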
Quantum games of opinion formation based on the Marinatto-Weber quantum game scheme
NASA Astrophysics Data System (ADS)
Deng, Xinyang; Deng, Yong; Liu, Qi; Shi, Lei; Wang, Zhen
2016-06-01
Quantization has become a new way to investigate classical game theory since quantum strategies and quantum games were proposed. In the existing studies, many typical game models, such as the prisoner's dilemma, battle of the sexes, Hawk-Dove game, have been extensively explored by using quantization approach. Along a similar method, here several game models of opinion formations will be quantized on the basis of the Marinatto-Weber quantum game scheme, a frequently used scheme of converting classical games to quantum versions. Our results show that the quantization can fascinatingly change the properties of some classical opinion formation game models so as to generate win-win outcomes.
NASA Astrophysics Data System (ADS)
Shiau, Lie-Ding
2016-09-01
The pre-exponential factor and interfacial energy obtained from the metastable zone width (MSZW) data using the integral method proposed by Shiau and Lu [1] are compared in this study with those obtained from the induction time data using the conventional method (t_i ∝ J^-1) for three crystallization systems: potassium sulfate in water in a 200 mL vessel, borax decahydrate in water in a 100 mL vessel, and butyl paraben in ethanol in a 5 mL tube. The results indicate that the pre-exponential factor and interfacial energy calculated from the induction time data based on classical nucleation theory are consistent with those calculated from the MSZW data using the same detection technique for the studied systems.
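The conventional induction-time route (t_i ∝ J^-1) amounts to a linear fit of ln t_i against 1/ln²S, whose slope yields the interfacial energy through the classical nucleation theory expression; the data values and the molecular volume below are toy assumptions.

```python
import numpy as np

# CNT: J = A * exp(-B / ln(S)^2) with B = 16*pi*gamma^3*v^2 / (3*k^3*T^3),
# and t_i ~ 1/J, so ln t_i = const + B / ln(S)^2.
k = 1.380649e-23        # Boltzmann constant, J/K
T = 298.15              # temperature, K
v = 1.0e-28             # molecular volume, m^3 (assumed)

S = np.array([1.10, 1.15, 1.20, 1.30, 1.40])             # supersaturation ratios (toy)
t_i = np.array([5400.0, 2100.0, 1100.0, 420.0, 210.0])   # induction times, s (toy)

x = 1.0 / np.log(S) ** 2
B, intercept = np.polyfit(x, np.log(t_i), 1)             # slope gives B
gamma = (3.0 * B * k**3 * T**3 / (16.0 * np.pi * v**2)) ** (1.0 / 3.0)
print("interfacial energy gamma ~", gamma, "J/m^2")      # few mJ/m^2, plausible scale
```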
An adaptive finite element method for the inequality-constrained Reynolds equation
NASA Astrophysics Data System (ADS)
Gustafsson, Tom; Rajagopal, Kumbakonam R.; Stenberg, Rolf; Videman, Juha
2018-07-01
We present a stabilized finite element method for the numerical solution of cavitation in lubrication, modeled as an inequality-constrained Reynolds equation. The cavitation model is written as a variable coefficient saddle-point problem and approximated by a residual-based stabilized method. Based on our recent results on the classical obstacle problem, we present optimal a priori estimates and derive novel a posteriori error estimators. The method is implemented as a Nitsche-type finite element technique and shown in numerical computations to be superior to the usually applied penalty methods.
Classical Wigner method with an effective quantum force: application to reaction rates.
Poulsen, Jens Aage; Li, Huaqing; Nyman, Gunnar
2009-07-14
We construct an effective "quantum force" to be used in the classical molecular dynamics part of the classical Wigner method when determining correlation functions. The quantum force is obtained by estimating the most important short time separation of the Feynman paths that enter into the expression for the correlation function. The evaluation of the force is then as easy as classical potential energy evaluations. The ideas are tested on three reaction rate problems. The resulting transmission coefficients are in much better agreement with accurate results than transmission coefficients from the ordinary classical Wigner method.
NASA Astrophysics Data System (ADS)
Wang, Shu; Chen, Xiaodian; de Grijs, Richard; Deng, Licai
2018-01-01
Classical Cepheids are well-known and widely used distance indicators. As distance and extinction are usually degenerate, it is important to develop suitable methods to robustly anchor the distance scale. Here, we introduce a near-infrared optimal distance method to determine both the extinction values of and distances to a large sample of 288 Galactic classical Cepheids. The overall uncertainty in the derived distances is less than 4.9%. We compare our newly determined distances to the Cepheids in our sample with previously published distances to the same Cepheids with Hubble Space Telescope parallax measurements and distances based on the IR surface brightness method, Wesenheit functions, and the main-sequence fitting method. The systematic deviations in the distances determined here with respect to those of previous publications are less than 1%-2%. Hence, we constructed Galactic mid-IR period-luminosity (PL) relations for classical Cepheids in the four Wide-Field Infrared Survey Explorer (WISE) bands (W1, W2, W3, and W4) and the four Spitzer Space Telescope bands ([3.6], [4.5], [5.8], and [8.0]). Based on our sample of hundreds of Cepheids, the WISE PL relations have been determined for the first time; their dispersion is approximately 0.10 mag. Using the currently most complete sample, our Spitzer PL relations represent a significant improvement in accuracy, especially in the [3.6] band, which has the smallest dispersion (0.066 mag). In addition, the average mid-IR extinction curve for Cepheids has been obtained: A_W1/A_Ks ≈ 0.560, A_W2/A_Ks ≈ 0.479, A_W3/A_Ks ≈ 0.507, A_W4/A_Ks ≈ 0.406, A_[3.6]/A_Ks ≈ 0.481, A_[4.5]/A_Ks ≈ 0.469, A_[5.8]/A_Ks ≈ 0.427, and A_[8.0]/A_Ks ≈ 0.427.
Extreme value analysis in biometrics.
Hüsler, Jürg
2009-04-01
We review some approaches to extreme value analysis in the context of biometrical applications. Classical extreme value analysis is based on iid random variables. Two different general methods are applied, and these are discussed together with biometrical examples. Various estimation, testing, and goodness-of-fit procedures for applications are discussed. Furthermore, some non-classical situations are considered, where the data are possibly dependent, where non-stationary behavior is observed in the data, or where the observations are not univariate. A few open problems are also stated.
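A classical block-maxima analysis under the iid assumption can be sketched with scipy's GEV distribution; the "daily" data are simulated and the 100-year return level is only illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
daily = rng.gumbel(loc=20.0, scale=5.0, size=(50, 365))  # 50 "years" of daily values
annual_max = daily.max(axis=1)                            # block maxima

shape, loc, scale = stats.genextreme.fit(annual_max)      # GEV fit (classical iid setting)
rl_100 = stats.genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
print("GEV params:", shape, loc, scale, " 100-year return level:", rl_100)
```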
Simple proof of the quantum benchmark fidelity for continuous-variable quantum devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Namiki, Ryo
2011-04-15
An experimental success criterion for continuous-variable quantum teleportation and memory is to surpass the limit of the average fidelity achieved by classical measure-and-prepare schemes with respect to a Gaussian-distributed set of coherent states. We present an alternative proof of the classical limit based on the familiar notions of state-channel duality and partial transposition. The present method enables us to produce a quantum-domain criterion associated with a given set of measured fidelities.
Stover, Tracy E.; Baker, James S.; Ratliff, Michael D.; ...
2018-03-02
The classic Limiting Surface Density (LSD) method is an empirical calculation technique for analyzing and setting mass limits for fissile items in storage arrays. LSD is a desirable method because it can reduce or eliminate the need for lengthy detailed Monte Carlo models of storage arrays. The original (or classic) method was developed based on idealized arrays of bare spherical metal items in air-spaced cubic units in a water-reflected cubic array. In this case, the geometric and material-based surface densities were acceptably correlated by linear functions. Later updates to the method were made to allow for concrete reflection rather than water, cylindrical masses rather than spheres, different material forms, and noncubic arrays. However, in the intervening four decades since those updates, little work has been done to update the method, especially for use with contemporary highly heterogeneous shipping packages that are noncubic and stored in noncubic arrays. In this work, the LSD method is reevaluated for application to highly heterogeneous shipping packages for fissile material. The package modeled is the 9975 shipping package, currently the primary package used to store fissile material at Savannah River Site's K-Area Complex. The package is neither cubic nor rectangular but resembles nested cylinders of stainless steel, lead, aluminum, and Celotex. The fissile content is assumed to be a cylinder of plutonium metal. The packages may be arranged in arrays with both an equal number of packages per side (package cubic) and an unequal number of packages per side (noncubic). The cubic arrangements are used to derive the 9975-specific material and geometry constants for the classic linear form LSD method. The linear form of the LSD, with noncubic array adjustment, is applied and evaluated against computational models for these packages to determine the critical unit fissile mass. Sensitivity equations are derived from the classic method, and these are also used to make projections of the critical unit fissile mass. It was discovered that the heterogeneous packages have a nonlinear surface density versus critical mass relationship compared to the acceptably linear response of bare spherical fissile masses. Methodology is developed to address the nonlinear response. In so doing, the solution to the nonlinear LSD method becomes decoupled from the critical mass of a single unit, adding to its flexibility. The ability of the method to predict changes in neutron multiplication due to perturbations in a parameter is examined to provide a basis for analyzing upset conditions. In conclusion, a full rederivation of the classic LSD method from diffusion theory is also included as this was found to be lacking in the available literature.
Learning-based computing techniques in geoid modeling for precise height transformation
NASA Astrophysics Data System (ADS)
Erol, B.; Erol, S.
2013-03-01
Precise determination of the local geoid is of particular importance for establishing height control in geodetic GNSS applications, since the classical leveling technique is laborious. A geoid model can be obtained accurately from properly distributed benchmarks with both GNSS and leveling observations, using an appropriate computing algorithm. Besides the classical multivariable polynomial regression equations (MPRE), this study attempts an evaluation of learning-based computing algorithms: artificial neural networks (ANNs), the adaptive network-based fuzzy inference system (ANFIS), and especially the wavelet neural network (WNN) approach in geoid surface approximation. These algorithms were developed in parallel with advances in computer technologies and have recently been used to solve complex nonlinear problems in many applications. However, they are rather new to the precise modeling of the Earth's gravity field. Within the scope of the study, these methods were applied to Istanbul GPS Triangulation Network data. The performances of the methods were assessed considering the validation results of the geoid models at the observation points. In conclusion, the ANFIS and WNN revealed higher prediction accuracy than the ANN and MPRE methods. Besides prediction capability, the methods are also compared and discussed from a practical point of view.
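The classical MPRE baseline mentioned above amounts to fitting a low-order polynomial surface to geoid undulations N = h_GNSS - H_leveling at benchmarks; the sketch below uses synthetic benchmark data and a second-order surface as assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
lat, lon = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)   # local normalized coords
N = 36.0 + 1.2 * lat - 0.8 * lon + 0.3 * lat * lon + 0.02 * rng.normal(size=200)

# Second-order polynomial surface fit by least squares:
X = np.column_stack([np.ones_like(lat), lat, lon, lat * lon, lat**2, lon**2])
coef, *_ = np.linalg.lstsq(X, N, rcond=None)

# Predict N at a new GNSS point, so its ellipsoidal height can be
# converted to an orthometric height H = h - N:
p = np.array([1.0, 0.4, 0.7, 0.4 * 0.7, 0.4**2, 0.7**2])
print("predicted N:", p @ coef)
```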
A Study Comparing the Pedagogical Effectiveness of Virtual Worlds and of Classical Methods
2014-08-01
Approved for public release; distribution is unlimited. This experiment tests whether a virtual... A thesis by Benjamin Peters comparing the pedagogical effectiveness of virtual worlds and of traditional training methods.
van Belkum, Alex; Halimi, Diane; Bonetti, Eve-Julie; Renzi, Gesuele; Cherkaoui, Abdessalam; Sauvonnet, Véronique; Martelin, Roland; Durand, Géraldine; Chatellier, Sonia; Zambardi, Gilles; Engelhardt, Anette; Karlsson, Åsa; Schrenzel, Jacques
2015-01-01
Precise assessment of potential therapeutic synergy, antagonism or indifference between antimicrobial agents currently depends on time-consuming and hard-to-standardize in vitro chequerboard titration methods. We here present a method based on a novel two-dimensional antibiotic gradient technique named Xact™. We used a test comprising a combination of perpendicular gradients of meropenem and colistin in a single quadrant. We compared test outcomes with those obtained with classical chequerboard microbroth dilution testing in a study involving 27 unique strains of multidrug-resistant Acinetobacter baumannii from diverse origins. We were able to demonstrate 92% concordance between the new technology and classical chequerboard titration using the A. baumannii collection. Two strains could not be analysed by Xact™ due to their out-of-range MIC of meropenem (>128 mg/L). The new test was shown to be diagnostically useful, easy to implement and less labour intensive than the classical method. © The Author 2014. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Skiera, Christina; Steliopoulos, Panagiotis; Kuballa, Thomas; Diehl, Bernd; Holzgrabe, Ulrike
2014-05-01
Indices like acid value, peroxide value, and saponification value play an important role in quality control and identification of lipids. Requirements on these parameters are given by the monographs of the European Pharmacopoeia. (1)H NMR spectroscopy provides a fast and simple alternative to these classical approaches. In the present work a new (1)H NMR approach to determine the acid value is described. The method was validated using a statistical approach based on a variance components model. The performance under repeatability and in-house reproducibility conditions was assessed. We applied this (1)H NMR assay to a wide range of different fatty oils. A total of 305 oil and fat samples were examined by both the classical and the NMR method. Except for hard fat, the data obtained by the two methods were in good agreement. The (1)H NMR method was adapted to analyse waxes and oleyl oleate. Furthermore, the effect of solvent and, in the case of castor oil, the effect of the oil matrix on line broadening and chemical shift of the carboxyl group signal are discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Samat, N. A.; Ma'arof, S. H. Mohd Imam
2015-05-01
Disease mapping is a method to display the geographical distribution of disease occurrence, which generally involves the usage and interpretation of a map to show the incidence of certain diseases. Relative risk (RR) estimation is one of the most important issues in disease mapping. This paper begins by providing a brief overview of Chikungunya disease. This is followed by a review of the classical model used in disease mapping, based on the standardized morbidity ratio (SMR), which we then apply to our Chikungunya data. We then fit an extension of the classical model, which we refer to as a Poisson-Gamma model, in which prior distributions for the relative risks are assumed known. Both results are displayed and compared using maps, and the extended model reveals a smoother map with fewer extreme values of estimated relative risk. Extensions of this paper will consider other methods relevant to overcoming the drawbacks of the existing ones, in order to inform and direct government strategy for monitoring and controlling Chikungunya disease.
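To make the two estimators concrete: the SMR is the ratio of observed to expected counts, while the Poisson-Gamma model shrinks that ratio toward the prior mean. A minimal sketch, with invented region counts and an assumed Gamma(a, b) prior:

    import numpy as np

    O = np.array([4, 0, 12, 7])          # observed cases per region (hypothetical)
    E = np.array([5.1, 2.3, 6.8, 7.4])   # expected cases from reference rates (hypothetical)

    smr = O / E                          # classical estimate; noisy where E is small
    a, b = 2.0, 2.0                      # Gamma(a, b) prior on the relative risk (assumed)
    rr_pg = (O + a) / (E + b)            # posterior mean under the Poisson-Gamma model
    print(smr.round(2))                  # e.g. an extreme 0.00 for the zero-count region
    print(rr_pg.round(2))                # smoother, fewer extreme values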
NASA Technical Reports Server (NTRS)
Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash
2003-01-01
Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and spacefilling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.
ERIC Educational Resources Information Center
Lee, Yi-Hsuan; von Davier, Alina A.
2008-01-01
The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…
Ozdemir, Durmus; Dinc, Erdal
2004-07-01
Simultaneous determination of binary mixtures of pyridoxine hydrochloride and thiamine hydrochloride in a vitamin combination using UV-visible spectrophotometry with classical least squares (CLS) and three newly developed genetic algorithm (GA)-based multivariate calibration methods is demonstrated. The three genetic multivariate calibration methods are Genetic Classical Least Squares (GCLS), Genetic Inverse Least Squares (GILS) and Genetic Regression (GR). The sample data set contains the UV-visible spectra of 30 synthetic mixtures (8 to 40 microg/ml) of these vitamins and 10 tablets containing 250 mg of each vitamin. The spectra cover the range from 200 to 330 nm in 0.1 nm intervals. Several calibration models were built with the four methods for the two components. Overall, the standard error of calibration (SEC) and the standard error of prediction (SEP) for the synthetic data ranged from <0.01 to 0.43 microg/ml for all four methods. The SEP values for the tablets ranged from 2.91 to 11.51 mg/tablet. A comparison of the wavelengths selected by the genetic algorithm for each component using the GR method is also included.
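A minimal sketch of the CLS step named above: with training spectra A (mixtures x wavelengths) and known concentrations C, CLS fits pure-component spectra K from A = C K and inverts the model for a new spectrum. The two-component data are synthetic, and the GA-based wavelength-selection variants (GCLS, GILS, GR) are not reproduced.

    import numpy as np

    rng = np.random.default_rng(1)
    wl = np.linspace(200, 330, 260)               # wavelength grid (nm)
    k1 = np.exp(-((wl - 240) / 15) ** 2)          # toy pure spectrum, component 1
    k2 = np.exp(-((wl - 265) / 20) ** 2)          # toy pure spectrum, component 2
    K_true = np.vstack([k1, k2])

    C_train = rng.uniform(8, 40, size=(30, 2))    # training concentrations
    A_train = C_train @ K_true + 0.002 * rng.standard_normal((30, 260))

    K_hat, *_ = np.linalg.lstsq(C_train, A_train, rcond=None)  # calibration: A = C K
    a_new = np.array([20.0, 30.0]) @ K_true                    # spectrum of an unknown mixture
    c_hat, *_ = np.linalg.lstsq(K_hat.T, a_new, rcond=None)    # prediction step
    print(c_hat.round(2))                                      # approximately [20, 30]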
NASA Astrophysics Data System (ADS)
Beck, Joakim; Dia, Ben Mansour; Espath, Luis F. R.; Long, Quan; Tempone, Raúl
2018-06-01
In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters for which the average computational cost is minimized according to the desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
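For orientation, a hedged sketch of the classical double-loop Monte Carlo estimator that the paper improves on, for a toy linear-Gaussian model (invented here); the Laplace-based importance sampling of the inner loop is not reproduced. The log of the inner-loop evidence estimate is where underflow occurs when the inner sample is small.

    import numpy as np

    rng = np.random.default_rng(2)
    sigma = 0.5                                   # observation noise (assumed)

    def lik(y, theta, xi):                        # Gaussian likelihood p(y | theta, design xi)
        return np.exp(-0.5 * ((y - xi * theta) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    def eig_dlmc(xi, n_outer=2000, n_inner=2000):
        # EIG = E_{theta,y}[ log p(y|theta) - log p(y) ], with p(y) from the inner loop
        theta = rng.standard_normal(n_outer)              # prior draws
        y = xi * theta + sigma * rng.standard_normal(n_outer)
        inner = rng.standard_normal((n_outer, n_inner))   # fresh prior draws
        p_y = lik(y[:, None], inner, xi).mean(axis=1)     # inner-loop evidence estimate
        return np.mean(np.log(lik(y, theta, xi)) - np.log(p_y))

    print(eig_dlmc(xi=1.0))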
Elementary test for nonclassicality based on measurements of position and momentum
NASA Astrophysics Data System (ADS)
Fresta, Luca; Borregaard, Johannes; Sørensen, Anders S.
2015-12-01
We generalize a nonclassicality test described by Kot et al. [Phys. Rev. Lett. 108, 233601 (2012), 10.1103/PhysRevLett.108.233601], which can be used to rule out any classical description of a physical system. The test is based on measurements of quadrature operators and works by proving a contradiction with the classical description in terms of a probability distribution in phase space. As opposed to the previous work, we generalize the test to include states without rotational symmetry in phase space. Furthermore, we compare the performance of the nonclassicality test with classical tomography methods based on the inverse Radon transform, which can also be used to establish the quantum nature of a physical system. In particular, we consider a nonclassicality test based on the so-called filtered back-projection formula. We show that the general nonclassicality test is conceptually simpler, requires less assumptions on the system, and is statistically more reliable than the tests based on the filtered back-projection formula. As a specific example, we derive the optimal test for quadrature squeezed single-photon states and show that the efficiency of the test does not change with the degree of squeezing.
Overuse Injuries in Professional Ballet
Sobrino, Francisco José; de la Cuadra, Crótida; Guillén, Pedro
2015-01-01
Background: Despite overuse injuries being previously described as the most frequent in ballet, there are no studies on professional dancers providing the specific clinical diagnoses or type of injury based on the discipline. Hypothesis: Overuse injuries are the most frequent injuries in ballet, with differences in the type and frequency of injuries based on discipline. Study Design: Cross-sectional study; Level of evidence, 3. Methods: This was a descriptive cross-sectional study performed between January 1, 2005, and October 10, 2010, on injuries occurring in professional dancers from leading Spanish dance companies who practiced disciplines such as classical, neoclassical, contemporary, and Spanish ballet. Data, including type of injury, were obtained from specialized medical services at the Trauma Service, Fremap, Madrid, Spain. Results: A total of 486 injuries were evaluated, a significant number of which were overuse disorders (P < .0001), especially in the most technically demanding discipline of classical ballet (82.60%). Injuries were more frequent among female dancers (75.90%) and classical ballet (83.60%). A statistically significant prevalence of patellofemoral pain syndrome was found in the classical discipline (P = .007). Injuries of the adductor muscles of the thigh (P = .001) and of the low back facet (P = .02) in the Spanish ballet discipline and lateral snapping hip (P = .02) in classical and Spanish ballet disciplines were significant. Conclusion: Overuse injuries were the most frequent injuries among the professional dancers included in this study. The prevalence of injuries was greater for the most technically demanding discipline (classical ballet) as well as for women. Patellofemoral pain syndrome was the most prevalent overuse injury, followed by Achilles tendinopathy, patellar tendinopathy, and mechanical low back pain. Clinical Relevance: Specific clinical diagnoses and injury-based differences between the disciplines are a key factor in ballet. PMID:26665100
NASA Astrophysics Data System (ADS)
Glushak, P. A.; Markiv, B. B.; Tokarchuk, M. V.
2018-01-01
We present a generalization of Zubarev's nonequilibrium statistical operator method based on the principle of maximum Renyi entropy. In the framework of this approach, we obtain transport equations for the basic set of parameters of the reduced description of nonequilibrium processes in a classical system of interacting particles using Liouville equations with fractional derivatives. For a classical system of particles in a medium with a fractal structure, we obtain a non-Markovian diffusion equation with fractional spatial derivatives. For a concrete model of the frequency dependence of a memory function, we obtain a generalized Cattaneo-type diffusion equation with the spatial and temporal fractality taken into account. We present a generalization of nonequilibrium thermofield dynamics in Zubarev's nonequilibrium statistical operator method in the framework of Renyi statistics.
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1992-01-01
The development of efficient iterative solution methods for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations is discussed. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. In this work, another approach, based on the classical conjugate gradient method and known as the Generalized Minimum Residual (GMRES) algorithm, is investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems. Here, we investigate the suitability of this algorithm for solving the system of nonlinear equations that arise in unsteady Navier-Stokes solvers at each time step.
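As a minimal illustration of the idea, one implicit step solved with GMRES from SciPy; the small tridiagonal system below merely stands in for the linearized Navier-Stokes operator and is not the authors' solver.

    import numpy as np
    from scipy.sparse import diags, identity
    from scipy.sparse.linalg import gmres

    n, dt = 200, 0.1
    A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))  # stand-in Jacobian
    u = np.ones(n)

    # Backward Euler step (I + dt*A) u_new = u, solved iteratively by GMRES.
    u_new, info = gmres(identity(n) + dt * A, u)
    print(info)  # 0 indicates convergence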
NASA Astrophysics Data System (ADS)
Garambois, Pierre; Besset, Sebastien; Jézéquel, Louis
2015-07-01
This paper presents a methodology for the multi-objective (MO) shape optimization of plate structures under stress criteria, based on a mixed Finite Element Model (FEM) enhanced with a sub-structuring method. The optimization is performed with a classical Genetic Algorithm (GA) method based on Pareto-optimal solutions and considers thickness distribution parameters and antagonistic objectives, among them stress criteria. We implement a displacement-stress Dynamic Mixed FEM (DM-FEM) for plate structure vibration analysis. Such a model gives privileged access to the stress within the plate structure compared to the primal classical FEM, and features a linear dependence on the thickness parameters. A sub-structuring reduction method is also computed in order to reduce the size of the mixed FEM and split the given structure into smaller ones with their own thickness parameters. These methods combined enable a fast and stress-wise efficient structural analysis, and improve the performance of the repetitive GA. A few cases of minimizing the mass and the maximum Von Mises stress within a plate structure under a dynamic load demonstrate the relevance of our method, with promising results. It is able to satisfy multiple damage criteria with different thickness distributions, and uses a smaller FEM.
NASA Astrophysics Data System (ADS)
Luk, B. L.; Liu, K. P.; Tong, F.; Man, K. F.
2010-05-01
The impact-acoustics method utilizes the information contained in the acoustic signals generated by tapping a structure with a small metal object. It offers a convenient and cost-efficient way to inspect tile-wall bonding integrity. However, surface irregularities cause abnormal multiple bounces in practical inspection, and the spectral characteristics of those bounces can easily be confused with the signals obtained from different bonding qualities. As a result, they degrade classic frequency-domain, feature-based classification methods. Another crucial difficulty posed by the implementation is the additive noise present in practical environments, which may also cause feature mismatch and false judgment. In order to solve this problem, the work described in this paper aims to develop a robust inspection method that applies a model-based strategy and utilizes wavelet-domain features with hidden Markov modeling. It derives a bonding integrity recognition approach with enhanced immunity to surface roughness as well as environmental noise. With the help of specially designed artificial sample slabs, experiments have been carried out with impact acoustic signals contaminated by real environmental noises acquired under practical inspection conditions. The results are compared with those of the classic method to demonstrate the effectiveness of the proposed method.
Yu, Weixuan; Neckles, Carla; Chang, Andrew; Bommineni, Gopal Reddy; Spagnuolo, Lauren; Zhang, Zhuo; Liu, Nina; Lai, Christina; Truglio, James; Tonge, Peter J.
2015-01-01
The classical methods for quantifying drug-target residence time (tR) use loss or regain of enzyme activity in progress curve kinetic assays. However, such methods become imprecise at very long residence times, motivating the use of alternative strategies. Using the NAD(P)H-dependent FabI enoyl-ACP reductase as a model system, we developed a Penefsky column-based method for direct measurement of tR, where the off-rate of the drug was determined with radiolabeled [adenylate-32P] NAD(P+) cofactor. Twenty-three FabI inhibitors were analyzed and a mathematical model was used to estimate limits on the tR values of each inhibitor based on the percent drug-target complex recovery following gel filtration. In general, this method showed good agreement with the classical steady state kinetic methods for compounds with tR values of 10-100 min. In addition, we were able to identify seven long tR inhibitors (100-1500 min) and to accurately determine their tR values. The method was then used to measure tR as a function of temperature, an analysis not previously possible using the standard kinetic approach due to decreased NAD(P)H stability at elevated temperatures. In general, a 4-fold difference in tR was observed when the temperature was increased from 25 °C to 37 °C. PMID:25684450
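A small worked example of the relationship that underlies the column method: if dissociation is first order, the fraction of complex surviving a gel filtration of duration t_f is exp(-k_off * t_f) with t_R = 1/k_off, so percent recovery translates into a residence-time estimate. The numbers are illustrative, not data from the study.

    import numpy as np

    t_f = 2.0                        # minutes spent on the column (assumed)
    recovery = 0.90                  # fraction of radiolabeled complex recovered (hypothetical)

    k_off = -np.log(recovery) / t_f  # first-order dissociation rate
    t_R = 1.0 / k_off                # residence time
    print(f"t_R ~ {t_R:.0f} min")    # ~19 min; recovery near 1 signals a long-t_R inhibitor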
Juárez, M; Polvillo, O; Contò, M; Ficco, A; Ballico, S; Failla, S
2008-05-09
Four different extraction-derivatization methods commonly used for fatty acid analysis in meat (the in situ or one-step method, the saponification method, the classic method, and a combination of classic extraction and saponification derivatization) were tested. The in situ method had low recovery and variation. The saponification method showed the best balance between recovery, precision, repeatability and reproducibility. The classic method had high recovery and acceptable variation values, except for the polyunsaturated fatty acids, which showed higher variation than with the former methods. The combination of extraction and methylation steps had high recovery values, but the precision, repeatability and reproducibility were not acceptable. Therefore the saponification method would be more convenient for polyunsaturated fatty acid analysis, whereas the in situ method would be an alternative for fast analysis. However, the classic method would be the method of choice for the determination of the different lipid classes.
Fish genome manipulation and directional breeding.
Ye, Ding; Zhu, ZuoYan; Sun, YongHua
2015-02-01
Aquaculture is one of the fastest developing agricultural industries worldwide. One of the most important factors for sustainable aquaculture is the development of high performing culture strains. Genome manipulation offers a powerful method to achieve rapid and directional breeding in fish. We review the history of fish breeding methods based on classical genome manipulation, including polyploidy breeding and nuclear transfer. Then, we discuss the advances and applications of fish directional breeding based on transgenic technology and recently developed genome editing technologies. These methods offer increased efficiency, precision and predictability in genetic improvement over traditional methods.
New insights into faster computation of uncertainties
NASA Astrophysics Data System (ADS)
Bhattacharya, Atreyee
2012-11-01
Heavy computation power, lengthy simulations, and an exhaustive number of model runs—often these seem like the only statistical tools that scientists have at their disposal when computing uncertainties associated with predictions, particularly in cases of environmental processes such as groundwater movement. However, calculation of uncertainties need not be as lengthy, a new study shows. Comparing two approaches—the classical Bayesian “credible interval” and a less commonly used regression-based “confidence interval” method—Lu et al. show that for many practical purposes both methods provide similar estimates of uncertainties. The advantage of the regression method is that it demands 10-1000 model runs, whereas the classical Bayesian approach requires 10,000 to millions of model runs.
An entropy method for induced drag minimization
NASA Technical Reports Server (NTRS)
Greene, George C.
1989-01-01
A fundamentally new approach to the aircraft minimum induced drag problem is presented. The method, a 'viscous lifting line', is based on the minimum entropy production principle and does not require the planar wake assumption. An approximate, closed form solution is obtained for several wing configurations including a comparison of wing extension, winglets, and in-plane wing sweep, with and without a constraint on wing-root bending moment. Like the classical lifting-line theory, this theory predicts that induced drag is proportional to the square of the lift coefficient and inversely proportioinal to the wing aspect ratio. Unlike the classical theory, it predicts that induced drag is Reynolds number dependent and that the optimum spanwise circulation distribution is non-elliptic.
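For reference, the classical lifting-line proportionality referred to above is CDi = CL^2 / (pi e AR); a quick numerical check with invented values (the paper's Reynolds-number-dependent correction is not reproduced):

    import math

    CL, AR, e = 0.8, 8.0, 0.9            # lift coefficient, aspect ratio, span efficiency
    CDi = CL ** 2 / (math.pi * e * AR)   # grows as CL^2, shrinks as 1/AR
    print(f"CDi = {CDi:.4f}")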
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaofeng, E-mail: xfyang@math.sc.edu; Han, Daozhi, E-mail: djhan@iu.edu
2017-02-01
In this paper, we develop a series of linear, unconditionally energy stable numerical schemes for solving the classical phase field crystal model. The temporal discretizations are based on the first order Euler method, the second order backward differentiation formulas (BDF2) and the second order Crank–Nicolson method, respectively. The schemes lead to linear elliptic equations to be solved at each time step, and the induced linear systems are symmetric positive definite. We prove that all three schemes are unconditionally energy stable rigorously. Various classical numerical experiments in 2D and 3D are performed to validate the accuracy and efficiency of the proposed schemes.
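A minimal sketch of the BDF2 discretization named above, applied to the linear model problem u' = -Lu so that each step is a linear solve (here scalar); the phase field crystal operator, stabilization terms, and energy-stability machinery are not reproduced.

    import math

    L, dt, T = 4.0, 0.01, 1.0
    u_nm1 = 1.0                        # u^0
    u_n = u_nm1 / (1.0 + dt * L)       # u^1 bootstrapped with backward Euler

    for _ in range(int(T / dt) - 1):
        # BDF2: (3 u^{n+1} - 4 u^n + u^{n-1}) / (2 dt) = -L u^{n+1}
        u_np1 = (4 * u_n - u_nm1) / (3 + 2 * dt * L)
        u_nm1, u_n = u_n, u_np1

    print(u_n, math.exp(-L * T))       # second-order agreement with the exact decay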
Computational Insights into Materials and Interfaces for Capacitive Energy Storage
Zhan, Cheng; Lian, Cheng; Zhang, Yu; Thompson, Matthew W.; Xie, Yu; Wu, Jianzhong; Kent, Paul R. C.; Cummings, Peter T.; Wesolowski, David J.
2017-01-01
Supercapacitors such as electric double-layer capacitors (EDLCs) and pseudocapacitors are becoming increasingly important in the field of electrical energy storage. Theoretical study of energy storage in EDLCs focuses on solving for the electric double-layer structure in different electrode geometries and electrolyte components, which can be achieved by molecular simulations such as classical molecular dynamics (MD), classical density functional theory (classical DFT), and Monte Carlo (MC) methods. In recent years, combining first-principles and classical simulations to investigate carbon-based EDLCs has shed light on the importance of quantum capacitance in graphene-like 2D systems. More recently, the development of joint density functional theory (JDFT) enables self-consistent electronic-structure calculation for an electrode being solvated by an electrolyte. In contrast with the large amount of theoretical and computational effort on EDLCs, theoretical understanding of pseudocapacitance is very limited. In this review, we first introduce popular modeling methods and then focus on several important aspects of EDLCs including nanoconfinement, quantum capacitance, dielectric screening, and novel 2D electrode design; we also briefly touch upon the pseudocapacitive mechanism in RuO2. We summarize and conclude with an outlook for the future of materials simulation and design for capacitive energy storage. PMID:28725531
Derivative free Davidon-Fletcher-Powell (DFP) for solving symmetric systems of nonlinear equations
NASA Astrophysics Data System (ADS)
Mamat, M.; Dauda, M. K.; Mohamed, M. A. bin; Waziri, M. Y.; Mohamad, F. S.; Abdullah, H.
2018-03-01
Problems arising from the work of engineers, economists, modellers, industry, and scientists are mostly nonlinear in nature, and numerical solution of such systems is widely applied in those areas of mathematics. Over the years there has been significant theoretical study of methods for solving such systems; despite these efforts, the methods developed have deficiencies. As a contribution to solving systems of the form F(x) = 0, x ∈ Rn, a derivative-free method via the classical Davidon-Fletcher-Powell (DFP) update is presented. This is achieved by simply approximating the inverse Hessian matrix Q_{k+1}^{-1} by θ_k I. The modified method satisfies the descent condition and possesses local superlinear convergence properties. Interestingly, without computing any derivative, the proposed method never failed to converge throughout the numerical experiments. Performance is reported in terms of number of iterations and CPU time, with different initial starting points, on 40 benchmark test problems. With the aid of the squared-norm merit function and a derivative-free line search technique, the approach yields a method for solving symmetric systems of nonlinear equations that significantly reduces CPU time and number of iterations compared to its counterparts. A comparison between the proposed method and the classical DFP update shows that the proposed method is the top performer, outperforming the existing method in almost all cases. In terms of number of iterations, out of the 40 problems, the proposed method solved 38 successfully (95%) while classical DFP solved 2 (5%). In terms of CPU time, the proposed method solved 29 of the 40 problems (72.5%) successfully, whereas classical DFP solved 11 (27.5%). The method is valid in terms of derivation, reliable in terms of number of iterations, and accurate in terms of CPU time; thus it is suitable and achieves the objective.
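A hedged sketch of the iteration described above: replacing the DFP inverse-Hessian update by the scalar matrix θ_k I gives steps x_{k+1} = x_k − θ_k F(x_k), globalized with a derivative-free backtracking search on the squared-norm merit function. The θ_k update below is a Barzilai-Borwein-like choice assumed for illustration, not necessarily the authors' formula.

    import numpy as np

    def F(x):  # a small symmetric nonlinear system F(x) = 0
        return np.array([x[0] ** 3 - x[1], x[1] ** 3 - x[0]])

    def solve(x, tol=1e-10, max_iter=500):
        theta = 1.0                                  # scalar stand-in for the inverse Hessian
        for _ in range(max_iter):
            Fx = F(x)
            if np.linalg.norm(Fx) < tol:
                break
            d = -theta * Fx                          # derivative-free quasi-Newton direction
            alpha, m0 = 1.0, Fx @ Fx                 # backtrack on the merit ||F||^2
            while (F(x + alpha * d) @ F(x + alpha * d)) > m0 and alpha > 1e-8:
                alpha *= 0.5
            s = alpha * d
            y = F(x + s) - Fx
            theta = abs(s @ y) / (y @ y + 1e-16)     # BB-like scalar update (assumption)
            x = x + s
        return x

    print(solve(np.array([0.5, 2.0])))               # converges to a root of F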
Skonieczna, Katarzyna; Styczyński, Jan; Krenska, Anna; Wysocki, Mariusz; Jakubowska, Aneta; Grzybowski, Tomasz
2016-01-01
Aim of the study: In recent years, RNA analysis has been increasingly used in clinical and forensic genetics. Nevertheless, a major limitation of RNA-based applications is very low RNA stability in biological material, due to the RNAse activity. This highlights the need for improving the methods of RNA collection and storage. Technological approaches such as FTA Classic Cards (Whatman) could provide a solution for the problem of RNA degradation. However, different methods of RNA isolation from FTA cards could have diverse effects on RNA quantity and quality. The purpose of this research was to analyze the utility of three different methods of RNA isolation from peripheral blood collected on FTA Classic Cards (Whatman). The study also aimed at assessing RNA stability in bloodstains deposited on FTA cards. Material and methods: The study was performed on peripheral bloodstains collected from 59 individuals on FTA Classic Cards (Whatman). RNA was isolated with High Pure RNA Isolation Kit (Roche Diagnostics), Universal RNA/miRNA Purification (EURx) and TRIzol Reagent (Life Technologies). RNA was subjected to quantitative analysis followed by reverse transcription and Real-Time PCR reaction. Results: The study has shown that FTA Classic Cards (Whatman) are useful tools for storing bloodstains at room temperature for RNA analysis. Moreover, the method of RNA extraction employing TRIzol Reagent (Life Technologies) provides the highest efficiency and reproducibility for samples stored for no more than 2 years. Conclusions: The FTA cards are suitable for collecting and storing bloodstains for RNA analysis in clinical and forensic genetics.
Fractional spectral and pseudo-spectral methods in unbounded domains: Theory and applications
NASA Astrophysics Data System (ADS)
Khosravian-Arab, Hassan; Dehghan, Mehdi; Eslahchi, M. R.
2017-06-01
This paper is intended to provide exponentially accurate Galerkin, Petrov-Galerkin and pseudo-spectral methods for fractional differential equations on a semi-infinite interval. We start our discussion by introducing two new non-classical Lagrange basis functions, NLBFs-1 and NLBFs-2, which are based on the two new families of the associated Laguerre polynomials, GALFs-1 and GALFs-2, obtained recently by the authors in [28]. With respect to the NLBFs-1 and NLBFs-2, two new non-classical interpolants based on the associated Laguerre-Gauss and Laguerre-Gauss-Radau points are introduced and then fractional (pseudo-spectral) differentiation (and integration) matrices are derived. Convergence and stability of the new interpolants are proved in detail. Several numerical examples are considered to demonstrate the validity and applicability of the basis functions to approximate fractional derivatives (and integrals) of some functions. Moreover, the pseudo-spectral, Galerkin and Petrov-Galerkin methods are successfully applied to solve some physical ordinary differential equations of either fractional or integer order. Some useful comments from the numerical point of view on the Galerkin and Petrov-Galerkin methods are listed at the end.
A quantum–quantum Metropolis algorithm
Yung, Man-Hong; Aspuru-Guzik, Alán
2012-01-01
The classical Metropolis sampling method is a cornerstone of many statistical modeling applications that range from physics, chemistry, and biology to economics. This method is particularly suitable for sampling the thermal distributions of classical systems. The challenge of extending this method to the simulation of arbitrary quantum systems is that, in general, eigenstates of quantum Hamiltonians cannot be obtained efficiently with a classical computer. However, this challenge can be overcome by quantum computers. Here, we present a quantum algorithm which fully generalizes the classical Metropolis algorithm to the quantum domain. The meaning of quantum generalization is twofold: The proposed algorithm is not only applicable to both classical and quantum systems, but also offers a quantum speedup relative to the classical counterpart. Furthermore, unlike the classical method of quantum Monte Carlo, this quantum algorithm does not suffer from the negative-sign problem associated with fermionic systems. Applications of this algorithm include the study of low-temperature properties of quantum systems, such as the Hubbard model, and preparing the thermal states of sizable molecules to simulate, for example, chemical reactions at an arbitrary temperature. PMID:22215584
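For contrast, a minimal classical Metropolis sampler of a Boltzmann distribution, the algorithm the quantum version above generalizes; the 1D double-well potential is an arbitrary example.

    import numpy as np

    rng = np.random.default_rng(3)
    beta = 2.0                            # inverse temperature 1/kT
    E = lambda x: (x ** 2 - 1.0) ** 2     # double-well potential energy

    x, samples = 0.0, []
    for _ in range(50_000):
        x_new = x + 0.5 * rng.standard_normal()            # symmetric proposal
        if rng.random() < np.exp(-beta * (E(x_new) - E(x))):
            x = x_new                                      # Metropolis acceptance
        samples.append(x)

    print(np.mean(samples), np.var(samples))  # thermal averages over exp(-beta E)/Z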
A Synthetic Approach to the Transfer Matrix Method in Classical and Quantum Physics
ERIC Educational Resources Information Center
Pujol, O.; Perez, J. P.
2007-01-01
The aim of this paper is to propose a synthetic approach to the transfer matrix method in classical and quantum physics. This method is an efficient tool to deal with complicated physical systems of practical importance in geometrical light or charged particle optics, classical electronics, mechanics, electromagnetics and quantum physics. Teaching…
Hybridizable discontinuous Galerkin method for the 2-D frequency-domain elastic wave equations
NASA Astrophysics Data System (ADS)
Bonnasse-Gahot, Marie; Calandra, Henri; Diaz, Julien; Lanteri, Stéphane
2018-04-01
Discontinuous Galerkin (DG) methods are nowadays actively studied and increasingly exploited for the simulation of large-scale time-domain (i.e. unsteady) seismic wave propagation problems. Although theoretically applicable to frequency-domain problems as well, their use in this context has been hampered by the potentially large number of coupled unknowns they incur, especially in the 3-D case, as compared to classical continuous finite element methods. In this paper, we address this issue in the framework of the so-called hybridizable discontinuous Galerkin (HDG) formulations. As a first step, we study an HDG method for the resolution of the frequency-domain elastic wave equations in the 2-D case. We describe the weak formulation of the method and provide some implementation details. The proposed HDG method is assessed numerically including a comparison with a classical upwind flux-based DG method, showing better overall computational efficiency as a result of the drastic reduction of the number of globally coupled unknowns in the resulting discrete HDG system.
Effects of tunnelling and asymmetry for system-bath models of electron transfer
NASA Astrophysics Data System (ADS)
Mattiat, Johann; Richardson, Jeremy O.
2018-03-01
We apply the newly derived nonadiabatic golden-rule instanton theory to asymmetric models describing electron-transfer in solution. The models go beyond the usual spin-boson description and have anharmonic free-energy surfaces with different values for the reactant and product reorganization energies. The instanton method gives an excellent description of the behaviour of the rate constant with respect to asymmetry for the whole range studied. We derive a general formula for an asymmetric version of the Marcus theory based on the classical limit of the instanton and find that this gives significant corrections to the standard Marcus theory. A scheme is given to compute this rate based only on equilibrium simulations. We also compare the rate constants obtained by the instanton method with its classical limit to study the effect of tunnelling and other quantum nuclear effects. These quantum effects can increase the rate constant by orders of magnitude.
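For reference, the standard golden-rule Marcus rate that the asymmetric version generalizes is k = (2π/ħ)|Δ|²(4πλk_BT)^(−1/2) exp[−(ΔG+λ)²/(4λk_BT)]; a quick evaluation with invented parameters (the paper's asymmetric correction is not reproduced):

    import math

    hbar = 6.582e-16   # reduced Planck constant (eV s)
    kT = 0.025         # k_B T at room temperature (eV)
    delta = 0.005      # diabatic coupling (eV, hypothetical)
    lam = 0.8          # reorganization energy (eV, hypothetical)
    dG = -0.3          # reaction free energy (eV, hypothetical)

    k = (2 * math.pi / hbar) * delta ** 2 \
        * (4 * math.pi * lam * kT) ** -0.5 \
        * math.exp(-(dG + lam) ** 2 / (4 * lam * kT))
    print(f"k ~ {k:.3e} s^-1")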
An extension of the directed search domain algorithm to bilevel optimization
NASA Astrophysics Data System (ADS)
Wang, Kaiqiang; Utyuzhnikov, Sergey V.
2017-08-01
A method is developed for generating a well-distributed Pareto set for the upper level in bilevel multiobjective optimization. The approach is based on the Directed Search Domain (DSD) algorithm, which is a classical approach for generation of a quasi-evenly distributed Pareto set in multiobjective optimization. The approach contains a double-layer optimizer designed in a specific way under the framework of the DSD method. The double-layer optimizer is based on bilevel single-objective optimization and aims to find a unique optimal Pareto solution rather than generate the whole Pareto frontier on the lower level in order to improve the optimization efficiency. The proposed bilevel DSD approach is verified on several test cases, and a relevant comparison against another classical approach is made. It is shown that the approach can generate a quasi-evenly distributed Pareto set for the upper level with relatively low time consumption.
Neural Networks and other Techniques for Fault Identification and Isolation of Aircraft Systems
NASA Technical Reports Server (NTRS)
Innocenti, M.; Napolitano, M.
2003-01-01
Fault identification, isolation, and accommodation have become critical issues in the overall performance of advanced aircraft systems. Neural networks have been shown to be a very attractive alternative to classic adaptation methods for identification and control of nonlinear dynamic systems. The purpose of this paper is to show the improvements in neural network applications achievable through the use of learning algorithms more efficient than the classic Back-Propagation, and through the implementation of the neural schemes in parallel hardware. The results of the analysis of a scheme for Sensor Failure, Detection, Identification and Accommodation (SFDIA) using experimental flight data of a research aircraft model are presented. Conventional approaches to the problem are based on observers and Kalman filters, while more recent methods are based on neural approximators. The work described in this paper is based on the use of neural networks (NNs) as on-line learning nonlinear approximators. The performances of two different neural architectures were compared. The first architecture is based on a Multi Layer Perceptron (MLP) NN trained with the Extended Back Propagation algorithm (EBPA). The second architecture is based on a Radial Basis Function (RBF) NN trained with the Extended-MRAN (EMRAN) algorithms. In addition, alternative methods for communication-link fault detection and accommodation are presented, for multiple unmanned aircraft applications.
NASA Astrophysics Data System (ADS)
Gharibnezhad, Fahit; Mujica, Luis E.; Rodellar, José
2015-01-01
Using Principal Component Analysis (PCA) for Structural Health Monitoring (SHM) has received considerable attention over the past few years. PCA has been used not only as a direct method to identify, classify and localize damage but also as a significant preliminary step for other methods. Despite its many strengths, PCA is very sensitive to outliers: anomalous observations that can distort the variance and the covariance, which are vital ingredients of the PCA method. Therefore, results based on PCA in the presence of outliers are not fully satisfactory. As its main contribution, this work suggests the use of a robust variant of PCA that is not sensitive to outliers as an effective way to deal with this problem in the SHM field. In addition, the robust PCA is compared with classical PCA in the sense of detecting probable damage. The comparison between the results shows that robust PCA can distinguish the damage much better than the classical one, and in many cases allows detection where classical PCA is not able to discern between damaged and non-damaged structures. Moreover, different types of robust PCA are compared with each other, as well as with their classical counterpart, in terms of damage detection. All the results are obtained through experiments with an aircraft turbine blade using piezoelectric transducers as sensors and actuators and adding simulated damage.
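A minimal sketch of the robust-versus-classical contrast described above, using a minimum covariance determinant (MCD) estimate as the robust covariance behind PCA; this is one standard robust variant, not necessarily the estimator used in the paper.

    import numpy as np
    from sklearn.covariance import EmpiricalCovariance, MinCovDet

    rng = np.random.default_rng(5)
    X = rng.multivariate_normal([0, 0], [[3, 1], [1, 1]], size=300)
    X[:15] += 25 * rng.standard_normal((15, 2))      # a few gross outliers

    for est in (EmpiricalCovariance(), MinCovDet(random_state=0)):
        cov = est.fit(X).covariance_
        evals, evecs = np.linalg.eigh(cov)
        # leading principal direction; the classical one is dragged by the outliers
        print(type(est).__name__, evecs[:, -1].round(2))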
Klimkiewicz, Paulina; Klimkiewicz, Robert; Jankowska, Agnieszka; Kubsik, Anna; Widłak, Patrycja; Łukasiak, Adam; Janczewska, Katarzyna; Kociuga, Natalia; Nowakowski, Tomasz; Woldańska-Okońska, Marta
2018-01-01
Introduction: In this article, the authors focused on the symptoms of ischemic stroke and the effect of neurorehabilitation methods on the functional status of patients after ischemic stroke. The aim of the study was to evaluate and compare the functional status of patients after ischemic stroke rehabilitated with classical kinesiotherapy alone, classical kinesiotherapy combined with NDT-Bobath, and classical kinesiotherapy combined with PNF. Materials and methods: The study involved 120 patients after ischemic stroke. Patients were treated in the Department of Rehabilitation and Physical Medicine USK of the Medical University in Lodz. Patients were divided into 3 groups of 40 people. Group 1 was rehabilitated by classical kinesiotherapy. Group 2 was rehabilitated by classical kinesiotherapy and NDT-Bobath. Group 3 was rehabilitated by classical kinesiotherapy and PNF. In all patient groups, magnetostimulation was performed using the Viofor JPS System. The study was conducted twice: before treatment and immediately after 5 weeks of therapy. The effects of the applied neurorehabilitation methods were assessed on the basis of the Rivermead Motor Assessment (RMA). Results: In all three patient groups, functional improvement was achieved. However, a significantly higher improvement was observed in patients in the second group, rehabilitated with classical kinesiotherapy and NDT-Bobath. Conclusions: The use of classical kinesiotherapy combined with the NDT-Bobath method is noticeably more effective in improving functional status than classical kinesiotherapy alone or classical kinesiotherapy combined with PNF in patients after ischemic stroke.
Dose Equivalents for Second-Generation Antipsychotic Drugs: The Classical Mean Dose Method
Leucht, Stefan; Samara, Myrto; Heres, Stephan; Patel, Maxine X.; Furukawa, Toshi; Cipriani, Andrea; Geddes, John; Davis, John M.
2015-01-01
Background: The concept of dose equivalence is important for many purposes. The classical approach published by Davis in 1974 subsequently dominated textbooks for several decades. It was based on the assumption that the mean doses found in flexible-dose trials reflect the average optimum dose, which can be used for the calculation of dose equivalence. We are the first to apply the method to second-generation antipsychotics. Methods: We searched for randomized, double-blind, flexible-dose trials in acutely ill patients with schizophrenia that examined 13 oral second-generation antipsychotics, haloperidol, and chlorpromazine (last search June 2014). We calculated the mean doses of each drug weighted by sample size and divided them by the weighted mean olanzapine dose to obtain olanzapine equivalents. Results: We included 75 studies with 16 555 participants. The doses equivalent to 1 mg/d olanzapine were: amisulpride 38.3 mg/d, aripiprazole 1.4 mg/d, asenapine 0.9 mg/d, chlorpromazine 38.9 mg/d, clozapine 30.6 mg/d, haloperidol 0.7 mg/d, quetiapine 32.3 mg/d, risperidone 0.4 mg/d, sertindole 1.1 mg/d, ziprasidone 7.9 mg/d, zotepine 13.2 mg/d. For iloperidone, lurasidone, and paliperidone no data were available. Conclusions: The classical mean dose method is not reliant on the limited availability of fixed-dose data at the lower end of the effective dose range, which is the major limitation of “minimum effective dose methods” and “dose-response curve methods.” In contrast, the mean doses found by the current approach may have in part depended on the dose ranges chosen for the original trials. Ultimate conclusions on dose equivalence of antipsychotics will need to be based on a review of various methods. PMID:25841041
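A hedged sketch of the arithmetic behind the method: sample-size-weighted mean doses, each divided by the weighted mean olanzapine dose. The trial numbers are invented, not those of the review.

    # Hypothetical flexible-dose trials: (drug, mean dose in mg/d, sample size)
    trials = [
        ("olanzapine", 15.0, 200), ("olanzapine", 17.0, 300),
        ("risperidone", 5.5, 250), ("risperidone", 6.5, 150),
    ]

    def weighted_mean_dose(drug):
        rows = [(d, n) for name, d, n in trials if name == drug]
        return sum(d * n for d, n in rows) / sum(n for _, n in rows)

    olz = weighted_mean_dose("olanzapine")
    for drug in ("olanzapine", "risperidone"):
        print(drug, round(weighted_mean_dose(drug) / olz, 2))  # dose per 1 mg/d olanzapine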
Confidence of compliance: a Bayesian approach for percentile standards.
McBride, G B; Ellis, J C
2001-04-01
Rules for assessing compliance with percentile standards commonly limit the number of exceedances permitted in a batch of samples taken over a defined assessment period. Such rules are commonly developed using classical statistical methods. Results from alternative Bayesian methods are presented (using beta-distributed prior information and a binomial likelihood), resulting in "confidence of compliance" graphs. These allow simple reading of the consumer's risk and the supplier's risk for any proposed rule. The influence of the prior assumptions required by the Bayesian technique on the confidence results is demonstrated, using two reference priors (uniform and Jeffreys') and also using optimistic and pessimistic user-defined priors. All four give less pessimistic results than does the classical technique, because interpreting classical results as "confidence of compliance" actually invokes a Bayesian approach with an extreme prior distribution. Jeffreys' prior is shown to be the most generally appropriate choice of prior distribution. Cost savings can be expected using rules based on this approach.
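A minimal sketch of the Bayesian calculation: with a Beta(a, b) prior on the exceedance probability p and x exceedances observed in n samples, the posterior is Beta(a + x, b + n − x), and the confidence of compliance is the posterior probability that p does not exceed the standard. Jeffreys' prior is used, per the paper's recommendation; the sample numbers are invented.

    from scipy.stats import beta

    n, x = 20, 2      # samples taken, exceedances observed (hypothetical)
    p0 = 0.10         # percentile standard: at most 10% exceedances

    a, b = 0.5, 0.5   # Jeffreys' prior
    posterior = beta(a + x, b + n - x)                 # conjugate beta-binomial update
    print(f"confidence of compliance: {posterior.cdf(p0):.2f}")  # Pr(p <= p0 | data)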
Illés, Tamás
2011-03-01
The EOS system is a new medical imaging device based on low-dose X-rays, gaseous detectors and dedicated software for 3D reconstruction. It was developed by Nobel prizewinner Georges Charpak. A new concept--the vertebral vector--is used to facilitate the interpretation of EOS data, especially in the horizontal plane. We studied 95 cases of idiopathic scoliosis before and after surgery by means of classical methods and using vertebral vectors, in order to compare the accuracy of the two approaches. The vertebral vector permits simultaneous analysis of the scoliotic curvature in the frontal, sagittal and horizontal planes, as precisely as classical methods. The use of the vertebral vector simplifies and facilitates the interpretation of the mass of information provided by EOS. After analyzing the horizontal data, the first goal of corrective intervention would be to reduce the lateral vertebral deviation. The reduction in vertebral rotation seems less important. This is a new element in the therapeutic management of spinal deformations.
Schmitz, Guy; Kolar-Anić, Ljiljana Z; Anić, Slobodan R; Cupić, Zeljko D
2008-12-25
The stoichiometric network analysis (SNA) introduced by B. L. Clarke is applied to a simplified model of the complex oscillating Bray-Liebhafsky reaction under batch conditions, which had not been examined by this method earlier. This powerful method for the analysis of steady-state stability is also used to transform the classical differential equations into dimensionless equations. This transformation is easy and leads to a form of the equations combining the advantages of classical dimensionless equations with the advantages of the SNA. The dimensionless parameters used have orders of magnitude given by the experimental information about concentrations and currents. This greatly simplifies the study of the slow manifold and shows which parameters are essential for controlling its shape and consequently have an important influence on the trajectories. The effectiveness of these equations is illustrated on two examples: the study of the bifurcation points and a simple sensitivity analysis, different from the classical one and based more on the chemistry of the studied system.
A genetic graph-based approach for partitional clustering.
Menéndez, Héctor D; Barrero, David F; Camacho, David
2014-05-01
Clustering is one of the most versatile tools for data analysis. In recent years, clustering that seeks the continuity of data (in opposition to classical centroid-based approaches) has attracted increasing research interest. It is a challenging problem with remarkable practical interest. The most popular continuity clustering method is the spectral clustering (SC) algorithm, which is based on graph cut: it initially generates a similarity graph using a distance measure and then studies its graph spectrum to find the best cut. This approach is sensitive to the parameters of the metric, and a correct parameter choice is critical to the quality of the cluster. This work proposes a new algorithm, inspired by SC, that reduces the parameter dependency while maintaining the quality of the solution. The new algorithm, named genetic graph-based clustering (GGC), takes an evolutionary approach, introducing a genetic algorithm (GA) to cluster the similarity graph. The experimental validation shows that GGC increases the robustness of SC and has competitive performance in comparison with classical clustering methods, at least on the synthetic and real datasets used in the experiments.
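For orientation, a minimal SC baseline via scikit-learn, showing the sensitivity to the similarity-metric parameter (gamma) that GGC's genetic search is designed to relieve; GGC itself is not reproduced here.

    from sklearn.cluster import SpectralClustering
    from sklearn.datasets import make_moons

    X, y = make_moons(n_samples=300, noise=0.06, random_state=0)

    # RBF similarity graph; cluster quality depends strongly on gamma.
    for gamma in (0.5, 10, 100):
        labels = SpectralClustering(n_clusters=2, affinity="rbf", gamma=gamma,
                                    random_state=0).fit_predict(X)
        agreement = max((labels == y).mean(), (labels != y).mean())
        print(gamma, round(agreement, 3))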
Sector-Based Detection for Hands-Free Speech Enhancement in Cars
NASA Astrophysics Data System (ADS)
Lathoud, Guillaume; Bourgeois, Julien; Freudenberger, Jürgen
2006-12-01
Adaptation control of beamforming interference cancellation techniques is investigated for in-car speech acquisition. Two efficient adaptation control methods are proposed that avoid target cancellation. The "implicit" method varies the step-size continuously, based on the filtered output signal. The "explicit" method decides in a binary manner whether to adapt or not, based on a novel estimate of target and interference energies. It estimates the average delay-sum power within a volume of space, for the same cost as the classical delay-sum. Experiments on real in-car data validate both methods, including a case with … km/h background road noise.
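A minimal sketch of the delay-sum power at the heart of the "explicit" method, for a two-microphone far-field toy setup (the sector-volume averaging and the in-car array geometry are not reproduced):

    import numpy as np

    rng = np.random.default_rng(4)
    fs, c, d = 16000, 343.0, 0.1     # sample rate (Hz), speed of sound (m/s), mic spacing (m)
    n = fs                           # one second of a broadband source at 30 degrees
    true_delay = d * np.sin(np.deg2rad(30)) / c * fs   # inter-mic delay in samples
    s = rng.standard_normal(n + 100)
    x1 = s[100:]
    x2 = np.interp(np.arange(n) - true_delay, np.arange(n + 100) - 100, s)  # delayed copy

    def delay_sum_power(theta_deg):
        tau = d * np.sin(np.deg2rad(theta_deg)) / c * fs   # steering delay in samples
        x2_steered = np.interp(np.arange(n) + tau, np.arange(n), x2)
        return np.mean((0.5 * (x1 + x2_steered)) ** 2)

    for theta in (0, 30, 60):
        print(theta, round(delay_sum_power(theta), 3))     # power peaks near 30 degrees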
Verification of Geometric Model-Based Plant Phenotyping Methods for Studies of Xerophytic Plants.
Drapikowski, Paweł; Kazimierczak-Grygiel, Ewa; Korecki, Dominik; Wiland-Szymańska, Justyna
2016-06-27
This paper presents the results of verification of certain non-contact measurement methods of plant scanning to estimate morphological parameters such as length, width, area, volume of leaves and/or stems on the basis of computer models. The best results in reproducing the shape of scanned objects up to 50 cm in height were obtained with the structured-light DAVID Laserscanner. The optimal triangle mesh resolution for scanned surfaces was determined with the measurement error taken into account. The research suggests that measuring morphological parameters from computer models can supplement or even replace phenotyping with classic methods. Calculating precise values of area and volume makes determination of the S/V (surface/volume) ratio for cacti and other succulents possible, whereas for classic methods the result is an approximation only. In addition, the possibility of scanning and measuring plant species which differ in morphology was investigated.
Verification of Geometric Model-Based Plant Phenotyping Methods for Studies of Xerophytic Plants
Drapikowski, Paweł; Kazimierczak-Grygiel, Ewa; Korecki, Dominik; Wiland-Szymańska, Justyna
2016-01-01
This paper presents the results of verification of certain non-contact measurement methods of plant scanning to estimate morphological parameters such as length, width, area, volume of leaves and/or stems on the basis of computer models. The best results in reproducing the shape of scanned objects up to 50 cm in height were obtained with the structured-light DAVID Laserscanner. The optimal triangle mesh resolution for scanned surfaces was determined with the measurement error taken into account. The research suggests that measuring morphological parameters from computer models can supplement or even replace phenotyping with classic methods. Calculating precise values of area and volume makes determination of the S/V (surface/volume) ratio for cacti and other succulents possible, whereas for classic methods the result is an approximation only. In addition, the possibility of scanning and measuring plant species which differ in morphology was investigated. PMID:27355949
An algebraic method for constructing stable and consistent autoregressive filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu
2015-02-15
In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order-two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods. It takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of a stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
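For reference, a small sketch of the classical AR stability condition mentioned above: the model is stable when all companion-matrix eigenvalues lie strictly inside the unit circle. The coefficients are arbitrary; the paper's algebraic construction of consistent parameters is not reproduced.

    import numpy as np

    def is_stable(a):
        # Stability of x_t = a[0] x_{t-1} + ... + a[p-1] x_{t-p} + noise
        p = len(a)
        companion = np.zeros((p, p))
        companion[0, :] = a
        companion[1:, :-1] = np.eye(p - 1)
        return bool(np.all(np.abs(np.linalg.eigvals(companion)) < 1.0))

    print(is_stable([0.5, 0.3]))   # True: all roots inside the unit circle
    print(is_stable([1.1, 0.2]))   # False: one root outside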
NASA Technical Reports Server (NTRS)
Zak, Michail
2008-01-01
A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid, a quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce a positive function to be maximized as the probability density to which the solution is attracted. Then the larger values of this function will have the higher probability to appear. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to constrained maxima with applications to integer programming and the TSP (Traveling Salesman Problem).
Supervised Learning Based Hypothesis Generation from Biomedical Literature.
Sang, Shengtian; Yang, Zhihao; Li, Zongyao; Lin, Hongfei
2015-01-01
Nowadays, the amount of biomedical literature is growing at an explosive speed, and much useful knowledge remains undiscovered in it. Researchers can form biomedical hypotheses by mining this literature. In this paper, we propose a supervised learning based approach to generate hypotheses from biomedical literature. This approach splits the traditional processing of hypothesis generation with the classic ABC model into an AB model and a BC model, which are constructed with supervised learning methods. Compared with concept co-occurrence and grammar engineering-based approaches like SemRep, machine learning based models can usually achieve better performance in information extraction (IE) from texts. Then, by combining the two models, the approach reconstructs the ABC model and generates biomedical hypotheses from the literature. The experimental results on the three classic Swanson hypotheses show that our approach outperforms the SemRep system.
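A toy sketch of the classic ABC co-occurrence pattern that the supervised AB/BC models replace: if A co-occurs with B, and B with C, but A never with C, then A-C is a candidate hypothesis (Swanson's fish oil / Raynaud's example). The term lists below are invented.

    # Each set holds terms that co-occur with the key term in some document.
    cooccur = {
        "fish_oil": {"blood_viscosity", "platelet_aggregation"},   # A-B links
        "blood_viscosity": {"raynauds_disease"},                   # B-C links
        "platelet_aggregation": {"raynauds_disease"},
    }

    a_term = "fish_oil"
    known = cooccur[a_term]
    hypotheses = {c for b in known for c in cooccur.get(b, ())
                  if c != a_term and c not in known}               # A-C not yet linked
    print(hypotheses)  # {'raynauds_disease'}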
Portfolio Analysis for Vector Calculus
ERIC Educational Resources Information Center
Kaplan, Samuel R.
2015-01-01
Classic stock portfolio analysis provides an applied context for Lagrange multipliers that undergraduate students appreciate. Although modern methods of portfolio analysis are beyond the scope of vector calculus, classic methods reinforce the utility of this material. This paper discusses how to introduce classic stock portfolio analysis in a…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamada, Atsushi; Kojima, Hidekazu; Okazaki, Susumu, E-mail: okazaki@apchem.nagoya-u.ac.jp
2014-08-28
In order to investigate proton transfer reactions in solution, mixed quantum-classical molecular dynamics calculations have been carried out based on our previously proposed quantum equation of motion for the reacting system [A. Yamada and S. Okazaki, J. Chem. Phys. 128, 044507 (2008)]. The surface hopping method was applied to describe forces acting on the solvent classical degrees of freedom. In a series of our studies, quantum and solvent effects on reaction dynamics in solutions have been analysed in detail. Here, we report our mixed quantum-classical molecular dynamics calculations for the intramolecular proton transfer of malonaldehyde in water. The thermally activated proton transfer process, i.e., vibrational excitation in the reactant state followed by transition to the product state and vibrational relaxation in the product state, as well as the tunneling reaction, can be described by solving the equation of motion. Zero point energy is, of course, included, too. The quantum simulation in water has been compared with the fully classical one and with the wave packet calculation in vacuum. The calculated quantum reaction rate in water was 0.70 ps⁻¹, which is about 2.5 times faster than that in vacuum, 0.27 ps⁻¹. This indicates that the solvent water accelerates the reaction. Further, the quantum calculation resulted in a reaction rate about 2 times faster than the fully classical calculation, which indicates that quantum effects enhance the reaction rate, too. The contribution from the three reaction mechanisms, i.e., tunneling, thermal activation, and barrier-vanishing reactions, is 33:46:21 in the mixed quantum-classical calculations. This clearly shows that the tunneling effect is important in the reaction.
A New Interpretation of Augmented Subscores and Their Added Value in Terms of Parallel Forms
ERIC Educational Resources Information Center
Sinharay, Sandip
2018-01-01
The value-added method of Haberman is arguably one of the most popular methods to evaluate the quality of subscores. The method is based on the classical test theory and deems a subscore to be of added value if the subscore predicts the corresponding true subscore better than does the total score. Sinharay provided an interpretation of the added…
NASA Astrophysics Data System (ADS)
Brambilla, Marco; Ceri, Stefano; Valle, Emanuele Della; Facca, Federico M.; Tziviskou, Christina
Although Semantic Web Services are expected to produce a revolution in the development of Web-based systems, very few enterprise-wide design experiences are available; one of the main reasons is the lack of sound Software Engineering methods and tools for the deployment of Semantic Web applications. In this chapter, we present an approach to software development for the Semantic Web based on classical Software Engineering methods (i.e., formal business process development, computer-aided and component-based software design, and automatic code generation) and on semantic methods and tools (i.e., ontology engineering, semantic service annotation and discovery).
Quantum-optical coherence tomography with classical light.
Lavoie, J; Kaltenbaek, R; Resch, K J
2009-03-02
Quantum-optical coherence tomography (Q-OCT) is an interferometric technique for axial imaging offering several advantages over conventional methods. Chirped-pulse interferometry (CPI) was recently demonstrated to exhibit all of the benefits of the quantum interferometer upon which Q-OCT is based. Here we use CPI to measure axial interferograms to profile a sample, accruing the important benefits of Q-OCT, including automatic dispersion cancellation, but with 10 million times higher signal. Our technique solves the artifact problem in Q-OCT and highlights the power of classical correlation in optical imaging.
Multiview road sign detection via self-adaptive color model and shape context matching
NASA Astrophysics Data System (ADS)
Liu, Chunsheng; Chang, Faliang; Liu, Chengyun
2016-09-01
The multiview appearance of road signs in uncontrolled environments has made the detection of road signs a challenging problem in computer vision. We propose a road sign detection method to detect multiview road signs. This method is based on several algorithms, including the classical cascaded detector, the self-adaptive weighted Gaussian color model (SW-Gaussian model), and a shape context matching method. The classical cascaded detector is used to detect frontal road signs in video sequences and to obtain the parameters for the SW-Gaussian model. The proposed SW-Gaussian model combines a two-dimensional Gaussian model and the normalized red channel, which largely enhances the contrast between red signs and the background. The proposed shape context matching method can match shapes in the presence of significant noise and is used to detect road signs viewed from different directions. The experimental results show that, compared with previous detection methods, the proposed multiview detection method reaches a higher detection rate on signs viewed from different directions.
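A rough sketch of the idea behind the SW-Gaussian model as described (the center, scale, and weighting below are illustrative; the paper derives them from the cascaded detector's frontal detections):

    # Weight a normalized red channel by a 2D Gaussian so that red signs near
    # an expected location/scale stand out. Parameters are placeholders.
    import numpy as np

    def sw_gaussian_map(img, cx, cy, sigma):
        """img: HxWx3 float RGB array; returns a red-sign saliency map."""
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        red_norm = r / (r + g + b + 1e-8)          # normalized red channel
        ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        gauss = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
        return red_norm * gauss                     # weighted combination

    saliency = sw_gaussian_map(np.random.rand(120, 160, 3), cx=80, cy=60, sigma=25)
    print(saliency.shape, saliency.max())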
NASA Astrophysics Data System (ADS)
Caballero, Marcos D.; Doughty, Leanne; Turnbull, Anna M.; Pepper, Rachel E.; Pollock, Steven J.
2017-06-01
Reliable and validated assessments of introductory physics have been instrumental in driving curricular and pedagogical reforms that lead to improved student learning. As part of an effort to systematically improve our sophomore-level classical mechanics and math methods course (CM 1) at CU Boulder, we have developed a tool to assess student learning of CM 1 concepts in the upper division. The Colorado Classical Mechanics and Math Methods Instrument (CCMI) builds on faculty consensus learning goals and systematic observations of student difficulties. The result is a 9-question open-ended post test that probes student learning in the first half of a two-semester classical mechanics and math methods sequence. In this paper, we describe the design and development of this instrument, its validation, and measurements made in classes at CU Boulder and elsewhere.
Augmented classical least squares multivariate spectral analysis
Haaland, David M.; Melgaard, David K.
2004-02-03
A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
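A schematic numpy sketch of plain CLS calibration and the residual-augmentation idea, as a simplified reading of this abstract (noise-free linear mixing is assumed; the patented ACLS procedure involves more than this outline):

    import numpy as np

    def cls_calibrate(C, A):
        """Estimate pure-component spectra K from concentrations C, spectra A = C K."""
        return np.linalg.pinv(C) @ A

    def acls_augment(C, A, K, n_extra=1):
        """Append leading residual components to K to absorb unmodeled variation."""
        R = A - C @ K                          # spectral residuals after the CLS fit
        _, _, Vt = np.linalg.svd(R, full_matrices=False)
        return np.vstack([K, Vt[:n_extra]])    # augmented spectral model

    def cls_predict(A_new, K_aug, n_components):
        """Predict concentrations; the augmented rows soak up interferents."""
        return (A_new @ np.linalg.pinv(K_aug))[:, :n_components]

    rng = np.random.default_rng(1)
    K_true = rng.random((2, 50)); C = rng.random((20, 2))
    A = C @ K_true + 0.3 * np.outer(rng.random(20), rng.random(50))  # interferent
    K = cls_calibrate(C, A)
    K_aug = acls_augment(C, A, K)
    print(cls_predict(A, K_aug, 2)[:3])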
Quasi-classical approaches to vibronic spectra revisited
NASA Astrophysics Data System (ADS)
Karsten, Sven; Ivanov, Sergei D.; Bokarev, Sergey I.; Kühn, Oliver
2018-03-01
The framework to approach quasi-classical dynamics in the electronic ground state is well established and is based on the Kubo-transformed time correlation function (TCF), being the most classical-like quantum TCF. Here we discuss whether the choice of the Kubo-transformed TCF as a starting point for simulating vibronic spectra is as unambiguous as it is for vibrational ones. Employing imaginary-time path integral techniques in combination with the interaction representation allowed us to formulate a method for simulating vibronic spectra in the adiabatic regime that takes nuclear quantum effects and dynamics on multiple potential energy surfaces into account. Further, a generalized quantum TCF is proposed that contains many well-established TCFs, including the Kubo one, as particular cases. Importantly, it also provides a framework to construct new quantum TCFs. Applying the developed methodology to the generalized TCF leads to a plethora of simulation protocols, which are based on the well-known TCFs as well as on new ones. Their performance is investigated on 1D anharmonic model systems at finite temperatures. It is shown that the protocols based on the new TCFs may lead to superior results with respect to those based on the common ones. The strategies to find the optimal approach are discussed.
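For reference, the Kubo-transformed TCF that this discussion starts from has the standard textbook form (with Z the canonical partition function, β = 1/k_BT, and B̂(t) the Heisenberg-evolved operator):

    K_{AB}(t) = \frac{1}{\beta Z} \int_0^{\beta} d\lambda \,
    \mathrm{Tr}\!\left[ e^{-(\beta - \lambda)\hat{H}} \, \hat{A} \,
    e^{-\lambda \hat{H}} \, \hat{B}(t) \right]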
Highly Accurate Beam Torsion Solutions Using the p-Version Finite Element Method
NASA Technical Reports Server (NTRS)
Smith, James P.
1996-01-01
A new treatment of the classical beam torsion boundary value problem is applied. Using the p-version finite element method with shape functions based on Legendre polynomials, torsion solutions for generic cross-sections comprised of isotropic materials are developed. Element shape functions for quadrilateral and triangular elements are discussed, and numerical examples are provided.
Analysis of Added Value of Subscores with Respect to Classification
ERIC Educational Resources Information Center
Sinharay, Sandip
2014-01-01
Brennan noted that users of test scores often want (indeed, demand) that subscores be reported, along with total test scores, for diagnostic purposes. Haberman suggested a method based on classical test theory (CTT) to determine if subscores have added value over the total score. One way to interpret the method is that a subscore has added value…
Abdel-Halim, Lamia M; Abd-El Rahman, Mohamed K; Ramadan, Nesrin K; El Sanabary, Hoda F A; Salem, Maissa Y
2016-04-15
A comparative study was carried out between two classical spectrophotometric methods (the dual wavelength method and Vierordt's method) and two recent methods manipulating ratio spectra (the ratio difference method and the first derivative of ratio spectra method) for the simultaneous determination of Antazoline hydrochloride (AN) and Tetryzoline hydrochloride (TZ) in their combined pharmaceutical formulation and in the presence of benzalkonium chloride as a preservative, without preliminary separation. The dual wavelength method depends on choosing two wavelengths for each drug such that the difference in absorbance at those two wavelengths is zero for the other drug. Vierordt's method is based upon measuring the absorbance and absorptivity values of the two drugs at their λmax (248.0 and 219.0 nm for AN and TZ, respectively), followed by substitution into the corresponding Vierordt's equations. The recent methods manipulating ratio spectra depend on either measuring the difference in amplitudes of the ratio spectra between 255.5 and 269.5 nm for AN and between 220.0 and 273.0 nm for TZ, in the case of the ratio difference method, or computing the first derivative of the ratio spectra for each drug and then measuring the peak amplitude at 250.0 nm for AN and at 224.0 nm for TZ, in the case of first derivative ratio spectrophotometry. The specificity of the developed methods was investigated by analyzing different laboratory-prepared mixtures of the two drugs. All methods were applied successfully to the determination of the selected drugs in their combined dosage form, showing that the classical spectrophotometric methods, which need only minimal data manipulation, can still be used successfully in the analysis of a binary mixture, whereas the recent methods require relatively more steps. Furthermore, validation of the proposed methods was performed according to ICH guidelines; accuracy, precision and repeatability were found to be within the acceptable limits. Statistical studies showed that the methods can be competitively applied in quality control laboratories. Copyright © 2016 Elsevier B.V. All rights reserved.
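The algebra behind Vierordt's simultaneous-equation method is a 2x2 linear solve: absorbances at the two λmax values are linear in the two concentrations via Beer-Lambert. A small sketch (the absorptivity values are placeholders, not the paper's calibration data):

    import numpy as np

    # rows: wavelengths (248.0, 219.0 nm); columns: absorptivities of AN, TZ
    E = np.array([[0.045, 0.012],
                  [0.020, 0.060]])          # assumed absorptivity values
    A = np.array([0.52, 0.71])              # measured mixture absorbances
    c_AN, c_TZ = np.linalg.solve(E, A)      # Beer-Lambert, unit path length
    print(f"AN: {c_AN:.2f}, TZ: {c_TZ:.2f} (in the concentration units of E)")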
Dietze, Klaas; Tucakov, Anna; Engel, Tatjana; Wirtz, Sabine; Depner, Klaus; Globig, Anja; Kammerer, Robert; Mouchantat, Susan
2017-01-05
Non-invasive sampling techniques based on the analysis of oral fluid specimens have gained substantial importance in the field of swine herd management. Methodological advances have focused on endemic viral diseases in commercial pig production. More recently, these approaches have been adapted to non-invasive sampling of wild boar for transboundary animal disease detection, for which such effective population-level sampling methods have not been available. In this study, a rope-in-a-bait based oral fluid sampling technique was tested for detecting classical swine fever virus nucleic acid shed by experimentally infected domestic pigs. Separated into two groups treated identically, the course of the infection differed slightly between groups in terms of the onset of clinical signs and the levels of viral ribonucleic acid detected in blood and oral fluid. The technique was capable of detecting classical swine fever virus nucleic acid as of day 7 post infection, coinciding with the first detection in conventional oropharyngeal swab samples from some individual animals. Except for day 7 post infection in the "slower onset group", the chances of classical swine fever virus nucleic acid detection in ropes were identical to or higher than those of individual sampling. With the provided evidence, non-invasive oral fluid sampling at group level can be considered an additional cost-effective detection tool in classical swine fever prevention and control strategies. The proposed methodology is of particular use in production systems with reduced access to veterinary services, such as backyard or scavenging pig production, where it can be integrated into feeding or baiting practices.
On the classical and quantum integrability of systems of resonant oscillators
NASA Astrophysics Data System (ADS)
Marino, Massimo
2017-01-01
We study in this paper systems of harmonic oscillators with resonant frequencies. For these systems we present general procedures for the construction of sets of functionally independent constants of motion, which can be used for the definition of generalized action-angle variables, in accordance with the general description of degenerate integrable systems which was presented by Nekhoroshev in a seminal paper in 1972. We then apply to these classical integrable systems the procedure of quantization which has been proposed to the author by Nekhoroshev during his last years of activity at Milan University. This procedure is based on the construction of linear operators by means of the symmetrization of the classical constants of motion mentioned above. For 3 oscillators with resonance 1:1:2, by using a computer program we have discovered an exceptional integrable system, which cannot be obtained with the standard methods based on the obvious symmetries of the Hamiltonian function. In this exceptional case, quantum integrability can be realized only by means of a modification of the symmetrization procedure.
NASA Astrophysics Data System (ADS)
Ceballos, G. A.; Hernández, L. F.
2015-04-01
Objective. The classical ERP-based speller, or P300 Speller, is one of the most commonly used paradigms in the field of Brain Computer Interfaces (BCI). Several alterations to the visual stimuli presentation system have been developed to avoid unfavorable effects elicited by adjacent stimuli. However, little, if any, attention has been paid to the useful information about the spatial location of target symbols contained in the responses to adjacent stimuli. This paper aims to demonstrate that combining the classification of non-target adjacent stimuli with standard classification (target versus non-target) significantly improves classical ERP-based speller efficiency. Approach. Four SWLDA classifiers were trained and combined with the standard classifier: the lower row, upper row, right column and left column classifiers. This new feature extraction procedure and the classification method were carried out on three open databases: the UAM P300 database (Universidad Autonoma Metropolitana, Mexico), BCI competition II (dataset IIb) and BCI competition III (dataset II). Main results. The inclusion of the classification of non-target adjacent stimuli improves target classification in the classical row/column paradigm. A gain in mean single-trial classification of 9.6% and an overall improvement of 25% in simulated spelling speed were achieved. Significance. We have provided further evidence that the ERPs produced by adjacent stimuli present discriminable features, which could provide additional information about the spatial location of intended symbols. This work promotes the search for information in peripheral stimulation responses to improve the performance of emerging visual ERP-based spellers.
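A heavily simplified sketch of the combination idea (all classifiers, features, and the blending weight are stand-ins; the paper uses SWLDA classifiers and its own feature extraction):

    # Blend the standard target/non-target score with evidence from a
    # classifier trained on responses to an adjacent stimulus.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(2)
    X, y = rng.normal(size=(300, 20)), rng.integers(0, 2, 300)
    standard = LinearDiscriminantAnalysis().fit(X, y)   # target vs non-target
    adjacent = LinearDiscriminantAnalysis().fit(X, y)   # e.g. "upper row" model

    def symbol_score(feat_target, feat_adjacent, w=0.5):
        """Augment standard evidence with adjacent-stimulus evidence."""
        s = standard.decision_function(feat_target.reshape(1, -1))[0]
        a = adjacent.decision_function(feat_adjacent.reshape(1, -1))[0]
        return s + w * a

    print(symbol_score(X[0], X[1]))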
Recent advances in Lanczos-based iterative methods for nonsymmetric linear systems
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Golub, Gene H.; Nachtigal, Noel M.
1992-01-01
In recent years, there has been a true revival of the nonsymmetric Lanczos method. On the one hand, the possible breakdowns in the classical algorithm are now better understood, and so-called look-ahead variants of the Lanczos process have been developed, which remedy this problem. On the other hand, various new Lanczos-based iterative schemes for solving nonsymmetric linear systems have been proposed. This paper gives a survey of some of these recent developments.
NASA Astrophysics Data System (ADS)
Ciancio, P. M.; Rossit, C. A.; Laura, P. A. A.
2007-05-01
This study is concerned with the vibration analysis of a cantilevered rectangular anisotropic plate when a concentrated mass is rigidly attached to its center point. Based on the classical theory of anisotropic plates, the Ritz method is employed to perform the analysis. The deflection of the plate is approximated by a set of beam functions in each principal coordinate direction. The influence of the mass magnitude on the natural frequencies and modal shapes of vibration is studied for a boron-epoxy plate and also in the case of a generic anisotropic material. The classical Ritz method with beam functions as the spatial approximation proved to be a suitable procedure to solve a problem of this analytical complexity.
NASA Astrophysics Data System (ADS)
Ningrum, R. W.; Surarso, B.; Farikhin; Safarudin, Y. M.
2018-03-01
This paper proposes a combination of the Firefly Algorithm (FA) and Chen fuzzy time series forecasting. Most existing fuzzy forecasting methods based on fuzzy time series use a static interval length. Therefore, we apply an artificial intelligence technique, the Firefly Algorithm (FA), to set a non-stationary interval length for each cluster in Chen's method. The method is evaluated on the Jakarta Composite Index (IHSG) and compared with classical Chen fuzzy time series forecasting. Its performance is verified through simulation using Matlab.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nedic, Vladimir, E-mail: vnedic@kg.ac.rs; Despotovic, Danijela, E-mail: ddespotovic@kg.ac.rs; Cvetanovic, Slobodan, E-mail: slobodan.cvetanovic@eknfak.ni.ac.rs
2014-11-15
Traffic is the main source of noise in urban environments and significantly affects human mental and physical health and labor productivity. Therefore it is very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper the application of artificial neural networks (ANNs) for the prediction of traffic noise is presented. As input variables of the neural network, the structure of the traffic flow and the average speed of the traffic flow are chosen. The output variable of the network is the equivalent noise level Leq in the given time period. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise, using an originally developed user-friendly software package. It is shown that artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior to other statistical methods for traffic noise level prediction. - Highlights: • We proposed an ANN model for prediction of traffic noise. • We developed an originally designed user-friendly software package. • The results are compared with classical statistical methods. • The results show the much better predictive capabilities of the ANN model.
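A minimal stand-in for the kind of network described (the data below are synthetic, generated from an invented noise formula; the study trains on measured noise levels and its own network architecture):

    # Small regressor mapping traffic-flow structure and average speed to Leq.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    # columns: vehicles/h, heavy-vehicle fraction, average speed (km/h)
    X = rng.uniform([100, 0.0, 30], [2000, 0.3, 90], size=(500, 3))
    # synthetic target with a plausible logarithmic flow dependence
    Leq = 10 * np.log10(X[:, 0]) + 8 * X[:, 1] + 0.05 * X[:, 2] \
          + rng.normal(0, 0.5, 500)

    model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000,
                         random_state=0).fit(X, Leq)
    print(model.predict([[800, 0.1, 50]]))  # predicted Leq in dB(A)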
Twelve automated thresholding methods for segmentation of PET images: a phantom study.
Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M
2012-06-21
Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering, or non-destructive testing of images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information about the segmented object or any special calibration of the tomograph, as opposed to the usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on a clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with the 12 automatic thresholding algorithms, and the results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. The Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
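The Ridler method mentioned here is the classic Ridler-Calvard (isodata) iteration: move the threshold to the midpoint of the two class means until it converges. A compact sketch on synthetic voxel values:

    import numpy as np

    def ridler_threshold(values, tol=1e-6, max_iter=200):
        t = values.mean()                      # initial guess
        for _ in range(max_iter):
            lo, hi = values[values <= t], values[values > t]
            t_new = 0.5 * (lo.mean() + hi.mean())
            if abs(t_new - t) < tol:
                break
            t = t_new
        return t

    voxels = np.concatenate([np.random.normal(1, 0.3, 5000),   # background
                             np.random.normal(6, 1.0, 500)])   # hot sphere
    print(ridler_threshold(voxels))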
Greek classicism in living structure? Some deductive pathways in animal morphology.
Zweers, G A
1985-01-01
Classical temples in ancient Greece show two deterministic illusionistic principles of architecture, which govern their functional design: geometric proportionalism and a set of illusion-strengthening rules in the proportionalism's "stochastic margin". Animal morphology, in its mechanistic-deductive revival, applies just one architectural principle, which is not always satisfactory. Whether a "Greek Classical" situation occurs in the architecture of living structure is to be investigated by extreme testing with deductive methods. Three deductive methods for the explanation of living structure in animal morphology are proposed: the parts, the compromise, and the transformation deduction. The methods are based upon the systems concept for an organism, the flow chart for a functionalistic picture, and the network chart for a structuralistic picture, whereas the "optimal design" serves as the architectural principle for living structure. These methods clearly show the high explanatory power of deductive methods in morphology, but they also make one open end explicit: neutral issues do exist. Full explanation of living structure requires three entries: functional design within architectural and transformational constraints. The transformational constraint necessarily brings in a stochastic component: a random variation serving as a sort of "free management space". This variation must be a variation from the deterministic principle of the optimal design, since any transformation requires space for plasticity in structure and action, and flexibility in role fulfilling. Nevertheless, the question finally arises whether a situation similar to that of Greek Classical temples exists for animal structure. This would mean that the random variation found when the optimal design is used to explain structure comprises, apart from a stochastic part, real deviations forming yet another deterministic part. This deterministic part could be a set of rules that governs actualization in the "free management space".
ZAHABIUN, Farzaneh; SADJJADI, Seyed Mahmoud; ESFANDIARI, Farideh
2015-01-01
Background: Permanent slide preparation of nematodes, especially small ones, is time consuming and difficult, and the specimens develop scarious margins. Regarding this problem, a modified double glass mounting method was developed and compared with the classic method. Methods: A total of 209 nematode samples of human and animal origin were fixed and stained with Formaldehyde Alcohol Azocarmine Lactophenol (FAAL), followed by double glass mounting and the classic dehydration method using Canada balsam as the mounting medium. The slides were evaluated at different dates and times over more than four years. Photographs were taken at different magnifications during the evaluation period. Results: The double glass mounting method was stable during this time and comparable with the classic method. There were no changes in the morphologic structures of nematodes using the double glass mounting method, with well-defined and clear differentiation between the different organs of the nematodes. Conclusion: This method is cost effective and fast for mounting small nematodes compared to the classic method. PMID:26811729
Polyhedral realizations of crystal bases for quantum algebras of classical affine types
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoshino, A.
2013-05-15
We give the explicit forms of the crystal bases B(∞) for the quantum affine algebras of types A_{2n-1}^{(2)}, A_{2n}^{(2)}, B_n^{(1)}, C_n^{(1)}, D_n^{(1)}, and D_{n+1}^{(2)} by using the method of polyhedral realizations of crystal bases.
ERIC Educational Resources Information Center
Moraes, Edgar P.; da Silva, Nilbert S. A.; de Morais, Camilo de L. M.; das Neves, Luiz S.; de Lima, Kassio M. G.
2014-01-01
The flame test is a classical analytical method that is often used to teach students how to identify specific metals. However, some universities in developing countries have difficulties acquiring the sophisticated instrumentation needed to demonstrate how to identify and quantify metals. In this context, a method was developed based on the flame…
Performance Comparison of Superresolution Array Processing Algorithms. Revised
1998-06-15
…plane waves is finite is the MUSIC algorithm [16]. MUSIC, which denotes Multiple Signal Classification, is an extension of the method of Pisarenko [18]. … MUSIC is but one member of a class of methods based upon the decomposition of covariance data into eigenvectors and eigenvalues. Such techniques … techniques relative to the classical methods; however, results for MUSIC are included in this report. All of the techniques reviewed have application to …
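A textbook-style sketch of the MUSIC idea referenced here: project candidate steering vectors onto the noise subspace of the sample covariance and peak-pick the reciprocal. The uniform line array and signals below are synthetic:

    import numpy as np

    def music_spectrum(R, n_sources, n_sensors, angles_deg, spacing=0.5):
        """R: sample covariance; spacing in wavelengths (uniform line array)."""
        eigvals, eigvecs = np.linalg.eigh(R)       # ascending eigenvalues,
        En = eigvecs[:, : n_sensors - n_sources]   # so first columns = noise subspace
        spec = []
        for th in np.deg2rad(angles_deg):
            a = np.exp(2j * np.pi * spacing * np.arange(n_sensors) * np.sin(th))
            spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
        return np.array(spec)

    rng = np.random.default_rng(4)
    n_sensors, snaps = 8, 400
    A = np.stack([np.exp(2j * np.pi * 0.5 * np.arange(n_sensors)
                         * np.sin(np.deg2rad(a))) for a in (-20, 35)], axis=1)
    S = rng.normal(size=(2, snaps)) + 1j * rng.normal(size=(2, snaps))
    N = 0.1 * (rng.normal(size=(n_sensors, snaps))
               + 1j * rng.normal(size=(n_sensors, snaps)))
    X = A @ S + N
    R = X @ X.conj().T / snaps
    grid = np.arange(-90, 90.5, 0.5)
    P = music_spectrum(R, 2, n_sensors, grid)
    loc_max = [i for i in range(1, len(P) - 1) if P[i - 1] < P[i] > P[i + 1]]
    top2 = sorted(loc_max, key=lambda i: P[i], reverse=True)[:2]
    print(sorted(grid[i] for i in top2))   # should be near -20 and 35 degrees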
NASA Astrophysics Data System (ADS)
Oana, Catrina; Parding, Kajsa Maria; Stefan, Sabina
2017-04-01
The importance of knowledge on the trajectories that Mediterranean cyclones follows toward Romania is fundamental because most of the times the weather phenomena that accompany them determine significant economic damage and not only. In the specialized literature, the principal classic trajectories on which the Mediterranean cyclones pass toward the south-east of Europe and by default toward Romania, causing in these areas a crucial weather conditions change in all aspects at any time during the year, have been determined in subjectively mode, many years ago, by C. Sorodoc (1962) E. I. Bordei (1983). Starting from the known 9 classic trajectories determined subjectively, in this study it was aimed and subsequently carried out their identification by this date, but objectively, using the method based on mathematic algorithms developed by Rasmus E. Benestad, Abdelkader Mezghani, and Kajsa M. Parding (2006). The study was carried out between January 2003 and December 2015, taking into account the fact that the presence of the Mediterranean cyclones may be established almost every month, these representing important links of the atmosphere movement over Europe. The data used by the daily review have contained values, in grid points, of the mean pressure field at sea level (MSLP), with spatial resolution of 0.75° x 0.75° and 6 hours temporal coverage, originating from ECMWF, ERA-Interim project (2006), and the chosen field of interest was between 15°W - 40°E and 30°N - 50°N. Of the total number of Mediterranean cyclones identified objectively, that followed trajectories toward Romania, were randomly selected only a few cases, which indicates the similarity between the paths of classic subjectively determined and those determined objectively. Validation of the results consisted in the first phase in a comparison between the trajectories identified with the classic trajectories determined subjectively, then was carried out a second validation, by analysis of the MSLP field, geopotential height and potential vorticity. As a conclusion, the results obtained highlights certainly reliability but especially the usefulness of the objective method used, in particular in carrying out the complex Mediterranean climatology studies and not only.
NASA Astrophysics Data System (ADS)
Miranda Guedes, Rui
2018-02-01
Long-term creep of viscoelastic materials is experimentally inferred through accelerating techniques based on the time-temperature superposition principle (TTSP) or the time-stress superposition principle (TSSP). According to these principles, a given property measured for short times at a higher temperature or higher stress level remains the same as that obtained for longer times at a lower temperature or lower stress level, except that the curves are shifted parallel to the horizontal axis, matching a master curve. These procedures enable the construction of creep master curves from short-term experimental tests. The Stepped Isostress Method (SSM) is an evolution of the classical TSSP method. The SSM technique achieves a greater reduction of the required number of test specimens, since only one specimen is necessary to obtain the master curve. The classical approach, using creep tests, demands at least one specimen per stress level to produce a set of creep curves upon which the TSSP is applied to obtain the master curve. This work proposes an analytical method to process the SSM raw data. The method is validated using numerical simulations that reproduce the SSM tests based on two different viscoelastic models. One model represents the viscoelastic behavior of a graphite/epoxy laminate and the other represents an epoxy-based adhesive.
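A bare-bones illustration of the superposition idea the SSM builds on (the power-law compliance and shift factors here are invented for demonstration, not the paper's analytical method):

    # Recover the horizontal log-time shift that maps an accelerated creep
    # curve onto the reference curve, as in master-curve construction.
    import numpy as np

    def compliance(t, a):                    # toy creep compliance with shift a
        return 0.5 + 0.1 * np.log10(t / a)

    t = np.logspace(0, 3, 50)                # short-term test window (s)
    accelerated = compliance(t, a=0.01)      # higher stress: responds "faster"

    shifts = np.linspace(0, 4, 401)          # candidate log10 shift factors
    err = [np.mean((compliance(t * 10.0 ** s, 1.0) - accelerated) ** 2)
           for s in shifts]
    log_aT = shifts[int(np.argmin(err))]
    # about +2: accelerated data map onto 100x longer reference times
    print("estimated log shift:", log_aT)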
Yang, Jianhong; Li, Xiaomeng; Xu, Jinwu; Ma, Xianghong
2018-01-01
The quantitative analysis accuracy of calibration-free laser-induced breakdown spectroscopy (CF-LIBS) is severely affected by the self-absorption effect and by the estimation of the plasma temperature. Herein, a CF-LIBS quantitative analysis method based on the auto-selection of the internal reference line and an optimized estimation of the plasma temperature is proposed. The internal reference line of each species is automatically selected from the analytical lines by a programmable procedure using easily accessible parameters. Furthermore, the self-absorption effect of the internal reference line is considered during the correction procedure. To improve the analysis accuracy of CF-LIBS, the particle swarm optimization (PSO) algorithm is introduced to estimate the plasma temperature based on the calculation results from the Boltzmann plot. Thereafter, the species concentrations of a sample can be calculated according to the classical CF-LIBS method. A total of 15 certified alloy steel standard samples of known compositions and elemental weight percentages were used in the experiment. Using the proposed method, the average relative errors of the Cr, Ni, and Fe calculated concentrations were 4.40%, 6.81%, and 2.29%, respectively. The quantitative results demonstrated an improvement over the classical CF-LIBS method and promising potential for in situ, real-time application.
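A sketch of the Boltzmann-plot step that such methods start from (the line data below are invented and self-consistent; the paper's PSO refinement is not shown): a line fit of ln(Iλ/(gA)) against the upper-level energy gives the plasma temperature from the slope, since ln(Iλ/(gA)) = -E_k/(k_B T) + const.

    import numpy as np

    kB = 8.617e-5                                 # Boltzmann constant, eV/K
    E_k = np.array([3.2, 4.1, 4.9, 5.6])          # upper-level energies (eV), assumed
    gA = np.array([2.2e8, 1.1e8, 6.0e7, 3.5e7])   # g_k * A_ki (1/s), assumed
    lam = np.array([404.6, 438.4, 516.7, 537.1])  # wavelengths (nm), assumed

    T_true = 9000.0                               # generate consistent intensities
    I = gA / lam * np.exp(-E_k / (kB * T_true))

    y = np.log(I * lam / gA)                      # Boltzmann-plot ordinate
    slope, _ = np.polyfit(E_k, y, 1)
    print("T =", -1.0 / (kB * slope), "K")        # recovers ~9000 K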
Quantum-classical correspondence for the inverted oscillator
NASA Astrophysics Data System (ADS)
Maamache, Mustapha; Ryeol Choi, Jeong
2017-11-01
While quantum-classical correspondence for a system is a very fundamental problem in modern physics, the understanding of its mechanism is often elusive, so the methods used and the results of detailed theoretical analysis have been accompanied by active debate. In this study, the differences and similarities between quantum and classical behavior for an inverted oscillator have been analyzed based on the description of a complete generalized Airy function-type quantum wave solution. The inverted oscillator model plays an important role in several branches of cosmology and particle physics. The quantum wave packet of the system is composed of many sub-packets that are localized at different positions with regular intervals between them. It is shown from illustrations of the probability density that, although the quantum trajectory of the wave propagation is somewhat different from the corresponding classical one, the difference becomes relatively small when the classical excitation is sufficiently high. We have confirmed that a quantum wave packet moving along a positive or negative direction accelerates over time like a classical wave. From these main interpretations and others in the text, we conclude that our theory exquisitely illustrates quantum and classical correspondence for the system, which is a crucial concept in quantum mechanics. Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2016R1D1A1A09919503)
Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error
NASA Astrophysics Data System (ADS)
Miller, Austin
In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
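An illustrative two-stage least squares sketch of the instrumental-variable idea discussed here (all data simulated; this is not the dissertation's likelihood-based estimator, just a toy showing why an instrument recovers the dose-response slope under classical error):

    import numpy as np

    rng = np.random.default_rng(5)
    n = 2000
    true_dose = rng.gamma(2.0, 1.0, n)
    physical = true_dose + rng.normal(0, 0.5, n)    # classical measurement error
    bio = 0.8 * true_dose + rng.normal(0, 0.3, n)   # instrument: biodosimeter
    y = 1.5 * true_dose + rng.normal(0, 1.0, n)     # health outcome

    # stage 1: project the error-prone dose onto the instrument
    Z = np.column_stack([np.ones(n), bio])
    dose_hat = Z @ np.linalg.lstsq(Z, physical, rcond=None)[0]
    # stage 2: regress the outcome on the projected dose
    X = np.column_stack([np.ones(n), dose_hat])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    print("IV slope:", beta[1])                     # near 1.5, unlike naive OLS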
A comparative study of different methods for calculating electronic transition rates
NASA Astrophysics Data System (ADS)
Kananenka, Alexei A.; Sun, Xiang; Schubert, Alexander; Dunietz, Barry D.; Geva, Eitan
2018-03-01
We present a comprehensive comparison of the following mixed quantum-classical methods for calculating electronic transition rates: (1) nonequilibrium Fermi's golden rule, (2) mixed quantum-classical Liouville method, (3) mean-field (Ehrenfest) mixed quantum-classical method, and (4) fewest switches surface-hopping method (in diabatic and adiabatic representations). The comparison is performed on the Garg-Onuchic-Ambegaokar benchmark charge-transfer model, over a broad range of temperatures and electronic coupling strengths, with different nonequilibrium initial states, in the normal and inverted regimes. Under weak to moderate electronic coupling, the nonequilibrium Fermi's golden rule rates are found to be in good agreement with the rates obtained via the mixed quantum-classical Liouville method that coincides with the fully quantum-mechanically exact results for the model system under study. Our results suggest that the nonequilibrium Fermi's golden rule can serve as an inexpensive yet accurate alternative to Ehrenfest and the fewest switches surface-hopping methods.
NASA Astrophysics Data System (ADS)
Aubry, R.; Oñate, E.; Idelsohn, S. R.
2006-09-01
The method presented in Aubry et al. (Comput Struc 83:1459-1475, 2005) for the solution of an incompressible viscous fluid flow with heat transfer using a fully Lagrangian description of motion is extended to three dimensions (3D) with particular emphasis on mass conservation. A modified fractional step (FS) method based on the pressure Schur complement (Turek 1999), and related to the class of algebraic splittings (Quarteroni et al., Comput Methods Appl Mech Eng 188:505-526, 2000), is used, and a new advantage of the splitting of the equations compared with the classical FS is highlighted for free surface problems. The temperature is semi-coupled with the displacement, which is the main variable in a Lagrangian description. Comparisons for various mesh Reynolds numbers are performed with the classical FS, an algebraic splitting and a monolithic solution, in order to illustrate the behaviour of the Uzawa operator and the mass conservation. As the classical fractional step is equivalent to one iteration of the Uzawa algorithm performed with a standard Laplacian as a preconditioner, it will behave well only in a mesh Reynolds number domain where the preconditioner is efficient. Numerical results are provided to assess the superiority of the modified algebraic splitting over the classical FS.
Computational Insights into Materials and Interfaces for Capacitive Energy Storage
Zhan, Cheng; Lian, Cheng; Zhang, Yu; ...
2017-04-24
Supercapacitors such as electric double-layer capacitors (EDLCs) and pseudocapacitors are becoming increasingly important in the field of electrical energy storage. Theoretical study of energy storage in EDLCs focuses on solving for the electric double-layer structure in different electrode geometries and electrolyte components, which can be achieved by molecular simulations such as classical molecular dynamics (MD), classical density functional theory (classical DFT), and Monte Carlo (MC) methods. In recent years, combining first-principles and classical simulations to investigate carbon-based EDLCs has shed light on the importance of quantum capacitance in graphene-like 2D systems. More recently, the development of joint density functional theory (JDFT) enables self-consistent electronic-structure calculation for an electrode being solvated by an electrolyte. In contrast with the large amount of theoretical and computational effort on EDLCs, theoretical understanding of pseudocapacitance is very limited. In this review, we first introduce popular modeling methods and then focus on several important aspects of EDLCs including nanoconfinement, quantum capacitance, dielectric screening, and novel 2D electrode design; we also briefly touch upon the pseudocapacitive mechanism in RuO2. We summarize and conclude with an outlook for the future of materials simulation and design for capacitive energy storage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collee, R.; Govaerts, J.; Winand, L.
1959-10-31
A brief resume of the classical methods of quantitative determination of thorium in ores and thoriferous products is given to show that a rapid, accurate, and precise physical method based on the radioactivity of thorium would be of great utility. A method based on the utilization of the characteristic spectrum of the thorium gamma radiation is presented. The preparation of the samples and the instruments needed for the measurements is discussed. The experimental results show that the reproducibility is very satisfactory and that it is possible to detect Th contents of 1% or smaller. (J.S.R.)
Measuring and Modeling Cosmic Ray Showers with an MBL System: An Undergraduate Project.
ERIC Educational Resources Information Center
Jackson, David P.; Welker, Matthew T.
2001-01-01
Describes a novel method for inducing and measuring cosmic ray showers using a low-cost, microcomputer-based laboratory system. Uses low counting-rate radiation monitors in the reproduction of Bruno Rossi's classic experiment. (Contains 16 references.) (Author/YDS)
An Arbitrary First Order Theory Can Be Represented by a Program: A Theorem
NASA Technical Reports Server (NTRS)
Hosheleva, Olga
1997-01-01
How can we represent knowledge inside a computer? For formalized knowledge, classical logic seems to be the most adequate tool. Classical logic is behind all formalisms of classical mathematics, and behind many formalisms used in Artificial Intelligence. There is only one serious problem with classical logic: due to the famous Godel's theorem, classical logic is algorithmically undecidable; as a result, when knowledge is represented in the form of logical statements, it is very difficult to check whether, based on these statements, a given query is true or not. To make knowledge representations more algorithmic, a special field of logic programming was invented. An important portion of logic programming is algorithmically decidable. To cover knowledge that cannot be represented in this portion, several extensions of the decidable fragments have been proposed. In the spirit of logic programming, these extensions are usually introduced in such a way that even if a general algorithm is not available, good heuristic methods exist. It is important to check whether the already proposed extensions are sufficient, or whether further extensions are necessary. In the present paper, we show that one particular extension, namely logic programming with classical negation, introduced by M. Gelfond and V. Lifschitz, can represent (in some reasonable sense) an arbitrary first order logical theory.
ERIC Educational Resources Information Center
Klinger, Don A.; Rogers, W. Todd
2003-01-01
The estimation accuracy of procedures based on classical test score theory and item response theory (generalized partial credit model) were compared for examinations consisting of multiple-choice and extended-response items. Analysis of British Columbia Scholarship Examination results found an error rate of about 10 percent for both methods, with…
ERIC Educational Resources Information Center
Chapman, Jason E.; Sheidow, Ashli J.; Henggeler, Scott W.; Halliday-Boykins, Colleen A.; Cunningham, Phillippe B.
2008-01-01
A unique application of the Many-Facet Rasch Model (MFRM) is introduced as the preferred method for evaluating the psychometric properties of a measure of therapist adherence to Contingency Management (CM) treatment of adolescent substance use. The utility of psychometric methods based in Classical Test Theory was limited by complexities of the…
Improved Method of Manufacturing SiC Devices
NASA Technical Reports Server (NTRS)
Okojie, Robert S.
2005-01-01
The phrase, "common-layered architecture for semiconductor silicon carbide" ("CLASSiC") denotes a method of batch fabrication of microelectromechanical and semiconductor devices from bulk silicon carbide. CLASSiC is the latest in a series of related methods developed in recent years in continuing efforts to standardize SiC-fabrication processes. CLASSiC encompasses both institutional and technological innovations that can be exploited separately or in combination to make the manufacture of SiC devices more economical. Examples of such devices are piezoresistive pressure sensors, strain gauges, vibration sensors, and turbulence-intensity sensors for use in harsh environments (e.g., high-temperature, high-pressure, corrosive atmospheres). The institutional innovation is to manufacture devices for different customers (individuals, companies, and/or other entities) simultaneously in the same batch. This innovation is based on utilization of the capability for fabrication, on the same substrate, of multiple SiC devices having different functionalities (see figure). Multiple customers can purchase shares of the area on the same substrate, each customer s share being apportioned according to the customer s production-volume requirement. This makes it possible for multiple customers to share costs in a common foundry, so that the capital equipment cost per customer in the inherently low-volume SiC-product market can be reduced significantly. One of the technological innovations is a five-mask process that is based on an established set of process design rules. The rules provide for standardization of the fabrication process, yet are flexible enough to enable multiple customers to lay out masks for their portions of the SiC substrate to provide for simultaneous batch fabrication of their various devices. In a related prior method, denoted multi-user fabrication in silicon carbide (MUSiC), the fabrication process is based largely on surface micromachining of poly SiC. However, in MUSiC one cannot exploit the superior sensing, thermomechanical, and electrical properties of single-crystal 6H-SiC or 4H-SiC. As a complement to MUSiC, the CLASSiC five-mask process can be utilized to fabricate multiple devices in bulk single-crystal SiC of any polytype. The five-mask process makes fabrication less complex because it eliminates the need for large-area deposition and removal of sacrificial material. Other innovations in CLASSiC pertain to selective etching of indium tin oxide and aluminum in connection with multilayer metallization. One major characteristic of bulk micromachined microelectromechanical devices is the presence of three-dimensional (3D) structures. Any 3D recesses that already exist at a given step in a fabrication process usually make it difficult to apply a planar coat of photoresist for metallization and other subsequent process steps. To overcome this difficulty, the CLASSiC process includes a reversal of part of the conventional flow: Metallization is performed before the recesses are etched.
2010-01-01
Background: Patient-Reported Outcomes (PRO) are increasingly used in clinical and epidemiological research. Two main types of analytical strategies can be found for these data: classical test theory (CTT), based on the observed scores, and models coming from Item Response Theory (IRT). However, whether IRT or CTT would be the most appropriate method to analyse PRO data remains unknown. The statistical properties of CTT and IRT, regarding power and corresponding effect sizes, were compared. Methods: Two-group cross-sectional studies were simulated for the comparison of PRO data using IRT or CTT-based analysis. For IRT, different scenarios were investigated according to whether item or person parameters were assumed to be known, known to a certain extent for item parameters (from good to poor precision), or unknown and therefore estimated. The powers obtained with IRT or CTT were compared and the parameters having the strongest impact on them were identified. Results: When person parameters were assumed to be unknown and item parameters to be either known or not, the power achieved using IRT or CTT was similar and always lower than the expected power using the well-known sample size formula for normally distributed endpoints. The number of items had a substantial impact on power for both methods. Conclusion: Without any missing data, IRT and CTT seem to provide comparable power. The classical sample size formula for CTT seems to be adequate under some conditions but is not appropriate for IRT. In IRT, it seems important to take account of the number of items to obtain an accurate formula. PMID:20338031
Fonteyne, Margot; Gildemyn, Delphine; Peeters, Elisabeth; Mortier, Séverine Thérèse F C; Vercruysse, Jurgen; Gernaey, Krist V; Vervaet, Chris; Remon, Jean Paul; Nopens, Ingmar; De Beer, Thomas
2014-08-01
Classically, end-point detection during fluid bed drying has been performed using indirect parameters, such as the product temperature or the humidity of the outlet drying air. This paper aims at comparing those classic methods to both in-line moisture and solid-state determination by means of Process Analytical Technology (PAT) tools (Raman and NIR spectroscopy) and a mass balance approach. The six-segmented fluid bed drying system that is part of a fully continuous from-powder-to-tablet production line (ConsiGma™-25) was used for this study. A theophylline:lactose:PVP (30:67.5:2.5) blend was chosen as the model formulation. For the development of the NIR-based moisture determination model, 15 calibration experiments in the fluid bed dryer were performed. Six test experiments were conducted afterwards, and the product was monitored in-line with NIR and Raman spectroscopy during drying. The results (drying endpoint and residual moisture) obtained via the NIR-based moisture determination model, the classical approach by means of indirect parameters, and the mass balance model were then compared. Our conclusion is that the PAT-based method is best suited for use in a production set-up. Secondly, the different size fractions of the dried granules obtained during different experiments (fines, yield and oversized granules) were compared separately, revealing differences in both the solid state of theophylline and the moisture content between the different granule size fractions. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Mamehrashi, K.; Yousefi, S. A.
2017-02-01
This paper presents a numerical solution for a nonlinear 2-D optimal control problem (2DOP). The performance index of a nonlinear 2DOP is described by a state and a control function. Furthermore, the dynamic constraint of the system is given by a classical diffusion equation. The Ritz method is used for finding the numerical solution of the problem. The method is based upon the Legendre polynomial basis. By using this method, the given nonlinear 2DOP reduces to the problem of solving a system of algebraic equations. The benefit of the method is that it provides great flexibility in how the given initial and boundary conditions of the problem are imposed. Moreover, compared with the eigenfunction method, satisfactory results are obtained with only a small polynomial order. This numerical approach is applicable and effective for this kind of nonlinear 2DOP. The convergence of the method is extensively discussed, and finally two illustrative examples are included to demonstrate the validity and applicability of the new technique developed in the current work.
Target space pseudoduality in supersymmetric sigma models on symmetric spaces
NASA Astrophysics Data System (ADS)
Sarisaman, Mustafa
We discuss target space pseudoduality in supersymmetric sigma models on symmetric spaces. We first consider the case of sigma models based on real compact connected Lie groups of the same dimensionality, and give examples using three-dimensional models on target spaces. We show the explicit construction of nonlocal conserved currents on the pseudodual manifold. We then switch the Lie group valued pseudoduality equations to Lie algebra valued ones, which leads to an infinite number of pseudoduality equations. We obtain an infinite number of conserved currents on the tangent bundle of the pseudodual manifold. Since pseudoduality imposes the condition that sigma models pseudodual to each other are based on symmetric spaces with opposite curvatures (i.e. dual symmetric spaces), we investigate the pseudoduality transformation on symmetric space sigma models in the third chapter. We see that there can be mixing of the decomposed spaces with each other, which leads to mixing of the expressions that follow. We obtain the pseudodual conserved currents, which are viewed as the orthonormal frame on the pullback bundle of the tangent space of G̃, the Lie group on which the pseudodual model is based. Hence we obtain the mixing forms of the curvature relations and of the one-loop renormalization group beta function by means of these currents. In chapter four, we generalize the classical construction of the pseudoduality transformation to the supersymmetric case. We perform this both by the component expansion method on the manifold M and by the orthonormal coframe method on the manifold SO(M). The component method produces the result that the pseudoduality transformation is not invertible at all points and occurs from all points on one manifold to only one point, where Riemann normal coordinates are valid on the second manifold. The torsion of the sigma model on M must vanish while it is nonvanishing on M̃, and the curvatures of the manifolds must be constant and the same because of the anticommuting Grassmann numbers. We obtain results similar to the classical case with the orthonormal coframe method. In the case of super WZW sigma models, the pseudoduality equations result in three different pseudoduality conditions: flat space, chiral and antichiral pseudoduality. Finally, we study the pseudoduality transformations on symmetric spaces using two different methods again. These two methods yield results similar to the classical cases, with the exception that the commuting bracket relations of the classical case turn out to be anticommuting ones because of the appearance of Grassmann numbers. It is understood that the constraint relations in the case of non-mixing pseudoduality are remnants of mixing pseudoduality. Once mixing terms are included in the pseudoduality, the constraint relations disappear.
ERIC Educational Resources Information Center
Matthews, Dorothy, Ed.
1979-01-01
The eight articles in this bulletin suggest methods of introducing classical literature into the English curriculum. Article titles are: "Ideas for Teaching Classical Mythology"; "What Novels Should High School Students Read?"; "Enlivening the Classics for Live Students"; "Poetry in Performance: The Value of Song and Oral Interpretation in…
Yang, Qing; Fan, Liu-Yin; Huang, Shan-Sheng; Zhang, Wei; Cao, Cheng-Xi
2011-04-01
In this paper, we developed a novel method of acid-base titration, viz. the electromigration acid-base titration (EABT), via a moving neutralization boundary (MNB). With HCl and NaOH as the model strong acid and base, respectively, we conducted experiments on the EABT via the method of the moving neutralization boundary for the first time. The experiments revealed that (i) the concentration of agarose gel, the voltage used and the content of background electrolyte (KCl) had an evident influence on the boundary movement; (ii) the movement length was a function of the running time under constant acid and base concentrations; and (iii) there was good linearity between the length and the natural logarithm of the HCl concentration under the optimized conditions, and this linearity could be used to detect the concentration of acid. The experiments further manifested that (i) the RSD values of intra-day and inter-day runs were less than 1.59 and 3.76%, respectively, indicating precision and stability similar to capillary electrophoresis or HPLC; (ii) indicators with different pK(a) values had no obvious effect on the EABT, in contrast to their strong influence on the judgment of the equivalence point in classic titration; and (iii) a constant equivalence point always existed in the EABT, unlike in classic volumetric analysis. Additionally, the EABT could be put to good use for the determination of actual acid concentrations. The experimental results achieved herein provide new general guidance for the development of classic volumetric analysis and element (e.g. nitrogen) content analysis in protein chemistry. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
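The reported linearity between boundary-movement length and the natural logarithm of the acid concentration amounts to a simple calibration curve. A hedged sketch of that idea follows; the concentrations and lengths are illustrative placeholders, not data from the paper.

```python
# Sketch of the EABT calibration idea: fit boundary-movement length against
# ln(acid concentration), then invert the fit to estimate an unknown sample.
import numpy as np

conc = np.array([0.01, 0.02, 0.05, 0.10, 0.20])   # mol/L HCl (assumed values)
length = np.array([4.1, 5.9, 8.3, 10.2, 12.0])    # boundary movement, mm (assumed)

slope, intercept = np.polyfit(np.log(conc), length, 1)

def concentration_from_length(l_mm):
    """Invert the linear calibration length = slope*ln(c) + intercept."""
    return np.exp((l_mm - intercept) / slope)

print(concentration_from_length(9.0))  # estimated HCl concentration, mol/L
```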
Layout optimization with algebraic multigrid methods
NASA Technical Reports Server (NTRS)
Regler, Hans; Ruede, Ulrich
1993-01-01
Finding the optimal position for the individual cells (also called functional modules) on the chip surface is an important and difficult step in the design of integrated circuits. This paper deals with the problem of relative placement, that is, the minimization of a quadratic functional with a large, sparse, positive definite system matrix. The basic optimization problem must be augmented by constraints to inhibit solutions where cells overlap. Besides classical iterative methods based on conjugate gradients (CG), we show that algebraic multigrid methods (AMG) provide an interesting alternative. For moderately sized examples with about 10000 cells, AMG is already competitive with CG and is expected to be superior for larger problems. Besides the classical 'multiplicative' AMG algorithm, where the levels are visited sequentially, we propose an 'additive' variant of AMG where levels may be treated in parallel and that is suitable as a preconditioner in the CG algorithm.
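A hedged sketch of the comparison the paper draws, assuming the pyamg package is available and using a sparse Poisson matrix as a stand-in for the placement system matrix (the actual placement matrices are not reproduced here):

```python
# CG versus algebraic multigrid on a large sparse SPD system, plus AMG used
# as a preconditioner inside CG, in the spirit of the paper's 'additive' use.
import numpy as np
import pyamg
from scipy.sparse.linalg import cg

A = pyamg.gallery.poisson((200, 200), format='csr')  # SPD stand-in matrix
b = np.random.default_rng(0).standard_normal(A.shape[0])

x_cg, info = cg(A, b)                      # classical conjugate gradients

ml = pyamg.ruge_stuben_solver(A)           # classical AMG hierarchy
x_amg = ml.solve(b, tol=1e-8)              # 'multiplicative' V-cycles

M = ml.aspreconditioner()                  # AMG as a CG preconditioner
x_pcg, info = cg(A, b, M=M)
print(np.linalg.norm(A @ x_pcg - b))       # residual check
```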
Gravitational forces and moments on spacecraft
NASA Technical Reports Server (NTRS)
Kane, T. R.; Likins, P. W.
1975-01-01
The solution of problems in spacecraft attitude dynamics under the influence of gravitational forces and moments is examined. Arguments are presented based on Newton's law of gravitation, employing the methods of Newtonian (vectorial) mechanics, with minimal recourse to the classical concepts of potential theory. The necessary ideas are developed and relationships established to permit the representation of gravitational forces and moments exerted on bodies in space by other bodies, both in terms involving the mass distribution properties of the bodies and in terms of vector operations on those scalar functions classically described as gravitational potential functions.
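The leading-order result of this kind of vectorial treatment is the standard gravity-gradient moment M = 3(mu/R^3) u x (I u), with u the unit vector from the spacecraft to the attracting center. A sketch with illustrative inertia and orbit values (assumptions, not figures from the report):

```python
# Leading-order gravity-gradient moment on a rigid spacecraft:
#   M = 3*(mu/R^3) * u x (I u)
import numpy as np

mu = 3.986004418e14                  # Earth's GM, m^3/s^2
R = 7.0e6                            # orbital radius, m (assumed)
u = np.array([1.0, 1.0, 0.0])        # nadir direction in body axes (assumed,
u /= np.linalg.norm(u)               # deliberately off a principal axis)
I = np.diag([1200.0, 900.0, 600.0])  # inertia tensor, kg m^2 (assumed)

M_gg = 3.0 * mu / R**3 * np.cross(u, I @ u)
print(M_gg)   # N m; vanishes when u is a principal axis of I
```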
Lee, Sung Hak; Jung, Chan Kwon; Bae, Ja Seong; Jung, So Lyung; Choi, Yeong Jin; Kang, Chang Suk
2014-01-01
The tall cell variant (TCV) of papillary thyroid carcinoma (PTC) is the most common among the aggressive variants of the disease. We aimed to investigate the clinicopathologic characteristics of TCV, and evaluate the diagnostic efficacy of liquid-based cytology (LBC) in TCV detection compared with conventional smear in thyroid fine needle aspiration (FNA). A total of 266 consecutive patients (220 women and 46 men) with PTC were enrolled. We analyzed tumor characteristics according to histologic growth patterns as classic, classic PTC with tall cell features, and TCV. The cytomorphologic features of these subtypes were investigated according to the preparation methods of conventional smear and LBC. TCV and classic PTC with tall cell features comprised 4.9% and 6.0% of all tumors, respectively, and were significantly associated with older age at presentation, larger tumor size, high frequency of extrathyroid extension, and BRAF mutation in comparison with classic PTC. However, there was no statistically significant difference in clinicopathologic features between TCV and classic PTC with tall cell features. Tall cells were more easily detected by LBC than by conventional smear. The percentage of tall cells identified using LBC was well correlated with three histologic subtypes. Our results demonstrate that TCV is more common than previously recognized in Korea and any PTC containing tall cells may have identical biological behavior regardless of the precise proportions of tall cells. It is possible to make a preoperative diagnosis of TCV using LBC. Copyright © 2013 Wiley Periodicals, Inc.
On the Analysis of Multistep-Out-of-Grid Method for Celestial Mechanics Tasks
NASA Astrophysics Data System (ADS)
Olifer, L.; Choliy, V.
2016-09-01
Occasionally, there is a need for highly accurate prediction of celestial body trajectories. The most common way to do this is to solve Kepler's equation analytically or to use Runge-Kutta or Adams integrators to solve the equation of motion numerically. For low-orbit satellites, it is critical to account for the geopotential and other forces that influence the motion. As a result, the right-hand side of the equation of motion becomes much more expensive to evaluate, and classical integrators are no longer very effective. On the other hand, there is the multistep-out-of-grid (MOG) method, which combines the Runge-Kutta and Adams methods. The MOG method is based on using m on-grid values of the solution and n × m off-grid derivative estimations. Such a method can provide stable integrators of the maximum possible order, O(h^(m+mn+n-1)). The main subject of this research was to implement and analyze the MOG method for solving the satellite equation of motion, taking into account an Earth geopotential model (e.g. EGM2008 (Pavlis et al., 2008)) and with the possibility of adding other perturbations such as atmospheric drag or solar radiation pressure. Simulations were made for satellites on low orbits with various eccentricities (from 0.1 to 0.9). Results of the MOG integrator were compared with those of Runge-Kutta and Adams integrators. It was shown that the MOG method has better accuracy than classical methods of the same order and requires fewer right-hand-side evaluations when working at high orders. That gives it some advantage over "classical" methods.
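A sketch of the classical baseline this work benchmarks against: propagating a two-body (Kepler) orbit with standard Runge-Kutta integrators and monitoring energy drift. The orbit parameters are illustrative; the MOG integrator itself is not reproduced here.

```python
# Propagate an eccentric two-body orbit with classical explicit RK methods
# and check the relative energy drift, a common accuracy diagnostic.
import numpy as np
from scipy.integrate import solve_ivp

mu = 3.986004418e14                         # Earth's GM, m^3/s^2

def rhs(t, y):
    r = y[:3]
    return np.hstack([y[3:], -mu * r / np.linalg.norm(r)**3])

def energy(y):
    return 0.5 * np.dot(y[3:], y[3:]) - mu / np.linalg.norm(y[:3])

r0 = np.array([7.0e6, 0.0, 0.0])            # perigee position, m (assumed)
v0 = np.array([0.0, 9.2e3, 0.0])            # perigee speed, m/s (e ~ 0.49)
y0 = np.hstack([r0, v0])

for method in ('RK45', 'DOP853'):           # classical RK integrators
    sol = solve_ivp(rhs, (0.0, 5 * 86400.0), y0, method=method, rtol=1e-10)
    drift = abs(energy(sol.y[:, -1]) - energy(y0)) / abs(energy(y0))
    print(method, f'relative energy drift = {drift:.2e}')
```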
Speckle: tool for diagnosis assistance
NASA Astrophysics Data System (ADS)
Carvalho, O.; Guyot, S.; Roy, L.; Benderitter, M.; Clairac, B.
2006-09-01
In this paper, we present a new approach to the speckle phenomenon. This method is based on the fractal Brownian motion theory and allows the extraction of three stochastic parameters to characterize the speckle pattern. For the first time, we present the results of this method applied to the discrimination of healthy vs. pathologic skin. We also demonstrate, in the case of scleroderma, that this method is more accurate than the classical frequency-domain approach.
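A minimal sketch of the fractional-Brownian-motion viewpoint: one of the stochastic parameters such an analysis yields is a Hurst exponent H, estimated from the scaling Var[x(t+tau) - x(t)] ~ tau^(2H). Synthetic data stands in for a speckle intensity trace; this is not the authors' three-parameter estimator.

```python
# Estimate the Hurst exponent of a 1-D profile from increment-variance scaling.
import numpy as np

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(4096))      # ordinary Brownian path, H = 0.5

lags = np.arange(1, 65)
v = [np.var(x[lag:] - x[:-lag]) for lag in lags]
H = np.polyfit(np.log(lags), np.log(v), 1)[0] / 2.0
print(f'estimated Hurst exponent: {H:.2f}')   # close to 0.5 for Brownian input
```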
A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol.
Zeng, Ping; Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun
2017-01-01
In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on, all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and it has great promise for use in real-world applications.
A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol
Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun
2017-01-01
In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on—all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and it has great promise for use in real-world applications. PMID:28399157
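A hedged sketch of the hash-table side of the idea: matching from a fixed starting position by grouping patterns by length and probing a hash set per length class. This illustrates the general technique, not the authors' MH implementation with binary tables.

```python
# Multi-pattern matching from a fixed starting position (e.g., the start of a
# URL path): group patterns by length, then probe one hash set per length.
patterns = ['/index.html', '/api/v1/', '/static/', '/login']

by_len = {}                                   # pattern length -> set of patterns
for p in patterns:
    by_len.setdefault(len(p), set()).add(p)

def match_from_start(s):
    """Return all patterns that are prefixes of s, via hash lookups."""
    hits = []
    for n, bucket in by_len.items():
        if len(s) >= n and s[:n] in bucket:   # O(1) expected per length class
            hits.append(s[:n])
    return hits

print(match_from_start('/api/v1/users?id=7'))  # ['/api/v1/']
```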
An Efficient Numerical Approach for Nonlinear Fokker-Planck equations
NASA Astrophysics Data System (ADS)
Otten, Dustin; Vedula, Prakash
2009-03-01
Fokker-Planck equations that are nonlinear in their probability densities, which occur in many nonequilibrium systems relevant to mean-field interaction models, plasmas, and classical fermions and bosons, can be challenging to solve numerically. To address some underlying challenges in obtaining numerical solutions, we propose a quadrature-based moment method for efficient and accurate determination of transient (and stationary) solutions of nonlinear Fokker-Planck equations. In this approach the distribution function is represented as a collection of Dirac delta functions with corresponding quadrature weights and locations that are in turn determined from constraints based on the evolution of generalized moments. Properties of the distribution function can be obtained by solving transport equations for the quadrature weights and locations. We apply this computational approach to study a wide range of problems, including the Desai-Zwanzig model (for nonlinear muscular contraction) and multivariate nonlinear Fokker-Planck equations describing classical fermions and bosons, and we also demonstrate good agreement with results obtained from Monte Carlo and other standard numerical methods.
Will the digital computer transform classical mathematics?
Rotman, Brian
2003-08-15
Mathematics and machines have influenced each other for millennia. The advent of the digital computer introduced a powerfully new element that promises to transform the relation between them. This paper outlines the thesis that the effect of the digital computer on mathematics, already widespread, is likely to be radical and far-reaching. To articulate this claim, an abstract model of doing mathematics is introduced based on a triad of actors of which one, the 'agent', corresponds to the function performed by the computer. The model is used to frame two sorts of transformation. The first is pragmatic and involves the alterations and progressive colonization of the content and methods of enquiry of various mathematical fields brought about by digital methods. The second is conceptual and concerns a fundamental antagonism between the infinity enshrined in classical mathematics and physics (continuity, real numbers, asymptotic definitions) and the inherently real and material limit of processes associated with digital computation. An example which lies in the intersection of classical mathematics and computer science, the P=NP problem, is analysed in the light of this latter issue.
Novel hyperspectral prediction method and apparatus
NASA Astrophysics Data System (ADS)
Kemeny, Gabor J.; Crothers, Natalie A.; Groth, Gard A.; Speck, Kathy A.; Marbach, Ralf
2009-05-01
Both the power and the challenge of hyperspectral technologies lie in the very large amount of data produced by spectral cameras. While off-line methodologies allow the collection of gigabytes of data, extended data analysis sessions are required to convert the data into useful information. In contrast, real-time monitoring, such as on-line process control, requires that compression of spectral data and analysis occur at a sustained full camera data rate. Efficient, high-speed practical methods for calibration and prediction are therefore sought to optimize the value of hyperspectral imaging. A novel method of matched filtering known as science based multivariate calibration (SBC) was developed for hyperspectral calibration. Classical (MLR) and inverse (PLS, PCR) methods are combined by spectroscopically measuring the spectral "signal" and by statistically estimating the spectral "noise." The accuracy of the inverse model is thus combined with the easy interpretability of the classical model. The SBC method is optimized for hyperspectral data in the Hyper-Cal™ software used for the present work. The prediction algorithms can then be downloaded into a dedicated FPGA-based High-Speed Prediction Engine™ module. Spectral pretreatments and calibration coefficients are stored on interchangeable SD memory cards, and predicted compositions are produced on a USB interface at real-time camera output rates. Applications include minerals, pharmaceuticals, food processing and remote sensing.
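Combining a measured "signal" spectrum with a statistically estimated "noise" covariance has the structure of a classical matched filter, b = S^-1 s / (s^T S^-1 s). The sketch below shows that generic structure under assumed synthetic data; it is not the Hyper-Cal implementation of SBC.

```python
# Matched-filter regression vector from a known signal spectrum s and an
# estimated noise covariance S; b is normalized so that b.s = 1, making b.x
# an estimate of the analyte amplitude in a measured spectrum x.
import numpy as np

rng = np.random.default_rng(0)
nbands = 50
s = np.exp(-0.5 * ((np.arange(nbands) - 25) / 4.0) ** 2)   # signal spectrum (toy)

noise = rng.standard_normal((500, nbands)) @ np.diag(np.linspace(0.5, 2, nbands))
S = np.cov(noise, rowvar=False) + 1e-6 * np.eye(nbands)    # estimated noise cov

b = np.linalg.solve(S, s)
b /= s @ b                                                 # normalization b.s = 1

x = 0.7 * s + 0.1 * rng.standard_normal(nbands)            # one measured pixel
print(b @ x)                                               # estimate near 0.7
```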
Rectennas at optical frequencies: How to analyze the response
NASA Astrophysics Data System (ADS)
Joshi, Saumil; Moddel, Garret
2015-08-01
Optical rectennas, antenna-coupled diode rectifiers that receive optical-frequency electromagnetic radiation and convert it to DC output, have been proposed for use in harvesting electromagnetic radiation from a blackbody source. The operation of these devices is qualitatively different from that of lower-frequency rectennas, and their design requires a new approach. To that end, we present a method to determine the rectenna response to high frequency illumination. It combines classical circuit analysis with classical and quantum-based photon-assisted tunneling response of a high-speed diode. We demonstrate the method by calculating the rectenna response for low and high frequency monochromatic illumination, and for radiation from a blackbody source. Such a blackbody source can be a hot body generating waste heat, or radiation from the sun.
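The photon-assisted tunneling response that the abstract combines with circuit analysis is conventionally captured by the Tien-Gordon expression: under an AC drive of amplitude V_w at angular frequency w, I(V) = sum_n J_n(eV_w/hbar w)^2 I_dark(V + n hbar w / e). A sketch for monochromatic illumination follows; the dark I-V curve and drive values are illustrative assumptions, not the paper's diode model.

```python
# Tien-Gordon photon-assisted tunneling response of a diode under a
# monochromatic optical-frequency drive.
import numpy as np
from scipy.special import jv

e = 1.602176634e-19
hbar = 1.054571817e-34

def i_dark(v):
    return 1e-9 * (np.exp(v / 0.05) - 1.0)      # toy exponential diode (assumed)

def i_illuminated(v, v_ac, omega, nmax=20):
    alpha = e * v_ac / (hbar * omega)           # photons absorbed per tunneling event
    n = np.arange(-nmax, nmax + 1)
    return np.sum(jv(n, alpha) ** 2 * i_dark(v + n * hbar * omega / e))

omega = 2 * np.pi * 300e12                      # 300 THz drive (assumed)
print(i_illuminated(0.02, 0.05, omega))         # rectified current, A
```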
Bukhvostov-Lipatov model and quantum-classical duality
NASA Astrophysics Data System (ADS)
Bazhanov, Vladimir V.; Lukyanov, Sergei L.; Runov, Boris A.
2018-02-01
The Bukhvostov-Lipatov model is an exactly soluble model of two interacting Dirac fermions in 1 + 1 dimensions. The model describes weakly interacting instantons and anti-instantons in the O (3) non-linear sigma model. In our previous work [arXiv:1607.04839] we have proposed an exact formula for the vacuum energy of the Bukhvostov-Lipatov model in terms of special solutions of the classical sinh-Gordon equation, which can be viewed as an example of a remarkable duality between integrable quantum field theories and integrable classical field theories in two dimensions. Here we present a complete derivation of this duality based on the classical inverse scattering transform method, traditional Bethe ansatz techniques and analytic theory of ordinary differential equations. In particular, we show that the Bethe ansatz equations defining the vacuum state of the quantum theory also define connection coefficients of an auxiliary linear problem for the classical sinh-Gordon equation. Moreover, we also present details of the derivation of the non-linear integral equations determining the vacuum energy and other spectral characteristics of the model in the case when the vacuum state is filled by 2-string solutions of the Bethe ansatz equations.
Classical boson sampling algorithms with superior performance to near-term experiments
NASA Astrophysics Data System (ADS)
Neville, Alex; Sparrow, Chris; Clifford, Raphaël; Johnston, Eric; Birchall, Patrick M.; Montanaro, Ashley; Laing, Anthony
2017-12-01
It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.
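The "intractable matrix functions" governing photon statistics are matrix permanents; Ryser's formula is the standard exact classical algorithm for them. The sketch below shows a plain O(2^n n^2) version of Ryser's formula (Gray-code updates bring it to O(2^n n)); it is the permanent evaluation underlying boson sampling probabilities, not the paper's Metropolised independence sampler.

```python
# Ryser's formula: perm(A) = (-1)^n * sum over nonempty column subsets S of
# (-1)^|S| * prod_i (sum_{j in S} a_ij).
import itertools
import numpy as np

def ryser_permanent(a):
    n = a.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            rowsums = a[:, cols].sum(axis=1)
            total += (-1) ** r * np.prod(rowsums)
    return (-1) ** n * total

a = np.array([[1.0, 2.0], [3.0, 4.0]])
print(ryser_permanent(a))   # 1*4 + 2*3 = 10
```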
Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations
NASA Astrophysics Data System (ADS)
Mitry, Mina
Often, computationally expensive engineering simulations can be prohibitive in the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high-dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as to a model of a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
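A hedged sketch of the linear variant of this idea: compress high-dimensional simulation outputs with principal component analysis, then fit radial basis functions to the retained coefficients as a function of the design variables. It assumes scipy >= 1.7 (for RBFInterpolator) and uses a toy output field rather than the thesis's aerodynamic data.

```python
# Linear reduced-order surrogate: PCA compression + RBF interpolation of the
# retained coefficients over the design space.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (40, 2))                     # design points (2 variables)
Y = np.array([np.sin(3 * x[0]) * np.cos(2 * x[1]) *
              np.linspace(0, 1, 500) for x in X])   # 500-dim outputs (toy)

Ymean = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - Ymean, full_matrices=False)
k = 3                                               # retained PCA modes
coeffs = U[:, :k] * s[:k]                           # coefficients per sample

rbf = RBFInterpolator(X, coeffs)                    # surrogate in reduced space

x_new = np.array([[0.2, -0.4]])
y_pred = rbf(x_new) @ Vt[:k] + Ymean                # reconstruct the full field
print(y_pred.shape)                                 # (1, 500)
```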
Multiple Active Contours Guided by Differential Evolution for Medical Image Segmentation
Cruz-Aceves, I.; Avina-Cervantes, J. G.; Lopez-Hernandez, J. M.; Rostro-Gonzalez, H.; Garcia-Capulin, C. H.; Torres-Cisneros, M.; Guzman-Cabrera, R.
2013-01-01
This paper presents a new image segmentation method based on multiple active contours guided by differential evolution, called MACDE. The segmentation method uses differential evolution over a polar coordinate system to increase the exploration and exploitation capabilities relative to the classical active contour model. To evaluate the performance of the proposed method, a set of synthetic images with complex objects, Gaussian noise, and deep concavities is introduced. Subsequently, MACDE is applied to datasets of sequential computed tomography and magnetic resonance images which contain the human heart and the human left ventricle, respectively. Finally, to obtain a quantitative and qualitative evaluation of the medical image segmentations compared to regions outlined by experts, a set of distance and similarity metrics has been adopted. According to the experimental results, MACDE outperforms the classical active contour model and the interactive Tseng method in terms of efficiency and robustness for obtaining the optimal control points, and attains high segmentation accuracy. PMID:23983809
Improved Adaptive LSB Steganography Based on Chaos and Genetic Algorithm
NASA Astrophysics Data System (ADS)
Yu, Lifang; Zhao, Yao; Ni, Rongrong; Li, Ting
2010-12-01
We propose a novel steganographic method for JPEG images with high performance. Firstly, we propose an improved adaptive LSB steganography, which can achieve high capacity while preserving the first-order statistics. Secondly, in order to minimize visual degradation of the stego image, we shuffle the bit order of the message based on chaos, whose parameters are selected by a genetic algorithm. Shuffling the message's bit order provides a new way to improve the performance of steganography. Experimental results show that our method outperforms classical steganographic methods in image quality, while preserving histogram characteristics and providing high capacity.
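A minimal sketch of the chaos-based shuffle: iterate the logistic map, sort the trajectory to obtain a permutation, and reorder the message bits with it. In the paper a genetic algorithm selects the map parameters; here (r, x0) are fixed for illustration.

```python
# Chaos-based bit-order shuffle via a logistic-map permutation; the receiver
# inverts it by regenerating the same permutation from the shared (r, x0).
import numpy as np

def logistic_permutation(n, r=3.99, x0=0.37):
    """Permutation of range(n) derived from a logistic-map trajectory."""
    x = np.empty(n)
    x[0] = x0
    for k in range(1, n):
        x[k] = r * x[k - 1] * (1.0 - x[k - 1])
    return np.argsort(x)

bits = np.random.default_rng(2).integers(0, 2, 64)   # message bits
perm = logistic_permutation(bits.size)

shuffled = bits[perm]                 # these bits would be embedded in the image
restored = np.empty_like(bits)
restored[perm] = shuffled             # inverse permutation at the receiver
assert np.array_equal(restored, bits)
```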
ERIC Educational Resources Information Center
Caballero, Marcos D.; Doughty, Leanne; Turnbull, Anna M.; Pepper, Rachel E.; Pollock, Steven J.
2017-01-01
Reliable and validated assessments of introductory physics have been instrumental in driving curricular and pedagogical reforms that lead to improved student learning. As part of an effort to systematically improve our sophomore-level classical mechanics and math methods course (CM 1) at CU Boulder, we have developed a tool to assess student…
Crack image segmentation based on improved DBC method
NASA Astrophysics Data System (ADS)
Cao, Ting; Yang, Nan; Wang, Fengping; Gao, Ting; Wang, Weixing
2017-11-01
With the development of computer vision technology, crack detection based on digital image segmentation has attracted global attention among researchers and transportation ministries. Since cracks exhibit random shapes and complex textures, reliable crack detection remains a challenge. Therefore, a novel crack image segmentation method based on fractal DBC (differential box counting) is introduced in this paper. The proposed method estimates a fractal feature for every pixel based on neighborhood information, which considers the contributions from all possible directions in the related block. The block moves by just one pixel each time so that it covers all pixels in the crack image. Unlike the classic DBC method, which only describes a fractal feature for the related region, this novel method can effectively achieve crack image segmentation according to the fractal feature of every pixel. Experiments show that the proposed method achieves satisfactory results in crack detection.
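A hedged sketch of the classic, per-region DBC estimate that the paper modifies: for each box size s, the image is partitioned into s x s blocks, each block contributes n_r = floor(max/h) - floor(min/h) + 1 boxes of height h = s*G/M, and the fractal dimension is the slope of log N_s versus log(M/s). This is the baseline form, not the paper's per-pixel sliding version.

```python
# Classic differential box counting (DBC) fractal dimension of a grayscale image.
import numpy as np

def dbc_dimension(img, sizes=(2, 4, 8, 16)):
    M = img.shape[0]                         # assumes a square M x M image
    G = 256.0                                # gray-level range
    Ns = []
    for s in sizes:
        h = s * G / M                        # box height in gray levels
        n = 0
        for i in range(0, M - M % s, s):
            for j in range(0, M - M % s, s):
                block = img[i:i + s, j:j + s]
                n += int(block.max() // h) - int(block.min() // h) + 1
        Ns.append(n)
    slope, _ = np.polyfit(np.log([M / s for s in sizes]), np.log(Ns), 1)
    return slope                             # fractal dimension estimate

img = np.random.default_rng(3).integers(0, 256, (64, 64)).astype(float)
print(dbc_dimension(img))                    # near 3 for a pure-noise surface
```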
Planar dielectric waveguides in rotation are optical fibers: comparison with the classical model.
Peña García, Antonio; Pérez-Ocón, Francisco; Jiménez, José Ramón
2008-01-21
A novel and simpler method to calculate the main parameters in fiber optics is presented. This method is based on a planar dielectric waveguide in rotation and, as an example, it is applied to calculate the turning points and the inner caustic in an optical fiber with a parabolic refractive index. It is shown that the solution found using this method agrees with the standard (and more complex) method, whose solutions for these points are also summarized in this paper.
Illumination invariant feature point matching for high-resolution planetary remote sensing images
NASA Astrophysics Data System (ADS)
Wu, Bo; Zeng, Hai; Hu, Han
2018-03-01
Despite its success with regular close-range and remote-sensing images, the scale-invariant feature transform (SIFT) algorithm is essentially not invariant to illumination differences due to the use of gradients for feature description. In planetary remote sensing imagery, which normally lacks sufficient textural information, salient regions are generally triggered by the shadow effects of keypoints, reducing the matching performance of classical SIFT. Based on the observation of dual peaks in a histogram of the dominant orientations of SIFT keypoints, this paper proposes an illumination-invariant SIFT matching method for high-resolution planetary remote sensing images. First, as the peaks in the orientation histogram are generally aligned closely with the sub-solar azimuth angle at the time of image collection, an adaptive suppression Gaussian function is tuned to level the histogram and thereby alleviate the differences in illumination caused by a changing solar angle. Next, the suppression function is incorporated into the original SIFT procedure for obtaining feature descriptors, which are used for initial image matching. Finally, as the distribution of feature descriptors changes after anisotropic suppression, and the ratio check used for matching and outlier removal in classical SIFT may produce inferior results, this paper proposes an improved matching procedure based on cross-checking and template image matching. The experimental results for several high-resolution remote sensing images from both the Moon and Mars, with illumination differences of 20°-180°, reveal that the proposed method retrieves about 40%-60% more matches than the classical SIFT method. The proposed method is of significance for matching or co-registration of planetary remote sensing images for their synergistic use in various applications. It also has the potential to be useful for flyby and rover images by integrating with the affine invariant feature detectors.
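A sketch of the cross-checking step that replaces the classical ratio test: a match is kept only if the two descriptors select each other in both directions. It uses stock OpenCV SIFT on illustrative file names, without the paper's adaptive orientation-histogram suppression.

```python
# Cross-checked (mutual nearest neighbor) SIFT matching with OpenCV.
import cv2

img1 = cv2.imread('moon_a.png', cv2.IMREAD_GRAYSCALE)   # illustrative files
img2 = cv2.imread('moon_b.png', cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)   # mutual matches only
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), 'cross-checked matches')
```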
A Bayesian antedependence model for whole genome prediction.
Yang, Wenzhao; Tempelman, Robert J
2012-04-01
Hierarchical mixed effects models have been demonstrated to be powerful for predicting the genomic merit of livestock and plants, on the basis of high-density single-nucleotide polymorphism (SNP) marker panels, and their use is being increasingly advocated for genomic predictions in human health. Two particularly popular approaches, labeled BayesA and BayesB, are based on specifying all SNP-associated effects to be independent of each other. BayesB extends BayesA by allowing a large proportion of SNP markers to be associated with null effects. We further extend these two models to specify SNP effects as being spatially correlated due to the chromosomally proximal effects of causal variants. These two models, which we dub ante-BayesA and ante-BayesB respectively, are based on a first-order nonstationary antedependence specification between SNP effects. In a simulation study involving 20 replicate data sets, each analyzed at six different SNP marker densities with average LD levels ranging from r² = 0.15 to 0.31, the antedependence methods had significantly (P < 0.01) higher accuracies than their corresponding classical counterparts at higher LD levels (r² > 0.24), with differences exceeding 3%. A cross-validation study was also conducted on the heterogeneous stock mice data resource (http://mus.well.ox.ac.uk/mouse/HS/) using 6-week body weights as the phenotype. The antedependence methods increased cross-validation prediction accuracies by up to 3.6% compared to their classical counterparts (P < 0.001). Finally, we applied our method to other benchmark data sets and demonstrated that the antedependence methods were more accurate than their classical counterparts for genomic predictions, even for individuals several generations beyond the training data.
Hybrid classical/quantum simulation for infrared spectroscopy of water
NASA Astrophysics Data System (ADS)
Maekawa, Yuki; Sasaoka, Kenji; Ube, Takuji; Ishiguro, Takashi; Yamamoto, Takahiro
2018-05-01
We have developed a hybrid classical/quantum simulation method to calculate the infrared (IR) spectrum of water. The proposed method achieves much higher accuracy than conventional classical molecular dynamics (MD) simulations at a much lower computational cost than ab initio MD simulations. The IR spectrum of water is obtained as an ensemble average of the eigenvalues of the dynamical matrix constructed by ab initio calculations, using the positions of oxygen atoms that constitute water molecules obtained from the classical MD simulation. The calculated IR spectrum is in excellent agreement with the experimental IR spectrum.
Alkaloid profiles of Mimosa tenuiflora and associated methods of analysis
USDA-ARS?s Scientific Manuscript database
The alkaloid contents of the leaves and seeds of M. tenuiflora collected from northeastern Brazil were studied. Alkaloids were isolated by classical acid/base extraction procedures and by cation exchange solid phase extraction. The crude alkaloid fractions were then analysed by thin layer chromatogr...
On Some Assumptions of the Null Hypothesis Statistical Testing
ERIC Educational Resources Information Center
Patriota, Alexandre Galvão
2017-01-01
Bayesian and classical statistical approaches are based on different types of logical principles. In order to avoid mistaken inferences and misguided interpretations, the practitioner must respect the inference rules embedded into each statistical method. Ignoring these principles leads to the paradoxical conclusions that the hypothesis…
Data Analysis Techniques for Physical Scientists
NASA Astrophysics Data System (ADS)
Pruneau, Claude A.
2017-10-01
Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.
A Gibbs sampler for Bayesian analysis of site-occupancy data
Dorazio, Robert M.; Rodriguez, Daniel Taylor
2012-01-01
1. A Bayesian analysis of site-occupancy data containing covariates of species occurrence and species detection probabilities is usually completed using Markov chain Monte Carlo methods in conjunction with software programs that can implement those methods for any statistical model, not just site-occupancy models. Although these software programs are quite flexible, considerable experience is often required to specify a model and to initialize the Markov chain so that summaries of the posterior distribution can be estimated efficiently and accurately. 2. As an alternative to these programs, we develop a Gibbs sampler for Bayesian analysis of site-occupancy data that include covariates of species occurrence and species detection probabilities. This Gibbs sampler is based on a class of site-occupancy models in which probabilities of species occurrence and detection are specified as probit-regression functions of site- and survey-specific covariate measurements. 3. To illustrate the Gibbs sampler, we analyse site-occupancy data of the blue hawker, Aeshna cyanea (Odonata, Aeshnidae), a common dragonfly species in Switzerland. Our analysis includes a comparison of results based on Bayesian and classical (non-Bayesian) methods of inference. We also provide code (based on the R software program) for conducting Bayesian and classical analyses of site-occupancy data.
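The probit-regression Gibbs step underlying this class of samplers is the Albert-Chib data augmentation: latent normals are drawn truncated by the 0/1 response, then the coefficients are drawn from their conditional normal. A minimal sketch follows, assuming a flat prior on the coefficients and synthetic data; it is the generic building block, not the authors' full site-occupancy sampler.

```python
# Albert-Chib Gibbs sampler for probit regression (flat prior on beta).
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)
n, p = 300, 2
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([-0.3, 1.2])
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(int)

V = np.linalg.inv(X.T @ X)                  # posterior covariance (flat prior)
Vchol = np.linalg.cholesky(V)
beta = np.zeros(p)
draws = []
for it in range(2000):
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)     # z > 0 when y = 1
    hi = np.where(y == 1, np.inf, -mu)      # z < 0 when y = 0
    z = mu + truncnorm.rvs(lo, hi, random_state=rng)
    beta = V @ (X.T @ z) + Vchol @ rng.standard_normal(p)
    if it >= 500:                           # discard burn-in
        draws.append(beta)
print(np.mean(draws, axis=0))               # approximately beta_true
```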
Keshavarzi, Sareh; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Pakfetrat, Maryam
2012-01-01
BACKGROUND. In many studies with longitudinal data, time-dependent covariates can only be measured intermittently (not at all observation times), and this presents difficulties for standard statistical analyses. This situation is common in medical studies, and methods that deal with this challenge would be useful. METHODS. In this study, we applied seemingly unrelated regression (SUR) based models, with respect to each observation time in longitudinal data with intermittently observed time-dependent covariates, and further compared these models with mixed-effect regression models (MRMs) under three classic imputation procedures. Simulation studies were performed to compare the finite-sample properties of the estimated coefficients for different modeling choices. RESULTS. In general, the proposed models performed well in the presence of intermittently observed time-dependent covariates. However, when we considered only the observed values of the covariate without any imputation, the resulting biases were greater. The performance of the proposed SUR-based models was nearly similar to that of MRM with classic imputation methods, with approximately equal amounts of bias and MSE. CONCLUSION. The simulation study suggests that the SUR-based models work as efficiently as MRM in the case of intermittently observed time-dependent covariates. Thus, they can be used as an alternative to MRM.
Cerezo, Javier; Aranda, Daniel; Avila Ferrer, Francisco J; Prampolini, Giacomo; Mazzeo, Giuseppe; Longhi, Giovanna; Abbate, Sergio; Santoro, Fabrizio
2018-06-01
We extend a recently proposed mixed quantum/classical method for computing the vibronic electronic circular dichroism (ECD) spectrum of molecules with different conformers, to cases where more than one hindered rotation is present. The method generalizes the standard procedure, based on the simple Boltzmann average of the vibronic spectra of the stable conformers, and includes the contribution of structures that sample all the accessible conformational space. It is applied to the simulation of the ECD spectrum of (S)-2,2,2-trifluoroanthrylethanol, a molecule with easily interconvertible conformers, whose spectrum exhibits a pattern of alternating positive and negative vibronic peaks. Results are in very good agreement with experiment and show that spectra averaged over all the sampled conformational space can deviate significantly from the simple average of the contributions of the stable conformers. The present mixed quantum/classical method is able to capture the effect of the nonlinear dependence of the rotatory strength on the molecular structure and of the anharmonic couplings among the modes responsible for molecular flexibility. Despite its computational cost, the procedure is still affordable and promises to be useful in all cases where the ECD shape arises from a subtle balance between vibronic effects and conformational variety. © 2018 Wiley Periodicals, Inc.
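The "standard procedure" this method generalizes is a Boltzmann-weighted average of per-conformer spectra. A minimal sketch of that baseline follows; the energies and band shapes are illustrative placeholders, not data for the molecule studied.

```python
# Boltzmann-weighted average of per-conformer spectra (the classical baseline).
import numpy as np

kT = 0.5924847                       # k_B * T in kcal/mol at 298.15 K
E = np.array([0.0, 0.4, 1.1])        # relative conformer energies, kcal/mol (toy)
w = np.exp(-E / kT)
w /= w.sum()                         # Boltzmann populations

grid = np.linspace(250, 400, 500)                     # wavelength, nm
spectra = np.array([s * np.exp(-0.5 * ((grid - c) / 10) ** 2)
                    for c, s in [(300, 1.0), (310, -0.6), (330, 0.8)]])

ecd_avg = w @ spectra                # population-weighted spectrum
print(w.round(3), ecd_avg.shape)
```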
Toward simulating complex systems with quantum effects
NASA Astrophysics Data System (ADS)
Kenion-Hanrath, Rachel Lynn
Quantum effects like tunneling, coherence, and zero point energy often play a significant role in phenomena on the scales of atoms and molecules. However, the exact quantum treatment of a system scales exponentially with dimensionality, making it impractical for characterizing reaction rates and mechanisms in complex systems. An ongoing effort in the field of theoretical chemistry and physics is extending scalable, classical trajectory-based simulation methods capable of capturing quantum effects to describe dynamic processes in many-body systems; in the work presented here we explore two such techniques. First, we detail an explicit electron, path integral (PI)-based simulation protocol for predicting the rate of electron transfer in condensed-phase transition metal complex systems. Using a PI representation of the transferring electron and a classical representation of the transition metal complex and solvent atoms, we compute the outer sphere free energy barrier and dynamical recrossing factor of the electron transfer rate while accounting for quantum tunneling and zero point energy effects. We are able to achieve this employing only a single set of force field parameters to describe the system rather than parameterizing along the reaction coordinate. Following our success in describing a simple model system, we discuss our next steps in extending our protocol to technologically relevant materials systems. The latter half focuses on the Mixed Quantum-Classical Initial Value Representation (MQC-IVR) of real-time correlation functions, a semiclassical method which has demonstrated its ability to "tune" between quantum- and classical-limit correlation functions while maintaining dynamic consistency. Specifically, this is achieved through a parameter that determines the quantumness of individual degrees of freedom. Here, we derive a semiclassical correction term for the MQC-IVR to systematically characterize the error introduced by different choices of simulation parameters, and demonstrate the ability of this approach to optimize MQC-IVR simulations.
PDT: special cases in front of legal regulations
NASA Astrophysics Data System (ADS)
Fischer, E.; Wegner, A.; Pfeiler, T.; Mertz, M.
2002-10-01
Introduction: The classic indication for photodynamic therapy (PDT) in ophthalmology is currently represented by classic subfoveal choroidal neovascularisation (CNV) due to age-related macular degeneration (AMD). PDT is a method that almost selectively causes endothelial damage in neovascular lesions, followed by vascular occlusion and involution of the CNV. The mechanistic aspect suggests that non-AMD-related choroidal neovascularisations might also benefit from PDT. PDT in AMD: Within the German health system, PDT indications follow criteria based on the inclusion criteria of the TAP studies. For instance, the CNV should be predominantly classic and located under the center of the foveal avascular zone. In the diagnosis and follow-up of exudative AMD, visual acuity measurements and fluorescein angiography are the established parameters. Retinal thickness analyzer (RTA) measurements might give further information. Before PDT, they show a significant retinal thickening due to intra- and subretinal exudation. Following PDT, early RTA follow-ups show a clear decrease in retinal thickening accompanied by increasing or stable acuity. PDT in CNV of origins other than AMD: New studies support a new spectrum of indications for PDT, hopefully leading to general cost reimbursement for patients. PDT should be viewed as a general method for vascular occlusion and does not represent a causal therapy for progressive exudative AMD. We present patients with CNV due to pathologic myopia, angioid streaks and POHS. Conclusion: The selective vascular occlusion caused by PDT, besides CNV associated with AMD and pathologic myopia, may also allow the treatment of choroidal neovascularisations based on other entities. Careful individual evaluation of such cases is recommended. Despite this wide array of possible indications, cost reimbursement has been limited to classic subfoveal CNV in AMD, although single-case reimbursements for choroidal neovascular lesions due to pathologic myopia have been observed.
Soley, Micheline B; Markmann, Andreas; Batista, Victor S
2018-06-12
We introduce the so-called "Classical Optimal Control Optimization" (COCO) method for global energy minimization based on the implementation of the diffeomorphic modulation under observable-response-preserving homotopy (DMORPH) gradient algorithm. A probe particle with time-dependent mass m(t;β) and dipole μ(r,t;β) is evolved classically on the potential energy surface V(r) coupled to an electric field E(t;β), as described by the time-dependent density of states represented on a grid, or otherwise as a linear combination of Gaussians generated by the k-means clustering algorithm. Control parameters β defining m(t;β), μ(r,t;β), and E(t;β) are optimized by following the gradients of the energy with respect to β, adapting them to steer the particle toward the global minimum energy configuration. We find that the resulting COCO algorithm is capable of resolving near-degenerate states separated by large energy barriers and successfully locates the global minima of golf potentials on flat and rugged surfaces, previously explored for testing quantum annealing methodologies and the quantum optimal control optimization (QuOCO) method. Preliminary results show successful energy minimization of multidimensional Lennard-Jones clusters. Beyond the analysis of energy minimization in the specific model systems investigated, we anticipate COCO should be valuable for solving minimization problems in general, including optimization of parameters in applications to machine learning and molecular structure determination.
Predicting Protein-Protein Interaction Sites with a Novel Membership Based Fuzzy SVM Classifier.
Sriwastava, Brijesh K; Basu, Subhadip; Maulik, Ujjwal
2015-01-01
Predicting residues that participate in protein-protein interactions (PPI) helps to identify which amino acids are located at the interface. In this paper, we show that the performance of the classical support vector machine (SVM) algorithm can further be improved with the use of a custom-designed fuzzy membership function, for the partner-specific PPI interface prediction problem. We evaluated the performances of both classical SVM and fuzzy SVM (F-SVM) on the PPI databases of three different model proteomes of Homo sapiens, Escherichia coli and Saccharomyces cerevisiae and calculated the statistical significance of the developed F-SVM over the classical SVM algorithm. We also compared our performance with the available state-of-the-art fuzzy methods in this domain and observed significant performance improvements. To predict interaction sites in protein complexes, the local composition of amino acids together with their physico-chemical characteristics are used, where the F-SVM based prediction method exploits the membership function for each pair of sequence fragments. The average F-SVM performance (area under the ROC curve) on the test samples in the 10-fold cross-validation experiment is measured as 77.07, 78.39, and 74.91 percent for the aforementioned organisms, respectively. Performances on independent test sets are obtained as 72.09, 73.24 and 82.74 percent, respectively. The software is available for free download from http://code.google.com/p/cmater-bioinfo.
A Parallel and Incremental Approach for Data-Intensive Learning of Bayesian Networks.
Yue, Kun; Fang, Qiyu; Wang, Xiaoling; Li, Jin; Liu, Weiyi
2015-12-01
Bayesian network (BN) has been adopted as the underlying model for representing and inferring uncertain knowledge. As the basis of realistic applications centered on probabilistic inferences, learning a BN from data is a critical subject of machine learning, artificial intelligence, and big data paradigms. Currently, it is necessary to extend the classical methods for learning BNs with respect to data-intensive computing or in cloud environments. In this paper, we propose a parallel and incremental approach for data-intensive learning of BNs from massive, distributed, and dynamically changing data by extending the classical scoring and search algorithm and using MapReduce. First, we adopt the minimum description length as the scoring metric and give the two-pass MapReduce-based algorithms for computing the required marginal probabilities and scoring the candidate graphical model from sample data. Then, we give the corresponding strategy for extending the classical hill-climbing algorithm to obtain the optimal structure, as well as that for storing a BN by
Codner, Gemma F; Lindner, Loic; Caulder, Adam; Wattenhofer-Donzé, Marie; Radage, Adam; Mertz, Annelyse; Eisenmann, Benjamin; Mianné, Joffrey; Evans, Edward P; Beechey, Colin V; Fray, Martin D; Birling, Marie-Christine; Hérault, Yann; Pavlovic, Guillaume; Teboul, Lydia
2016-08-05
Karyotypic integrity is essential for the successful germline transmission of alleles mutated in embryonic stem (ES) cells. Classical methods for the identification of aneuploidy involve cytological analyses that are time consuming and require rare expertise to identify mouse chromosomes. As part of the International Mouse Phenotyping Consortium, we gathered data from over 1,500 ES cell clones and found that the germline transmission (GLT) efficiency of clones is compromised when over 50 % of cells harbour chromosome number abnormalities. In JM8 cells, chromosomes 1, 8, 11 or Y displayed copy number variation most frequently, whilst the remainder generally remain unchanged. We developed protocols employing droplet digital polymerase chain reaction (ddPCR) to accurately quantify the copy number of these four chromosomes, allowing efficient triage of ES clones prior to microinjection. We verified that assessments of aneuploidy, and thus decisions regarding the suitability of clones for microinjection, were concordant between classical cytological and ddPCR-based methods. Finally, we improved the method to include assay multiplexing so that two unstable chromosomes are counted simultaneously (and independently) in one reaction, to enhance throughput and further reduce the cost. We validated a PCR-based method as an alternative to classical karyotype analysis. This technique enables laboratories that are non-specialist, or work with large numbers of clones, to precisely screen ES cells for the most common aneuploidies prior to microinjection to ensure the highest level of germline transmission potential. The application of this method allows early exclusion of aneuploid ES cell clones in the ES cell to mouse conversion process, thus improving the chances of obtaining germline transmission and reducing the number of animals used in failed microinjection attempts. This method can be applied to any other experiments that require accurate analysis of the genome for copy number variation (CNV).
Modelling Of Flotation Processes By Classical Mathematical Methods - A Review
NASA Astrophysics Data System (ADS)
Jovanović, Ivana; Miljanović, Igor
2015-12-01
Flotation process modelling is not a simple task, mostly because of the process complexity, i.e. the presence of a large number of variables that (to a lesser or a greater extent) affect the final outcome of the separation of mineral particles based on the differences in their surface properties. Attempts toward the development of a quantitative predictive model that would fully describe the operation of an industrial flotation plant started in the middle of the past century and continue to this day. This paper gives a review of published research activities directed toward the development of flotation models based on classical mathematical rules. The description and systematization of classical flotation models were performed according to the available references, with emphasis exclusively on the modelling of the flotation process, regardless of the model's application in a certain control system. In accordance with contemporary considerations, models were classified as empirical, probabilistic, kinetic and population-balance types. Each model type is presented through the aspects of flotation modelling at the macro and micro process levels.
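Among the kinetic models reviewed, the classical first-order form R(t) = R_inf (1 - exp(-k t)) is the standard workhorse. A hedged sketch of fitting it to cumulative recovery data follows; the data points are illustrative, not from the review.

```python
# Fit the classical first-order flotation kinetics model to recovery data.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, r_inf, k):
    return r_inf * (1.0 - np.exp(-k * t))

t = np.array([0.5, 1, 2, 4, 8, 12])          # flotation time, min (assumed)
r = np.array([22, 38, 58, 76, 88, 91])       # cumulative recovery, % (assumed)

(r_inf, k), _ = curve_fit(first_order, t, r, p0=(90.0, 0.5))
print(f'R_inf = {r_inf:.1f} %, k = {k:.2f} 1/min')
```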
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mastromatteo, Michael; Jackson, Bret, E-mail: jackson@chem.umass.edu
Electronic structure methods based on density functional theory are used to construct a reaction path Hamiltonian for CH4 dissociation on the Ni(100) and Ni(111) surfaces. Both quantum and quasi-classical trajectory approaches are used to compute dissociative sticking probabilities, including all molecular degrees of freedom and the effects of lattice motion. Both approaches show a large enhancement in sticking when the incident molecule is vibrationally excited, and both can reproduce the mode specificity observed in experiments. However, the quasi-classical calculations significantly overestimate the ground state dissociative sticking at all energies, and the magnitude of the enhancement in sticking with vibrational excitation is much smaller than that computed using the quantum approach or observed in the experiments. The origin of this behavior is an unphysical flow of zero point energy from the nine normal vibrational modes into the reaction coordinate, giving large values for reaction at energies below the activation energy. Perturbative assumptions made in the quantum studies are shown to be accurate at all energies studied.
NASA Astrophysics Data System (ADS)
Hooper, James; Ismail, Arif; Giorgi, Javier B.; Woo, Tom K.
2010-06-01
A genetic algorithm (GA)-inspired method to effectively map out low-energy configurations of doped metal oxide materials is presented. Specialized mating and mutation operations that do not alter the identity of the parent metal oxide have been incorporated to efficiently sample the metal dopant and oxygen vacancy sites. The search algorithms have been tested on lanthanide-doped ceria (L = Sm, Gd, Lu) with various dopant concentrations. Using both classical and first-principles density-functional-theory (DFT) potentials, we have shown that the methodology reproduces the results of recent systematic searches of doped ceria at low concentrations (3.2% L2O3) and identifies low-energy structures of concentrated samarium-doped ceria (3.8% and 6.6% L2O3) which relate to the experimental and theoretical findings published thus far. We introduce a tandem classical/DFT GA algorithm in which an inexpensive classical potential is first used to generate a fit gene pool of structures to enhance the overall efficiency of the computationally demanding DFT-based GA search.
Classical Swine Fever-An Updated Review.
Blome, Sandra; Staubach, Christoph; Henke, Julia; Carlson, Jolene; Beer, Martin
2017-04-21
Classical swine fever (CSF) remains one of the most important transboundary viral diseases of swine worldwide. The causative agent is CSF virus, a small, enveloped RNA virus of the genus Pestivirus. Based on partial sequences, three genotypes can be distinguished that do not, however, directly correlate with virulence. Depending on both virus and host factors, a wide range of clinical syndromes can be observed, and thus laboratory confirmation is mandatory. To this end, both direct and indirect methods are utilized with an increasing degree of commercialization. Infections in both domestic pigs and wild boar are of great relevance, and wild boar are a reservoir host that sporadically transmits the virus to pig farms. Control strategies for epidemic outbreaks in free countries are mainly based on classical intervention measures, i.e., quarantine and strict culling of affected herds. In these countries, vaccination is only an emergency option. However, live vaccines are used for controlling the disease in endemically infected regions in Asia, Eastern Europe, the Americas, and some African countries. Here, we provide a concise, updated review on virus properties, clinical signs and pathology, epidemiology, pathogenesis and immune responses, diagnosis and vaccination possibilities.
Csf Based Non-Ground Points Extraction from LIDAR Data
NASA Astrophysics Data System (ADS)
Shen, A.; Zhang, W.; Shi, H.
2017-09-01
Region growing is a classical method of point cloud segmentation. Based on the idea of collecting pixels with similar properties to form regions, region growing is widely used in many fields such as medicine, forestry and remote sensing. The algorithm has two core problems: the selection of seed points and the setting of the growth constraints, of which the selection of seed points is the foundation. In this paper, we propose a CSF (Cloth Simulation Filtering) based method to extract non-ground seed points effectively. Experiments have shown that this method can obtain a good set of seed points compared with traditional methods. It is a new attempt at seed point extraction.
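A hedged sketch of the region-growing stage that such seed points feed into: grow a region from a seed by repeatedly absorbing neighbors within a radius whose height differs by less than a threshold. The CSF step itself (cloth simulation) is not reimplemented; the seed index, radius, and threshold are illustrative.

```python
# Seed-based region growing on a 3-D point cloud using a KD-tree.
import numpy as np
from scipy.spatial import cKDTree

def grow_region(points, seed_idx, radius=0.5, dz_max=0.2):
    tree = cKDTree(points)
    region, frontier = {seed_idx}, [seed_idx]
    while frontier:
        i = frontier.pop()
        for j in tree.query_ball_point(points[i], radius):
            if j not in region and abs(points[j, 2] - points[i, 2]) < dz_max:
                region.add(j)           # neighbor passes the growth constraint
                frontier.append(j)
    return np.array(sorted(region))

pts = np.random.default_rng(5).uniform(0, 10, (2000, 3))
pts[:, 2] *= 0.05                           # mostly flat cloud (toy)
print(grow_region(pts, seed_idx=0).size)    # points absorbed from seed 0
```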
Stability of rigid rotors supported by air foil bearings: Comparison of two fundamental approaches
NASA Astrophysics Data System (ADS)
Larsen, Jon S.; Santos, Ilmar F.; von Osmanski, Sebastian
2016-10-01
High speed direct drive motors enable the use of Air Foil Bearings (AFB) in a wide range of applications due to the elimination of gear forces. Unfortunately, AFB supported rotors are lightly damped, and an accurate prediction of their Onset Speed of Instability (OSI) is therefore important. This paper compares two fundamental methods for predicting the OSI: one is based on a nonlinear time domain simulation and the other on a linearised frequency domain method and a perturbation of the Reynolds equation. Both methods are based on equivalent models and should predict similar results. Significant discrepancies are observed, leading to the question: is the classical frequency domain method sufficiently accurate? The discrepancies and possible explanations are discussed in detail.
Quantum-enhanced feature selection with forward selection and backward elimination
NASA Astrophysics Data System (ADS)
He, Zhimin; Li, Lvzhou; Huang, Zhiming; Situ, Haozhen
2018-07-01
Feature selection is a well-known preprocessing technique in machine learning, which can remove irrelevant features to improve the generalization capability of a classifier and reduce training and inference time. However, feature selection is time-consuming, particularly for applications that have thousands of features, such as image retrieval, text mining and microarray data analysis. It is therefore crucial to accelerate the feature selection process. We propose a quantum version of wrapper-based feature selection, which converts a classical feature selection to its quantum counterpart. It is valuable for machine learning on quantum computers. In this paper, we focus on two popular kinds of feature selection methods, i.e., wrapper-based forward selection and backward elimination. The proposed feature selection algorithm can quadratically accelerate the classical one.
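For reference, the classical wrapper-based forward selection that the proposed quantum routine accelerates greedily adds, at each round, the feature whose inclusion maximizes cross-validated accuracy. A sketch using scikit-learn (dataset and classifier are illustrative choices):

```python
# Classical wrapper-based forward feature selection with cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000)

selected, remaining = [], list(range(X.shape[1]))
best_score = 0.0
while remaining:
    scores = [(cross_val_score(model, X[:, selected + [f]], y, cv=5).mean(), f)
              for f in remaining]
    score, f = max(scores)
    if score <= best_score:          # stop when no feature improves the CV score
        break
    best_score = score
    selected.append(f)
    remaining.remove(f)
print(selected, round(best_score, 3))
```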
Threshold quantum secret sharing based on single qubit
NASA Astrophysics Data System (ADS)
Lu, Changbin; Miao, Fuyou; Meng, Keju; Yu, Yue
2018-03-01
Based on a unitary phase-shift operation on a single qubit in association with Shamir's (t, n) secret sharing, a (t, n) threshold quantum secret sharing scheme (or (t, n)-QSS) is proposed to share both classical information and quantum states. The scheme uses decoy photons to prevent eavesdropping and employs the secret in Shamir's scheme as the private value to guarantee the correctness of secret reconstruction. Analyses show it is resistant to the typical intercept-and-resend attack, entangle-and-measure attack, and participant attacks such as the entanglement-swapping attack. Moreover, it is easier to realize physically and more practical in applications than related schemes. By the method in our scheme, new (t, n)-QSS schemes can be easily constructed using other classical (t, n) secret sharing schemes.
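The classical component of the scheme is standard Shamir (t, n) secret sharing: shares are points on a random degree-(t-1) polynomial over a prime field, and any t of them recover the secret by Lagrange interpolation at x = 0. A minimal sketch (the quantum phase-shift layer is not modeled here):

```python
# Shamir (t, n) secret sharing over a prime field.
import random

P = 2**61 - 1                                   # a Mersenne prime field

def make_shares(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P           # Lagrange basis evaluated at x = 0
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
print(reconstruct(shares[:3]))                  # any 3 of the 5 shares suffice
```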
ERIC Educational Resources Information Center
Hester, Yvette
Least squares methods are sophisticated mathematical curve fitting procedures used in all classical parametric methods. The linear least squares approximation is most often associated with finding the "line of best fit" or the regression line. Since all statistical analyses are correlational and all classical parametric methods are least…
An opinion formation based binary optimization approach for feature selection
NASA Astrophysics Data System (ADS)
Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo
2018-02-01
This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed optimization technique mimics the human-human interaction mechanism based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact using an underlying interaction network structure and reach consensus in their opinions, while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high dimensional datasets reveal that the proposed algorithm outperforms the others.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Everett, W.R.; Rechnitz, G.A.
1999-01-01
A mini review of enzyme-based electrochemical biosensors for inhibition analysis of organophosphorus and carbamate pesticides is presented. The discussion covers the most recent literature on advances in detection limits, selectivity and real sample analysis. Recent reviews on the monitoring of pesticides and their residues suggest that the classical analytical techniques of gas and liquid chromatography are the most widely used methods of detection. These techniques, although very accurate in their determinations, can be quite time consuming and expensive and usually require extensive sample clean-up and preconcentration. For these and many other reasons, the classical techniques are very difficult to adapt for field use. Numerous researchers, in the past decade, have developed and improved biosensors for use in pesticide analysis. This mini review focuses on recent advances in enzyme-based electrochemical biosensors for the determination of organophosphorus and carbamate pesticides.
Bidargaddi, Niranjan P; Chetty, Madhu; Kamruzzaman, Joarder
2008-06-01
Profile hidden Markov models (HMMs) based on classical HMMs have been widely applied for protein sequence identification. The formulation of the forward and backward variables in profile HMMs is made under the statistical independence assumption of probability theory. We propose a fuzzy profile HMM to overcome the limitations of that assumption and to achieve an improved alignment for protein sequences belonging to a given family. The proposed model fuzzifies the forward and backward variables by incorporating Sugeno fuzzy measures and Choquet integrals, and thus further extends the generalized HMM. Based on the fuzzified forward and backward variables, we propose a fuzzy Baum-Welch parameter estimation algorithm for profiles. The strong correlations and the sequence preferences involved in protein structures make this fuzzy architecture-based model a suitable candidate for building profiles of a given family, since fuzzy sets can handle uncertainties better than classical methods.
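For context, the classical forward recursion that the fuzzy model generalizes is, in the usual HMM notation (transition probabilities \(a_{ij}\), emission probabilities \(b_j(o_t)\)):

\[
\alpha_t(j) \;=\; b_j(o_t)\sum_{i} \alpha_{t-1}(i)\, a_{ij},
\]

where the probabilistic sum over predecessor states is the step the fuzzy formulation replaces with a Choquet integral against a Sugeno measure.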
Murad, Havi; Kipnis, Victor; Freedman, Laurence S
2016-10-01
Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal-based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and an internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates, efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.
Comparison of Control Group Generating Methods.
Szekér, Szabolcs; Fogarassy, György; Vathy-Fogarassy, Ágnes
2017-01-01
Retrospective studies suffer from drawbacks such as selection bias. As the selection of the control group has a significant impact on the evaluation of results, it is very important to find a proper method for generating the most appropriate control group. In this paper we propose two nearest-neighbor-based control group selection methods that aim to achieve good matching between the individuals of the case and control groups. The effectiveness of the proposed methods is evaluated by runtime and accuracy tests, and the results are compared to the classical stratified sampling method.
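A minimal sketch of nearest-neighbor control selection of this kind (the variable names and the without-replacement rule are assumptions, not the paper's exact matching criteria):

    import numpy as np

    def select_controls(cases, pool):
        """cases, pool: (n, d) arrays of standardized covariates.

        Returns one pool index per case: the closest not-yet-used
        candidate, i.e. greedy matching without replacement.
        """
        available = list(range(len(pool)))
        chosen = []
        for case in cases:
            d = np.linalg.norm(pool[available] - case, axis=1)
            k = available[int(np.argmin(d))]
            chosen.append(k)
            available.remove(k)
        return chosen

Stratified sampling, the baseline named above, instead draws controls at random within strata defined by a few covariates, which matches distributions coarsely rather than individual-to-individual.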
A least-squares finite element method for incompressible Navier-Stokes problems
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan
1992-01-01
A least-squares finite element method, based on the velocity-pressure-vorticity formulation, is developed for solving steady incompressible Navier-Stokes problems. The method leads to a minimization problem rather than the saddle-point problem produced by the classic mixed method, and can thus accommodate equal-order interpolations. The method has no parameter to tune, and the associated algebraic system is symmetric and positive definite. Numerical results for the cavity flow at Reynolds numbers up to 10,000 and the backward-facing step flow at Reynolds numbers up to 900 are presented.
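In this setting the Navier-Stokes equations are first recast as a first-order system in velocity, pressure and vorticity; one common form of the least-squares functional is (a sketch of the standard velocity-pressure-vorticity functional, not necessarily the paper's exact weighting):

\[
J(\mathbf{u},p,\boldsymbol{\omega}) \;=\;
\big\|\,\mathbf{u}\cdot\nabla\mathbf{u} + \nabla p + \nu\,\nabla\times\boldsymbol{\omega} - \mathbf{f}\,\big\|_0^2
\;+\; \big\|\,\nabla\cdot\mathbf{u}\,\big\|_0^2
\;+\; \big\|\,\boldsymbol{\omega} - \nabla\times\mathbf{u}\,\big\|_0^2 .
\]

Because a sum of squared residuals is minimized, the discrete system is symmetric positive definite, which is what permits the equal-order interpolations noted above.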
Osada, Edward; Sośnica, Krzysztof; Borkowski, Andrzej; Owczarek-Wesołowska, Magdalena; Gromczak, Anna
2017-06-24
Terrestrial laser scanning is an efficient technique for providing highly accurate point clouds for various geoscience applications. The point clouds have to be transformed to a well-defined reference frame, such as the global Geodetic Reference System 1980. The transformation to the geocentric coordinate frame is based on estimating seven Helmert parameters using several GNSS (Global Navigation Satellite System) referencing points. This paper proposes a method for direct point cloud georeferencing that provides coordinates in the geocentric frame. The proposed method employs the vertical deflection from an external global Earth gravity model and thus demands a minimum number of GNSS measurements: it needs only two georeferencing points. The method can be helpful when the number of georeferencing GNSS points is limited, for instance in city corridors. Validation of the method in a field test reveals that the differences between classical georeferencing and the proposed method amount to at most 7 mm, with a standard deviation of 8 mm, for all three coordinate components. The proposed method may serve as an alternative for georeferencing laser scanning data, especially when the number of GNSS points is insufficient for classical methods.
Complexometric Determination of Mercury Based on a Selective Masking Reaction
ERIC Educational Resources Information Center
Romero, Mercedes; Guidi, Veronica; Ibarrolaza, Agustin; Castells, Cecilia
2009-01-01
In the first analytical chemistry course, students are introduced to the concepts of equilibrium in water solutions and classical (non-instrumental) analytical methods. Our teaching experience shows that "real samples" stimulate students' enthusiasm for the laboratory work. From this diagnostic, we implemented an optional activity at the end of…
Inferential Procedures for Correlation Coefficients Corrected for Attenuation.
ERIC Educational Resources Information Center
Hakstian, A. Ralph; And Others
1988-01-01
A model and computation procedure based on classical test score theory are presented for determining a correlation coefficient corrected for attenuation due to unreliability. Delta and Monte Carlo method applications are discussed. A power analysis revealed no serious loss in efficiency resulting from correction for attenuation. (TJH)
Equal Employment Legislation: Alternative Means of Compliance.
ERIC Educational Resources Information Center
Daum, Jeffrey W.
Alternative means of compliance available to organizations to bring their manpower uses into line with existing equal employment legislation are discussed in this paper. The first area addressed concerns the classical approach to selection and placement based on testing methods. The second area discussed reviews various nontesting techniques, such…
Great Performances: Creating Classroom-Based Assessment Tasks. Second Edition
ERIC Educational Resources Information Center
Shoemaker, Betty; Lewin, Larry
2011-01-01
Get an in-depth understanding of how to create fun, engaging, and challenging performance assessments that require students to elaborate on content and demonstrate mastery of skills. This update of an ASCD (Association for Supervision and Curriculum Development) classic includes new scoring methods, reading assessments, and insights on navigating…
Classical Trajectories and Quantum Spectra
NASA Technical Reports Server (NTRS)
Mielnik, Bogdan; Reyes, Marco A.
1996-01-01
A classical model of the Schrodinger wave packet is considered. The problem of finding the energy levels corresponds to a classical manipulation game. It leads to an approximate but non-perturbative method of finding the eigenvalues by exploring the bifurcations of classical trajectories. The role of squeezing turns out to be decisive in the generation of the discrete spectra.
Advanced multispectral dynamic thermography as a new tool for inspection of gas-fired furnaces
NASA Astrophysics Data System (ADS)
Pregowski, Piotr; Goleniewski, Grzegorz; Komosa, Wojciech; Korytkowski, Waldemar
2004-04-01
The main feature of the elaborated method is that the dynamic IR thermography (DIRT) is based on forming a single image consisting of the pixels of minimum (IMIN) or maximum (IMAX) value recorded during an adequately long sequence of thermograms, independently of the moment each value was captured. In this way, additive or suppressive interferences of a fluctuating character are bypassed. Such an "artificial thermogram", subsequently processed in the classic way, offers a quality impossible to achieve with a classic "one shot" method. Although preliminary, the results obtained clearly show the great potential of the method and confirm its validity in decreasing errors caused by fluctuating disturbances. In the case of gas-fired, and especially coal-fired, process furnaces, application of the presented solutions should significantly increase the reliability of IR thermography. With properly chosen optical filters and algorithms, the elaborated method offers new potential for testing temperature problems other than in tubes, for example the symmetry and efficiency of the furnace heaters.
Quantum Metrology beyond the Classical Limit under the Effect of Dephasing
NASA Astrophysics Data System (ADS)
Matsuzaki, Yuichiro; Benjamin, Simon; Nakayama, Shojun; Saito, Shiro; Munro, William J.
2018-04-01
Quantum sensors have the potential to outperform their classical counterparts. For classical sensing, the uncertainty of the estimate of the target field scales inversely with the square root of the measurement time T. By using quantum resources, this scaling can be improved to 1/T. However, as quantum states are susceptible to dephasing, it has not been clear whether a 1/T scaling can be achieved for measurement times longer than the coherence time. Here, we propose a scheme that estimates the amplitude of globally applied fields with an uncertainty scaling as 1/T for an arbitrary time scale under the effect of dephasing. We use one-way quantum-computing-based teleportation between qubits to prevent correlations between the quantum state and its local environment from building up, and show that such a teleportation protocol can suppress the local dephasing while the information from the target fields keeps growing. Our method has the potential to realize a quantum sensor with a sensitivity far beyond that of any classical sensor.
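The two scalings contrasted in the abstract can be written compactly as

\[
\delta\hat{\theta}_{\mathrm{classical}} \;\propto\; \frac{1}{\sqrt{T}},
\qquad
\delta\hat{\theta}_{\mathrm{quantum}} \;\propto\; \frac{1}{T},
\]

where T is the total measurement time; the claim is that the 1/T scaling survives local dephasing for arbitrarily long T under the teleportation protocol.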
Quantum and quasi-classical collisional dynamics of O2-Ar at high temperatures
NASA Astrophysics Data System (ADS)
Ulusoy, Inga S.; Andrienko, Daniil A.; Boyd, Iain D.; Hernandez, Rigoberto
2016-06-01
A hypersonic vehicle traveling at a high speed disrupts the distribution of internal states in the ambient flow and introduces a nonequilibrium distribution in the post-shock conditions. We investigate the vibrational relaxation in diatom-atom collisions in the range of temperatures between 1000 and 10 000 K by comparing results of extensive fully quantum-mechanical and quasi-classical simulations with available experimental data. The present paper simulates the interaction of molecular oxygen with argon as the first step in developing the aerothermodynamics models based on first principles. We devise a routine to standardize such calculations also for other scattering systems. Our results demonstrate very good agreement of vibrational relaxation time, derived from quantum-mechanical calculations with the experimental measurements conducted in shock tube facilities. At the same time, the quasi-classical simulations fail to accurately predict rates of vibrationally inelastic transitions at temperatures lower than 3000 K. This observation and the computational cost of adopted methods suggest that the next generation of high fidelity thermochemical models should be a combination of quantum and quasi-classical approaches.
NASA Astrophysics Data System (ADS)
Bonhommeau, David; Truhlar, Donald G.
2008-07-01
The photodissociation dynamics of ammonia upon excitation of the out-of-plane bending mode (mode ν2 with n2=0,…,6 quanta of vibration) in the Ã electronic state is investigated by means of several mixed quantum/classical methods, and the calculated final-state properties are compared to experiments. Five mixed quantum/classical methods are tested: one mean-field approach (the coherent switching with decay of mixing method), two surface-hopping methods [the fewest switches with time uncertainty (FSTU) and FSTU with stochastic decay (FSTU/SD) methods], and two surface-hopping methods with zero-point energy (ZPE) maintenance [the FSTU/SD+trajectory projection onto ZPE orbit (TRAPZ) and FSTU/SD+minimal TRAPZ (mTRAPZ) methods]. We found a qualitative difference between final NH2 internal energy distributions obtained for n2=0 and n2>1, as observed in experiments. Distributions obtained for n2=1 present an intermediate behavior between distributions obtained for smaller and larger n2 values. The dynamics is found to be highly electronically nonadiabatic with all these methods. NH2 internal energy distributions may have a negative energy tail when the ZPE is not maintained throughout the dynamics. The original TRAPZ method was designed to maintain ZPE in classical trajectories, but we find that it leads to unphysically high internal vibrational energies. The mTRAPZ method, which is new in this work and provides a general method for maintaining ZPE in either single-surface or multisurface trajectories, does not lead to unphysical results and is much less time consuming. The effect of maintaining ZPE in mixed quantum/classical dynamics is discussed in terms of agreement with experimental findings. The dynamics for n2=0 and n2=6 are also analyzed to reveal details not available from experiment, in particular, the time required for quenching of electronic excitation and the adiabatic energy gap and geometry at the time of quenching.
Long, Lijun; Zhao, Jun
2017-07-01
In this paper, the problem of adaptive neural output-feedback control is addressed for a class of multi-input multi-output (MIMO) switched uncertain nonlinear systems with unknown control gains. Neural networks (NNs) are used to approximate unknown nonlinear functions. In order to avoid the conservativeness caused by adopting a common observer for all subsystems, an MIMO NN switched observer is designed to estimate the unmeasurable states. A new switched observer-based adaptive neural control technique is then developed by exploiting the classical average dwell time (ADT) method, the backstepping method, and the Nussbaum gain technique. It effectively handles the obstacle posed by the coexistence of multiple Nussbaum-type function terms, and it improves the classical ADT method, since the exponential decline property of the Lyapunov functions for individual subsystems is no longer satisfied. It is shown that the proposed technique guarantees semiglobal uniform ultimate boundedness of all the signals in the closed-loop system under a class of switching signals with ADT, and that the tracking errors converge to a small neighborhood of the origin. The effectiveness of the approach is illustrated by its application to a two-inverted-pendulum system.
Complex network approach to classifying classical piano compositions
NASA Astrophysics Data System (ADS)
Xin, Chen; Zhang, Huishu; Huang, Jiping
2016-10-01
Complex networks have been regarded as a useful tool for handling systems with vague interactions, and numerous applications have arisen. In this paper we construct complex networks for 770 classical piano compositions of Mozart, Beethoven and Chopin based on musical note pitches and lengths. We find prominent distinctions among the network edges of different composers, and some stylized facts can be explained by parameters of the network structures and topologies. Further, we propose two classification methods for musical styles and genres based on the discovered distinctions. These methods are easy to implement and give sound results. This work suggests that complex networks could be a decent way to analyze the characteristics of musical notes, since they provide a deep view into the relationships among notes in musical compositions and evidence for classifying different composers, styles and genres of music.
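A network of the kind described can be sketched as follows (a minimal sketch using networkx; the node and edge conventions are assumptions, not the paper's exact construction):

    import networkx as nx

    def note_network(notes):
        """notes: sequence of (pitch, length) pairs from one composition.

        Distinct (pitch, length) pairs become nodes; consecutive notes
        are linked by directed edges weighted by transition counts.
        """
        g = nx.DiGraph()
        for a, b in zip(notes, notes[1:]):
            if g.has_edge(a, b):
                g[a][b]["weight"] += 1
            else:
                g.add_edge(a, b, weight=1)
        return g

Edge statistics of such graphs (degree distributions, weight profiles) are the kind of features on which composer and genre classification can then be based.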
Isogeometric analysis and harmonic stator-rotor coupling for simulating electric machines
NASA Astrophysics Data System (ADS)
Bontinck, Zeger; Corno, Jacopo; Schöps, Sebastian; De Gersem, Herbert
2018-06-01
This work proposes Isogeometric Analysis as an alternative to classical finite elements for simulating electric machines. Through the spline-based Isogeometric discretization it is possible to parametrize the circular arcs exactly, thereby avoiding any geometrical error in the representation of the air gap where a high accuracy is mandatory. To increase the generality of the method, and to allow rotation, the rotor and the stator computational domains are constructed independently as multipatch entities. The two subdomains are then coupled using harmonic basis functions at the interface which gives rise to a saddle-point problem. The properties of Isogeometric Analysis combined with harmonic stator-rotor coupling are presented. The results and performance of the new approach are compared to the ones for a classical finite element method using a permanent magnet synchronous machine as an example.
The coordinate-based meta-analysis of neuroimaging data.
Samartsidis, Pantelis; Montagna, Silvia; Nichols, Thomas E; Johnson, Timothy D
2017-01-01
Neuroimaging meta-analysis is an area of growing interest in statistics. The special characteristics of neuroimaging data render classical meta-analysis methods inapplicable and therefore new methods have been developed. We review existing methodologies, explaining the benefits and drawbacks of each. A demonstration on a real dataset of emotion studies is included. We discuss some still-open problems in the field to highlight the need for future research.
Using CAS to Solve Classical Mathematics Problems
ERIC Educational Resources Information Center
Burke, Maurice J.; Burroughs, Elizabeth A.
2009-01-01
Historically, calculus has displaced many algebraic methods for solving classical problems. This article illustrates an algebraic method for finding the zeros of polynomial functions that is closely related to Newton's method (devised in 1669, published in 1711), which is encountered in calculus. By exploring this problem, precalculus students…
Eigensystem analysis of classical relaxation techniques with applications to multigrid analysis
NASA Technical Reports Server (NTRS)
Lomax, Harvard; Maksymiuk, Catherine
1987-01-01
Classical relaxation techniques are related to numerical methods for solution of ordinary differential equations. Eigensystems for Point-Jacobi, Gauss-Seidel, and SOR methods are presented. Solution techniques such as eigenvector annihilation, eigensystem mixing, and multigrid methods are examined with regard to the eigenstructure.
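The quantities involved can be reproduced with a few lines of numpy (a minimal sketch under the standard splitting A = D + L + U; not code from the report):

    import numpy as np

    def iteration_matrices(A, omega=1.5):
        """Iteration matrices whose eigensystems govern convergence."""
        D = np.diag(np.diag(A))
        L = np.tril(A, -1)
        U = np.triu(A, 1)
        M_jacobi = np.linalg.solve(D, -(L + U))
        M_gauss_seidel = np.linalg.solve(D + L, -U)
        M_sor = np.linalg.solve(D + omega * L, (1 - omega) * D - omega * U)
        return M_jacobi, M_gauss_seidel, M_sor

    def spectral_radius(M):
        """Asymptotic error contraction factor per relaxation sweep."""
        return max(abs(np.linalg.eigvals(M)))

The eigenvectors of these matrices indicate which error components each method damps, which is the basis for the multigrid analysis mentioned above.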
Hamilton's Principle and Approximate Solutions to Problems in Classical Mechanics
ERIC Educational Resources Information Center
Schlitt, D. W.
1977-01-01
Shows how to use the Ritz method for obtaining approximate solutions to problems expressed in variational form directly from the variational equation. Application of this method to classical mechanics is given. (MLH)
NASA Astrophysics Data System (ADS)
Pascal, Christophe
2004-04-01
Stress inversion programs are nowadays frequently used in tectonic analysis. The purpose of this family of programs is to reconstruct the stress tensor characteristics from fault slip data acquired in the field or derived from earthquake focal mechanisms (i.e. inverse methods). Until now, little attention has been paid to direct methods (i.e. to determine fault slip directions from an inferred stress tensor). During the 1990s, the fast increase in resolution in 3D seismic reflection techniques made it possible to determine the geometry of subsurface faults with a satisfactory accuracy but not to determine precisely their kinematics. This recent improvement allows the use of direct methods. A computer program, namely SORTAN, is introduced. The program is highly portable on Unix platforms, straightforward to install and user-friendly. The computation is based on classical stress-fault slip relationships and allows for fast treatment of a set of faults and graphical presentation of the results (i.e. slip directions). In addition, the SORTAN program permits one to test the sensitivity of the results to input uncertainties. It is a complementary tool to classical stress inversion methods and can be used to check the mechanical consistency and the limits of structural interpretations based upon 3D seismic reflection surveys.
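The classical stress-to-slip step on which such a direct method rests can be sketched as follows (a minimal sketch of the Wallace-Bott resolved-shear computation; SORTAN's actual implementation is not reproduced here):

    import numpy as np

    def slip_direction(sigma, normal):
        """sigma: 3x3 stress tensor; normal: unit normal of the fault plane.

        Returns the unit shear-traction vector, the predicted slip
        direction under the assumption that slip parallels the resolved
        shear stress on the plane.
        """
        t = sigma @ normal                    # traction vector on the plane
        t_shear = t - (t @ normal) * normal   # remove the normal component
        return t_shear / np.linalg.norm(t_shear)

Repeating this for each interpreted fault plane and comparing predicted with observed slip senses is what allows the mechanical consistency checks described above.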
Stachelska, M A
2017-09-26
The aim of the present study was to establish a rapid and accurate real-time PCR method to detect pathogenic Yersinia enterocolitica in pork. Yersinia enterocolitica is considered a crucial zoonotic agent that can provoke disease in both humans and animals. The classical culture methods designed to detect Y. enterocolitica in food matrices are often very time-consuming. The chromosomal locus_tag CH49_3099 gene, which appears in pathogenic Y. enterocolitica strains, was applied as the DNA target for the 5' nuclease PCR protocol. The probe was labelled at the 5' end with the fluorescent reporter dye (FAM) and at the 3' end with the quencher dye (TAMRA). The real-time PCR cycling parameters included 41 cycles, and a Ct value higher than 40 constituted a negative result. The qualitative real-time PCR method developed for this study gave very specific and reliable results. The detection rate of locus_tag CH49_3099-positive Y. enterocolitica in 150 pig tonsils was 85% with PCR and 32% with the culture method. Both the real-time PCR results and the culture results were obtained from material enriched during overnight incubation. Raw pork meat samples were also examined: among 80 samples, 7 were positive with real-time PCR and 6 were positive with the classical culture method. The application of molecular techniques based on the analysis of DNA sequences, such as real-time PCR, enables detection of this pathogenic bacterium very rapidly and with higher specificity, sensitivity and reliability than classical culture methods.
Maggi, Maristella; Scotti, Claudia
2017-08-01
Single domain antibodies (sdAbs) are small antigen-binding domains derived from naturally occurring, heavy chain-only immunoglobulins isolated from camelids and sharks. They maintain the binding capability of full-length IgGs but with improved thermal stability and permeability, which justifies their scientific, medical and industrial interest. Several recombinant forms of sdAbs have been produced in different hosts and with different strategies. Here we present an optimized method for time-saving, high-yield production and extraction of a poly-histidine-tagged sdAb from Escherichia coli classical inclusion bodies. Protein expression and extraction were attempted using 4 different methods (e.g. autoinducing or IPTG-induced soluble expression, non-classical and classical inclusion bodies). The best method proved to be expression in classical inclusion bodies followed by urea-mediated protein extraction, which yielded 60-70 mg/l of bacterial culture. The method we describe can be of general interest for enhanced and efficient heterologous expression of sdAbs for research and industrial purposes. Copyright © 2017 Elsevier Inc. All rights reserved.
Paule-Mandel estimators for network meta-analysis with random inconsistency effects
Veroniki, Areti Angeliki; Law, Martin; Tricco, Andrea C.; Baker, Rose
2017-01-01
Network meta-analysis is used to simultaneously compare multiple treatments in a single analysis. However, network meta-analyses may exhibit inconsistency, where direct and different forms of indirect evidence are not in agreement with each other, even after allowing for between-study heterogeneity. Models for network meta-analysis with random inconsistency effects have the dual aim of allowing for inconsistencies and estimating average treatment effects across the whole network. To date, two classical estimation methods for fitting this type of model have been developed: a method of moments that extends DerSimonian and Laird's univariate method, and maximum likelihood estimation. However, the Paule and Mandel estimator is another recommended classical estimation method for univariate meta-analysis. In this paper, we extend the Paule and Mandel method so that it can be used to fit models for network meta-analysis with random inconsistency effects. We apply all three estimation methods to a variety of examples that have been used previously, and we also examine a challenging new dataset that is highly heterogeneous; we perform a simulation study based on this new example. We find that the proposed Paule and Mandel method performs satisfactorily and generally better than the previously proposed method of moments because it provides more accurate inferences. Furthermore, the Paule and Mandel method possesses some advantages over likelihood-based methods because it is both semiparametric and requires no convergence diagnostics. Although restricted maximum likelihood estimation remains the gold standard, the proposed methodology is a fully viable alternative to this and other estimation methods. PMID:28585257
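For context, the univariate Paule and Mandel estimator that the paper generalizes chooses the heterogeneity variance τ² so that the weighted Q statistic matches its expectation, Q(τ²) = k − 1. A minimal sketch (the Newton-type update is a standard choice, an assumption here, and not the paper's network extension):

    import numpy as np

    def paule_mandel(y, v, tol=1e-8, max_iter=100):
        """y: study effect estimates; v: their within-study variances."""
        y, v = np.asarray(y, float), np.asarray(v, float)
        k, tau2 = len(y), 0.0
        for _ in range(max_iter):
            w = 1.0 / (v + tau2)
            mu = np.sum(w * y) / np.sum(w)
            q = np.sum(w * (y - mu) ** 2)
            if abs(q - (k - 1)) < tol:
                break
            # Newton-type step; dQ/dtau2 = -sum(w**2 * (y - mu)**2)
            step = (q - (k - 1)) / np.sum(w ** 2 * (y - mu) ** 2)
            if tau2 + step <= 0.0:
                tau2 = 0.0   # estimator truncated at zero heterogeneity
                break
            tau2 += step
        return tau2, mu

Because only moments of the weighted residuals are matched, the estimator is semiparametric and needs no likelihood convergence diagnostics, the advantages noted above.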
Brown, Gary C.; Brown, Melissa M.; Brown, Heidi C.; Kindermann, Sylvia; Sharma, Sanjay
2007-01-01
Purpose To evaluate the comparability of articles in the peer-reviewed literature assessing the (1) patient value and (2) cost-utility (cost-effectiveness) associated with interventions for neovascular age-related macular degeneration (ARMD). Methods A search was performed in the National Library of Medicine database of 16 million peer-reviewed articles using the key words cost-utility, cost-effectiveness, value, verteporfin, pegaptanib, laser photocoagulation, ranibizumab, and therapy. All articles that used an outcome of quality-adjusted life-years (QALYs) were studied in regard to (1) percent improvement in quality of life, (2) utility methodology, (3) utility respondents, (4) types of costs included (eg, direct healthcare, direct nonhealthcare, indirect), (5) cost bases (eg, Medicare, National Health Service in the United Kingdom), and (6) study cost perspective (eg, government, societal, third-party insurer). To qualify as a value-based medicine analysis, the patient value had to be measured using the outcome of the QALYs conferred by respective interventions. As with value-based medicine analyses, patient-based time tradeoff utility analysis had to be utilized, patient utility respondents were necessary, and direct medical costs were used. Results Among 21 cost-utility analyses performed on interventions for neovascular macular degeneration, 15 (71%) met value-based medicine criteria. The 6 others (29%) were not comparable owing to (1) varying utility methodology, (2) varying utility respondents, (3) differing costs utilized, (4) differing cost bases, and (5) varying study perspectives. Among value-based medicine studies, laser photocoagulation confers a 4.4% value gain (improvement in quality of life) for the treatment of classic subfoveal choroidal neovascularization. Intravitreal pegaptanib confers a 5.9% value gain (improvement in quality of life) for classic, minimally classic, and occult subfoveal choroidal neovascularization, and photodynamic therapy with verteporfin confers a 7.8% to 10.7% value gain for the treatment of classic subfoveal choroidal neovascularization. Intravitreal ranibizumab therapy confers greater than a 15% value gain for the treatment of subfoveal occult and minimally classic subfoveal choroidal neovascularization. Conclusions The majority of cost-utility studies performed on interventions for neovascular macular degeneration are value-based medicine studies and thus are comparable. Value-based analyses of neovascular ARMD monotherapies demonstrate the power of value-based medicine to improve quality of care and concurrently maximize the efficacy of healthcare resource use in public policy. The comparability of value-based medicine cost-utility analyses has important implications for overall practice standards and public policy. The adoption of value-based medicine standards can greatly facilitate the goal of higher-quality care and maximize the best use of healthcare funds. PMID:18427606
Zhou, Yan; Cao, Hui
2013-01-01
We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis that is robust against component information loss. Raman spectral signals with low analyte concentration correlations were selected and used as substitutes for the unknown component information during the CLS calibration procedure. The number of selected signals was determined by using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set recorded from an experiment on analyte concentration determination using Raman spectroscopy. A 2-fold cross-validation with a Venetian blinds strategy was exploited to evaluate the predictive power of the proposed method, and one-way analysis of variance (ANOVA) was used to assess the difference in predictive power between the proposed method and the existing methods. Results indicated that the proposed method effectively increases the robustness of the traditional CLS model against component information loss, and that its predictive power is comparable to that of PLS or PCR.
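The plain CLS step that the ACLS method augments can be sketched as follows (a minimal numpy sketch of the standard two-step calibration; the augmentation with selected low-correlation spectral signals is not reproduced):

    import numpy as np

    def cls_fit(C, S):
        """C: (n_samples, n_components) concentrations;
        S: (n_samples, n_channels) measured spectra.
        Returns K, the estimated pure-component spectra, from S = C K."""
        K, *_ = np.linalg.lstsq(C, S, rcond=None)
        return K

    def cls_predict(K, s):
        """Estimate component concentrations for one new spectrum s."""
        c, *_ = np.linalg.lstsq(K.T, s, rcond=None)
        return c

CLS requires the concentration matrix C to cover every spectrally active component; appending substitute columns to C is how ACLS compensates when some component information is missing.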
NASA Astrophysics Data System (ADS)
Gerstmayr, Johannes; Irschik, Hans
2008-12-01
In finite element methods that are based on position and slope coordinates, representation of axial and bending deformation by means of an elastic line approach has become popular. Such beam and plate formulations, based on the so-called absolute nodal coordinate formulation, have not yet been verified sufficiently against analytical results or classical nonlinear rod theories. Examining the existing planar absolute nodal coordinate element, which uses a curvature-proportional bending strain expression, it turns out that the deformation does not fully agree with the solution of the geometrically exact theory and, even more serious, the normal force is incorrect. A correction based on the classical ideas of the extensible elastica and geometrically exact theories is applied, and consistent strain energy and bending moment relations are derived. The strain energy of the solid finite element formulation of the absolute nodal coordinate beam is based on the St. Venant-Kirchhoff material; therefore, the strain energy is derived for the latter case and compared to classical nonlinear rod theories. The error in the original absolute nodal coordinate formulation is documented by numerical examples. The example of a large-deformation cantilever beam shows that the normal force is incorrect with the previous approach, while perfect agreement between the absolute nodal coordinate formulation and the extensible elastica is gained when the proposed modifications are applied. The numerical examples show very good agreement of reference analytical and numerical solutions with the solutions of the proposed beam formulation for large-deformation pre-curved static and dynamic problems, including buckling and eigenvalue analysis. The resulting beam formulation does not employ rotational degrees of freedom and therefore has advantages over classical beam elements regarding energy-momentum conservation.
Silva, George; Poirot, Laurent; Galetto, Roman; Smith, Julianne; Montoya, Guillermo; Duchateau, Philippe; Pâques, Frédéric
2011-01-01
The importance of safer approaches for gene therapy has been underscored by a series of severe adverse events (SAEs) observed in patients involved in clinical trials for Severe Combined Immune Deficiency Disease (SCID) and Chronic Granulomatous Disease (CGD). While a new generation of viral vectors is in the process of replacing the classical gamma-retrovirus-based approach, a number of strategies have emerged based on non-viral vectorization and/or targeted insertion aimed at achieving safer gene transfer. Currently, these methods display lower efficacies than viral transduction, although many of them can yield more than 1% engineered cells in vitro. Nuclease-based approaches, wherein an endonuclease is used to trigger site-specific genome editing, can significantly increase the percentage of targeted cells. These methods therefore provide a real alternative to classical gene transfer as well as gene editing. However, the first endonuclease to reach the clinic is used not for gene transfer, but to inactivate a gene (CCR5) required for HIV infection. Here, we review these alternative approaches, with a special emphasis on meganucleases, a family of naturally occurring rare-cutting endonucleases, and speculate on their current and future potential. PMID:21182466
An online credit evaluation method based on AHP and SPA
NASA Astrophysics Data System (ADS)
Xu, Yingtao; Zhang, Ying
2009-07-01
Online credit evaluation is the foundation for the establishment of trust and for the management of risk between buyers and sellers in e-commerce. In this paper, a new credit evaluation method based on the analytic hierarchy process (AHP) and set pair analysis (SPA) is presented to determine the credibility of electronic commerce participants. It addresses some of the drawbacks found in classical credit evaluation methods and broadens the scope of current approaches. Both qualitative and quantitative indicators are considered in the proposed method, and an overall credit score is then obtained from the optimal perspective. In the end, a case analysis of China Garment Network is provided for illustrative purposes.
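The AHP weighting step can be sketched as follows (a minimal sketch of the standard principal-eigenvector method; the SPA component and the paper's indicator hierarchy are not reproduced):

    import numpy as np

    def ahp_weights(pairwise):
        """pairwise: positive reciprocal comparison matrix (a_ij = 1/a_ji).

        Returns the normalized principal-eigenvector weights and the
        consistency index CI = (lambda_max - n) / (n - 1), which flags
        incoherent pairwise judgments when it is large.
        """
        vals, vecs = np.linalg.eig(pairwise)
        i = int(np.argmax(vals.real))
        w = np.abs(vecs[:, i].real)
        n = pairwise.shape[0]
        ci = (vals[i].real - n) / (n - 1)
        return w / w.sum(), ci

The resulting weights combine the qualitative and quantitative indicators into the overall credit score mentioned above.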
Fourier analysis and signal processing by use of the Moebius inversion formula
NASA Technical Reports Server (NTRS)
Reed, Irving S.; Yu, Xiaoli; Shih, Ming-Tang; Tufts, Donald W.; Truong, T. K.
1990-01-01
A novel Fourier technique for digital signal processing is developed. This approach to Fourier analysis is based on the number-theoretic method of the Moebius inversion of series. The Fourier transform method developed is shown also to yield the convolution of two signals. A computer simulation shows that this method for finding Fourier coefficients is quite suitable for digital signal processing. It competes with the classical FFT (fast Fourier transform) approach in terms of accuracy, complexity, and speed.
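The number-theoretic identity underlying the method is Moebius inversion of series: if g(n) = Σ_{d|n} f(d), then f(n) = Σ_{d|n} μ(n/d) g(d). A minimal self-contained sketch (illustrating the identity only, not the authors' transform algorithm):

    def mobius(n):
        """Moebius function by trial division (adequate for small n)."""
        result, p = 1, 2
        while p * p <= n:
            if n % p == 0:
                n //= p
                if n % p == 0:
                    return 0            # squared prime factor => mu = 0
                result = -result
            p += 1
        return -result if n > 1 else result

    def moebius_invert(g, n):
        """Given g(n) = sum over divisors d of n of f(d), recover f(n)."""
        return sum(mobius(n // d) * g(d)
                   for d in range(1, n + 1) if n % d == 0)

In the Fourier setting, sums of this divisor form arise from sampling a periodic signal at integer submultiples of the period, which is what lets the inversion extract individual Fourier coefficients.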
Chen, Mohan; Vella, Joseph R.; Panagiotopoulos, Athanassios Z.; ...
2015-04-08
The structure and dynamics of liquid lithium are studied using two simulation methods: orbital-free (OF) first-principles molecular dynamics (MD), which employs OF density functional theory (DFT), and classical MD utilizing a second nearest-neighbor embedded-atom method potential. The properties we studied include the dynamic structure factor, the self-diffusion coefficient, the dispersion relation, the viscosity, and the bond angle distribution function. Our simulation results were compared to available experimental data when possible. Each method has distinct advantages and disadvantages. For example, OFDFT gives better agreement with experimental dynamic structure factors, yet is more computationally demanding than classical simulations. Classical simulations can access a broader temperature range and longer time scales. The combination of first-principles and classical simulations is a powerful tool for studying properties of liquid lithium.
Nechansky, A; Szolar, O H J; Siegl, P; Zinoecker, I; Halanek, N; Wiederkum, S; Kircheis, R
2009-05-01
The fully humanized Lewis-Y carbohydrate-specific monoclonal antibody (mAb) IGN311 is currently being tested in a passive immunotherapy approach in a clinical phase I trial, and regulatory requirements therefore demand qualified assays for product analysis. To demonstrate the functionality of its Fc-region, the capacity of IGN311 to mediate complement-dependent cytotoxicity (CDC) against human breast cancer cells was evaluated. The "classical" radioactive method using chromium-51 and a FACS-based assay were established and qualified according to ICH guidelines. Parameters evaluated were specificity, response function, bias, repeatability (intra-day precision), intermediate precision (across operators and days), and linearity (assay range). In the course of a fully nested design, a four-parameter logistic equation was identified as an appropriate calibration model for both methods. For the radioactive assay, the bias ranged from -6.1% to -3.6%, the intermediate precision for future means of duplicate measurements ranged from 12.5% to 15.9%, and the total error (beta-expectation tolerance interval) of the method was found to be <40%. For the FACS-based assay, the bias ranged from -8.3% to 0.6%, the intermediate precision for future means of duplicate measurements ranged from 4.2% to 8.0%, and the total error of the method was found to be <25%. The presented data demonstrate that the FACS-based CDC assay is more accurate than the radioactive assay. The elimination of radioactivity and the 'real-time' counting of apoptotic cells further justify the implementation of this method, which was subsequently applied to test the influence of storage at 4 degrees C and 25 degrees C ('stability testing') on the potency of the IGN311 drug product. The results demonstrate that the qualified functional assay represents a stability-indicating test method.
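The four-parameter logistic calibration model named above is conventionally parametrized as

\[
y \;=\; d + \frac{a - d}{1 + (x/c)^{b}},
\]

with a and d the lower and upper asymptotes, c the inflection concentration and b the slope factor (the exact parametrization used for the IGN311 assays is not given in the abstract).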
Atomic-Scale Lightning Rod Effect in Plasmonic Picocavities: A Classical View to a Quantum Effect.
Urbieta, Mattin; Barbry, Marc; Zhang, Yao; Koval, Peter; Sánchez-Portal, Daniel; Zabala, Nerea; Aizpurua, Javier
2018-01-23
Plasmonic gaps are known to produce nanoscale localization and enhancement of optical fields, providing small effective mode volumes of about a few hundred nm³. Atomistic quantum calculations based on time-dependent density functional theory reveal the effect of subnanometric localization of electromagnetic fields due to the presence of atomic-scale features at the interfaces of plasmonic gaps. Using a classical model, we explain this as a nonresonant lightning rod effect at the atomic scale that produces an extra enhancement over that of the plasmonic background. The near-field distribution of atomic-scale hot spots around atomic features is robust against dynamical screening and spill-out effects and follows the potential landscape determined by the electron density around the atomic sites. A detailed comparison of the field distribution around atomic hot spots from full quantum atomistic calculations and from the local classical approach considering the geometrical profile of the atoms' electronic density validates the use of a classical framework to determine the effective mode volume in these extreme subnanometric optical cavities. This finding is of practical importance for the community of surface-enhanced molecular spectroscopy and quantum nanophotonics, as it provides an adequate description of the local electromagnetic fields around atomic-scale features with use of simplified classical methods.
NASA Technical Reports Server (NTRS)
Barker, L. E., Jr.; Bowles, R. L.; Williams, L. H.
1973-01-01
High angular rates encountered in real-time flight simulation problems may require a more stable and accurate integration method than the classical methods normally used. A study was made to develop a general local linearization procedure for integrating dynamic system equations in real time on a digital computer. The procedure is specifically applied to the integration of the quaternion rate equations. For this application, results are compared to a classical second-order method. The local linearization approach is shown to have desirable stability characteristics and gives a significant improvement in accuracy over classical second-order integration methods.
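The local-linearization idea for the quaternion rate equations can be sketched as follows (a minimal sketch; the rate-matrix sign convention and the hold-rates-constant-per-frame assumption are ours, not necessarily the report's):

    import numpy as np
    from scipy.linalg import expm

    def omega_matrix(w):
        """4x4 rate matrix for body rates w = (p, q, r)."""
        p, q, r = w
        return np.array([[0.0,  -p,  -q,  -r],
                         [  p, 0.0,   r,  -q],
                         [  q,  -r, 0.0,   p],
                         [  r,   q,  -p, 0.0]])

    def quat_step(quat, w, h):
        """One frame: with w held constant over the step h, the rate
        equation q' = 0.5 * Omega(w) * q is linear in q and integrates
        exactly via the matrix exponential."""
        quat = expm(0.5 * h * omega_matrix(w)) @ quat
        return quat / np.linalg.norm(quat)   # renormalize against drift

Because the update is exact for constant rates, its stability does not degrade at the high angular rates that defeat low-order explicit schemes.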
Chen, Zhong; Liu, June; Li, Xiong
2017-01-01
A two-stage artificial neural network (ANN) based on a scalarization method is proposed for the bilevel biobjective programming problem (BLBOP). The induced set of the BLBOP is first expressed as the set of minimal solutions of a biobjective optimization problem by using a scalar approach, and then the whole efficient set of the BLBOP is derived by the proposed two-stage ANN exploring the induced set. In order to illustrate the proposed method, seven numerical examples are tested and compared with results in the classical literature. Finally, a practical problem is solved by the proposed algorithm. PMID:29312446
NASA Technical Reports Server (NTRS)
Franca, Leopoldo P.; Loula, Abimael F. D.; Hughes, Thomas J. R.; Miranda, Isidoro
1989-01-01
Adding to the classical Hellinger-Reissner formulation a residual form of the equilibrium equation, a new Galerkin/least-squares finite element method is derived. It fits within the framework of a mixed finite element method and is stable for rather general combinations of stress and velocity interpolations, including equal-order discontinuous stress and continuous velocity interpolations, which are unstable within the Galerkin approach. Error estimates are presented based on a generalization of the Babuska-Brezzi theory. Numerical results (not presented herein) have confirmed these estimates as well as the good accuracy and stability of the method.
Methodologies for Salmonella enterica subsp. enterica Subtyping: Gold Standards and Alternatives
Wattiau, Pierre; Boland, Cécile; Bertrand, Sophie
2011-01-01
For more than 80 years, subtyping of Salmonella enterica has been routinely performed by serotyping, a method in which surface antigens are identified based on agglutination reactions with specific antibodies. The serotyping scheme, which is continuously updated as new serovars are discovered, has generated over time a data set of the utmost significance, allowing long-term epidemiological surveillance of Salmonella in the food chain and in public health control. Conceptually, serotyping provides no information regarding the phyletic relationships inside the different Salmonella enterica subspecies. In epidemiological investigations, identification and tracking of salmonellosis outbreaks require the use of methods that can fingerprint the causative strains at a taxonomic level far more specific than the one achieved by serotyping. During the last 2 decades, alternative methods that could successfully identify the serovar of a given strain by probing its DNA have emerged, and molecular biology-based methods have been made available to address phylogeny and fingerprinting issues. At the same time, accredited diagnostics have become increasingly generalized, imposing stringent methodological requirements in terms of traceability and measurability. In these new contexts, the hand-crafted character of classical serotyping is being challenged, although it is widely accepted that classification into serovars should be maintained. This review summarizes and discusses modern typing methods, with a particular focus on those having potential as alternatives for classical serotyping or for subtyping Salmonella strains at a deeper level. PMID:21856826
Armstrong, M Stuart; Finn, Paul W; Morris, Garrett M; Richards, W Graham
2011-08-01
In a previous paper, we presented the ElectroShape method, which we used to achieve successful ligand-based virtual screening. It extended classical shape-based methods by applying them to the four-dimensional shape of the molecule, with partial charge used as the fourth dimension to capture electrostatic information. This paper extends the approach by using atomic lipophilicity (alogP) as an additional molecular property and validates it using the improved release 2 of the Directory of Useful Decoys (DUD). When alogP replaced partial charge, the enrichment results were slightly below those of ElectroShape, though still far better than those of purely shape-based methods. However, when alogP was added as a complement to partial charge, the resulting five-dimensional enrichments showed a clear improvement in performance. This demonstrates the utility of extending the ElectroShape virtual screening method by adding other atom-based descriptors.
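A descriptor of this family can be sketched in the ultrafast-shape-recognition style that ElectroShape generalizes (a minimal sketch; the reference points, the property scaling and the moment choices are illustrative assumptions, not the published descriptor):

    import numpy as np

    def moments(d):
        """First three moments of a distance distribution."""
        m1 = d.mean()
        m2 = d.std()
        m3 = np.cbrt(((d - m1) ** 3).mean())   # signed cube root of skew term
        return [m1, m2, m3]

    def electroshape_like(coords, charges, scale=25.0):
        """coords: (n_atoms, 3) positions; charges: (n_atoms,) partial charges.

        Atoms become points in 4-D (x, y, z, scale*charge); distance
        distributions to reference points are summarized by moments.
        Appending a scaled alogP column gives a 5-D variant of the
        kind discussed above.
        """
        pts = np.hstack([coords, scale * np.asarray(charges)[:, None]])
        centroid = pts.mean(axis=0)
        dc = np.linalg.norm(pts - centroid, axis=1)
        far = pts[np.argmax(dc)]               # atom farthest from centroid
        return np.array(moments(dc) + moments(np.linalg.norm(pts - far, axis=1)))

Molecules are then ranked by similarity of these fixed-length vectors, which is what makes the screening fast.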
Capomaccio, Stefano; Milanesi, Marco; Bomba, Lorenzo; Cappelli, Katia; Nicolazzi, Ezequiel L; Williams, John L; Ajmone-Marsan, Paolo; Stefanon, Bruno
2015-08-01
Genome-wide association studies (GWAS) have been widely applied to disentangle the genetic basis of complex traits. In cattle breeds, classical GWAS approaches with medium-density marker panels are far from conclusive, especially for complex traits. This is due to the intrinsic limitations of GWAS and the assumptions that are made to step from the association signals to the functional variations. Here, we applied a gene-based strategy to prioritize genotype-phenotype associations found for milk production and quality traits with classical approaches in three Italian dairy cattle breeds with different sample sizes (Italian Brown n = 745; Italian Holstein n = 2058; Italian Simmental n = 477). Although classical regression on single markers revealed only a single genome-wide significant genotype-phenotype association, for Italian Holstein, the gene-based approach identified specific genes in each breed that are associated with milk physiology and mammary gland development. As no standard method has yet been established to step from variation to functional units (i.e., genes), the strategy proposed here may contribute to revealing new genes that play significant roles in complex traits, such as those investigated here, amplifying low association signals using a gene-centric approach. © 2015 Stichting International Foundation for Animal Genetics.
Sumner, Isaiah; Iyengar, Srinivasan S
2007-10-18
We have introduced a computational methodology to study vibrational spectroscopy in clusters inclusive of critical nuclear quantum effects. This approach is based on the recently developed quantum wavepacket ab initio molecular dynamics method, which combines quantum wavepacket dynamics with ab initio molecular dynamics. The computational efficiency of the dynamical procedure is drastically improved (by several orders of magnitude) through the utilization of wavelet-based techniques combined with the previously introduced time-dependent deterministic sampling procedure to achieve stable, picosecond-length, quantum-classical dynamics of electrons and nuclei in clusters. The dynamical information is employed to construct a novel cumulative flux/velocity correlation function, where the wavepacket flux from the quantized particle is combined with classical nuclear velocities to obtain the vibrational density of states. The approach is demonstrated by computing the vibrational density of states of [Cl-H-Cl]-, inclusive of critical quantum nuclear effects, and our results are in good agreement with experiment. A general hierarchical procedure is also provided, based on electronic structure harmonic frequencies, classical ab initio molecular dynamics, computation of nuclear quantum-mechanical eigenstates, and quantum wavepacket ab initio dynamics, to understand vibrational spectroscopy in hydrogen-bonded clusters that display large degrees of anharmonicity.
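In the purely classical limit, the vibrational density of states referred to above reduces to the Fourier transform of the velocity autocorrelation function,

\[
I(\omega) \;\propto\; \int_{-\infty}^{\infty} e^{-i\omega t}\,\big\langle \mathbf{v}(0)\cdot\mathbf{v}(t) \big\rangle\, dt ,
\]

and the cumulative flux/velocity correlation function keeps this overall structure while replacing the quantized particle's classical velocity with its wavepacket flux (a sketch of the classical limit, not the authors' exact estimator).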
Signal Processing for Time-Series Functions on a Graph
2018-02-01
as filtering to functions supported on graphs. These methods can be applied to scalar functions with a domain that can be described by a fixed...classical signal processing such as filtering to account for the graph domain. This work essentially divides into 2 basic approaches: graph Laplacian-based filtering and weighted adjacency matrix-based filtering. In Shuman et al.,11 and elaborated in Bronstein et al.,13 filtering operators are
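Graph-Laplacian filtering, the first of the two approaches named, can be sketched as follows (a minimal numpy sketch; the report's operators are not reproduced):

    import numpy as np

    def laplacian_lowpass(W, x, keep):
        """W: symmetric weighted adjacency matrix; x: signal on the nodes;
        keep: number of low graph frequencies to retain."""
        L = np.diag(W.sum(axis=1)) - W      # combinatorial graph Laplacian
        _, U = np.linalg.eigh(L)            # eigenvectors = graph Fourier basis
        xhat = U.T @ x                      # graph Fourier transform
        xhat[keep:] = 0.0                   # discard high-frequency content
        return U @ xhat                     # inverse transform

The adjacency-matrix approach instead applies polynomials of W directly to x, avoiding the eigendecomposition at the cost of a less direct frequency interpretation.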
NASA Astrophysics Data System (ADS)
Mojahedi, Mahdi; Shekoohinejad, Hamidreza
2018-02-01
In this paper, the temperature distribution in a continuous and pulsed end-pumped Nd:YAG rod crystal is determined using nonclassical and classical heat conduction theories. To find the temperature distribution in the crystal, heat transfer differential equations with their boundary conditions are derived based on the non-Fourier model, and the temperature distribution of the crystal is obtained by an analytical method. Then, by transferring the non-Fourier differential equations to matrix equations using the finite element method, the temperature and stress at every point of the crystal are calculated in the time domain. Based on the results, the classical and nonclassical theories are compared with respect to rupture power values. In continuous end pumping with equal input powers, non-Fourier theory predicts greater temperature and stress than Fourier theory, and it shows that crystal rupture power decreases as the relaxation time increases. In contrast, under single rectangular pulsed end pumping with equal input power, Fourier theory indicates higher temperature and stress than non-Fourier theory, and the maximum temperature and stress decrease as the relaxation time increases.
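A standard non-Fourier model consistent with the relaxation-time dependence described above is the Cattaneo-Vernotte law (shown as a sketch; the paper's exact constitutive model may differ):

\[
\mathbf{q} + \tau\,\frac{\partial \mathbf{q}}{\partial t} = -k\,\nabla T
\quad\Longrightarrow\quad
\tau\,\frac{\partial^2 T}{\partial t^2} + \frac{\partial T}{\partial t} = \alpha\,\nabla^2 T + \text{(source terms)},
\]

where τ is the relaxation time; letting τ → 0 recovers the classical Fourier (parabolic) heat equation, which is why the two theories converge for slow heating.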
Innovative methods of knowledge transfer by multimedia library
NASA Astrophysics Data System (ADS)
Goanta, A. M.
2016-08-01
The present situation of teaching and learning new knowledge in the classroom is highly variable, depending on the specific topics concerned. If we analyze the manifold ways of teaching and learning at university level, we notice a very good combination of classical and modern methods. The first category includes classic chalk-and-blackboard teaching, together with the equally classical learning based on printed reference material. The second category includes books published as PDF or PPT [1], distributed on CD/DVD media. Since 2006 the author has been concerned with the transfer of information and knowledge through video files such as AVI, FLV or MPEG, using various means of transfer, from free distribution (via the Internet) to distribution involving minimal costs, i.e., on CD/DVD. Encouraged by the students' interest in this kind of teaching material, as proved by monitoring [2] the site http://www.cursuriuniversitarebraila.ugal.ro, the author has managed to publish, with an ISBN, the first video book in Romania, whose nonconformist feature is that chapters are located not by page but by the hour and minute of the recording at which they were filmed.
Huang, Chenyu
2014-01-01
Background: Joint scar contractures are characterized by tight bands of soft tissue that bridge the 2 ends of the joint like a web. Classical treatment methods such as Z-plasties are mainly based on 2-dimensional designs. Our square flap method is an alternative surgical method that restores the span of the web in a stereometric fashion, thereby reconstructing joint function. Methods: In total, 20 Japanese patients with joint scar contractures on the axillary (n = 10) or first digital web (n = 10) underwent square flap surgery. The maximum range of motion and commissure length were measured before and after surgery. A theoretical stereometric geometrical model of the square flap was established to compare it to the classical single (60 degree), 4-flap (45 degree), and 5-flap (60 degree) Z-plasties in terms of theoretical web reconstruction efficacy. Results: All cases achieved 100% contracture release. The maximum range of motion and web space improved after square flap surgery (P = 0.001). Stereometric geometrical modeling revealed that the standard square flap (α = 45 degree; β = 90 degree) yields a larger flap area, length/width ratio, and postsurgical commissure length than the Z-plasties. It can also be adapted by varying angles α and β, although certain angle thresholds must be met to obtain the stereometric advantages of this method. Conclusions: When used to treat joint scar contractures, the square flap method can fully span the web space in a stereometric manner, thus yielding a close-to-original shape and function. Compared with the classical Z-plasties, it also provides sufficient anatomical blood supply while imposing the least physiological tension on the adjacent skin. PMID:25289342
A methodology for modeling surface effects on stiff and soft solids
NASA Astrophysics Data System (ADS)
He, Jin; Park, Harold S.
2017-09-01
We present a computational method that can be applied to capture surface stress and surface tension-driven effects in both stiff, crystalline nanostructures, like size-dependent mechanical properties, and soft solids, like elastocapillary effects. We show that the method is equivalent to the classical Young-Laplace model. The method is based on converting surface tension and surface elasticity on a zero-thickness surface to an initial stress and corresponding elastic properties on a finite thickness shell, where the consideration of geometric nonlinearity enables capturing the out-of-plane component of the surface tension that results for curved surfaces through evaluation of the surface stress in the deformed configuration. In doing so, we are able to use commercially available finite element technology, and thus do not require consideration and implementation of the classical Young-Laplace equation. Several examples are presented to demonstrate the capability of the methodology for modeling surface stress in both soft solids and crystalline nanostructures.
A methodology for modeling surface effects on stiff and soft solids
NASA Astrophysics Data System (ADS)
He, Jin; Park, Harold S.
2018-06-01
We present a computational method that can be applied to capture surface stress and surface tension-driven effects in both stiff, crystalline nanostructures, like size-dependent mechanical properties, and soft solids, like elastocapillary effects. We show that the method is equivalent to the classical Young-Laplace model. The method is based on converting surface tension and surface elasticity on a zero-thickness surface to an initial stress and corresponding elastic properties on a finite thickness shell, where the consideration of geometric nonlinearity enables capturing the out-of-plane component of the surface tension that results for curved surfaces through evaluation of the surface stress in the deformed configuration. In doing so, we are able to use commercially available finite element technology, and thus do not require consideration and implementation of the classical Young-Laplace equation. Several examples are presented to demonstrate the capability of the methodology for modeling surface stress in both soft solids and crystalline nanostructures.
On the behavior of isolated and embedded carbon nano-tubes in a polymeric matrix
NASA Astrophysics Data System (ADS)
Rahimian-Koloor, Seyed Mostafa; Moshrefzadeh-Sani, Hadi; Mehrdad Shokrieh, Mahmood; Majid Hashemianzadeh, Seyed
2018-02-01
In the classical micromechanical method, the moduli of the reinforcement and the matrix are used to predict the stiffness of composites. However, using the classical micromechanical method to predict the stiffness of CNT/epoxy nanocomposites leads to overestimated results. One of the main reasons for this overestimation is that the method uses the stiffness of the isolated CNT and ignores the CNT nanoscale effect. In the present study, non-equilibrium molecular dynamics simulation was used to consider the influence of CNT length on the stiffness of the nanocomposites through the isothermal-isobaric ensemble. The results indicated that, due to nanoscale effects, the reinforcing efficiency of the embedded CNT is not constant and decreases as its length decreases. Based on the results, a relationship was derived that predicts the effective stiffness of an embedded CNT in terms of its length. It was shown that using this relationship leads to more accurate prediction of the elastic modulus of the nanocomposite, as validated against experimental results.
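For orientation only, the classical micromechanical estimate the authors start from can be written in its simplest Voigt (rule-of-mixtures) form as

\[ E_c = V_f\, E_f + (1 - V_f)\, E_m , \]

where V_f is the CNT volume fraction and E_f, E_m are the filler and matrix moduli. The overestimation discussed above enters through E_f; the paper's correction amounts to replacing the isolated-CNT stiffness with a length-dependent effective stiffness E_f(L) for the embedded tube. The exact functional form of E_f(L) is derived in the paper and is not reproduced here.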
Strong Similarity Measures for Ordered Sets of Documents in Information Retrieval.
ERIC Educational Resources Information Center
Egghe, L.; Michel, Christine
2002-01-01
Presents a general method to construct ordered similarity measures in information retrieval based on classical similarity measures for ordinary sets. Describes a test of some of these measures in an information retrieval system that extracted ranked document sets and discusses the practical usability of the ordered similarity measures. (Author/LRW)
Random Forest as a Predictive Analytics Alternative to Regression in Institutional Research
ERIC Educational Resources Information Center
He, Lingjun; Levine, Richard A.; Fan, Juanjuan; Beemer, Joshua; Stronach, Jeanne
2018-01-01
In institutional research, modern data mining approaches are seldom considered to address predictive analytics problems. The goal of this paper is to highlight the advantages of tree-based machine learning algorithms over classic (logistic) regression methods for data-informed decision making in higher education problems, and stress the success of…
The application of computational chemistry to lignin
Thomas Elder; Laura Berstis; Nele Sophie Zwirchmayr; Gregg T. Beckham; Michael F. Crowley
2017-01-01
Computational chemical methods have become an important technique in the examination of the structure and reactivity of lignin. The calculations can be based either on classical or quantum mechanics, with concomitant differences in computational intensity and size restrictions. The current paper will concentrate on results developed from the latter type of calculations...
NASA Astrophysics Data System (ADS)
Capannesi, Cecilia; Palchetti, Ilaria; Mascini, Marco
2000-12-01
The aim of the present work was to compare different techniques for evaluating the variation in the phenolic content of an extra-virgin olive oil with storage time and storage conditions. A disposable screen-printed sensor (SPE) was coupled with differential pulse voltammetry (DPV) to determine the phenolic fractions after extraction with glycine buffer; DPV parameters were chosen in order to study the oxidation peak of oleuropein, which was used as the reference compound. Moreover, a tyrosinase-based biosensor operating in organic solvent (hexane) was assembled, using an amperometric oxygen probe as transducer. Calibration curves were realised in flow injection analysis (FIA) using phenol as the substrate. Both of these methods are easy to operate, require either no extraction (biosensor) or only a rapid extraction procedure (SPE), and the analysis time is short (minutes). The results obtained with these two innovative procedures were compared with the classical spectrophotometric assay using the Folin-Ciocalteu reagent. Other extra-virgin olive oil quality parameters were investigated with classical methods in order to better define the alteration process, and the results are reported.
Reconstruction of fluorescence molecular tomography with a cosinoidal level set method.
Zhang, Xuanxuan; Cao, Xu; Zhu, Shouping
2017-06-27
Implicit shape-based reconstruction methods in fluorescence molecular tomography (FMT) are capable of achieving higher image clarity than image-based reconstruction methods. However, the implicit shape method suffers from a low convergence speed and performs unstably due to the utilization of gradient-based optimization methods. Moreover, the implicit shape method requires a priori information about the number of targets. A shape-based reconstruction scheme for FMT with a cosinoidal level set method is proposed in this paper. The Heaviside function in the classical implicit shape method is replaced with a cosine function, and the reconstruction can then be accomplished with the Levenberg-Marquardt method rather than gradient-based methods. As a result, a priori information about the number of targets is no longer required and the choice of step length is avoided. Numerical simulations and phantom experiments were carried out to validate the proposed method. Results of the proposed method show higher contrast-to-noise ratios and Pearson correlations than the implicit shape method and the image-based reconstruction method. Moreover, the number of iterations required by the proposed method is much smaller than for the implicit shape method. The proposed method performs more stably, provides a faster convergence speed than the implicit shape method, and achieves higher image clarity than the image-based reconstruction method.
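The paper's exact cosine function is not quoted in this abstract; as a hedged sketch, one common cosinoidal replacement for the sharp Heaviside H(φ) over a transition band of half-width ε is

\[ H_\varepsilon(\varphi) = \begin{cases} 0, & \varphi < -\varepsilon,\\ \tfrac{1}{2}\left[1 - \cos\!\left(\dfrac{\pi(\varphi + \varepsilon)}{2\varepsilon}\right)\right], & |\varphi| \le \varepsilon,\\ 1, & \varphi > \varepsilon. \end{cases} \]

A smooth, bounded step of this kind makes the forward model differentiable everywhere, which is what allows a damped Gauss-Newton scheme such as Levenberg-Marquardt to be used in place of gradient descent with a hand-tuned step length.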
Domanski, Dominik; Murphy, Leigh C.; Borchers, Christoph H.
2010-01-01
We have developed a phosphatase-based phosphopeptide quantitation (PPQ) method for determining phosphorylation stoichiometry in complex biological samples. This PPQ method is based on enzymatic dephosphorylation, combined with specific and accurate peptide identification and quantification by multiple reaction monitoring (MRM) detection with stable-isotope-labeled standard peptides. In contrast with the classical MRM methods for the quantitation of phosphorylation stoichiometry, the PPQ-MRM method needs only one non-phosphorylated SIS (stable-isotope-coded standard) and two analyses (one for the untreated and one for the phosphatase-treated sample), from which the expression and modification levels, and thus the % phosphorylation, can accurately be determined. In this manuscript, we compare the PPQ-MRM method with an MRM method without phosphatase, and demonstrate the application of these methods to the detection and quantitation of phosphorylation of the classic phosphorylated breast cancer biomarkers (ERα and HER2), and of phosphorylated RAF and ERK1, which also contain phosphorylation sites with important biological implications. Using synthetic peptides spiked into a complex protein digest, we were able to use our PPQ-MRM method to accurately determine the total phosphorylation stoichiometry on specific peptides, as well as the absolute amount of the peptide and phosphopeptide present. Analyses of samples containing ERα protein revealed that PPQ-MRM is capable of determining phosphorylation stoichiometry in proteins from cell lines, and is in good agreement with determinations obtained using the direct MRM approach in terms of phosphorylation and total protein amount. PMID:20524616
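A minimal worked reading of the two-run scheme, under the assumption that the phosphatase treatment is complete: the untreated run quantifies only the originally non-phosphorylated peptide, U, against the non-phosphorylated SIS, while the treated run quantifies the total peptide, T (non-phosphorylated plus dephosphorylated). The stoichiometry then follows as

\[ \%\,\text{phosphorylation} = \frac{T - U}{T} \times 100\% , \]

so that, for example, U = 30 fmol and T = 100 fmol would imply 70% site occupancy, with T itself giving the total protein-level amount.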
Deconstructing multivariate decoding for the study of brain function.
Hebart, Martin N; Baker, Chris I
2017-08-04
Multivariate decoding methods were developed originally as tools to enable accurate predictions in real-world applications. The realization that these methods can also be employed to study brain function has led to their widespread adoption in the neurosciences. However, prior to the rise of multivariate decoding, the study of brain function was firmly embedded in a statistical philosophy grounded on univariate methods of data analysis. In this way, multivariate decoding for brain interpretation grew out of two established frameworks: multivariate decoding for predictions in real-world applications, and classical univariate analysis based on the study and interpretation of brain activation. We argue that this led to two confusions, one reflecting a mixture of multivariate decoding for prediction or interpretation, and the other a mixture of the conceptual and statistical philosophies underlying multivariate decoding and classical univariate analysis. Here we attempt to systematically disambiguate multivariate decoding for the study of brain function from the frameworks it grew out of. After elaborating these confusions and their consequences, we describe six, often unappreciated, differences between classical univariate analysis and multivariate decoding. We then focus on how the common interpretation of what is signal and noise changes in multivariate decoding. Finally, we use four examples to illustrate where these confusions may impact the interpretation of neuroimaging data. We conclude with a discussion of potential strategies to help resolve these confusions in interpreting multivariate decoding results, including the potential departure from multivariate decoding methods for the study of brain function. Copyright © 2017. Published by Elsevier Inc.
A Review of Classical Methods of Item Analysis.
ERIC Educational Resources Information Center
French, Christine L.
Item analysis is a very important consideration in the test development process. It is a statistical procedure to analyze test items that combines methods used to evaluate the important characteristics of test items, such as difficulty, discrimination, and distractibility of the items in a test. This paper reviews some of the classical methods for…
Nicolás, Paula; Lassalle, Verónica L; Ferreira, María L
2017-02-01
The aim of this manuscript was to study the application of a new method of protein quantification to Candida antarctica lipase B commercial solutions. Error sources associated with the traditional Bradford technique were demonstrated. Eight biocatalysts based on C. antarctica lipase B (CALB) immobilized onto magnetite nanoparticles were used. Magnetite nanoparticles were coated with chitosan (CHIT) and modified with glutaraldehyde (GLUT) and aminopropyltriethoxysilane (APTS). CALB was then adsorbed on the modified support. The proposed novel protein quantification method included the determination of sulfur (from protein in the CALB solution) by means of atomic emission by inductively coupled plasma (AE-ICP). Four different protocols were applied, combining AE-ICP and classical Bradford assays, besides carbon, hydrogen and nitrogen (CHN) analysis. The calculated error in protein content using the "classic" Bradford method with bovine serum albumin as standard ranged from 400 to 1200% when protein in the CALB solution was quantified. These errors were calculated considering as "true protein content values" the results for the amount of immobilized protein obtained with the improved method. The optimum quantification procedure involved the combination of the Bradford method, ICP and CHN analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
Joint Processing of Envelope Alignment and Phase Compensation for Isar Imaging
NASA Astrophysics Data System (ADS)
Chen, Tao; Jin, Guanghu; Dong, Zhen
2018-04-01
Range envelope alignment and phase compensation are split into two isolated parts in the classical methods of translational motion compensation in Inverse Synthetic Aperture Radar (ISAR) imaging. In the classic method of rotating-object imaging, the two reference points of the envelope alignment and the Phase Difference (PD) estimation are probably not the same point, making it difficult to uncouple the coupling term by conducting the correction of Migration Through Resolution Cell (MTRC). In this paper, an improved joint-processing approach is proposed that chooses a certain scattering point as the sole reference point, utilizing the Prominent Point Processing (PPP) method. To this end, we first obtain an initial image using classical methods, from which a certain scattering point can be chosen. Envelope alignment and phase compensation using the selected scattering point as the common reference point are subsequently conducted. The keystone transform is thus smoothly applied to further improve imaging quality. Both simulation experiments and real data processing are provided to demonstrate the performance of the proposed method compared with the classical method.
Model-based Clustering of High-Dimensional Data in Astrophysics
NASA Astrophysics Data System (ADS)
Bouveyron, C.
2016-05-01
The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase in measurement capabilities. As a consequence, data are nowadays frequently high-dimensional and available in bulk or as streams. Model-based techniques for clustering are popular tools renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show disappointing behavior in high-dimensional spaces, mainly due to their dramatic over-parametrization. Recent developments in model-based classification overcome these drawbacks and make it possible to efficiently classify high-dimensional data, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.
Combining Feature Extraction Methods to Assist the Diagnosis of Alzheimer's Disease.
Segovia, F; Górriz, J M; Ramírez, J; Phillips, C
2016-01-01
Neuroimaging data such as (18)F-FDG PET are widely used to assist the diagnosis of Alzheimer's disease (AD). Looking for regions with hypoperfusion/hypometabolism, clinicians may predict or corroborate the diagnosis of the patients. Modern computer-aided diagnosis (CAD) systems based on the statistical analysis of whole neuroimages are more accurate than classical systems based on quantifying the uptake of some predefined regions of interest (ROIs). In addition, these new systems allow determining new ROIs and take advantage of the huge amount of information comprised in neuroimaging data. A major branch of modern CAD systems for AD is based on multivariate techniques, which analyse a neuroimage as a whole, considering not only the voxel intensities but also the relations among them. In order to deal with the vast dimensionality of the data, a number of feature extraction methods have been successfully applied. In this work, we propose a CAD system based on the combination of several feature extraction techniques. First, some commonly used feature extraction methods based on the analysis of variance (such as principal component analysis), on the factorization of the data (such as non-negative matrix factorization) and on classical magnitudes (such as Haralick features) were simultaneously applied to the original data. These feature sets were then combined by means of two different combination approaches: i) using a single classifier and a multiple kernel learning approach, and ii) using an ensemble of classifiers and selecting the final decision by majority voting. The proposed approach was evaluated using a labelled neuroimaging database along with a cross-validation scheme. In conclusion, the proposed CAD system performed better than approaches using only one feature extraction technique. We also provide a fair comparison (using the same database) of the selected feature extraction methods.
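A minimal sketch of the second combination approach (an ensemble with majority voting), using synthetic data and hypothetical stand-ins for the named feature extractors; none of the data dimensions, classifiers, or parameters below come from the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.decomposition import PCA, NMF

rng = np.random.default_rng(0)
X = rng.random((200, 64))                 # 200 toy "images", 64 voxels each
y = rng.integers(0, 2, 200)               # labels (e.g., AD vs control)

# three feature sets standing in for PCA scores, NMF factors, and
# a "classical magnitudes" summary (here just a slice, for illustration)
feats = [PCA(n_components=8).fit_transform(X),
         NMF(n_components=8, init='random', random_state=0,
             max_iter=500).fit_transform(X),
         X[:, :8]]

train, test = np.arange(150), np.arange(150, 200)
votes = []
for F in feats:                           # one classifier per feature set
    clf = SVC().fit(F[train], y[train])
    votes.append(clf.predict(F[test]))

# final decision by majority voting across the ensemble
majority = (np.mean(votes, axis=0) > 0.5).astype(int)
print("agreement with labels:", np.mean(majority == y[test]))
```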
[Today's meaning of classical authors of political thinking].
Weinacht, Paul-Ludwig
2005-01-01
How can classical political authors be actualized? The question arises in a discipline founded on old traditions: political science. One of its great subjects is the history of political ideas. Classic authors are treated in many books, but they are viewed from different perspectives; colleagues do not agree on the shining and the bad examples. To actualize the classics, we have to proceed in a methodically reflective way: historically, but not historicistically, with sensibility for classical and Christian norms, without dogmatism or scepticism. Searching for the permanent problems, we try to translate the original concepts of the classic authors carefully into our time. To demonstrate our method of actualization, we choose the French classical author Montesquieu. His famous concept of the division of powers is often misunderstood as a "liberal" mechanism that works by itself in favour of freedom (much as Kant made a "natural mechanism" work in a people of devils in favour of their legality); in reality, Montesquieu acknowledges that constitutional and organizational arrangements cannot stabilize themselves but must be grounded in social character and human virtues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rana, A.; Ravichandran, R.; Park, J. H.
The second-order non-Navier-Fourier constitutive laws, expressed in a compact algebraic mathematical form, were validated for the force-driven Poiseuille gas flow by the deterministic atomic-level microscopic molecular dynamics (MD). Emphasis is placed on how completely different methods (a second-order continuum macroscopic theory based on the kinetic Boltzmann equation, the probabilistic mesoscopic direct simulation Monte Carlo, and, in particular, the deterministic microscopic MD) describe the non-classical physics, and whether the second-order non-Navier-Fourier constitutive laws derived from the continuum theory can be validated using MD solutions for the viscous stress and heat flux calculated directly from the molecular data using the statistical method. Peculiar behaviors (non-uniform tangent pressure profile and exotic instantaneous heat conduction from cold to hot [R. S. Myong, "A full analytical solution for the force-driven compressible Poiseuille gas flow based on a nonlinear coupled constitutive relation," Phys. Fluids 23(1), 012002 (2011)]) were re-examined using atomic-level MD results. It was shown that all three results were in strong qualitative agreement with each other, implying that the second-order non-Navier-Fourier laws are indeed physically legitimate in the transition regime. Furthermore, it was shown that the non-Navier-Fourier constitutive laws are essential for describing non-zero normal stress and tangential heat flux, while the classical and non-classical laws remain similar for shear stress and normal heat flux.
Infrared Ship Target Segmentation Based on Spatial Information Improved FCM.
Bai, Xiangzhi; Chen, Zhiguo; Zhang, Yu; Liu, Zhaoying; Lu, Yi
2016-12-01
Segmentation of infrared (IR) ship images is always a challenging task, because of intensity inhomogeneity and noise. Fuzzy C-means (FCM) clustering is a classical method widely used in image segmentation. However, it has some shortcomings, such as not considering spatial information and being sensitive to noise. In this paper, an improved FCM method based on spatial information is proposed for IR ship target segmentation. The improvements include two parts: 1) adding nonlocal spatial information based on the ship target and 2) using the spatial shape information of the contour of the ship target to refine the local spatial constraint by a Markov random field. In addition, the results of K-means are used to initialize the improved FCM method. Experimental results show that the improved method is effective and performs better than existing methods, including existing FCM methods, for segmentation of IR ship images.
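For reference, here is a compact implementation of the classical FCM baseline that the paper improves upon: the standard membership and center updates, with no spatial term. Array shapes and parameter values are illustrative assumptions.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, tol=1e-5, seed=0):
    """Classical fuzzy C-means. X is (n_samples, n_features), e.g. pixel
    intensities reshaped to (n_pixels, 1) for a grayscale IR image."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)        # fuzzy memberships sum to 1
    for _ in range(iters):
        Um = U ** m                          # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U_new = 1.0 / (d ** (2 / (m - 1)) *
                       (d ** (-2 / (m - 1))).sum(axis=1, keepdims=True))
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

# toy bimodal "image": two intensity populations
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0.2, 0.05, 500),
                    rng.normal(0.8, 0.05, 500)]).reshape(-1, 1)
centers, U = fcm(X)
print(centers.ravel())                       # approximately [0.2, 0.8]
```

The paper's contributions sit on top of these updates: a nonlocal spatial term, an MRF-refined local constraint, and K-means output in place of the random initialization of U.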
Mathematical methods of studying physical phenomena
NASA Astrophysics Data System (ADS)
Man'ko, Margarita A.
2013-03-01
In recent decades, substantial theoretical and experimental progress was achieved in understanding the quantum nature of physical phenomena that serves as the foundation of present and future quantum technologies. Quantum correlations like the entanglement of the states of composite systems, the phenomenon of quantum discord, which captures other aspects of quantum correlations, quantum contextuality and, connected with these phenomena, uncertainty relations for conjugate variables and entropies, like Shannon and Rényi entropies, and the inequalities for spin states, like Bell inequalities, reflect the recently understood quantum properties of micro and macro systems. The mathematical methods needed to describe all quantum phenomena mentioned above were also the subject of intense studies in the end of the last, and beginning of the new, century. In this section of CAMOP 'Mathematical Methods of Studying Physical Phenomena' new results and new trends in the rapidly developing domain of quantum (and classical) physics are presented. Among the particular topics under discussion there are some reviews on the problems of dynamical invariants and their relations with symmetries of the physical systems. In fact, this is a very old problem of both classical and quantum systems, e.g. the systems of parametric oscillators with time-dependent parameters, like Ermakov systems, which have specific constants of motion depending linearly or quadratically on the oscillator positions and momenta. Such dynamical invariants play an important role in studying the dynamical Casimir effect, the essence of the effect being the creation of photons from the vacuum in a cavity with moving boundaries due to the presence of purely quantum fluctuations of the electromagnetic field in the vacuum. It is remarkable that this effect was recently observed experimentally. The other new direction in developing the mathematical approach in physics is quantum tomography that provides a new vision of quantum states. In the tomographic picture of quantum mechanics, the states are identified with fair conditional probability distributions, which contain the same information on the states as the wave function or the density matrix. The mathematical methods of the tomographic approach are based on studying the star-product (associative product) quantization scheme. The tomographic star-product technique provides an additional understanding of the associative product, which is connected with the existence of specific pairs of operators called quantizers and dequantizers. These operators code information on the kernels of all the star-product schemes, including the traditional phase-space Weyl-Wigner-Moyal picture describing the quantum-system evolution. The new equation to find quantizers, if the kernel of the star product of functions is given, is presented in this CAMOP section. For studying classical systems, the mathematical methods developed in quantum mechanics can also be used. The case of paraxial-radiation beams propagating in waveguides is a known example of describing a purely classical phenomenon by means of quantum-like equations. Thus, some quantum phenomenon like the entanglement can be mimicked by the properties of classical beams, for example, Gaussian modes. The mathematical structures and relations to the symplectic symmetry group are analogous for both classical and quantum phenomena. 
Such analogies between the mathematical classical and quantum methods used in research on quantum-like communication channels provide new tools for constructing a theoretical basis for the new information-transmission technologies. Conventional quantum mechanics and its relation to classical mechanics contain mathematical recipes of the correspondence principle and quantization rules. Attempts to find rules for deriving the quantum-mechanical formalism starting from classical field theory, taking into account the influence of classical fluctuations of the field, are considered in these papers. The methods for solving quantum equations and formulating the boundary conditions in problems with singular potentials are connected with the mathematical problems of self-adjointness of the Hamiltonians. The progress and some new results in this direction are reflected in this CAMOP section. The Gaussian states of photons play an important role in quantum optics. The multimode electromagnetic field and quantum correlations in Gaussian states are considered in this section. The new results on the statistical properties of laser radiation discussed here are based on applications of mathematical methods in this traditional domain of physics. It is worth stressing that the universality of the mathematical procedures permits considering physical phenomena in the ocean on the same footing as phenomena in the microworld. In this CAMOP section, there are also papers devoted to traditional problems of solving the Schrödinger equation for interesting quantum systems. The recently obtained results, related to different domains of theoretical physics, are united by the applied mathematical methods and tools, which provide new possibilities for better understanding the theoretical foundations needed to develop new quantum technologies like quantum computing and quantum communications. The papers are arranged alphabetically by the name of the first author. We are grateful to all authors who accepted our invitation to contribute to this CAMOP section.
Steady Method for the Analysis of Evaporation Dynamics.
Günay, A Alperen; Sett, Soumyadip; Oh, Junho; Miljkovic, Nenad
2017-10-31
Droplet evaporation is an important phenomenon governing many man-made and natural processes. Characterizing the rate of evaporation with high accuracy has attracted the attention of numerous scientists over the past century. Traditionally, researchers have studied evaporation by observing the change in the droplet size in a given time interval. However, the transient nature, coupled with the significant mass-transfer-governed gas dynamics occurring at the droplet three-phase contact line, makes the classical method crude. Furthermore, the intricate balance played by the internal and external flows, evaporation kinetics, thermocapillarity, binary-mixture dynamics, curvature, and moving contact lines makes the decoupling of these processes impossible with classical transient methods. Here, we present a method to measure the rate of evaporation of spatially and temporally steady droplets. By utilizing a piezoelectric dispenser to feed microscale droplets (R ≈ 9 μm) to a larger evaporating droplet at a prescribed frequency, we can both create variable-sized droplets on any surface and study their evaporation rate by modulating the piezoelectric droplet addition frequency. Using our steady technique, we studied water evaporation of droplets having base radii ranging from 20 to 250 μm on surfaces of different functionalities (45° ≤ θ_a,app ≤ 162°, where θ_a,app is the apparent advancing contact angle). We benchmarked our technique against the classical unsteady method, showing an improvement of 140% in evaporation rate measurement accuracy. Our work not only characterizes the evaporation dynamics on functional surfaces but also provides an experimental platform to finally enable the decoupling of the complex physics governing the ubiquitous droplet evaporation process.
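The steadiness argument admits a one-line mass balance, given here as a plausible reading of the method rather than the authors' stated formula: if the droplet geometry is constant in time, the evaporation rate must equal the feed rate, so with dispensing frequency f, feed-droplet radius R_f ≈ 9 μm, and liquid density ρ,

\[ \dot m_{\text{evap}} = f \,\rho\, \tfrac{4}{3}\pi R_f^3 . \]

For water, one 9 μm droplet carries about 3.1 pL (≈ 3.1 ng), so a hypothetical feed frequency of 100 Hz would correspond to ≈ 0.31 μg/s of evaporation; sweeping f until the contact line is stationary reads out the rate directly.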
A robust component mode synthesis method for stochastic damped vibroacoustics
NASA Astrophysics Data System (ADS)
Tran, Quang Hung; Ouisse, Morvan; Bouhaddi, Noureddine
2010-01-01
In order to reduce vibration or sound levels in industrial vibroacoustic problems, a low-cost and efficient approach consists in introducing visco-elastic and poro-elastic materials either on the structure or on cavity walls. Depending on the frequency range of interest, several numerical approaches can be used to estimate the behavior of the coupled problem. In the context of low-frequency applications related to acoustic cavities with surrounding vibrating structures, the finite element method (FEM) is one of the most efficient techniques. Nevertheless, industrial problems lead to large FE models which are time-consuming in updating or optimization processes. A classical way to reduce calculation time is the component mode synthesis (CMS) method, whose classical formulation is not always efficient in predicting the dynamical behavior of structures including visco-elastic and/or poro-elastic patches. To ensure an efficient prediction, the fluid and structural bases used for the model reduction need to be updated as a result of changes in a parametric optimization procedure. For complex models, this leads to prohibitive numerical costs in the optimization phase or for the management and propagation of uncertainties in the stochastic vibroacoustic problem. In this paper, the formulation of an alternative CMS method is proposed and compared to the classical (u, p) CMS method: the Ritz basis is completed with static residuals associated with visco-elastic and poro-elastic behaviors. This basis is also enriched by the static response to residual forces due to structural modifications, resulting in a so-called robust basis, also adapted to Monte Carlo simulations for uncertainty propagation using reduced models.
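A schematic numpy/scipy sketch of the reduction idea, projecting the full model onto a truncated modal basis enriched with static residual vectors, under the assumption of a symmetric undamped (K, M) pair. This is generic CMS-style reduction, not the paper's full (u, p) vibroacoustic formulation.

```python
import numpy as np
from scipy.linalg import eigh

def reduce_model(K, M, n_modes, residual_forces=None):
    """Build a Ritz basis from the first n_modes eigenvectors of (K, M),
    optionally enriched with static responses K^{-1} f to residual forces
    (e.g. visco/poro-elastic coupling forces or structural modifications)."""
    _, Phi = eigh(K, M)                        # generalized eigenproblem
    blocks = [Phi[:, :n_modes]]
    if residual_forces is not None:
        blocks.append(np.linalg.solve(K, residual_forces))
    T, _ = np.linalg.qr(np.hstack(blocks))     # orthonormalize the basis
    return T.T @ K @ T, T.T @ M @ T, T         # reduced stiffness and mass

# toy 5-DOF spring chain reduced to 2 modes + 1 static residual
n = 5
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
f = np.zeros((n, 1)); f[2] = 1.0               # force where a patch would act
Kr, Mr, T = reduce_model(K, M, 2, f)
print(Kr.shape)                                # (3, 3) reduced model
```

The point of the enrichment is that the reduced basis stays valid when patch parameters change, so a Monte Carlo loop can reuse T instead of recomputing the modes at every sample.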
Pore size distribution calculation from 1H NMR signal and N2 adsorption-desorption techniques
NASA Astrophysics Data System (ADS)
Hassan, Jamal
2012-09-01
The pore size distribution (PSD) of the nano-material MCM-41 is determined using two different approaches: N2 adsorption-desorption and the 1H NMR signal of water confined in the silica nano-pores of MCM-41. The first approach is based on the recently modified Kelvin equation [J.V. Rocha, D. Barrera, K. Sapag, Top. Catal. 54 (2011) 121-134], which deals with the known underestimation of pore size distribution for mesoporous materials such as MCM-41 by introducing a correction factor into the classical Kelvin equation. The second method employs the Gibbs-Thomson equation, using NMR, for the melting point depression of a liquid in confined geometries. The results show that both approaches give similar pore size distributions to some extent, and also that the NMR technique can be considered an alternative direct method to obtain quantitative results, especially for mesoporous materials. The pore diameters estimated for the nano-material used in this study were about 35 and 38 Å for the modified Kelvin and NMR methods, respectively. A comparison between these methods and the classical Kelvin equation is also presented.
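The two underlying relations, in their classical forms (the paper uses a corrected Kelvin equation whose correction factor is not reproduced here), are

\[ \ln\frac{p}{p_0} = -\frac{2\gamma V_m}{r_K\, R T} \quad\text{(Kelvin)}, \qquad \Delta T_m = T_m^{\text{bulk}} - T_m(d) = \frac{K_{GT}}{d} \quad\text{(Gibbs-Thomson)}, \]

where γ and V_m are the surface tension and molar volume of the condensate, r_K is the Kelvin radius, d the pore diameter, and K_{GT} a calibration constant. In the NMR route, the 1H signal intensity measured as the confined water melts gives the fraction of pores smaller than d(T), so differentiating that melting curve with respect to d yields the PSD.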
Liu, Jian; Miller, William H
2007-06-21
It is shown how quantum mechanical time correlation functions [defined, e.g., in Eq. (1.1)] can be expressed, without approximation, in the same form as the linearized approximation of the semiclassical initial value representation (LSC-IVR), or classical Wigner model, for the correlation function [cf. Eq. (2.1)], i.e., as a phase space average (over initial conditions for trajectories) of the Wigner functions corresponding to the two operators. The difference is that the trajectories involved in the LSC-IVR evolve classically, i.e., according to the classical equations of motion, while in the exact theory they evolve according to generalized equations of motion that are derived here. Approximations to the exact equations of motion are then introduced to achieve practical methods that are applicable to complex (i.e., large) molecular systems. Four such methods are proposed in the paper--the full Wigner dynamics (full WD) and the second order WD based on "Wigner trajectories" [H. W. Lee and M. D. Scully, J. Chem. Phys. 77, 4604 (1982)] and the full Donoso-Martens dynamics (full DMD) and the second order DMD based on "Donoso-Martens trajectories" [A. Donoso and C. C. Martens, Phys. Rev. Lett. 87, 223202 (2001)]--all of which can be viewed as generalizations of the original LSC-IVR method. Numerical tests of the four versions of this new approach are made for two anharmonic model problems, and for each the momentum autocorrelation function (i.e., operators linear in coordinate or momentum operators) and the force autocorrelation function (nonlinear operators) have been calculated. These four new approximate treatments are indeed seen to be significant improvements on the original LSC-IVR approximation.
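In standard notation, the LSC-IVR/classical Wigner form referenced above reads

\[ C_{AB}(t) \approx \frac{1}{(2\pi\hbar)^N} \int d\mathbf{q}_0\, d\mathbf{p}_0\; A_W(\mathbf{q}_0, \mathbf{p}_0)\, B_W(\mathbf{q}_t, \mathbf{p}_t), \qquad A_W(\mathbf{q},\mathbf{p}) = \int d\boldsymbol{\Delta}\; e^{\,i \mathbf{p}\cdot\boldsymbol{\Delta}/\hbar} \left\langle \mathbf{q} - \tfrac{\boldsymbol{\Delta}}{2} \right| \hat A \left| \mathbf{q} + \tfrac{\boldsymbol{\Delta}}{2} \right\rangle, \]

with (q_t, p_t) evolved classically from (q_0, p_0). The paper's exact result keeps this phase-space-average structure but replaces the classical evolution with generalized equations of motion; the four proposed methods are successive practical approximations to those.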
Psychophysical Models for Signal Detection with Time Varying Uncertainty. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Gai, E.
1975-01-01
Psychophysical models for the behavior of the human operator in detection tasks that include changes in detectability, correlation between observations, and deferred decisions are developed. Classical Signal Detection Theory (SDT) is discussed, and its emphasis on the sensory processes is contrasted with decision strategies. The analysis of decision strategies utilizes detection tasks with time-varying signal strength. The classical theory is modified to include such tasks, and several optimal decision strategies are explored. Two methods of classifying strategies are suggested. The first method is similar to the analysis of ROC curves, while the second is based on the relation between the criterion level (CL) and the detectability. Experiments to verify the analysis of tasks with changes in signal strength are designed. The results show that subjects are aware of changes in detectability and tend to use strategies that involve changes in the CLs.
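For orientation, the standard equal-variance Gaussian SDT quantities implied here are

\[ d' = z(H) - z(F), \qquad c = -\tfrac{1}{2}\left[z(H) + z(F)\right], \]

where H and F are the hit and false-alarm rates and z is the inverse standard normal CDF. The thesis's second classification method then amounts to asking how the criterion c tracks d' as the signal strength, and hence the detectability, varies in time.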
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raskin, Cody; Owen, J. Michael, E-mail: raskin1@llnl.gov, E-mail: mikeowen@llnl.gov
2016-11-01
We discuss a generalization of the classic Keplerian disk test problem allowing for both pressure and rotational support, as a method of testing astrophysical codes incorporating both gravitation and hydrodynamics. We argue for the inclusion of pressure in rotating disk simulations on the grounds that realistic, astrophysical disks exhibit non-negligible pressure support. We then apply this test problem to examine the performance of various smoothed particle hydrodynamics (SPH) methods incorporating a number of improvements proposed over the years to address problems noted in modeling the classical gravitation-only Keplerian disk. We also apply this test to a newly developed extension of SPH based on reproducing kernels called CRKSPH. Counterintuitively, we find that pressure support worsens the performance of traditional SPH on this problem, causing unphysical collapse away from the steady-state disk solution even more rapidly than the purely gravitational problem, whereas CRKSPH greatly reduces this error.
NASA Astrophysics Data System (ADS)
Ogawa, Kazuhisa; Kobayashi, Hirokazu; Tomita, Akihisa
2018-02-01
The quantum interference of entangled photons forms a key phenomenon underlying various quantum-optical technologies. It is known that the quantum interference patterns of entangled photon pairs can be reconstructed classically by the time-reversal method; however, the time-reversal method has been applied only to time-frequency-entangled two-photon systems in previous experiments. Here, we apply the time-reversal method to the position-wave-vector-entangled two-photon systems: the two-photon Young interferometer and the two-photon beam focusing system. We experimentally demonstrate that the time-reversed systems classically reconstruct the same interference patterns as the position-wave-vector-entangled two-photon systems.
NASA Astrophysics Data System (ADS)
de Sousa, J. Ricardo; de Albuquerque, Douglas F.
1997-02-01
By using two approaches of the renormalization group (RG), mean field RG (MFRG) and effective field RG (EFRG), we study the critical properties of the simple cubic lattice classical XY and classical Heisenberg models. The methods are illustrated by employing their simplest approximation version, in which small clusters with one (N′ = 1) and two (N = 2) spins are used. The thermal and magnetic critical exponents, Yt and Yh, and the critical parameter Kc are numerically obtained and are compared with more accurate methods (Monte Carlo, series expansion and ε-expansion). The results presented in this work are in excellent agreement with these sophisticated methods. We have also shown that the exponent Yh does not depend on the symmetry n of the Hamiltonian; hence the criterion of universality for this exponent is only a function of the dimension d.
Classical Swine Fever—An Updated Review
Blome, Sandra; Staubach, Christoph; Henke, Julia; Carlson, Jolene; Beer, Martin
2017-01-01
Classical swine fever (CSF) remains one of the most important transboundary viral diseases of swine worldwide. The causative agent is CSF virus, a small, enveloped RNA virus of the genus Pestivirus. Based on partial sequences, three genotypes can be distinguished that do not, however, directly correlate with virulence. Depending on both virus and host factors, a wide range of clinical syndromes can be observed and thus, laboratory confirmation is mandatory. To this end, both direct and indirect methods are utilized with an increasing degree of commercialization. Infections in both domestic pigs and wild boar are of great relevance, and wild boar are a reservoir host that sporadically transmits the virus to pig farms. Control strategies for epidemic outbreaks in free countries are mainly based on classical intervention measures, i.e., quarantine and strict culling of affected herds. In these countries, vaccination is only an emergency option. However, live vaccines are used for controlling the disease in endemically infected regions in Asia, Eastern Europe, the Americas, and some African countries. Here, we provide a concise, updated review of virus properties, clinical signs and pathology, epidemiology, pathogenesis and immune responses, diagnosis and vaccination possibilities. PMID:28430168
Libert, Xavier; Packeu, Ann; Bureau, Fabrice; Roosens, Nancy H; De Keersmaecker, Sigrid C J
2017-01-01
Considered a public health problem, indoor fungal contamination is generally monitored using classical protocols based on culturing. However, this culture dependency could influence the representativeness of the fungal population detected in an analyzed sample, as this includes the dead and uncultivable fraction. Moreover, culture-based protocols are often time-consuming. In this context, molecular tools are a powerful alternative, especially those allowing multiplexing. In this study, a Luminex xMAP® assay was developed for the simultaneous detection of 10 fungal species that are most frequently found in indoor air and that may cause health problems. This xMAP® assay was found to be sensitive, i.e., its limit of detection ranges between 0.01 and 0.05 ng of gDNA. The assay was subsequently tested with environmental air samples which were also analyzed with a classical protocol. All the species identified with the classical method were also detected with the xMAP® assay, though in a shorter time frame. These results demonstrate that the Luminex xMAP® fungal assay developed in this study could contribute to the improvement of public health and specifically to the management of indoor fungal contamination.
2010-01-01
Background Decision support in health systems is a highly difficult task, due to the inherent complexity of the process and structures involved. Method This paper introduces a new hybrid methodology, Expert-based Cooperative Analysis (EbCA), which incorporates explicit prior expert knowledge in data analysis methods and elicits implicit or tacit expert knowledge (IK) to improve decision support in healthcare systems. EbCA has been applied to two different case studies, showing its usability and versatility: 1) benchmarking of small mental health areas based on technical efficiency estimated by EbCA-Data Envelopment Analysis (EbCA-DEA), and 2) case-mix of schizophrenia based on functional dependency using Clustering Based on Rules (ClBR). In both cases comparisons with classical procedures using qualitative explicit prior knowledge were made. Bayesian predictive validity measures were used for comparison with expert panel results. Overall agreement was tested by the intraclass correlation coefficient in case 1 and kappa in both cases. Results EbCA is a new methodology composed of 6 steps: 1) data collection and data preparation; 2) acquisition of "prior expert knowledge" (PEK) and design of the "prior knowledge base" (PKB); 3) PKB-guided analysis; 4) support-interpretation tools to evaluate results and detect inconsistencies (here implicit knowledge (IK) might be elicited); 5) incorporation of elicited IK in the PKB, repeating until a satisfactory solution is reached; 6) post-processing of results for decision support. EbCA has been useful for incorporating PEK in two different analysis methods (DEA and clustering), applied respectively to assess the technical efficiency of small mental health areas and to the case-mix of schizophrenia based on functional dependency. Differences in results obtained with classical approaches were mainly related to the IK that could be elicited by using EbCA and had major implications for decision making in both cases. Discussion This paper presents EbCA and shows the convenience of complementing classical data analysis with PEK as a means to extract relevant knowledge in complex health domains. One of the major benefits of EbCA is the iterative elicitation of IK. Both explicit and tacit or implicit expert knowledge are critical to guide the scientific analysis of very complex decisional problems such as those found in health system research. PMID:20920289
NASA Astrophysics Data System (ADS)
Ausloos, M.; Ivanova, K.
2004-06-01
The classical technical analysis methods of financial time series, based on moving averages and momentum, are recalled. Illustrations use the IBM share price and Latin American (Argentinian MerVal, Brazilian Bovespa and Mexican IPC) market indices. We have also searched for scaling ranges and exponents in exchange rates between Latin American currencies (ARS, CLP, MXP) and other major currencies (DEM, GBP, JPY, USD, and SDRs). We have sorted out correlations and anticorrelations of such exchange rates with respect to DEM, GBP, JPY and USD. They indicate a very complex or speculative behavior.
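A toy sketch of the two classical indicators recalled above, on a synthetic price series (the IBM data are not reproduced); the window lengths and the crossover rule are conventional choices, not the paper's.

```python
import numpy as np

def sma(x, n):
    """Simple n-day moving average (the classical smoother)."""
    return np.convolve(x, np.ones(n) / n, mode='valid')

def momentum(x, n):
    """n-day momentum: today's price minus the price n days ago."""
    return x[n:] - x[:-n]

# synthetic random-walk prices standing in for a share price series
rng = np.random.default_rng(1)
price = 100 + np.cumsum(rng.normal(0, 1, 300))

ma_short, ma_long = sma(price, 20), sma(price, 100)
ma_short = ma_short[len(ma_short) - len(ma_long):]   # align window ends
signal = np.sign(ma_short - ma_long)   # +1: short above long ("buy" regime)
print("regime changes (crossovers):", np.count_nonzero(np.diff(signal)))
print("last 5 momentum values:", momentum(price, 10)[-5:])
```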
NASA Astrophysics Data System (ADS)
Savelyev, Andrey; Anisimov, Kirill; Kazhan, Egor; Kursakov, Innocentiy; Lysenkov, Alexandr
2016-10-01
The paper is devoted to the development of a methodology to optimize the external aerodynamics of the engine. The optimization procedure is based on the numerical solution of the Reynolds-averaged Navier-Stokes equations. A surrogate-based method is used for the optimization. As a test problem, the optimal shape design of a turbofan nacelle is considered. The results of the first stage, which investigates the classic airplane configuration with the engine located under the wing, are presented. The described optimization procedure is considered in the context of 3rd-generation multidisciplinary optimization, developed in the AGILE project.
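A minimal sketch of the surrogate-based optimization pattern described above: fit a cheap model to a handful of expensive evaluations, minimize the model, evaluate the truth at the proposed point, and refit. The objective function, kernel, and all constants are hypothetical stand-ins, not the paper's RANS-based setup.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_cfd(x):
    """Hypothetical stand-in for an expensive RANS drag evaluation."""
    return (x[..., 0] - 0.3)**2 + 2 * (x[..., 1] - 0.7)**2

rng = np.random.default_rng(5)
X = rng.random((12, 2))                       # initial design of experiments
y = expensive_cfd(X)
for _ in range(20):                           # surrogate-based optimization loop
    surrogate = RBFInterpolator(X, y)         # cheap model of the response
    cand = rng.random((2000, 2))              # dense candidate sampling
    x_new = cand[np.argmin(surrogate(cand))]  # minimize the surrogate
    X = np.vstack([X, x_new])                 # evaluate the truth and refit
    y = np.append(y, expensive_cfd(x_new))
print("best point:", X[np.argmin(y)], "value:", y.min())
```

Real surrogate frameworks add an exploration term (e.g., expected improvement) so the loop does not over-trust the model; this sketch uses pure exploitation for brevity.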
Information categorization approach to literary authorship disputes
NASA Astrophysics Data System (ADS)
Yang, Albert C.-C.; Peng, C.-K.; Yien, H.-W.; Goldberger, Ary L.
2003-11-01
Scientific analysis of the linguistic styles of different authors has generated considerable interest. We present a generic approach to measuring the similarity of two symbolic sequences that requires minimal background knowledge about a given human language. Our analysis is based on word rank order-frequency statistics and phylogenetic tree construction. We demonstrate the applicability of this method to historic authorship questions related to the classic Chinese novel “The Dream of the Red Chamber,” to the plays of William Shakespeare, and to the Federalist papers. This method may also provide a simple approach to other large databases based on their information content.
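A minimal illustration of the word rank-order-frequency idea on which the method is based; the scoring function below is a simplified stand-in (the study builds full distance matrices from such statistics and then constructs phylogenetic trees).

```python
from collections import Counter

def rank_order_distance(text_a, text_b, top=50):
    """Compare two texts by how differently they rank their shared words.

    Each text's words are ranked by frequency; the distance is the mean
    absolute rank difference over (up to `top`) words common to both.
    """
    ranks = []
    for text in (text_a, text_b):
        freq = Counter(text.lower().split())
        ordered = [w for w, _ in freq.most_common()]
        ranks.append({w: r for r, w in enumerate(ordered)})
    common = [w for w in ranks[0] if w in ranks[1]][:top]
    return sum(abs(ranks[0][w] - ranks[1][w]) for w in common) / max(len(common), 1)

print(rank_order_distance("the cat sat on the mat", "the dog sat on the log"))
```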
Komorowski, Dariusz; Pietraszek, Stanislaw
2016-01-01
This paper presents the analysis of multi-channel electrogastrographic (EGG) signals using a continuous wavelet transform based on the fast Fourier transform (CWTFT). The EGG analysis was based on the determination of several signal parameters, such as the dominant frequency (DF), dominant power (DP) and index of normogastria (NI). The use of the continuous wavelet transform (CWT) allows better localization of the frequency components in the analyzed signals than the commonly used short-time Fourier transform (STFT). Such an analysis is possible by means of a variable-width window, which corresponds to the time scale of observation (analysis). Wavelet analysis allows using long time windows when more precise low-frequency information is needed, and shorter windows when high-frequency information is needed. Since the classic CWT requires considerable computing power and time, especially when applied to the analysis of long signals, the authors used a CWT analysis based on the fast Fourier transform (FFT). The CWT was obtained using properties of the circular convolution to improve the speed of calculation. This method allows results to be obtained for relatively long EGG records in a fairly short time, much faster than using the classical methods based on running spectrum analysis (RSA). In this study the authors demonstrate the possibility of a parametric analysis of EGG signals using the continuous wavelet transform, which is a completely new solution. The results obtained with the described method are shown by the example of an analysis of four-channel EGG recordings, performed for a non-caloric meal.
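A compact sketch of a CWT computed scale-by-scale as a circular convolution via FFT, the same speed-up strategy the authors describe. The Morlet wavelet choice, parameter values, and the toy 0.05 Hz (3 cycles/min, normogastric-range) test signal are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cwt_fft(x, scales, fs, w0=6.0):
    """Morlet CWT: each scale is one circular convolution done via FFT."""
    n = len(x)
    X = np.fft.fft(x)
    freqs = np.fft.fftfreq(n, d=1.0 / fs) * 2 * np.pi   # angular frequencies
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # Morlet wavelet in the frequency domain at scale s (analytic part)
        psi_hat = np.pi**-0.25 * np.exp(-0.5 * (s * freqs - w0)**2) * (freqs > 0)
        out[i] = np.fft.ifft(X * np.conj(psi_hat) * np.sqrt(s))
    return out

# example: locate a 0.05 Hz (3 cpm) component in a noisy toy signal
fs = 1.0                                     # 1 Hz sampling
t = np.arange(0, 600, 1.0 / fs)
x = np.sin(2 * np.pi * 0.05 * t) + 0.3 * np.random.default_rng(2).normal(size=len(t))
scales = 6.0 / (2 * np.pi * np.array([0.02, 0.05, 0.1]))  # scale ≈ w0 / (2π f)
power = np.abs(cwt_fft(x, scales, fs))**2
print(power.mean(axis=1))                    # strongest response at the 0.05 Hz row
```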
Nonlinear multivariate and time series analysis by neural network methods
NASA Astrophysics Data System (ADS)
Hsieh, William W.
2004-03-01
Methods in multivariate statistical analysis are essential for working with large amounts of geophysical data, data from observational arrays, from satellites, or from numerical model output. In classical multivariate statistical analysis, there is a hierarchy of methods, starting with linear regression at the base, followed by principal component analysis (PCA) and finally canonical correlation analysis (CCA). A multivariate time series method, the singular spectrum analysis (SSA), has been a fruitful extension of the PCA technique. The common drawback of these classical methods is that only linear structures can be correctly extracted from the data. Since the late 1980s, neural network methods have become popular for performing nonlinear regression and classification. More recently, neural network methods have been extended to perform nonlinear PCA (NLPCA), nonlinear CCA (NLCCA), and nonlinear SSA (NLSSA). This paper presents a unified view of the NLPCA, NLCCA, and NLSSA techniques and their applications to various data sets of the atmosphere and the ocean (especially for the El Niño-Southern Oscillation and the stratospheric quasi-biennial oscillation). These data sets reveal that the linear methods are often too simplistic to describe real-world systems, with a tendency to scatter a single oscillatory phenomenon into numerous unphysical modes or higher harmonics, which can be largely alleviated in the new nonlinear paradigm.
Atzori, Manfredo; Cognolato, Matteo; Müller, Henning
2016-01-01
Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed to make several tests in order to evaluate the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied on the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks. This fact suggests that it may be interesting to evaluate if larger networks can increase sEMG classification accuracy too. PMID:27656140
Atzori, Manfredo; Cognolato, Matteo; Müller, Henning
2016-01-01
Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed to make several tests in order to evaluate the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied on the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks. This fact suggests that it may be interesting to evaluate if larger networks can increase sEMG classification accuracy too.
Huang, Chenyu; Ogawa, Rei
2014-05-01
Joint scar contractures are characterized by tight bands of soft tissue that bridge the 2 ends of the joint like a web. Classical treatment methods such as Z-plasties are mainly based on 2-dimensional designs. Our square flap method is an alternative surgical method that restores the span of the web in a stereometric fashion, thereby reconstructing joint function. In total, 20 Japanese patients with joint scar contractures on the axillary (n = 10) or first digital web (n = 10) underwent square flap surgery. The maximum range of motion and commissure length were measured before and after surgery. A theoretical stereometric geometrical model of the square flap was established to compare it to the classical single (60 degree), 4-flap (45 degree), and 5-flap (60 degree) Z-plasties in terms of theoretical web reconstruction efficacy. All cases achieved 100% contracture release. The maximum range of motion and web space improved after square flap surgery (P = 0.001). Stereometric geometrical modeling revealed that the standard square flap (α = 45 degree; β = 90 degree) yields a larger flap area, length/width ratio, and postsurgical commissure length than the Z-plasties. It can also be adapted by varying angles α and β, although certain angle thresholds must be met to obtain the stereometric advantages of this method. When used to treat joint scar contractures, the square flap method can fully span the web space in a stereometric manner, thus yielding a close-to-original shape and function. Compared with the classical Z-plasties, it also provides sufficient anatomical blood supply while imposing the least physiological tension on the adjacent skin.
NASA Astrophysics Data System (ADS)
Mohammad, Fatimah; Ansari, Rashid; Shahidi, Mahnaz
2013-03-01
The visibility and continuity of the inner segment outer segment (ISOS) junction layer of the photoreceptors on spectral domain optical coherence tomography images is known to be related to visual acuity in patients with age-related macular degeneration (AMD). Automatic detection and segmentation of lesions and pathologies in retinal images is crucial for the screening, diagnosis, and follow-up of patients with retinal diseases. One of the challenges of using the classical level-set algorithms for segmentation is the placement of the initial contour. Manually defining the contour or randomly placing it in the image may lead to segmentation of erroneous structures. It is important to be able to define the contour automatically by using information provided by image features. We explored a level-set method which is based on the classical Chan-Vese model and which utilizes image feature information for automatic contour placement for the segmentation of pathologies in fluorescein angiograms and en face retinal images of the ISOS layer. This was accomplished by exploiting a priori knowledge of the shape and intensity distribution, allowing the use of projection profiles to detect the presence of pathologies that are characterized by intensity differences with surrounding areas in retinal images. We first tested our method by applying it to fluorescein angiograms. We then applied our method to en face retinal images of patients with AMD. The experimental results demonstrate that the proposed method provided a quick and improved outcome as compared to the classical Chan-Vese method in which the initial contour is randomly placed, thus indicating the potential to provide a more accurate and detailed view of changes in pathologies due to disease progression and treatment.
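A minimal sketch of the idea, assuming scikit-image's morphological variant of Chan-Vese as a stand-in for the authors' implementation: row and column projection profiles flag intensity excursions, and the flagged region seeds the level set. The 1-sigma criterion and the sample image are assumptions.

```python
import numpy as np
from skimage import data
from skimage.segmentation import morphological_chan_vese

img = data.coins().astype(float) / 255.0          # stand-in for a retinal image

# Row/column projection profiles flag where intensity departs from background.
rows = img.mean(axis=1)
cols = img.mean(axis=0)
r = np.abs(rows - np.median(rows)) > rows.std()   # assumed 1-sigma criterion
c = np.abs(cols - np.median(cols)) > cols.std()

# Initial level set: seed covering the detected profile excursions.
init = np.zeros_like(img)
if r.any() and c.any():
    init[np.ix_(np.where(r)[0], np.where(c)[0])] = 1

seg = morphological_chan_vese(img, 50, init_level_set=init)
print(seg.shape, seg.dtype)
```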
NASA Astrophysics Data System (ADS)
Gaynanova, Gulnara A.; Bekmukhametova, Alina M.; Kashapov, Ruslan R.; Ziganshina, Albina Yu.; Zakharova, Lucia Ya.
2016-05-01
Self-organization in the mixed system based on water-soluble aminomethylated calix[4]arene with sulfonatoethyl groups at the lower rim and classical cationic surfactant cetyltrimethylammonium bromide has been studied by the methods of tensiometry, conductometry, spectrophotometry, dynamic and electrophoretic light scattering. The values of the critical association concentration, the size and zeta potential values, and the solubilization capacity of mixed aggregates toward the hydrophobic probe (Sudan I) were determined.
A Bayesian approach to meta-analysis of plant pathology studies.
Mila, A L; Ngugi, H K
2011-01-01
Bayesian statistical methods are used for meta-analysis in many disciplines, including medicine, molecular biology, and engineering, but have not yet been applied for quantitative synthesis of plant pathology studies. In this paper, we illustrate the key concepts of Bayesian statistics and outline the differences between Bayesian and classical (frequentist) methods in the way parameters describing population attributes are considered. We then describe a Bayesian approach to meta-analysis and present a plant pathological example based on studies evaluating the efficacy of plant protection products that induce systemic acquired resistance for the management of fire blight of apple. In a simple random-effects model assuming a normal distribution of effect sizes and no prior information (i.e., a noninformative prior), the results of the Bayesian meta-analysis are similar to those obtained with classical methods. Implementing the same model with a Student's t distribution and a noninformative prior for the effect sizes, instead of a normal distribution, yields similar results for all but acibenzolar-S-methyl (Actigard), which was evaluated in only seven studies in this example. Whereas both the classical (P = 0.28) and the Bayesian analysis with a noninformative prior (95% credibility interval [CRI] for the log response ratio: -0.63 to 0.08) indicate a nonsignificant effect for Actigard, specifying a t distribution resulted in a significant, albeit variable, effect for this product (CRI: -0.73 to -0.10). These results confirm the sensitivity of the analytical outcome (i.e., the posterior distribution) to the choice of prior in Bayesian meta-analyses involving a limited number of studies. We review some pertinent literature on more advanced topics, including modeling of among-study heterogeneity, publication bias, analyses involving a limited number of studies, and methods for dealing with missing data, and show how these issues can be approached in a Bayesian framework. Bayesian meta-analysis can readily include information not easily incorporated in classical methods, and allows for a full evaluation of competing models. Given the power and flexibility of Bayesian methods, we expect them to become widely adopted for meta-analysis of plant pathology studies.
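A minimal sketch of the simple random-effects model with vague priors, written in PyMC; the effect sizes and standard errors below are invented placeholders, not the fire blight data. Swapping pm.Normal for pm.StudentT on the effects would give the Student's t variant discussed in the abstract.

```python
import numpy as np
import pymc as pm

y = np.array([-0.41, -0.22, -0.35, -0.10, -0.50])   # hypothetical log response ratios
se = np.array([0.15, 0.20, 0.18, 0.25, 0.22])        # hypothetical standard errors

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)          # vague prior on mean effect
    tau = pm.HalfNormal("tau", sigma=1.0)             # among-study heterogeneity
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=len(y))
    pm.Normal("obs", mu=theta, sigma=se, observed=y)  # within-study likelihood
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)

print(idata.posterior["mu"].mean().item())            # posterior mean effect
```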
Application of singular value decomposition to structural dynamics systems with constraints
NASA Technical Reports Server (NTRS)
Juang, J.-N.; Pinson, L. D.
1985-01-01
Singular value decomposition is used to construct a coordinate transformation for a linear dynamic system subject to linear, homogeneous constraint equations. The method is compared with two commonly used methods, namely, classical Gaussian elimination and the Walton-Steeves approach. Although the classical method requires fewer numerical operations, the singular value decomposition method is more accurate and more convenient for eliminating the dependent coordinates. Numerical examples are presented to demonstrate the application of the method.
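A hedged numpy sketch of the core step: the SVD of the constraint matrix supplies an orthonormal basis for its null space, which transforms the mass and stiffness matrices to independent coordinates. The matrices below are random toys, not a structural model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 2                                    # 6 coordinates, 2 constraint equations
M = np.diag(rng.uniform(1.0, 2.0, n))          # toy mass matrix
K = np.diag(rng.uniform(10.0, 20.0, n))        # toy stiffness matrix
C = rng.standard_normal((m, n))                # linear homogeneous constraints C @ q = 0

U, s, Vt = np.linalg.svd(C)
T = Vt[m:].T                                   # columns span null(C): C @ T ~ 0

Mr = T.T @ M @ T                               # reduced mass matrix
Kr = T.T @ K @ T                               # reduced stiffness matrix
w2 = np.linalg.eigvals(np.linalg.solve(Mr, Kr))
print(np.sort(w2.real))                        # squared frequencies of constrained system
```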
Morabia, Alfredo
2015-03-18
Before World War II, epidemiology was a small discipline, practiced by a handful of people working mostly in the United Kingdom and in the United States. Today it is practiced by tens of thousands of people on all continents. Between 1945 and 1965, during what is known as its "classical" phase, epidemiology became recognized as a major academic discipline in medicine and public health. On the basis of a review of the historical evidence, this article examines the extent to which classical epidemiology has been a golden age of an action-driven, problem-solving science, in which epidemiologists were less concerned with the sophistication of their methods than with the societal consequences of their work. It also discusses whether the paucity of methods stymied or boosted classical epidemiology's ability to convince political and financial agencies of the need to intervene in order to improve the health of the people.
Report on the Implementation of Homogeneous Nucleation Scheme in MARMOT-based Phase Field Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yulan; Hu, Shenyang Y.; Sun, Xin
2013-09-30
In this report, we summarized our effort in developing mesoscale phase field models for predicting precipitation kinetics in alloys during thermal aging and/or under irradiation in nuclear reactors. The first part focused on developing a method to predict the thermodynamic properties of critical nuclei, such as their sizes and concentration profiles, and the nucleation barrier. These properties are crucial for quantitative simulations of precipitate evolution kinetics with phase field models. The Fe-Cr alloy was chosen as a model alloy because it has reliable thermodynamic and kinetic data and is an important structural material in nuclear reactors. A constrained shrinking dimer dynamics (CSDD) method was developed to search for the minimum energy path during nucleation. With this method we are able to predict the concentration profiles of the critical nuclei of Cr-rich precipitates and the nucleation energy barriers. Simulations showed that the Cr concentration distribution in the critical nucleus strongly depends on the overall Cr concentration as well as on temperature. The Cr concentration inside the critical nucleus is much smaller than the equilibrium concentration calculated from the equilibrium phase diagram. This implies that a non-classical nucleation theory should be used to deal with the nucleation of Cr precipitates in Fe-Cr alloys. The growth kinetics of both classical and non-classical nuclei was investigated by the phase field approach. A number of interesting phenomena were observed in the simulations: 1) a critical classical nucleus first shrinks toward its non-classical counterpart and then grows; 2) a non-classical nucleus has much slower growth kinetics at its early growth stage compared to diffusion-controlled growth kinetics; and 3) a critical classical nucleus grows faster at the early growth stage than the non-classical nucleus. All of these results demonstrated that it is critical to introduce the correct critical nuclei into phase field modeling in order to correctly capture the kinetics of precipitation. In most alloys the matrix phase and precipitate phase have different concentrations as well as different crystal structures. For example, Cu precipitates in Fe-Cu alloys have the fcc crystal structure while the matrix Fe-Cu solid solution has the bcc structure at low temperature. The WBM model and KimS model, where both concentrations and order parameters are chosen to describe the microstructures, are commonly used to model precipitation in such alloys. The WBM and KimS models have not yet been implemented in Marmot. In the second part of this report, we focused on implementing the WBM and KimS models in Marmot. Fe-Cu alloys, which are important structural materials in nuclear reactors, were taken as the model alloys to test the models.
Integration of heterogeneous data for classification in hyperspectral satellite imagery
NASA Astrophysics Data System (ADS)
Benedetto, J.; Czaja, W.; Dobrosotskaya, J.; Doster, T.; Duke, K.; Gillis, D.
2012-06-01
As new remote sensing modalities emerge, it becomes increasingly important to find more suitable algorithms for fusion and integration of different data types for the purposes of target/anomaly detection and classification. Typical techniques that deal with this problem are based on performing detection/classification/segmentation separately in chosen modalities, and then integrating the resulting outcomes into a more complete picture. In this paper we provide a broad analysis of a new approach, based on creating fused representations of the multimodal data, which then can be subjected to analysis by means of the state-of-the-art classifiers or detectors. In this scenario we shall consider the hyperspectral imagery combined with spatial information. Our approach involves machine learning techniques based on analysis of joint data-dependent graphs and their associated diffusion kernels. Then, the significant eigenvectors of the derived fused graph Laplace operator form the new representation, which provides integrated features from the heterogeneous input data. We compare these fused approaches with analysis of integrated outputs of spatial and spectral graph methods.
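A small sketch of the fusion idea under stated assumptions (synthetic features, an arbitrary spatial weighting factor, a heat-kernel affinity): concatenate spectral and spatial features, build a joint graph, and take the leading Laplacian eigenvectors as the fused representation.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
spectral = rng.random((300, 20))        # 300 pixels, 20 hypothetical bands
xy = rng.random((300, 2))               # pixel coordinates
X = np.hstack([spectral, 5.0 * xy])     # assumed spatial weighting factor

W = kneighbors_graph(X, n_neighbors=8, mode="distance").toarray()
W = np.maximum(W, W.T)                                            # symmetrize
W[W > 0] = np.exp(-(W[W > 0] ** 2) / np.median(W[W > 0]) ** 2)    # heat kernel

L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian
vals, vecs = np.linalg.eigh(L)
fused = vecs[:, 1:6]                    # leading nontrivial eigenvectors = fused features
print(fused.shape)                      # feed these to any standard classifier
```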
Biogeochemical behaviour and bioremediation of uranium in waters of abandoned mines.
Mkandawire, Martin
2013-11-01
The discharges of uranium and associated radionuclides as well as heavy metals and metalloids from waste and tailing dumps in abandoned uranium mining and processing sites pose contamination risks to surface and groundwater. Although many new mines are being planned for nuclear energy purposes, most of the abandoned uranium mines are a legacy of the uranium production that fuelled the arms race during the cold war of the last century. Since the end of the cold war, there have been efforts to rehabilitate the mining sites, initially using classical remediation techniques based on intensive chemical and civil engineering. Recently, bioremediation technology has been sought as an alternative to the classical approach for reasons that include: (a) the high number of sites requiring remediation; (b) the economic implications of running and maintaining the facilities due to high energy and workforce demands; and (c) the pattern and characteristics of contaminant discharges in most of the former uranium mining and processing sites, which prevent the use of classical methods. This review discusses risks of uranium contamination from abandoned uranium mines from the biogeochemical point of view and the potential and limitations of uranium bioremediation techniques as an alternative to the classical approach in abandoned uranium mining and processing sites.
Magnetic resonance image segmentation using multifractal techniques
NASA Astrophysics Data System (ADS)
Yu, Yue-e.; Wang, Fang; Liu, Li-lin
2015-11-01
In order to delineate target regions in magnetic resonance images (MRI) with diseases, the classical multifractal spectrum (MFS)-based segmentation method and the latest multifractal detrended fluctuation spectrum (MF-DFS)-based segmentation method are employed in our study. One of our main conclusions from the experiments is that both multifractal-based methods are workable for handling MRIs. The best result is obtained by the MF-DFS-based method using Lh10 as the local characteristic. The anti-noise experiments also support this conclusion. This interesting finding shows that the features of MRIs are better represented by the strong fluctuations than by the weak fluctuations. By comparing the multifractal nature of lesion and non-lesion areas on the basis of the segmentation results, another interesting finding is that the fluctuation of the gray values in the lesion area is much more severe than that in the non-lesion area.
Finite-element grid improvement by minimization of stiffness matrix trace
NASA Technical Reports Server (NTRS)
Kittur, Madan G.; Huston, Ronald L.; Oswald, Fred B.
1989-01-01
A new and simple method of finite-element grid improvement is presented. The objective is to improve the accuracy of the analysis. The procedure is based on a minimization of the trace of the stiffness matrix. For a broad class of problems this minimization is seen to be equivalent to minimizing the potential energy. The method is illustrated with the classical tapered bar problem examined earlier by Prager and Masur. Identical results are obtained.
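A hedged sketch of the procedure on a 1-D tapered bar, with an off-the-shelf scipy optimizer standing in for whatever scheme the paper uses and an assumed linear taper law: interior node positions are adjusted to minimize the trace of the assembled stiffness matrix.

```python
import numpy as np
from scipy.optimize import minimize

E, L = 1.0, 1.0
area = lambda x: 1.0 - 0.5 * x                   # assumed linear taper

def stiffness_trace(interior):
    nodes = np.concatenate(([0.0], np.sort(interior), [L]))
    lengths = np.diff(nodes)
    if np.any(lengths < 1e-3):                   # keep the mesh valid
        return 1e9
    mids = 0.5 * (nodes[:-1] + nodes[1:])
    # element stiffness E*A/l; each element adds its k twice on the diagonal
    return np.sum(2.0 * E * area(mids) / lengths)

x0 = np.linspace(0, L, 6)[1:-1]                  # 4 interior nodes, uniform start
res = minimize(stiffness_trace, x0, method="Nelder-Mead")
print(np.round(np.sort(res.x), 4))               # optimized grid: nodes shift toward the thin end
```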
Methodological issues underlying multiple decrement life table analysis.
Mode, C J; Avery, R C; Littman, G S; Potter, R G
1977-02-01
In this paper, the actuarial method of multiple decrement life table analysis of censored, longitudinal data is examined. The discussion is organized in terms of the first segment of usage of an intrauterine device. Weaknesses of the actuarial approach are pointed out, and an alternative approach, based on the classical model of competing risks, is proposed. Finally, the actuarial and the alternative method of analyzing censored data are compared, using data from the Taichung Medical Study on Intrauterine Devices.
Schmid, Matthias; Küchenhoff, Helmut; Hoerauf, Achim; Tutz, Gerhard
2016-02-28
Survival trees are a popular alternative to parametric survival modeling when there are interactions between the predictor variables or when the aim is to stratify patients into prognostic subgroups. A limitation of classical survival tree methodology is that most algorithms for tree construction are designed for continuous outcome variables. Hence, classical methods might not be appropriate if failure time data are measured on a discrete time scale (as is often the case in longitudinal studies where data are collected, e.g., quarterly or yearly). To address this issue, we develop a method for discrete survival tree construction. The proposed technique is based on the result that the likelihood of a discrete survival model is equivalent to the likelihood of a regression model for binary outcome data. Hence, we modify tree construction methods for binary outcomes such that they result in optimized partitions for the estimation of discrete hazard functions. By applying the proposed method to data from a randomized trial in patients with filarial lymphedema, we demonstrate how discrete survival trees can be used to identify clinically relevant patient groups with similar survival behavior. Copyright © 2015 John Wiley & Sons, Ltd.
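A minimal sketch of the key reduction, assuming synthetic data and sklearn's standard classification tree in place of the authors' modified split criteria: discrete failure times are expanded into person-period records with a binary event indicator, and the tree's leaf proportions estimate the discrete hazard.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)                 # one covariate
time = rng.integers(1, 6, size=n)      # discrete failure/censoring time
event = rng.integers(0, 2, size=n)     # 1 = failure, 0 = censored

rows = []
for xi, ti, ei in zip(x, time, event):
    for t in range(1, ti + 1):         # one record per period at risk
        rows.append({"x": xi, "period": t, "y": int(ei and t == ti)})
pp = pd.DataFrame(rows)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=30)
tree.fit(pp[["x", "period"]], pp["y"])
# predicted discrete hazard h(t | x) for a new subject across periods:
grid = pd.DataFrame({"x": 0.5, "period": range(1, 6)})
print(tree.predict_proba(grid)[:, 1])
```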
Functional imaging with low-resolution brain electromagnetic tomography (LORETA): a review.
Pascual-Marqui, R D; Esslen, M; Kochi, K; Lehmann, D
2002-01-01
This paper reviews several recent publications that have successfully used the functional brain imaging method known as LORETA. Emphasis is placed on the electrophysiological and neuroanatomical basis of the method, on the localization properties of the method, and on the validation of the method in real experimental human data. Papers that criticize LORETA are briefly discussed. LORETA publications in the 1994-1997 period based localization inference on images of raw electric neuronal activity. In 1998, a series of papers appeared that based localization inference on the statistical parametric mapping methodology applied to high-time resolution LORETA images. Starting in 1999, quantitative neuroanatomy was added to the methodology, based on the digitized Talairach atlas provided by the Brain Imaging Centre, Montreal Neurological Institute. The combination of these methodological developments has placed LORETA at a level that compares favorably to the more classical functional imaging methods, such as PET and fMRI.
A combined emitter threat assessment method based on ICW-RCM
NASA Astrophysics Data System (ADS)
Zhang, Ying; Wang, Hongwei; Guo, Xiaotao; Wang, Yubing
2017-08-01
Considering that traditional emitter threat assessment methods have difficulty intuitively reflecting the degree of threat posed by a target and suffer from deficiencies in real-time performance and complexity, an algorithm for combined emitter threat assessment based on ICW-RCM (improved combination weighting method, ICW; radar chart method, RCM) is proposed. Coarse sorting is integrated with fine sorting in the combined threat assessment: the emitter threat levels are first sorted roughly according to radar operation mode, reducing the task priority of low-threat emitters; then, on the basis of ICW-RCM, emitters with the same radar operation mode are sorted finely. Finally, the emitter threat assessment results are obtained through the coarse and fine sorting. Simulation analyses show the correctness and effectiveness of this algorithm. Compared with the classical method of emitter threat assessment based on CW-RCM, the algorithm is visually intuitive and works quickly with lower complexity.
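A hedged sketch of a combination-weighting step of this general kind: entropy (objective) weights blended with assumed expert (subjective) weights, followed by a weighted-sum score standing in for the radar-chart polygon feature. The threat indicators and numbers are invented for illustration.

```python
import numpy as np

# rows = emitters, cols = normalized threat indicators in [0, 1]
A = np.array([[0.9, 0.6, 0.8],
              [0.4, 0.9, 0.5],
              [0.7, 0.3, 0.6]])

P = A / A.sum(axis=0)                               # column-wise proportions
ent = -np.sum(P * np.log(P), axis=0) / np.log(len(A))
w_obj = (1 - ent) / (1 - ent).sum()                 # entropy (objective) weights
w_sub = np.array([0.5, 0.3, 0.2])                   # assumed expert (subjective) weights
w = 0.5 * w_obj + 0.5 * w_sub                       # simple blend (the combination step)

score = (A * w).sum(axis=1)                         # weighted-sum stand-in for polygon area
print(np.argsort(score)[::-1])                      # emitters ranked by threat
```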
Visual attention based bag-of-words model for image classification
NASA Astrophysics Data System (ADS)
Wang, Qiwei; Wan, Shouhong; Yue, Lihua; Wang, Che
2014-04-01
Bag-of-words is a classical method for image classification. The core problems are how to count the frequency of the visual words and which visual words to select. In this paper, we propose a visual attention based bag-of-words model (VABOW model) for the image classification task. The VABOW model utilizes a visual attention method to generate a saliency map, and uses the saliency map as a weighting matrix to guide the counting of visual word frequencies. On the other hand, the VABOW model combines shape, color, and texture cues and uses an L1-regularized logistic regression method to select the most relevant and most efficient features. We compare our approach with the traditional bag-of-words based method on two datasets, and the results show that our VABOW model outperforms the state-of-the-art method for image classification.
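A minimal sketch of the saliency-weighted counting step, with synthetic descriptors and saliency map: each local descriptor votes into the visual-word histogram with the saliency value at its location instead of a unit count.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
desc = rng.random((400, 32))                 # 400 local descriptors
locs = rng.integers(0, 64, size=(400, 2))    # their (row, col) positions
saliency = rng.random((64, 64))              # stand-in visual attention map

vocab = KMeans(n_clusters=20, n_init=5, random_state=0).fit(desc)
words = vocab.predict(desc)

hist = np.zeros(20)
for w, (r, c) in zip(words, locs):
    hist[w] += saliency[r, c]                # weighted vote instead of +1
hist /= hist.sum()
print(hist.round(3))                         # image signature for the classifier
```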
Approximation of Nash equilibria and the network community structure detection problem
2017-01-01
Game theory based methods designed to solve the problem of community structure detection in complex networks have emerged in recent years as an alternative to classical and optimization based approaches. The Mixed Nash Extremal Optimization uses a generative relation for the characterization of Nash equilibria to identify the community structure of a network by converting the problem into a non-cooperative game. This paper proposes a method to enhance this algorithm by reducing the number of payoff function evaluations. Numerical experiments performed on synthetic and real-world networks show that this approach is efficient, with results better or just as good as other state-of-the-art methods. PMID:28467496
Line mixing effects in isotropic Raman spectra of pure N₂: A classical trajectory study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanov, Sergey V., E-mail: serg.vict.ivanov@gmail.com; Boulet, Christian; Buzykin, Oleg G.
2014-11-14
Line mixing effects in the Q branch of pure N₂ isotropic Raman scattering are studied at room temperature using a classical trajectory method. It is the first study using an extended modified version of Gordon's classical theory of impact broadening and shift of rovibrational lines. The whole relaxation matrix is calculated using an exact 3D classical trajectory method for binary collisions of rigid N₂ molecules employing the most up-to-date intermolecular potential energy surface (PES). A simple symmetrizing procedure is employed to improve the off-diagonal cross-sections so that they obey exactly the principle of detailed balance. The adequacy of the results is confirmed by the sum rule. The comparison is made with available experimental data as well as with benchmark fully quantum close coupling [F. Thibault, C. Boulet, and Q. Ma, J. Chem. Phys. 140, 044303 (2014)] and refined semi-classical Robert-Bonamy [C. Boulet, Q. Ma, and F. Thibault, J. Chem. Phys. 140, 084310 (2014)] results. All calculations (classical, quantum, and semi-classical) were made using the same PES. The agreement between classical and quantum relaxation matrices is excellent, opening the way to the analysis of more complex molecular systems.
Quantum realization of the bilinear interpolation method for NEQR.
Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Ian, Hou
2017-05-31
In recent years, quantum image processing has been one of the most active fields in quantum computation and quantum information. Image scaling, as a kind of geometric image transformation, has been widely studied and applied in classical image processing; however, a quantum version has not existed. This paper is concerned with the feasibility of classical bilinear interpolation based on the novel enhanced quantum image representation (NEQR). Firstly, the feasibility of bilinear interpolation for NEQR is proven. Then the concrete quantum circuits of bilinear interpolation, including scaling up and scaling down for NEQR, are given by using multiple Control-NOT operations, the special adding-one operation, the reverse parallel adder, parallel subtractor, multiplier, and division operations. Finally, the complexity analysis of the quantum network circuit based on the basic quantum gates is deduced. Simulation results show that the scaled-up image using bilinear interpolation is clearer and less distorted than with nearest interpolation.
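For reference, a numpy sketch of the classical bilinear rule that the quantum circuits reformulate: each output pixel blends its four neighbors with weights given by the fractional coordinates.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    in_h, in_w = img.shape
    r = np.linspace(0, in_h - 1, out_h)
    c = np.linspace(0, in_w - 1, out_w)
    r0, c0 = np.floor(r).astype(int), np.floor(c).astype(int)
    r1, c1 = np.minimum(r0 + 1, in_h - 1), np.minimum(c0 + 1, in_w - 1)
    fr, fc = (r - r0)[:, None], (c - c0)[None, :]
    top = (1 - fc) * img[np.ix_(r0, c0)] + fc * img[np.ix_(r0, c1)]
    bot = (1 - fc) * img[np.ix_(r1, c0)] + fc * img[np.ix_(r1, c1)]
    return (1 - fr) * top + fr * bot

img = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear_resize(img, 8, 8).shape)      # scaled up 2x
```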
NASA Astrophysics Data System (ADS)
Rezaei Kivi, Araz; Azizi, Saber; Norouzi, Peyman
2017-12-01
In this paper, the nonlinear size-dependent static and dynamic behavior of an electrostatically actuated nano-beam is investigated. A fully clamped nano-beam is considered for modeling the deformable electrode of the NEMS. The governing differential equation of motion is derived using the Hamiltonian principle based on couple stress theory, a non-classical theory that accounts for length scale effects. The nonlinear partial differential equation of motion is discretized into nonlinear Duffing-type ODEs using the Galerkin method. Static and dynamic pull-in instabilities obtained by both the classical theory and MCST are compared. In the second stage of the analysis, a shooting technique is utilized to obtain the frequency response curve and to capture the periodic solutions of the motion; the stability of the periodic solutions is determined by Floquet theory. The nonlinear dynamic behavior of the deformable electrode due to AC harmonic actuation, together with size dependency, is investigated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Weiwei; Domcke, Wolfgang; Farantos, Stavros C.
A trajectory method of calculating tunneling probabilities from phase integrals along straight line tunneling paths, originally suggested by Makri and Miller [J. Chem. Phys. 91, 4026 (1989)] and recently implemented by Truhlar and co-workers [Chem. Sci. 5, 2091 (2014)], is tested for one- and two-dimensional ab initio based potentials describing hydrogen dissociation in the ¹B₁ excited electronic state of pyrrole. The primary observables are the tunneling rates in a progression of bending vibrational states lying below the dissociation barrier and their isotope dependences. Several initial ensembles of classical trajectories have been considered, corresponding to the quasiclassical and the quantum mechanical samplings of the initial conditions. It is found that the sampling based on the fixed energy Wigner density gives the best agreement with the quantum mechanical dissociation rates.
Cognitive Radios Exploiting Gray Spaces via Compressed Sensing
NASA Astrophysics Data System (ADS)
Wieruch, Dennis; Jung, Peter; Wirth, Thomas; Dekorsy, Armin; Haustein, Thomas
2016-07-01
We suggest an interweave cognitive radio system with a gray space detector, which properly identifies a small fraction of unused resources within an active band of a primary user system such as 3GPP LTE. The gray space detector can therefore cope with frequency fading holes and distinguish them from inactive resources. Different approaches to the gray space detector are investigated: the conventional reduced-rank least squares method as well as the compressed sensing-based orthogonal matching pursuit and basis pursuit denoising algorithms. In addition, the gray space detector is compared with the classical energy detector. Simulation results present the receiver operating characteristic at several SNRs and the detection performance over further aspects such as base station system load for practical false alarm rates. The results show that, especially at practical false alarm rates, the compressed sensing algorithms are more suitable than the classical energy detector and the reduced-rank least squares approach.
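A hedged numpy sketch of orthogonal matching pursuit, one of the compressed-sensing recovery algorithms the detector uses; the random dictionary and sparsity level are illustrative, not the paper's LTE-specific setup.

```python
import numpy as np

def omp(Phi, y, k):
    """Greedy sparse recovery: pick the most correlated atom, re-fit, repeat."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = np.argmax(np.abs(Phi.T @ residual))          # most correlated atom
        support.append(j)
        x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ x_s             # re-fit on the support
    x = np.zeros(Phi.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(1)
Phi = rng.standard_normal((40, 120))
Phi /= np.linalg.norm(Phi, axis=0)
x_true = np.zeros(120); x_true[[5, 40, 77]] = [1.0, -0.8, 0.5]
y = Phi @ x_true
print(np.nonzero(omp(Phi, y, 3))[0])                     # recovers the support {5, 40, 77}
```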
Plasmonics of 2D Nanomaterials: Properties and Applications
Li, Yu; Li, Ziwei; Chi, Cheng; Shan, Hangyong; Zheng, Liheng
2017-01-01
Plasmonics has developed for decades in the field of condensed matter physics and optics. Based on the classical Maxwell theory, collective excitations exhibit profound light‐matter interaction properties beyond classical physics in many material systems. With the development of nanofabrication and characterization technology, ultra‐thin two‐dimensional (2D) nanomaterials have attracted tremendous interest and show exceptional plasmonic properties. Here, we elaborate on the advanced optical properties of 2D materials, especially graphene and monolayer molybdenum disulfide (MoS2), review the plasmonic properties of graphene, and discuss the coupling effect in hybrid 2D nanomaterials. Then, the plasmonic tuning methods of 2D nanomaterials are presented, from theoretical models to experimental investigations. Furthermore, we reveal potential applications in photocatalysis, photovoltaics, and photodetection and, based on the development of 2D nanomaterials, offer a prospect for future theoretical physics and practical applications. PMID:28852608
Research on Bayes matting algorithm based on Gaussian mixture model
NASA Astrophysics Data System (ADS)
Quan, Wei; Jiang, Shan; Han, Cheng; Zhang, Chao; Jiang, Zhengang
2015-12-01
The digital matting problem is a classical problem of imaging. It aims at separating non-rectangular foreground objects from a background image and compositing them with a new background image. Accurate matting determines the quality of the composited image. A Bayesian matting algorithm based on a Gaussian mixture model is proposed to solve this matting problem. Firstly, the traditional Bayesian framework is improved by introducing a Gaussian mixture model. Then, a weighting factor is added in order to suppress the noise in the composited images. Finally, the effect is further improved by regulating the user's input. This algorithm is applied to matting jobs on classical images, and the results are compared to the traditional Bayesian method. It is shown that our algorithm performs better on fine details such as hair and eliminates noise well. It is also very effective for objects of interest with intricate boundaries.
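A minimal sketch of the modeling step, assuming synthetic color samples and sklearn's GaussianMixture: mixtures fitted to foreground and background colors give likelihoods from which a crude alpha estimate can be formed (the full algorithm instead solves jointly for foreground, background, and alpha in a MAP framework).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
fg = rng.normal([0.8, 0.6, 0.4], 0.05, size=(300, 3))   # sampled foreground colors
bg = rng.normal([0.2, 0.3, 0.7], 0.05, size=(300, 3))   # sampled background colors

gmm_f = GaussianMixture(n_components=3, random_state=0).fit(fg)
gmm_b = GaussianMixture(n_components=3, random_state=0).fit(bg)

c = np.array([[0.7, 0.55, 0.45]])        # an unknown pixel in the trimap region
lf, lb = gmm_f.score_samples(c)[0], gmm_b.score_samples(c)[0]
alpha = 1.0 / (1.0 + np.exp(lb - lf))    # logistic of the log-likelihood ratio
print(round(float(alpha), 3))            # crude alpha guess from the two mixtures
```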
Based on Artificial Neural Network to Realize K-Parameter Analysis of Vehicle Air Spring System
NASA Astrophysics Data System (ADS)
Hung, San-Shan; Hsu, Chia-Ning; Hwang, Chang-Chou; Chen, Wen-Jan
2017-10-01
In recent years, as air-spring control techniques have matured, air-spring suspension systems have become able to replace the classical vehicle suspension system. Depending on the internal pressure variation of the air-spring, the stiffness and the damping factor can be adjusted. Because the air-spring has highly nonlinear characteristics, it is not easy to construct a classical controller that controls the air-spring effectively. This paper proposes a feasible control strategy based on an Artificial Neural Network. The neural network is designed and trained offline on air-spring data collected at different initial pressures and loads, and the resulting model predicts the air-spring stiffness parameter. Finally, by adjusting the air-spring internal pressure to change the K-parameter, good dynamic control performance of the air-spring suspension is realized.
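A hedged sketch of the offline learning step with a small sklearn network: (pressure, load) pairs map to a stiffness parameter K generated here from an assumed toy nonlinear law, not measured air-spring data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
pressure = rng.uniform(2.0, 8.0, 500)            # bar (assumed range)
load = rng.uniform(200.0, 800.0, 500)            # kg (assumed range)
K = 15.0 * pressure**1.3 + 0.02 * load + rng.normal(0, 1.0, 500)  # toy nonlinear law

X = np.column_stack([pressure, load])
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0),
)
model.fit(X, K)
print(model.predict([[5.0, 450.0]]))             # predicted K-parameter
```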
Entanglement-Based Machine Learning on a Quantum Computer
NASA Astrophysics Data System (ADS)
Cai, X.-D.; Wu, D.; Su, Z.-E.; Chen, M.-C.; Wang, X.-L.; Li, Li; Liu, N.-L.; Lu, C.-Y.; Pan, J.-W.
2015-03-01
Machine learning, a branch of artificial intelligence, learns from previous experience to optimize performance, which is ubiquitous in various fields such as computer sciences, financial analysis, robotics, and bioinformatics. A challenge is that machine learning with the rapidly growing "big data" could become intractable for classical computers. Recently, quantum machine learning algorithms [Lloyd, Mohseni, and Rebentrost, arXiv.1307.0411] were proposed which could offer an exponential speedup over classical algorithms. Here, we report the first experimental entanglement-based classification of two-, four-, and eight-dimensional vectors to different clusters using a small-scale photonic quantum computer, which are then used to implement supervised and unsupervised machine learning. The results demonstrate the working principle of using quantum computers to manipulate and classify high-dimensional vectors, the core mathematical routine in machine learning. The method can, in principle, be scaled to larger numbers of qubits, and may provide a new route to accelerate machine learning.
NASA Astrophysics Data System (ADS)
Lobb, Dan
2017-11-01
One of the most significant problems for space-based spectro-radiometer systems observing Earth from space in the solar spectral band (UV through short-wave IR) is the achievement of the required absolute radiometric accuracy. Classical methods, for example using one or more sun-illuminated diffusers as reflectance standards, do not generally provide a way to monitor degradation of the in-flight reference after pre-flight characterisation. Ratioing methods have been proposed that provide monitoring of degradation of solar attenuators in flight, thus in principle allowing much higher confidence in absolute response calibration. Two example methods are described. It is shown that such systems can be designed with relatively small size and without significant additions to the complexity of flight hardware.
Height-Based Indices of Pubertal Timing in Male Adolescents
ERIC Educational Resources Information Center
Khairullah, Ammar; May, Margaret T.; Tilling, Kate; Howe, Laura D.; Leonard, Gabriel; Perrond, Michel; Richer, Louis; Veillette, Suzanne; Pausova, Zdenka; Paus, Tomáš
2013-01-01
It is important to account for timing of puberty when studying the adolescent brain and cognition. The use of classical methods for assessing pubertal status may not be feasible in some studies, especially in male adolescents. Using data from a sample of 478 males from a longitudinal birth cohort, we describe the calculations of three independent…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fedorenko, V.K.; Sergeev, V.V.; Shkanov, I.N.
The influence of the structural, phase, and size factors, and the bonding of hard tungsten alloys to titanium alloy bases on the mechanism by which the system fails under alternating loads is studied. The failure mechanism of materials with detonation coatings applied by different methods is discussed in regard to the classical sequence of fatigue phenomena, i.e., hardening-softening and crack nucleation and growth.
How Often Do Subscores Have Added Value? Results from Operational and Simulated Data
ERIC Educational Resources Information Center
Sinharay, Sandip
2010-01-01
Recently, there has been an increasing level of interest in subscores for their potential diagnostic value. Haberman suggested a method based on classical test theory to determine whether subscores have added value over total scores. In this article I first provide a rich collection of results regarding when subscores were found to have added…
ERIC Educational Resources Information Center
Shinn, Glen C.; Briers, Gary; Baker, Matt
2008-01-01
In this study, the researchers used a classical Delphi method to re-examine the conceptual framework, definition, and knowledge base of the field. Seventeen engaged scholars, each representing the expert agricultural education community, reached consensus on defining the field of study, 10 knowledge domains, and 67 knowledge objects. The Delphi…
A Longitudinal Study Assessing the Microsoft Office Skills Course
ERIC Educational Resources Information Center
Carpenter, Donald A.; McGinnis, Denise; Slauson, Gayla Jo; Snyder, Johnny
2013-01-01
This paper explains a four-year longitudinal study of the assessment process for a Microsoft Office skills course. It examines whether there is an increase in students' knowledge based on responses to pre- and post-surveys that asked students to evaluate how well they can do particular tasks. Classical classroom teaching methods were used in the…
Fu, Gregory C
2017-07-26
Classical methods for achieving nucleophilic substitutions of alkyl electrophiles (SN1 and SN2) have limited scope and are not generally amenable to enantioselective variants that employ readily available racemic electrophiles. Radical-based pathways catalyzed by chiral transition-metal complexes provide an attractive approach to addressing these limitations.
2017-01-01
Classical methods for achieving nucleophilic substitutions of alkyl electrophiles (SN1 and SN2) have limited scope and are not generally amenable to enantioselective variants that employ readily available racemic electrophiles. Radical-based pathways catalyzed by chiral transition-metal complexes provide an attractive approach to addressing these limitations. PMID:28776010
Kim, Tae Hyung; Setsompop, Kawin; Haldar, Justin P.
2016-01-01
Purpose: Parallel imaging and partial Fourier acquisition are two classical approaches for accelerated MRI. Methods that combine these approaches often rely on prior knowledge of the image phase, but the need to obtain this prior information can place practical restrictions on the data acquisition strategy. In this work, we propose and evaluate SENSE-LORAKS, which enables combined parallel imaging and partial Fourier reconstruction without requiring prior phase information. Theory and Methods: The proposed formulation is based on combining the classical SENSE model for parallel imaging data with the more recent LORAKS framework for MR image reconstruction using low-rank matrix modeling. Previous LORAKS-based methods have successfully enabled calibrationless partial Fourier parallel MRI reconstruction, but have been most successful with nonuniform sampling strategies that may be hard to implement for certain applications. By combining LORAKS with SENSE, we enable highly-accelerated partial Fourier MRI reconstruction for a broader range of sampling trajectories, including widely-used calibrationless uniformly-undersampled trajectories. Results: Our empirical results with retrospectively undersampled datasets indicate that when SENSE-LORAKS reconstruction is combined with an appropriate k-space sampling trajectory, it can provide substantially better image quality at high acceleration rates relative to existing state-of-the-art reconstruction approaches. Conclusion: The SENSE-LORAKS framework provides promising new opportunities for highly-accelerated MRI. PMID:27037836
Quantum realization of the nearest neighbor value interpolation method for INEQR
NASA Astrophysics Data System (ADS)
Zhou, RiGui; Hu, WenWen; Luo, GaoFeng; Liu, XingAo; Fan, Ping
2018-07-01
This paper presents the nearest neighbor value (NNV) interpolation algorithm for the improved novel enhanced quantum representation of digital images (INEQR). It is necessary to use interpolation in image scaling because there is an increase or a decrease in the number of pixels. The difference between the proposed scheme and nearest neighbor interpolation is that the concept applied to estimate the missing pixel value is guided by the nearest value rather than the distance. Firstly, a sequence of quantum operations is predefined, such as cyclic shift transformations and the basic arithmetic operations. Then, the feasibility of the nearest neighbor value interpolation method for quantum images in INEQR is proven using the previously designed quantum operations. Furthermore, a quantum image scaling algorithm, in the form of circuits implementing NNV interpolation for INEQR, is constructed for the first time. The merit of the proposed INEQR circuits lies in their low complexity, which is achieved by utilizing the unique properties of quantum superposition and entanglement. Finally, simulation experiments involving different classical (i.e., conventional, non-quantum) images and scaling ratios are performed in MATLAB 2014b on a classical computer; the results demonstrate that the proposed interpolation method achieves higher resolution than nearest neighbor and bilinear interpolation.
In vitro dynamic model simulating the digestive tract of 6-month-old infants.
Passannanti, Francesca; Nigro, Federica; Gallo, Marianna; Tornatore, Fabio; Frasso, Annalisa; Saccone, Giulia; Budelli, Andrea; Barone, Maria V; Nigro, Roberto
2017-01-01
In vivo assays cannot always be conducted because of ethical reasons, technical constraints or costs, but a better understanding of the digestive process, especially in infants, could be of great help in preventing food-related pathologies and in developing new formulas with health benefits. In this context, in vitro dynamic systems to simulate human digestion and, in particular, infant digestion could become increasingly valuable. To simulate the digestive process through the use of a dynamic model of the infant gastroenteric apparatus to study the digestibility of starch-based infant foods. Using M.I.D.A (Model of an Infant Digestive Apparatus), the oral, gastric and intestinal digestibility of two starch-based products were measured: 1) rice starch mixed with distilled water and treated using two different sterilization methods (the classical method with a holding temperature of 121°C for 37 min and the HTST method with a holding temperature of 137°C for 70 sec) and 2) a rice cream with (premium product) or without (basic product) an aliquot of rice flour fermented by Lactobacillus paracasei CBA L74. After the digestion the foods were analyzed for the starch concentration, the amount of D-glucose released and the percentage of hydrolyzed starch. An in vitro dynamic system, which was referred to as M.I.D.A., was obtained. Using this system, the starch digestion occurred only during the oral and intestinal phase, as expected. The D-glucose released during the intestinal phase was different between the classical and HTST methods (0.795 grams for the HTST versus 0.512 for the classical product). The same analysis was performed for the basic and premium products. In this case, the premium product had a significant difference in terms of the starch hydrolysis percentage during the entire process. The M.I.D.A. system was able to digest simple starches and a more complex food in the correct compartments. In this study, better digestibility of the premium product was revealed.
[Diagnosis of tropical malaria by express-methods].
Popov, A F; Nikiforov, N D; Ivanis, V A; Barkun, S P; Sanin, B I; Fed'kina, L I
2004-01-01
Examination of a thick blood drop and of a blood smear for the presence of plasmodia is a classic and indisputable diagnostic test for tropical malaria. However, express-methods based on enzyme immunoassay have been introduced into health-care practice, primarily in developing and underdeveloped countries. Diagnosing tropical malaria by these methods enables rapid and reliable detection of Pl. falciparum in blood in non-laboratory settings. This is important because an untimely diagnosis of tropical malaria increases the risk of a lethal outcome.
NASA Technical Reports Server (NTRS)
Adams, Gaynor J; Dugan, Duane W
1952-01-01
A method of analysis based on slender-wing theory is developed to investigate the characteristics in roll of slender cruciform wings and wing-body combinations. The method makes use of the conformal mapping processes of classical hydrodynamics which transform the region outside a circle into the region outside an arbitrary arrangement of line segments intersecting at the origin. The method of analysis may be utilized to solve other slender cruciform wing-body problems involving arbitrarily assigned boundary conditions. (author)
Velopharyngeal Port Status during Classical Singing
ERIC Educational Resources Information Center
Tanner, Kristine; Roy, Nelson; Merrill, Ray M.; Power, David
2005-01-01
Purpose: This investigation was undertaken to examine the status of the velopharyngeal (VP) port during classical singing. Method: Using aeromechanical instrumentation, nasal airflow (mL/s), oral pressure (cm H[subscript 2]O), and VP orifice area estimates (cm[squared]) were studied in 10 classically trained sopranos during singing and speaking.…
Classical and quantum Big Brake cosmology for scalar field and tachyonic models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamenshchik, A. Yu.; Manti, S.
We study a relation between the cosmological singularities in classical and quantum theory, comparing the classical and quantum dynamics in some models possessing the Big Brake singularity: the model based on a scalar field and two models based on a tachyon-pseudo-tachyon field. It is shown that the effect of quantum avoidance is absent for the soft singularities of the Big Brake type while it is present for the Big Bang and Big Crunch singularities. Thus, there is some kind of classical-quantum correspondence, because soft singularities are traversable in classical cosmology, while the strong Big Bang and Big Crunch singularities are not traversable.
Multimodal Medical Image Fusion by Adaptive Manifold Filter.
Geng, Peng; Liu, Shuaiqi; Zhuang, Shanna
2015-01-01
Medical image fusion plays an important role in the diagnosis and treatment of diseases, for example in image-guided radiotherapy and surgery. Modified local contrast information is proposed to fuse multimodal medical images. Firstly, the adaptive manifold filter is introduced to filter the source images, providing the low-frequency part of the modified local contrast. Secondly, the modified spatial frequency of the source images is adopted as the high-frequency part of the modified local contrast. Finally, the pixel with the larger modified local contrast is selected into the fused image. The presented scheme outperforms the guided filter method in the spatial domain, the dual-tree complex wavelet transform-based method, the nonsubsampled contourlet transform-based method, and four classic fusion methods in terms of visual quality. Furthermore, the mutual information values obtained by the presented method are on average 55%, 41%, and 62% higher than those of the three methods, and the values of the edge-based similarity measure are on average 13%, 33%, and 14% higher than those of the three methods for the six pairs of source images.
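A minimal sketch of the choose-max rule under stated assumptions: a uniform filter stands in for the adaptive manifold filter (low-frequency part) and a gradient-energy measure for the modified spatial frequency; the pixel with the larger modified local contrast is selected.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(img, size=7):
    low = uniform_filter(img, size)                    # low-frequency part
    gx, gy = np.gradient(img)
    high = uniform_filter(np.hypot(gx, gy), size)      # spatial-frequency part
    return high / (np.abs(low) + 1e-6)                 # modified local contrast

rng = np.random.default_rng(0)
a, b = rng.random((64, 64)), rng.random((64, 64))      # stand-ins for CT / MRI slices
mask = local_contrast(a) >= local_contrast(b)
fused = np.where(mask, a, b)                           # pixel with larger contrast wins
print(fused.shape)
```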
Schmiedt, Hanno; Schlemmer, Stephan; Yurchenko, Sergey N.; Yachmenev, Andrey
2017-01-01
We report a new semi-classical method to compute highly excited rotational energy levels of an asymmetric-top molecule. The method forgoes the idea of a full quantum mechanical treatment of the ro-vibrational motion of the molecule. Instead, it employs a semi-classical Green's function approach to describe the rotational motion, while retaining a quantum mechanical description of the vibrations. Similar approaches have existed for some time, but the method proposed here has two novel features. First, inspired by the path integral method, periodic orbits in the phase space and tunneling paths are naturally obtained by means of molecular symmetry analysis. Second, the rigorous variational method is employed for the first time to describe the molecular vibrations. In addition, we present a new robust approach to generating rotational energy surfaces for vibrationally excited states; this is done in a fully quantum-mechanical, variational manner. The semi-classical approach of the present work is applied to calculating the energies of very highly excited rotational states and it dramatically reduces the computing time as well as the storage and memory requirements when compared to the fully quantum-mechanical variational approach. Test calculations for excited states of SO2 yield semi-classical energies in very good agreement with the available experimental data and the results of fully quantum-mechanical calculations. PMID:28000807
Phase synchronization based on a Dual-Tree Complex Wavelet Transform
NASA Astrophysics Data System (ADS)
Ferreira, Maria Teodora; Domingues, Margarete Oliveira; Macau, Elbert E. N.
2016-11-01
In this work, we show the applicability of our Discrete Complex Wavelet Approach (DCWA) to verifying the phenomenon of phase synchronization transition in two coupled chaotic Lorenz systems. DCWA is based on phase assignment from complex wavelet coefficients obtained by using a Dual-Tree Complex Wavelet Transform (DT-CWT). We analyzed two coupled chaotic Lorenz systems, aiming to detect the transition from non-phase synchronization to phase synchronization. In addition, we check how well the method detects periods of 2π phase-slips. In all experiments, DCWA is compared with classical phase detection methods such as the ones based on the arctangent and the Hilbert transform, showing a much better performance.
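A hedged sketch of the classical benchmark mentioned above: instantaneous phases from the Hilbert transform and their difference, with noisy sinusoids standing in for the coupled Lorenz systems; a bounded phase difference (no 2π slips) indicates phase synchronization.

```python
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 10, 4000)
s1 = np.sin(2 * np.pi * 3.0 * t) + 0.1 * np.random.randn(t.size)
s2 = np.sin(2 * np.pi * 3.0 * t + 0.8) + 0.1 * np.random.randn(t.size)

phi1 = np.unwrap(np.angle(hilbert(s1)))   # instantaneous phase of signal 1
phi2 = np.unwrap(np.angle(hilbert(s2)))   # instantaneous phase of signal 2
dphi = phi1 - phi2

# phase synchronization: the difference stays bounded (no 2*pi slips)
print(np.ptp(dphi) < 2 * np.pi)
```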
Valeriani, Federica; Agodi, Antonella; Casini, Beatrice; Cristina, Maria Luisa; D'Errico, Marcello Mario; Gianfranceschi, Gianluca; Liguori, Giorgio; Liguori, Renato; Mucci, Nicolina; Mura, Ida; Pasquarella, Cesira; Piana, Andrea; Sotgiu, Giovanni; Privitera, Gaetano; Protano, Carmela; Quattrocchi, Annalisa; Ripabelli, Giancarlo; Rossini, Angelo; Spagnolo, Anna Maria; Tamburro, Manuela; Tardivo, Stefano; Veronesi, Licia; Vitali, Matteo; Romano Spica, Vincenzo
2018-02-01
Reprocessing of endoscopes is key to preventing cross-infection after colonoscopy. Culture-based methods are recommended for monitoring, but alternative and rapid approaches are needed to improve surveillance and reduce turnover times. A molecular strategy based on detection of residual traces from gut microbiota was developed and tested using a multicenter survey. A simplified sampling and DNA extraction protocol using nylon-tipped flocked swabs was optimized. A multiplex real-time polymerase chain reaction (PCR) test was developed that targeted 6 bacterial genes that were amplified in 3 mixes. The method was validated by interlaboratory tests involving 5 reference laboratories. Colonoscopy devices (n = 111) were sampled in 10 Italian hospitals. Culture-based microbiology and metagenomic tests were performed to verify PCR data. The sampling method was easily applied in all 10 endoscopy units and the optimized DNA extraction and amplification protocol was successfully performed by all of the involved laboratories. This PCR-based method allowed identification of both contaminated (n = 59) and fully reprocessed endoscopes (n = 52) with high sensitivity (98%) and specificity (98%), within 3-4 hours, in contrast to the 24-72 hours needed for a classic microbiology test. Results were confirmed by next-generation sequencing and classic microbiology. A novel approach for monitoring reprocessing of colonoscopy devices was developed and successfully applied in a multicenter survey. The general principle of tracing biological fluids through microflora DNA amplification was successfully applied and may represent a promising approach for hospital hygiene. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
Trajectory-based understanding of the quantum-classical transition for barrier scattering
NASA Astrophysics Data System (ADS)
Chou, Chia-Chun
2018-06-01
The quantum-classical transition of wave packet barrier scattering is investigated using a hydrodynamic description in the framework of a nonlinear Schrödinger equation. The nonlinear equation provides a continuous description for the quantum-classical transition of physical systems by introducing a degree of quantumness. Based on the transition equation, the transition trajectory formalism is developed to establish the connection between classical and quantum trajectories. The quantum-classical transition is then analyzed for the scattering of a Gaussian wave packet from an Eckart barrier and the decay of a metastable state. Computational results for the evolution of the wave packet and the transmission probabilities indicate that classical results are recovered when the degree of quantumness tends to zero. Classical trajectories are in excellent agreement with the transition trajectories in the classical limit, except in some regions where transition trajectories cannot cross because of the single-valuedness of the transition wave function. As the computational results demonstrate, the process that the Planck constant tends to zero is equivalent to the gradual removal of quantum effects originating from the quantum potential. This study provides an insightful trajectory interpretation for the quantum-classical transition of wave packet barrier scattering.
Change detection of polarimetric SAR images based on the KummerU Distribution
NASA Astrophysics Data System (ADS)
Chen, Quan; Zou, Pengfei; Li, Zhen; Zhang, Ping
2014-11-01
In the field of PolSAR image segmentation, change detection, and classification, the classical Wishart distribution has been used for a long time, but it is especially suited to low-resolution SAR images, because with traditional sensors only a small number of scatterers are present in each resolution cell. With the improvement of SAR systems in recent years, the classical statistical models can therefore be reconsidered for the high resolution and polarimetric information contained in the images acquired by these advanced systems. In this study, a SAR image segmentation algorithm based on the level-set method with distance regularized level-set evolution (DRLSE) is applied to Envisat/ASAR single-polarization data and Radarsat-2 polarimetric images, respectively. The KummerU heterogeneous clutter model is used in the latter to overcome the homogeneity hypothesis at high-resolution cells. An enhanced distance regularized level-set evolution (DRLSE-E) is also applied in the latter to ensure accurate computation and stable level-set evolution. Finally, change detection based on four polarimetric Radarsat-2 time series images was carried out in the Genhe area of Inner Mongolia Autonomous Region, northeastern China, where a heavy flood occurred during the summer of 2013; the results show that the recommended segmentation method can detect changes in the watershed effectively.
Phase unwrapping using region-based markov random field model.
Dong, Ying; Ji, Jim
2010-01-01
Phase unwrapping is a classical problem in Magnetic Resonance Imaging (MRI), Interferometric Synthetic Aperture Radar and Sonar (InSAR/InSAS), fringe pattern analysis, and spectroscopy. Although many methods have been proposed to address this problem, robust and effective phase unwrapping remains a challenge. This paper presents a novel phase unwrapping method using a region-based Markov Random Field (MRF) model. Specifically, the phase image is segmented into regions within which the phase is not wrapped. Then, the phase image is unwrapped between different regions using an improved Highest Confidence First (HCF) algorithm to optimize the MRF model. The proposed method has desirable theoretical properties as well as an efficient implementation. Simulations and experimental results on MRI images show that the proposed method provides similar or improved phase unwrapping compared with the Phase Unwrapping MAx-flow/min-cut (PUMA) and ZpM methods.
Zero-point energy constraint in quasi-classical trajectory calculations.
Xie, Zhen; Bowman, Joel M
2006-04-27
A method to constrain the zero-point energy in quasi-classical trajectory calculations is proposed and applied to the Henon-Heiles system. The main idea of this method is to smoothly eliminate the coupling terms in the Hamiltonian as the energy of any mode falls below a specified value.
NASA Astrophysics Data System (ADS)
Mehn, Dora; Morasso, Carlo; Vanna, Renzo; Schiumarini, Domitilla; Bedoni, Marzia; Ciceri, Fabio; Gramatica, Furio
2014-03-01
The Wilms tumor gene (WT1) is a biomarker overexpressed in more than 90% of acute myeloid leukemia patients. Fast and sensitive detection of WT1 in blood samples would allow monitoring of minimal residual disease during clinical remission and would permit early detection of a potential relapse in acute myeloid leukemia. In this work, Surface Enhanced Raman Spectroscopy (SERS)-based detection of the WT1 sequence using bifunctional, magnetic core-gold shell nanoparticles is presented. The classical co-precipitation method was applied to generate magnetic nanoparticles, which were coated with a gold shell after modification with aminopropyltriethoxysilane and subsequent deposition of gold nanoparticle seeds. A simple hydroquinone-based reduction procedure was applied for shell growth in a water-based reaction mixture at room temperature. Thiolated ssDNA probes of the WT1 sequence were immobilized as capture oligonucleotides on the gold surface. Malachite green was applied both for testing the amplification performance of the core-shell colloidal SERS substrate and as the label dye of the target DNA sequence. The SERS enhancer efficacy of the core-shell nanomaterial was compared with that of classical spherical gold particles produced using the conventional citrate reduction method. The core-shell particles were found not only to provide an opportunity for facile separation in a heterogeneous reaction system but also to be superior in robustness as SERS enhancers.
Quantum approach to classical statistical mechanics.
Somma, R D; Batista, C D; Ortiz, G
2007-07-20
We present a new approach to study the thermodynamic properties of d-dimensional classical systems by reducing the problem to the computation of ground state properties of a d-dimensional quantum model. This classical-to-quantum mapping allows us to extend the scope of standard optimization methods by unifying them under a general framework. The quantum annealing method is naturally extended to simulate classical systems at finite temperatures. We derive the rates to assure convergence to the optimal thermodynamic state using the adiabatic theorem of quantum mechanics. For simulated and quantum annealing, we obtain the asymptotic rates T(t) ≈ pN/(k_B log t) and γ(t) ≈ (Nt)^(−c/N), for the temperature and magnetic field, respectively. Other annealing strategies are also discussed.
Research on transient thermal process of a friction brake during repetitive cycles of operation
NASA Astrophysics Data System (ADS)
Slavchev, Yanko; Dimitrov, Lubomir; Dimitrov, Yavor
2017-12-01
Simplified models are used in the classical engineering analyses of friction brake heating temperature during repetitive cycles of operation, mainly to determine the maximum and minimum brake temperatures. The objective of the present work is to broaden and complement the possibilities for research through a model that is based on the classical scheme of Newton's law of cooling and improves on those studies by adding a disturbance function for the corresponding braking process. A general case of braking in a non-periodic repetitive mode is considered, for which a piecewise function is defined to apply pulse thermal loads to the system. Cases with rectangular and triangular waveforms are presented. A periodic repetitive braking process is also studied using a periodic rectangular waveform until a steady thermal state is achieved. Different numerical methods, such as Euler's method, the classical fourth-order Runge-Kutta (RK4), and the Runge-Kutta-Fehlberg 4-5 (RKF45), are used to solve the non-linear differential equation of the model. During pre-engineering calculations, the constructed model allows the time for reaching the steady thermal state of the brake to be determined effectively, actual braking modes in vehicles and material handling machines to be simulated, and the thermal impact to be accounted for when performing fatigue calculations.
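A minimal sketch of such a model (coefficients are made up, and SciPy's adaptive RK45 stands in for the RKF45 scheme named above): Newton cooling plus a rectangular pulse train as the disturbance function.

```python
import numpy as np
from scipy.integrate import solve_ivp

def q_pulses(t, q0=40.0, t_brake=5.0, t_cycle=30.0):
    """Rectangular disturbance: heat input q0 during the first t_brake
    seconds of every t_cycle-second braking cycle, zero otherwise."""
    return q0 if (t % t_cycle) < t_brake else 0.0

def brake_rhs(t, T, k=0.05, T_amb=20.0):
    # Newton's law of cooling plus the braking disturbance function
    return [-k * (T[0] - T_amb) + q_pulses(t)]

sol = solve_ivp(brake_rhs, (0.0, 600.0), [20.0], method="RK45",
                max_step=0.5)   # keep steps small so pulses are not skipped
print("temperature after 20 cycles:", sol.y[0, -1])
```

Watching when successive cycle maxima stop growing gives the time to reach the steady thermal state, which is the quantity the model is meant to deliver in pre-engineering calculations.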
New Method to Prepare Mitomycin C Loaded PLA-Nanoparticles with High Drug Entrapment Efficiency
NASA Astrophysics Data System (ADS)
Hou, Zhenqing; Wei, Heng; Wang, Qian; Sun, Qian; Zhou, Chunxiao; Zhan, Chuanming; Tang, Xiaolong; Zhang, Qiqing
2009-07-01
The classically utilized double-emulsion solvent-diffusion technique for encapsulating water-soluble Mitomycin C (MMC) in PLA nanoparticles suffers from low encapsulation efficiency because of the drug's rapid partitioning to the external aqueous phase. In this paper, MMC-loaded PLA nanoparticles were prepared by a new single-emulsion solvent-evaporation method, in which soybean phosphatidylcholine (SPC) was employed to improve the liposolubility of MMC by formation of an MMC-SPC complex. Four main influential factors based on the results of a single-factor test, namely PLA molecular weight, ratio of PLA to SPC (wt/wt), ratio of MMC to SPC (wt/wt), and volume ratio of oil phase to water phase, were evaluated using an orthogonal design with respect to drug entrapment efficiency. The drug release study was performed in pH 7.2 PBS at 37 °C with drug analysis using a UV/Vis spectrometer at 365 nm. MMC-PLA particles prepared by the classical method were used as a comparison. The MMC-SPC-PLA nanoparticles formulated under the optimized conditions are found to be relatively uniform in size (594 nm) with up to 94.8% drug entrapment efficiency, compared with 6.44 μm PLA-MMC microparticles with 34.5% drug entrapment efficiency. The release of MMC is biphasic, with an initial burst effect; the cumulative drug release over 30 days is 50.17% for PLA-MMC-SPC nanoparticles and 74.1% for PLA-MMC particles. IR analysis of the MMC-SPC complex shows that its high liposolubility may be attributed to weak physical interactions between MMC and SPC during the formation of the complex. It is concluded that the new method is advantageous in terms of smaller size, narrower size distribution, higher encapsulation yield, and longer sustained drug release in comparison to the classical method.
Grid occupancy estimation for environment perception based on belief functions and PCR6
NASA Astrophysics Data System (ADS)
Moras, Julien; Dezert, Jean; Pannetier, Benjamin
2015-05-01
In this contribution, we propose to improve the grid map occupancy estimation method developed so far based on belief function modeling and the classical Dempster's rule of combination. The grid map offers a useful representation of the perceived world for mobile robotics navigation. It will play a major role in the security (obstacle avoidance) of next generations of terrestrial vehicles, as well as in future autonomous navigation systems. In a grid map, the occupancy of each cell representing a small piece of the surrounding area of the robot must first be estimated from sensor measurements (typically LIDAR or camera), and then it must also be classified into different classes in order to get a complete and precise perception of the dynamic environment where the robot moves. So far, the estimation and the grid map updating have been done using fusion techniques based on the probabilistic framework, or on the classical belief function framework with an inverse model of the sensors, mainly because the latter offers an interesting management of uncertainties when the quality of available information is low and the sources of information appear conflicting. To improve the performance of the grid map estimation, we propose in this paper to replace Dempster's rule of combination by the PCR6 rule (Proportional Conflict Redistribution rule #6) proposed in DSmT (Dezert-Smarandache) Theory. As an illustrating scenario, we consider a platform moving in a dynamic area and we compare our new realistic simulation results (based on a LIDAR sensor) with those obtained by the probabilistic and the classical belief-based approaches.
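For two sources, PCR6 coincides with PCR5, which allows a compact sketch. The toy combination below on the frame {occupied, free} with ignorance Th is illustrative only (the sensor masses are invented):

```python
def pcr6_two_sources(m1, m2):
    """Combine two basic belief assignments over {O, F, Th={O,F}} with the
    conjunctive rule plus PCR6 redistribution of the conflicting mass."""
    out = {
        # conjunctive consensus on non-empty intersections
        "O":  m1["O"]*m2["O"] + m1["O"]*m2["Th"] + m1["Th"]*m2["O"],
        "F":  m1["F"]*m2["F"] + m1["F"]*m2["Th"] + m1["Th"]*m2["F"],
        "Th": m1["Th"]*m2["Th"],
    }
    # redistribute each partial conflict proportionally to the two masses
    for a, b in (("O", "F"), ("F", "O")):
        conflict = m1[a] * m2[b]
        if conflict > 0.0:
            out[a] += conflict * m1[a] / (m1[a] + m2[b])
            out[b] += conflict * m2[b] / (m1[a] + m2[b])
    return out

m_lidar  = {"O": 0.8, "F": 0.1, "Th": 0.1}   # strongly conflicting
m_camera = {"O": 0.2, "F": 0.7, "Th": 0.1}   # sensor readings
print(pcr6_two_sources(m_lidar, m_camera))
```

Unlike Dempster's rule, which normalizes the conflict away, PCR6 sends each partial conflict back to the elements that generated it, in proportion to their masses.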
Relational similarity-based model of data part 1: foundations and query systems
NASA Astrophysics Data System (ADS)
Belohlavek, Radim; Vychodil, Vilem
2017-10-01
We present a general rank-aware model of data which supports handling of similarity in relational databases. The model is based on the assumption that in many cases it is desirable to replace equalities on values in data tables by similarity relations expressing degrees to which the values are similar. In this context, we study various phenomena which emerge in the model, including similarity-based queries and similarity-based data dependencies. The central notion in our model is that of a ranked data table over domains with similarities, which is our counterpart to the notion of relation on relation scheme from the classical relational model. Compared to other approaches which cover related problems, we do not propose a similarity-based or ranking module on top of the classical relational model. Instead, we generalize the very core of the model by replacing the classical, two-valued logic upon which the classical model is built by a more general logic involving a scale of truth degrees that, in addition to the classical truth degrees 0 and 1, contains intermediate truth degrees. While the classical truth degrees 0 and 1 represent nonequality and equality of values, and subsequently mismatch and match of queries, the intermediate truth degrees in the new model represent similarity of values and partial match of queries. Moreover, the truth functions of many-valued logical connectives in the new model serve to aggregate degrees of similarity. The presented approach is conceptually clean, logically sound, and retains most properties of the classical model while enabling us to employ new types of queries and data dependencies. Most importantly, similarity is not handled in an ad hoc way or by putting a "similarity module" atop the classical model in our approach. Rather, it is consistently viewed as a notion that generalizes and replaces equality in the very core of the relational model. We present fundamentals of the formal model and two equivalent query systems which are analogues of the classical relational algebra and domain relational calculus with range declarations. In the sequel to this paper, we deal with similarity-based dependencies.
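A toy query in the spirit of the model (schema, similarity functions, and data are all hypothetical): each row receives a truth degree in [0, 1], and degrees across conditions are aggregated by a t-norm.

```python
def sim_age(a, b, scale=10.0):
    # domain similarity: 1 on equality, decaying linearly to 0
    return max(0.0, 1.0 - abs(a - b) / scale)

def sim_salary(a, b, scale=20000.0):
    return max(0.0, 1.0 - abs(a - b) / scale)

def t_norm(x, y):
    # Goedel (minimum) t-norm as the many-valued conjunction
    return min(x, y)

people = [("Alice", 29, 52000), ("Bob", 41, 49000), ("Cara", 33, 61000)]

# similarity-based query: age similar to 30 AND salary similar to 50000
ranked = sorted(((t_norm(sim_age(age, 30), sim_salary(sal, 50000)), name)
                 for name, age, sal in people), reverse=True)
print(ranked)   # each answer carries its degree of match
```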
Taking-On: A Grounded Theory of Addressing Barriers in Task Completion
ERIC Educational Resources Information Center
Austinson, Julie Ann
2011-01-01
This study of taking-on was conducted using classical grounded theory methodology (Glaser, 1978, 1992, 1998, 2001, 2005; Glaser & Strauss, 1967). Classical grounded theory is inductive, empirical, and naturalistic; it does not utilize manipulation or constrained time frames. Classical grounded theory is a systematic research method used to generate…
Color dithering methods for LEGO-like 3D printing
NASA Astrophysics Data System (ADS)
Sun, Pei-Li; Sie, Yuping
2015-01-01
Color dithering methods for LEGO-like 3D printing are proposed in this study. The first method works for opaque color brick building. It is a modification of classic error diffusion. Many color primaries can be chosen; however, RGBYKW is recommended, as its image quality is good and the number of color primaries is limited. For translucent color bricks, multi-layer color building can enhance the image quality significantly. A LUT-based method is proposed to speed up the dithering process and make the color distribution even smoother. Simulation results show the proposed multi-layer dithering method can substantially improve the image quality of LEGO-like 3D printing.
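The classic error diffusion referred to above is usually Floyd-Steinberg; a sketch onto a six-color RGBYKW palette follows (the diffusion weights are the textbook ones, which the paper's modification may adjust):

```python
import numpy as np

# RGBYKW palette (red, green, blue, yellow, black, white) in [0, 1]
PALETTE = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                    [1, 1, 0], [0, 0, 0], [1, 1, 1]], float)

def dither_to_palette(img):
    """Floyd-Steinberg error diffusion onto a fixed brick palette.
    img: float array (H, W, 3) in [0, 1]; returns indices into PALETTE."""
    img = img.copy()
    h, w, _ = img.shape
    out = np.zeros((h, w), int)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            i = int(np.argmin(((PALETTE - old) ** 2).sum(axis=1)))
            out[y, x] = i
            err = old - PALETTE[i]
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

print(dither_to_palette(np.random.rand(8, 8, 3)))
```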
Dillenseger, Jean-Louis; Esneault, Simon; Garnier, Carole
2008-01-01
This paper describes a modeling method of the tissue temperature evolution over time in hyperthermia. More precisely, this approach is used to simulate the hepatocellular carcinoma curative treatment by a percutaneous high intensity ultrasound surgery. The tissue temperature evolution over time is classically described by Pennes' bioheat transfer equation which is generally solved by a finite difference method. In this paper we will present a method where the bioheat transfer equation can be algebraically solved after a Fourier transformation over the space coordinates. The implementation and boundary conditions of this method will be shown and compared with the finite difference method.
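A minimal 1D sketch of the idea (periodic boundaries and illustrative tissue constants; the paper's implementation and boundary conditions are more involved): after an FFT in space, each mode of Pennes' equation obeys a linear ODE that is solved algebraically rather than by finite differences.

```python
import numpy as np

# rho*c*dT/dt = k*T_xx - w*(T - Ta) + Q, written for u = T - Ta
N, L = 256, 0.1                                # grid points, domain length (m)
x = np.linspace(0, L, N, endpoint=False)
k, rho_c, w, Ta = 0.5, 3.6e6, 2e3, 37.0        # illustrative tissue values
Q = 5e5 * np.exp(-((x - L / 2) / 0.005) ** 2)  # focused ultrasound heat source

xi = 2 * np.pi * np.fft.fftfreq(N, d=L / N)    # spatial frequencies
lam = -(k * xi**2 + w) / rho_c                 # per-mode decay rate
S_hat = np.fft.fft(Q) / rho_c                  # source term, Fourier domain

t = 10.0                                       # seconds of heating, u(0) = 0
u_hat = S_hat * (np.exp(lam * t) - 1.0) / lam  # exact per-mode solution
T = Ta + np.fft.ifft(u_hat).real
print("peak temperature after 10 s:", T.max())
```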
Silage review: Using molecular approaches to define the microbial ecology of silage.
McAllister, T A; Dunière, L; Drouin, P; Xu, S; Wang, Y; Munns, K; Zaheer, R
2018-05-01
Ensiling of forages was recognized as a microbially driven process as early as the late 1800s, when it was associated with the production of "sweet" or "sour" silage. Classical microbiological plating techniques defined the epiphytic microbial populations associated with fresh forage, the pivotal role of lactic acid-producing bacteria in the ensiling process, and the contribution of clostridia, bacilli, yeast, and molds to the spoilage of silage. Many of these classical studies focused on the enumeration and characterization of a limited number of microbial species that could be readily isolated on selective media. Evidence suggested that many of the members of these microbial populations were viable but unculturable, resulting in classical studies underestimating the true microbial diversity associated with ensiling. Polymerase chain reaction-based techniques, including length heterogeneity PCR, terminal RFLP, denaturing gradient gel electrophoresis, and automated ribosomal intergenic spacer analysis, were the first molecular methods used to study silage microbial communities. Further advancements in whole comparative genomic, metagenomic, and metatranscriptomic sequencing have superseded or are in the process of superseding these methods, enabling microbial communities during ensiling to be defined with a degree of detail that is impossible using classical microbiology. These methods have identified new microbial species in silage, as well as characterized shifts in microbial communities with forage type and composition, ensiling method, and in response to aerobic exposure. Strain- and species-specific primers have been used to track the persistence and contribution of silage inoculants to the ensiling process and the role of specific species of yeast and fungi in silage spoilage. Sampling and the methods used to isolate genetic materials for further molecular analysis can have a profound effect on results. Primer selection for PCR amplification and the presence of inhibitors can also lead to biases in the interpretation of sequence data. Bioinformatic analyses are reliant on the integrity and presence of sequence data within established databases and can be subject to low taxonomic resolution. Despite these limitations, advancements in molecular biology are poised to revolutionize our current understanding of the microbial ecology of silage. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matuttis, Hans-Georg; Wang, Xiaoxing
Decomposition methods of the Suzuki-Trotter type of various orders have been derived in different fields. Applying them both to classical ordinary differential equations (ODEs) and to quantum systems makes it possible to judge their effectiveness and gives new insights for many-body quantum mechanics, where reference data are scarce. Further, based on data for a 6 × 6 system, we conclude that sampling with sign (minus-sign problem) is probably detrimental to the accuracy of fermionic simulations with determinant algorithms.
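The simplest classical instance of such a decomposition is a second-order Suzuki-Trotter (Strang) split of the harmonic oscillator (an illustration, not the authors' test case):

```python
import numpy as np

def strang_step(q, p, dt):
    # kick-drift-kick splitting of H = p**2/2 + q**2/2
    p -= 0.5 * dt * q          # half kick (force = -q)
    q += dt * p                # full drift
    p -= 0.5 * dt * q          # half kick
    return q, p

q, p, dt = 1.0, 0.0, 0.01
for _ in range(int(2 * np.pi / dt)):   # integrate roughly one period
    q, p = strang_step(q, p, dt)
print(q, p)   # close to the exact (1, 0); global error is O(dt**2)
```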
Frequency analysis via the method of moment functionals
NASA Technical Reports Server (NTRS)
Pearson, A. E.; Pan, J. Q.
1990-01-01
Several variants are presented of a linear-in-parameters least squares formulation for determining the transfer function of a stable linear system at specified frequencies given a finite set of Fourier series coefficients calculated from transient nonstationary input-output data. The basis of the technique is Shinbrot's classical method of moment functionals using complex Fourier based modulating functions to convert a differential equation model on a finite time interval into an algebraic equation which depends linearly on frequency-related parameters.
Measuring Viscosities of Gases at Atmospheric Pressure
NASA Technical Reports Server (NTRS)
Singh, Jag J.; Mall, Gerald H.; Hoshang, Chegini
1987-01-01
Variant of general capillary method for measuring viscosities of unknown gases based on use of thermal mass-flowmeter section for direct measurement of pressure drops. In technique, flowmeter serves dual role, providing data for determining volume flow rates and serving as well-characterized capillary-tube section for measurement of differential pressures across it. New method simple, sensitive, and adaptable for absolute or relative viscosity measurements of low-pressure gases. Suited for very complex hydrocarbon mixtures where limitations of classical theory and compositional errors make theoretical calculations less reliable.
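The capillary method ultimately rests on the Hagen-Poiseuille relation mu = pi * r**4 * dP / (8 * L * Q); a back-of-the-envelope evaluation with invented values (not instrument data) shows the arithmetic:

```python
import math

# illustrative capillary radius (m), length (m), pressure drop (Pa),
# and volume flow rate (m^3/s) for a low-pressure gas
r, L, dP, Q = 0.25e-3, 0.10, 120.0, 2.0e-8
mu = math.pi * r**4 * dP / (8 * L * Q)
print(f"viscosity: {mu:.2e} Pa*s")
```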
Raykov, Tenko; Marcoulides, George A
2016-04-01
The frequently neglected and often misunderstood relationship between classical test theory and item response theory is discussed for the unidimensional case with binary measures and no guessing. It is pointed out that popular item response models can be directly obtained from classical test theory-based models by accounting for the discrete nature of the observed items. Two distinct observational equivalence approaches are outlined that render the item response models from corresponding classical test theory-based models, and can each be used to obtain the former from the latter models. Similarly, classical test theory models can be furnished using the reverse application of either of those approaches from corresponding item response models.
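One route to the equivalence can be sketched numerically (parameter values are illustrative): dichotomizing a classical linear response X = lambda*eta + e at a threshold tau produces a normal-ogive item characteristic curve in the latent trait eta.

```python
import numpy as np
from scipy.stats import norm

lam, tau, sigma_e = 1.2, 0.3, 1.0     # CTT loading, threshold, error SD
eta = np.linspace(-3, 3, 7)
p_correct = norm.cdf((lam * eta - tau) / sigma_e)   # P(X > tau | eta)
for e, p in zip(eta, p_correct):
    print(f"eta = {e:+.1f}   P(item endorsed) = {p:.3f}")
```

The resulting curve is a two-parameter normal-ogive model with discrimination lam/sigma_e and difficulty tau/lam, which is the flavor of correspondence the article formalizes.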
DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K
2012-04-05
We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
NASA Astrophysics Data System (ADS)
Protasov, M.; Gadylshin, K.
2017-07-01
A numerical method is proposed for the calculation of exact frequency-dependent rays when the solution of the Helmholtz equation is known. The properties of frequency-dependent rays are analysed and compared with classical ray theory and with the method of finite-difference modelling for the first time. In this paper, we study the dependence of these rays on the frequency of signals and show the convergence of the exact rays to the classical rays with increasing frequency. A number of numerical experiments demonstrate the distinctive features of exact frequency-dependent rays, in particular, their ability to penetrate into shadow zones that are impenetrable for classical rays.
Laser Pencil Beam Based Techniques for Visualization and Analysis of Interfaces Between Media
NASA Technical Reports Server (NTRS)
Adamovsky, Grigory; Giles, Sammie, Jr.
1998-01-01
Traditional optical methods, including interferometry, Schlieren, and shadowgraphy, have been used successfully for visualization and evaluation of various media. Aerodynamics and hydrodynamics are major fields where these methods have been applied. However, these methods have such major drawbacks as relatively low power density and suppression of second-order phenomena. A novel method introduced at NASA Lewis Research Center minimizes the disadvantages of the 'classical' methods. The method involves a narrow pencil-like beam that penetrates a medium of interest. The paper describes the laser pencil beam flow visualization methods in detail. Various system configurations are presented. The paper also discusses interfaces between media in general terms and provides examples of interfaces.
SCGICAR: Spatial concatenation based group ICA with reference for fMRI data analysis.
Shi, Yuhu; Zeng, Weiming; Wang, Nizhuan
2017-09-01
With the rapid development of big data, functional magnetic resonance imaging (fMRI) data analysis of multiple subjects is becoming more and more important. As a kind of blind source separation technique, group independent component analysis (GICA) has been widely applied for multi-subject fMRI data analysis. However, spatially concatenated GICA is rarely used compared with temporally concatenated GICA due to its disadvantages. In this paper, to overcome these issues, and considering that the ability of GICA for fMRI data analysis can be improved by adding a priori information, we propose a novel spatial concatenation based GICA with reference (SCGICAR) method that takes advantage of the priori information extracted from the group subjects; a multi-objective optimization strategy is then used to implement this method. Finally, the post-processing means of principal component analysis and anti-reconstruction are used to obtain the group spatial component and the individual temporal component in the group, respectively. The experimental results show that the proposed SCGICAR method has a better performance on both single-subject and multi-subject fMRI data analysis compared with classical methods. It not only can detect more accurate spatial and temporal components for each subject of the group, but it can also obtain a better group component on both temporal and spatial domains. These results demonstrate that the proposed SCGICAR method has its own advantages in comparison with classical methods, and it can better reflect the commonness of subjects in the group. Copyright © 2017 Elsevier B.V. All rights reserved.
Liang, Chao; Qiao, Jun-Qin; Lian, Hong-Zhen
2017-12-15
Reversed-phase liquid chromatography (RPLC) based octanol-water partition coefficient (logP) or distribution coefficient (logD) determination methods were revisited and assessed comprehensively. Classic isocratic and some gradient RPLC methods were conducted and evaluated for neutral, weakly acidic, and basic compounds. Different lipophilicity indexes in logP or logD determination were discussed in detail, including the retention factor log k_w corresponding to neat water as mobile phase, extrapolated via the linear solvent strength (LSS) model from isocratic runs or calculated with software from gradient runs; the chromatographic hydrophobicity index (CHI); the apparent gradient capacity factor (k_g'); and the gradient retention time (t_g). Among the lipophilicity indexes discussed, log k_w from either isocratic or gradient elution methods correlated best with logP or logD. Therefore log k_w is recommended as the preferred lipophilicity index for logP or logD determination. log k_w easily calculated from methanol gradient runs might be the main candidate to replace log k_w calculated from classic isocratic runs as the ideal lipophilicity index. These revisited RPLC methods were not applicable to strongly ionized compounds that can hardly be ion-suppressed. A previously reported, imperfect ion-pair RPLC (IP-RPLC) method was attempted and further explored for studying distribution coefficients (logD) of sulfonic acids that are totally ionized in the mobile phase. Notably, experimental logD values of sulfonic acids were reported for the first time. The IP-RPLC method provided a distinct way to explore logD values of ionized compounds. Copyright © 2017 Elsevier B.V. All rights reserved.
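The LSS extrapolation itself is a straight line, log k = log k_w - S*phi, so log k_w is just the intercept of a linear fit over isocratic runs (the retention data below are invented for illustration):

```python
import numpy as np

phi = np.array([0.5, 0.6, 0.7, 0.8])         # organic-modifier fractions
logk = np.array([1.10, 0.72, 0.35, -0.02])   # measured log k (illustrative)

slope, intercept = np.polyfit(phi, logk, 1)  # log k = log kw - S*phi
log_kw, S = intercept, -slope
print(f"log kw (extrapolated to neat water) = {log_kw:.2f}, S = {S:.2f}")
```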
NASA Astrophysics Data System (ADS)
Montalvão, Diogo; Baker, Thomas; Ihracska, Balazs; Aulaqi, Muhammad
2017-01-01
Many applications in Experimental Modal Analysis (EMA) require that the sensors' masses are known. This is because the added mass from sensors will affect the structural mode shapes, and in particular the natural frequencies. EMA requires the measurement of the exciting forces at given coordinates, which is often made using piezoelectric force transducers. In such a case, the live mass of the force transducer, i.e. the mass as 'seen' by the structure in perpendicular directions, must be measured somehow so that compensation methods like mass cancellation can be performed. This, however, presents the problem of how to obtain an accurate measurement of the live mass. If the system is perfectly calibrated, then a reasonably accurate estimate can be made using a straightforward method available in most classical textbooks based on Newton's second law. However, this is often not the case (for example, when the transducer's sensitivity has changed over time, when it is unknown, or when the connection influences the transmission of the force). In a self-calibrating iterative method, both the live mass and the calibration factor are determined, but this paper shows that the problem may be ill-conditioned, producing misleading results if certain conditions are not met. Therefore, a more robust method is presented and discussed in this paper, reducing the ill-conditioning problems and the need to know the calibration factors beforehand. The three methods are compared and discussed through numerical and experimental examples, showing that classical EMA is still a field of research that deserves attention from scientists and engineers.
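The textbook Newton's-second-law route mentioned above can be sketched with simulated measurements (masses and noise level invented): drive a known free mass through the transducer, and the apparent mass F/a exceeds the known mass by exactly the live mass.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 0.500                                # known calibration mass (kg)
m_live_true = 0.018                      # unknown transducer live mass (kg)
a = rng.uniform(5.0, 50.0, 100)          # measured accelerations (m/s^2)
F = (M + m_live_true) * a * (1.0 + 0.01 * rng.standard_normal(100))

m_live = np.mean(F / a) - M              # apparent mass minus known mass
print(f"estimated live mass: {m_live * 1e3:.1f} g")
```

The ill-conditioning discussed in the paper enters when the calibration factor relating transducer output to force is itself unknown and must be estimated jointly, which this naive sketch assumes away.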
Chai, Cheng-Zhi; Yu, Bo-Yang
2018-06-01
Many classical prescriptions still have superior clinical values nowadays, and their modern studies also have far-reaching scientific research demonstration values. Gegen decoction, a representative prescription for common cold due to wind-cold, can treat primary dysmenorrhea due to cold and dampness, characterized by continuous administration without recurrence. It is not only in accordance with the principle of homotherapy for heteropathy, but also demonstrates the unique feature of traditional Chinese medicine of relieving the primary and secondary symptoms simultaneously. This article aimed to discuss the method and strategy of Gegen decoction study based on the discovery of its novel application in treatment of primary dysmenorrhea and previous research progress of our group. It was assumed that modern medicine and biology studies, as well as chemical research based on biological activity should be used for reference. Principal active ingredients (groups) in Gegen decoction could be accurately and effectively identified, and its possible mechanism in treatment of primary dysmenorrhea could be eventually elucidated as well. Simultaneously, the theoretical and clinical advantages of traditional Chinese medicine were explored in this paper, focusing on the compatibility characteristics of Gegen decoction. The research hypothesis showed the necessity of following the characteristics and advantages of traditional Chinese medicine in the modern research and reflected the importance of basic research based on the clinical efficacy, expecting to provide some ideas and methods for reference for further modern studies of classical prescriptions. Copyright© by the Chinese Pharmaceutical Association.
Hidden Statistics Approach to Quantum Simulations
NASA Technical Reports Server (NTRS)
Zak, Michail
2010-01-01
Recent advances in quantum information theory have inspired an explosion of interest in new quantum algorithms for solving hard computational (quantum and non-quantum) problems. The basic principle of quantum computation is that quantum properties can be used to represent structured data, and that quantum mechanisms can be devised and built to perform operations with this data. Three basic non-classical properties of quantum mechanics (superposition, entanglement, and direct-product decomposability) were the main reasons for optimism about the capabilities of quantum computers that promised simultaneous processing of large massifs of highly correlated data. Unfortunately, these advantages of quantum mechanics came with a high price. One major problem is keeping the components of the computer in a coherent state, as the slightest interaction with the external world would cause the system to decohere. That is why the hardware implementation of a quantum computer is still unsolved. The basic idea of this work is to create a new kind of dynamical system that would preserve the three main properties of quantum physics (superposition, entanglement, and direct-product decomposability) while allowing one to measure its state variables using classical methods. In other words, such a system would reinforce the advantages and minimize the limitations of both quantum and classical aspects. Based upon a concept of hidden statistics, a new kind of dynamical system for simulation of the Schroedinger equation is proposed. The system represents a modified Madelung version of the Schroedinger equation. It preserves superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for simulating quantum systems. The model includes a transitional component of quantum potential (that has been overlooked in previous treatments of the Madelung equation). The role of the transitional potential is to provide a jump from a deterministic state to a random state with prescribed probability density. This jump is triggered by blowup instability due to violation of the Lipschitz condition generated by the quantum potential. As a result, the dynamics attains quantum properties on a classical scale. The model can be implemented physically as an analog VLSI-based (very-large-scale integration-based) computer, or numerically on a digital computer. This work opens a way of developing fundamentally new algorithms for quantum simulations of exponentially complex problems that expand NASA capabilities in conducting space activities. It has been illustrated that the complexity of simulations of particle interaction can be reduced from an exponential one to a polynomial one.
Simulation of vibrational dephasing of I(2) in solid Kr using the semiclassical Liouville method.
Riga, Jeanne M; Fredj, Erick; Martens, Craig C
2006-02-14
In this paper, we present simulations of the decay of quantum coherence between vibrational states of I(2) in its ground (X) electronic state embedded in a cryogenic Kr matrix. We employ a numerical method based on the semiclassical limit of the quantum Liouville equation, which allows the simulation of the evolution and decay of quantum vibrational coherence using classical trajectories and ensemble averaging. The vibrational level-dependent interaction of the I(2)(X) oscillator with the rare-gas environment is modeled using a recently developed method for constructing state-dependent many-body potentials for quantum vibrations in a many-body classical environment [J. M. Riga, E. Fredj, and C. C. Martens, J. Chem. Phys. 122, 174107 (2005)]. The vibrational dephasing rates $\gamma_{0n}$ for coherences prepared between the ground vibrational state $|0\rangle$ and excited vibrational state $|n\rangle$ are calculated as a function of n and lattice temperature T. Excellent agreement with recent experiments performed by Karavitis et al. [Phys. Chem. Chem. Phys. 7, 791 (2005)] is obtained.
Statistical mechanics based on fractional classical and quantum mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korichi, Z.; Meftah, M. T., E-mail: mewalid@yahoo.com
2014-03-15
The purpose of this work is to study some problems in statistical mechanics based on fractional classical and quantum mechanics. At the first stage, we present the thermodynamical properties of the classical ideal gas and a system of N classical oscillators. In both cases, the Hamiltonian contains fractional exponents of the phase space variables (position and momentum). At the second stage, in the context of fractional quantum mechanics, we calculate the thermodynamical properties of blackbody radiation and study Bose-Einstein statistics, with the related problem of condensation, and Fermi-Dirac statistics.
Improvements to surrogate data methods for nonstationary time series.
Lucio, J H; Valdés, R; Rodríguez, L R
2012-05-01
The method of surrogate data has been extensively applied to hypothesis testing of system linearity, when only one realization of the system, a time series, is known. Normally, surrogate data should preserve the linear stochastic structure and the amplitude distribution of the original series. Classical surrogate data methods (such as random permutation, amplitude adjusted Fourier transform, or iterative amplitude adjusted Fourier transform) are successful at preserving one or both of these features in stationary cases. However, they always produce stationary surrogates, hence existing nonstationarity could be interpreted as dynamic nonlinearity. Certain modifications have been proposed that additionally preserve some nonstationarity, at the expense of reproducing a great deal of nonlinearity. However, even those methods generally fail to preserve the trend (i.e., global nonstationarity in the mean) of the original series. This is the case of time series with unit roots in their autoregressive structure. Additionally, those methods, based on Fourier transform, either need first and last values in the original series to match, or they need to select a piece of the original series with matching ends. These conditions are often inapplicable and the resulting surrogates are adversely affected by the well-known artefact problem. In this study, we propose a simple technique that, applied within existing Fourier-transform-based methods, generates surrogate data that jointly preserve the aforementioned characteristics of the original series, including (even strong) trends. Moreover, our technique avoids the negative effects of end mismatch. Several artificial and real, stationary and nonstationary, linear and nonlinear time series are examined, in order to demonstrate the advantages of the methods. Corresponding surrogate data are produced with the classical and with the proposed methods, and the results are compared.
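A simplified rendition of the proposed idea (linear detrending only, and the classical phase-randomized FT surrogate for the residual; the study's procedure is more general):

```python
import numpy as np

def trend_preserving_surrogate(x, rng=None):
    """Detrend, phase-randomize the residual, then restore the trend, so
    the surrogate keeps the residual's linear spectrum and the series'
    global trend."""
    rng = rng or np.random.default_rng()
    n = len(x)
    t = np.arange(n)
    trend = np.polyval(np.polyfit(t, x, 1), t)    # here: linear trend only
    spec = np.fft.rfft(x - trend)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    phases[0] = 0.0                               # keep the mean
    if n % 2 == 0:
        phases[-1] = 0.0                          # keep Nyquist term real
    surr = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n)
    return trend + surr

x = np.cumsum(np.random.randn(512)) + 0.05 * np.arange(512)  # trended walk
print(trend_preserving_surrogate(x)[:5])
```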
NASA Astrophysics Data System (ADS)
Akhoondzadeh, M.
2013-09-01
Anomaly detection is extremely important for forecasting the date, location, and magnitude of an impending earthquake. In this paper, an Adaptive Network-based Fuzzy Inference System (ANFIS) is proposed to detect thermal and Total Electron Content (TEC) anomalies around the time of the Varzeghan, Iran (Mw = 6.4) earthquake that struck NW Iran on 11 August 2012. ANFIS is a well-known hybrid neuro-fuzzy network for modeling non-linear complex systems. In this study, the thermal and TEC anomalies detected using the proposed method are also compared with the anomalies observed by applying classical and intelligent methods, including the Interquartile, Auto-Regressive Integrated Moving Average (ARIMA), Artificial Neural Network (ANN), and Support Vector Machine (SVM) methods. The dataset, comprising Aqua-MODIS Land Surface Temperature (LST) night-time snapshot images and Global Ionospheric Maps (GIM), spans 62 days. If the difference between the value predicted using the ANFIS method and the observed value exceeds the pre-defined threshold, then the observed precursor value, in the absence of non-seismic effective parameters, can be regarded as a precursory anomaly. For the two precursors, LST and TEC, the ANFIS method shows very good agreement with the other implemented classical and intelligent methods, and this indicates that ANFIS is capable of detecting earthquake anomalies. The applied methods detected anomalous occurrences 1 and 2 days before the earthquake. This paper indicates that the detection of thermal and TEC anomalies derives its credibility from the overall efficiencies and potentialities of the five integrated methods.
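Of the baselines named above, the interquartile criterion is the simplest to state; a sketch on synthetic residuals (the predictor here is a stand-in for ANFIS or any other model):

```python
import numpy as np

def interquartile_anomalies(observed, predicted, k=1.5):
    """Indices whose residual falls outside median +/- k*IQR."""
    resid = observed - predicted
    q1, q3 = np.percentile(resid, [25, 75])
    lo = np.median(resid) - k * (q3 - q1)
    hi = np.median(resid) + k * (q3 - q1)
    return np.where((resid < lo) | (resid > hi))[0]

rng = np.random.default_rng(0)
pred = rng.normal(20.0, 1.0, 62)        # a 62-day series, as in the dataset
obs = pred + rng.normal(0.0, 0.3, 62)
obs[50] += 4.0                          # injected anomalous day
print(interquartile_anomalies(obs, pred))   # reports day 50
```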
[Research and development strategies in classical herbal formulae].
Chen, Chang; Cheng, Jin-Tang; Liu, An
2017-05-01
As an outstanding representative of traditional Chinese medicine prescriptions, classical herbal formulae are the essence of the great treasure of traditional Chinese medicine. To support the development of classical herbal formulae, the state and relevant administrative departments have successively promulgated encouraging policies. However, some key issues in the development process of classical herbal formulae have not reached a unified consensus and standard, and these problems are discussed in depth here. The authors discuss the registration requirements of classical herbal formulae and propose specific screening indicators for classical herbal formulae, the basis for determining prescription and dosage, the screening method for the production process, and the basic principle of clinical localization, in order to offer valuable opinions and provide a reference for classical herbal formulae development and policy formulation. Copyright© by the Chinese Pharmaceutical Association.
Application of an IRT Polytomous Model for Measuring Health Related Quality of Life
ERIC Educational Resources Information Center
Tejada, Antonio J. Rojas; Rojas, Oscar M. Lozano
2005-01-01
Background: The Item Response Theory (IRT) has advantages for measuring Health Related Quality of Life (HRQOL) as opposed to the Classical Tests Theory (CTT). Objectives: To present the results of the application of a polytomous model based on IRT, specifically, the Rating Scale Model (RSM), to measure HRQOL with the EORTC QLQ-C30. Methods: 103…
On the derivation of linear irreversible thermodynamics for classical fluids
Theodosopulu, M.; Grecos, A.; Prigogine, I.
1978-01-01
We consider the microscopic derivation of the linearized hydrodynamic equations for an arbitrary simple fluid. Our discussion is based on the concept of hydrodynamical modes, and use is made of the ideas and methods of the theory of subdynamics. We also show that this analysis leads to the Gibbs relation for the entropy of the system. PMID:16592516
ERIC Educational Resources Information Center
Umek, Lan; Aristovnik, Aleksander; Tomaževic, Nina; Keržic, Damijana
2015-01-01
The use of e-learning techniques in higher education is becoming ever more frequent. In some institutions, e-learning has completely replaced the traditional teaching methods, while in others it supplements classical courses. The paper presents a study conducted in a member institution of the University of Ljubljana that provides public…
Sequential Geoacoustic Filtering and Geoacoustic Inversion
2015-09-30
We show that compressive sensing (CS) obtains higher resolution than MVDR, even in scenarios which favor classical high-resolution methods, and that CS over sliding windows performs better than conventional beamforming (CBF) and MVDR/MUSIC. For compressive geoacoustic inversion, the performance of CS, exhaustive search, CBF, MVDR, and MUSIC was compared versus SNR, with histograms based on 100 Monte Carlo simulations.
ERIC Educational Resources Information Center
Çokluk, Ömay; Gül, Emrah; Dogan-Gül, Çilem
2016-01-01
The study aims to examine whether differential item functioning is displayed in three different test forms that have item orders of random and sequential versions (easy-to-hard and hard-to-easy), based on Classical Test Theory (CTT) and Item Response Theory (IRT) methods and bearing item difficulty levels in mind. In the correlational research, the…
Kamoun, Choumouss; Payen, Thibaut; Hua-Van, Aurélie; Filée, Jonathan
2013-10-11
Insertion Sequences (ISs) and their non-autonomous derivatives (MITEs) are important components of prokaryotic genomes inducing duplication, deletion, rearrangement or lateral gene transfers. Although ISs and MITEs are relatively simple and basic genetic elements, their detection remains a difficult task due to their remarkable sequence diversity. With the advent of high-throughput genome and metagenome sequencing technologies, the development of fast, reliable and sensitive methods for IS and MITE detection becomes an important challenge. So far, almost all studies dealing with prokaryotic transposons have used classical BLAST-based detection methods against reference libraries. Here we introduce alternative methods of detection, either taking advantage of the structural properties of the elements (de novo methods) or using an additional library-based method relying on profile HMM searches. In this study, we have developed three different workflows dedicated to IS and MITE detection: the first two use de novo methods detecting either repeated sequences or the presence of Inverted Repeats; the third uses 28 in-house transposase alignment profiles with HMM search methods. We have compared the respective performances of each method using a reference dataset of 30 archaeal and 30 bacterial genomes in addition to simulated and real metagenomes. Compared to a BLAST-based method using ISFinder as the library, de novo methods significantly improve IS and MITE detection. For example, in the 30 archaeal genomes, we discovered 30 new elements (+20%) in addition to the 141 multi-copy elements already detected by the BLAST approach. Many of the new elements correspond to ISs belonging to unknown or highly divergent families. The total number of MITEs has even doubled with the discovery of elements displaying very limited sequence similarities with their respective autonomous partners (mainly in the Inverted Repeats of the elements). Concerning metagenomes, with the exception of short-read data (<300 bp), for which both techniques seem equally limited, profile HMM searches considerably improve the detection of transposase-encoding genes (up to +50%), generating a low level of false positives compared with BLAST-based methods. Compared to classical BLAST-based methods, the sensitivity of the de novo and profile HMM methods developed in this study allows a better and more reliable detection of transposons in prokaryotic genomes and metagenomes. We believe that future studies involving IS and MITE identification in genomic data should combine at least one de novo and one library-based method, with optimal results obtained by running the two de novo methods in addition to a library-based search. For metagenomic data, profile HMM search should be favored; a BLAST-based step is only useful for the final annotation into groups and families.
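As a flavor of the structural (de novo) route, a toy terminal-inverted-repeat check is shown below; cutoffs and helper names are illustrative and much simpler than the workflows described above.

```python
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(s):
    return s.translate(COMP)[::-1]

def has_inverted_repeat(element, min_len=10, max_mismatch=1):
    """True if the element starts and ends with (near-)complementary
    terminal inverted repeats of at least min_len bases."""
    left, right = element[:min_len], element[-min_len:]
    mismatches = sum(a != b for a, b in zip(left, revcomp(right)))
    return mismatches <= max_mismatch

print(has_inverted_repeat("GGCCTAACGT" + "A" * 50 + "ACGTTAGGCC"))  # True
```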
2011-01-01
Background Gene regulatory networks play essential roles in living organisms to control growth, keep internal metabolism running and respond to external environmental changes. Understanding the connections and the activity levels of regulators is important for the research of gene regulatory networks. While relevance score based algorithms that reconstruct gene regulatory networks from transcriptome data can infer genome-wide gene regulatory networks, they are unfortunately prone to false positive results. Transcription factor activities (TFAs) quantitatively reflect the ability of the transcription factor to regulate target genes. However, classic relevance score based gene regulatory network reconstruction algorithms use models that do not include the TFA layer, thus missing a key regulatory element. Results This work integrates TFA prediction algorithms with relevance score based network reconstruction algorithms to reconstruct gene regulatory networks with improved accuracy over classic relevance score based algorithms. This method is called Gene expression and Transcription factor activity based Relevance Network (GTRNetwork). Different combinations of TFA prediction algorithms and relevance score functions have been applied to find the most efficient combination. When the integrated GTRNetwork method was applied to E. coli data, the reconstructed genome-wide gene regulatory network predicted 381 new regulatory links. The reconstructed gene regulatory network, including the predicted new regulatory links, shows promising biological significance. Many of the new links are verified by known TF binding site information, and many other links can be verified from the literature and databases such as EcoCyc. The reconstructed gene regulatory network is applied to a recent transcriptome analysis of E. coli during isobutanol stress. In addition to the 16 significantly changed TFAs detected in the original paper, another 7 significantly changed TFAs have been detected by using our reconstructed network. Conclusions The GTRNetwork algorithm introduces the hidden TFA layer into classic relevance score-based gene regulatory network reconstruction processes. Integrating the TFA biological information with regulatory network reconstruction algorithms significantly improves the detection of new links and reduces the rate of false positives. The application of GTRNetwork to E. coli gene transcriptome data gives a set of potential regulatory links with promising biological significance for isobutanol stress and other conditions. PMID:21668997
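A minimal sketch of the relevance-score step on synthetic data (absolute correlation as the score; TFA prediction is assumed to have been done elsewhere; sizes, seed, and threshold are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
tfa = rng.normal(size=(3, 20))        # 3 TF activity profiles, 20 samples
expr = rng.normal(size=(50, 20))      # 50 gene expression profiles
expr[7] += 2.0 * tfa[1]               # plant one true regulatory link

# score every TF-gene pair by |correlation| between TFA and expression
scores = np.abs(np.corrcoef(np.vstack([tfa, expr]))[:3, 3:])   # 3 x 50
links = np.argwhere(scores > 0.8)     # (tf_index, gene_index) pairs
print(links)                          # should recover the planted [1, 7]
```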
Determination of new retention indices for quick identification of essential oils compounds.
Hérent, Marie-France; De Bie, Véronique; Tilquin, Bernard
2007-02-19
The classical methods of chromatographic identification of compounds were based on the calculation of retention indices using different stationary phases. The aim of the work was to differentiate essential oils extracted from different plant species by identification of some of their major compounds. The identification method was based on the calculation of new retention indices of essential oil compounds fractionated on a polar chromatographic column with a temperature-programming system. Similar chromatograms have been obtained on the same column for one plant family with two different temperature gradients, allowing the rapid identification of essential oils of different species, sub-species, or chemotypes of Citrus, Mentha and Thymus.
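For temperature-programmed runs, retention indices customarily follow the linear (van den Dool-Kratz) formula; a one-line implementation with hypothetical retention times:

```python
def linear_retention_index(t_x, n, t_n, t_n1):
    # RI = 100 * (n + (t_x - t_n) / (t_(n+1) - t_n)), with t_n and t_(n+1)
    # the retention times of the bracketing n-alkanes (n carbon atoms)
    return 100.0 * (n + (t_x - t_n) / (t_n1 - t_n))

# hypothetical compound eluting between the C10 and C11 alkanes
print(linear_retention_index(t_x=12.8, n=10, t_n=12.1, t_n1=13.6))  # ~1046.7
```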
An Aural Learning Project: Assimilating Jazz Education Methods for Traditional Applied Pedagogy
ERIC Educational Resources Information Center
Gamso, Nancy M.
2011-01-01
The Aural Learning Project (ALP) was developed to incorporate jazz method components into the author's classical practice and her applied woodwind lesson curriculum. The primary objective was to place a more focused pedagogical emphasis on listening and hearing than is traditionally used in the classical applied curriculum. The components of the…
An Efficient Bundle Adjustment Model Based on Parallax Parametrization for Environmental Monitoring
NASA Astrophysics Data System (ADS)
Chen, R.; Sun, Y. Y.; Lei, Y.
2017-12-01
With the rapid development of Unmanned Aircraft Systems (UAS), more and more research fields have been successfully equipped with this mature technology, among which is environmental monitoring. One difficult task is how to acquire accurate positions of ground objects in order to reconstruct the scene more accurately. To handle this problem, we combine the bundle adjustment method from Photogrammetry with parallax parametrization from Computer Vision to create a new method called APCP (aerial polar-coordinate photogrammetry). One impressive advantage of this method compared with the traditional method is that a 3-dimensional point in space is represented using three angles (elevation angle, azimuth angle and parallax angle) rather than XYZ values. As the basis for APCP, bundle adjustment can be used to optimize the UAS sensors' poses accurately and to reconstruct 3D models of the environment, thus serving as the criterion of accurate positioning for monitoring. To verify the effectiveness of the proposed method, we test on several UAV datasets obtained by non-metric digital cameras with large attitude angles, and we find that our method achieves one to two times better efficiency than traditional ones with no loss of accuracy. The classical nonlinear optimization of the bundle adjustment model based on rectangular coordinates suffers from being seriously dependent on the initial values, making it unable to converge fast or to converge to a stable state. On the contrary, the APCP method can deal with quite complex UAS conditions when conducting monitoring, as it represents points in space with angles, including the condition that sequential images focusing on one object have a zero parallax angle. In brief, this paper presents the parameterization of 3D feature points based on APCP and derives a full bundle adjustment model and the corresponding nonlinear optimization problems based on this method. In addition, we analyze the influence of convergence and the dependence on initial values through mathematical formulas. Finally, this paper conducts experiments using real aviation data and shows that the new model can effectively solve bottlenecks of the classical method to a certain degree, providing a new idea and solution for faster and more efficient environmental monitoring.
Recent Advances and Perspectives on Nonadiabatic Mixed Quantum-Classical Dynamics.
Crespo-Otero, Rachel; Barbatti, Mario
2018-05-16
Nonadiabatic mixed quantum-classical (NA-MQC) dynamics methods form a class of computational theoretical approaches in quantum chemistry tailored to investigate the time evolution of nonadiabatic phenomena in molecules and supramolecular assemblies. NA-MQC is characterized by a partition of the molecular system into two subsystems: one to be treated quantum mechanically (usually but not restricted to electrons) and another to be dealt with classically (nuclei). The two subsystems are connected through nonadiabatic coupling terms to enforce self-consistency. A local approximation underlies the classical subsystem, implying that direct dynamics can be simulated, without needing precomputed potential energy surfaces. The NA-MQC split allows reducing computational costs, enabling the treatment of realistic molecular systems in diverse fields. Starting from the three most well-established methods (mean-field Ehrenfest, trajectory surface hopping, and multiple spawning), this review focuses on the NA-MQC dynamics methods and programs developed in the last 10 years. It stresses the relations between approaches and their domains of application. The electronic structure methods most commonly used together with NA-MQC dynamics are reviewed as well. The accuracy and precision of NA-MQC simulations are critically discussed, and general guidelines to choose an adequate method for each application are delivered.
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Aboudi, Jacob; Arnold, Steven M.
2006-01-01
The radial return and Mendelson methods for integrating the equations of classical plasticity, which appear independently in the literature, are shown to be identical. Both methods are presented in detail as are the specifics of their algorithmic implementation. Results illustrate the methods' equivalence across a range of conditions and address the question of when the methods require iteration in order for the plastic state to remain on the yield surface. FORTRAN code implementations of the radial return and Mendelson methods are provided in the appendix.
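The paper's appendix provides FORTRAN implementations; the following is an independent one-dimensional analogue of the radial return map (linear isotropic hardening, invented material constants), shown only to fix ideas:

```python
import numpy as np

def radial_return_1d(dstrain, eps_p, alpha, sigma_old,
                     E=200e3, H=10e3, sy=250.0):
    """One return-mapping step for 1D rate-independent plasticity with
    linear isotropic hardening (units: MPa)."""
    sigma_trial = sigma_old + E * dstrain          # elastic predictor
    f_trial = abs(sigma_trial) - (sy + H * alpha)  # trial yield function
    if f_trial <= 0.0:
        return sigma_trial, eps_p, alpha           # elastic step
    dgamma = f_trial / (E + H)                     # plastic corrector
    sgn = np.sign(sigma_trial)
    sigma = sigma_trial - E * dgamma * sgn         # project onto surface
    return sigma, eps_p + dgamma * sgn, alpha + dgamma

s, ep, a = 0.0, 0.0, 0.0
for _ in range(10):                                # ramp strain in steps
    s, ep, a = radial_return_1d(3e-4, ep, a, s)
print(s, ep, a)   # stress sits on the hardened yield surface
```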
Mora Osorio, Camilo Andrés; González Barrios, Andrés Fernando
2016-12-07
Calculation of the Gibbs free energy changes of biological molecules at the oil-water interface is commonly performed with Molecular Dynamics simulations (MD). It is a process that can be performed repeatedly in order to find molecules of high stability in this medium. Here, an alternative method of calculation is proposed: a group contribution method (GCM) for peptides, based on MD of the twenty classic amino acids, to obtain the free energy change during the insertion of any peptide chain in water-dodecane interfaces. Multiple MD simulations of the twenty classic amino acids located at the interface of rectangular simulation boxes with a dodecane-water medium were performed. A GCM to calculate the free energy of entire peptides is then proposed. The method uses the summation of the Gibbs free energy of each amino acid, adjusted as a function of its presence or absence in the chain as well as its hydrophobic characteristics. Validation of the equation was performed with twenty-one peptides, all simulated using MD in dodecane-water rectangular boxes in previous work, obtaining an average relative error of 16%.
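The summation at the heart of the GCM can be sketched directly; the per-residue values below are placeholders, not the contributions fitted in the study:

```python
# hypothetical per-residue insertion free energies (kJ/mol)
DG_RESIDUE = {
    "A": -1.2, "L": -3.5, "F": -4.1, "G": -0.5, "S": 0.8,
    "K": 2.6, "D": 3.1, "W": -4.8, "V": -2.9, "E": 2.9,
}

def peptide_dg(sequence):
    """Group-contribution estimate: sum residue contributions for the
    water-dodecane insertion of the whole chain."""
    return sum(DG_RESIDUE[aa] for aa in sequence)

print(peptide_dg("ALFGSKW"))   # hypothetical heptapeptide (about -10.7)
```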
A thermodynamically consistent discontinuous Galerkin formulation for interface separation
Versino, Daniele; Mourad, Hashem M.; Dávila, Carlos G.; ...
2015-07-31
Our paper describes the formulation of an interface damage model, based on the discontinuous Galerkin (DG) method, for the simulation of failure and crack propagation in laminated structures. The DG formulation avoids common difficulties associated with cohesive elements. Specifically, it does not introduce any artificial interfacial compliance and, in explicit dynamic analysis, it leads to a stable time increment size which is unaffected by the presence of stiff massless interfaces. The proposed method is implemented in a finite element setting. Convergence and accuracy are demonstrated in Mode I and mixed-mode delamination in both static and dynamic analyses. Significantly, numerical results obtained using the proposed interface model are found to be independent of the value of the penalty factor that characterizes the DG formulation. By contrast, numerical results obtained using a classical cohesive method are found to be dependent on the cohesive penalty stiffnesses. Because of this advantage, the proposed approach is shown to yield more accurate predictions pertaining to crack propagation under mixed-mode fracture. Furthermore, in explicit dynamic analysis, the stable time increment size calculated with the proposed method is found to be an order of magnitude larger than the maximum allowable value for classical cohesive elements.
Ma, Xue Yan; Zheng, Bing Qing; Xu, Pao; Xu, Liang; Hua, Dan; Yuan, Xin Hua; Gu, Ruo Bo
2018-01-01
The basal medium M199 or MEM was utilized in the classical method of in vitro culture of glochidia, where 1–5% CO2 was required to maintain a stable physiological pH for completion of non-parasitic metamorphosis. The classical method encounters a great challenge with those glochidia which undergo development of visceral tissue but significantly increase in size during metamorphosis. The improved in vitro culture techniques and classical methods were first compared for non-parasitic metamorphosis and development of glochidia in the pink heelsplitter. Based on the improved method, the optimal in vitro culture medium was further selected from 14 plasmas or sera, realizing the non-parasitic metamorphosis of axe-head glochidia for the first time. The results showed that the addition of different plasma (serum) had a significant effect on glochidial metamorphosis in the pink heelsplitter. Only glochidia in the skewband grunt and red drum groups could complete metamorphosis; the metamorphosis rate in skewband grunt was 93.3±3.1% at 24±0.5°C, significantly higher than in marine and desalinated red drum. Heat-inactivated treatment of the plasma of yellow catfish and Barbus capito had a significant effect on glochidia survival and shell growth. The metamorphosis rate also varied among different gravid periods, and generally decreased with gravid time. Further comparison of free amino acids and fatty acids indicated that taurine at high concentration was the only amino acid that might promote the rapid growth of the glochidial shell, and that the lack of adequate DPA and DHA might be an important reason for abnormal foot and visceral development. Combined with our results on artificial selection of host fish, we tentatively established the mechanism of host specialists in the pink heelsplitter for the first time. This is the first report on non-parasitic metamorphosis of axe-head glochidia based on our improved in vitro culture method, which should provide an important reference for fundamental research on glochidia metamorphosis and also benefit a better understanding of the mechanisms of host specialists and generalists among Unionidae species. PMID:29447194
Modifications of the PCPT method for HJB equations
NASA Astrophysics Data System (ADS)
Kossaczký, I.; Ehrhardt, M.; Günther, M.
2016-10-01
In this paper we revisit a modification of the piecewise constant policy timestepping (PCPT) method for solving Hamilton-Jacobi-Bellman (HJB) equations. This modification is called the piecewise predicted policy timestepping (PPPT) method and, if properly used, it may be significantly faster. We briefly recapitulate the algorithms of the PCPT and PPPT methods and of the classical implicit method, and apply them to a passport option pricing problem with a non-standard payoff. We present the modifications needed to solve this problem effectively with the PPPT method and compare its performance with the PCPT method and the classical implicit method.
Robust Methods for Moderation Analysis with a Two-Level Regression Model.
Yang, Miao; Yuan, Ke-Hai
2016-01-01
Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
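For the Huber-type variant, a generic single-level sketch using statsmodels' M-estimation machinery is shown below (this is not the article's two-level estimator; data are simulated with heavy-tailed errors):

```python
import numpy as np
import statsmodels.api as sm

# moderation model: y = b0 + b1*x + b2*z + b3*x*z + e
rng = np.random.default_rng(1)
n = 200
x, z = rng.normal(size=n), rng.normal(size=n)
y = 0.5 + 0.4 * x + 0.3 * z + 0.25 * x * z + rng.standard_t(df=3, size=n)

X = sm.add_constant(np.column_stack([x, z, x * z]))
fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
print(fit.params)   # robust estimates of b0..b3; b3 is the moderation term
```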
NASA Astrophysics Data System (ADS)
Errico, F.; Ichchou, M.; De Rosa, S.; Bareille, O.; Franco, F.
2018-06-01
The stochastic response of periodic flat and axisymmetric structures, subjected to random and spatially correlated loads, is analysed here through an approach based on the combination of a wave finite element method and a transfer matrix method. Although it has a lower computational cost, the present approach keeps the same accuracy as classic finite element methods. When dealing with homogeneous structures, the accuracy also extends to higher frequencies, without increasing the time of calculation. Depending on the complexity of the structure and the frequency range, the computational cost can be reduced by more than two orders of magnitude. The presented methodology is validated both for simple and complex structural shapes, under deterministic and random loads.
Theoretical and experimental physical methods of neutron-capture therapy
NASA Astrophysics Data System (ADS)
Borisov, G. I.
2011-09-01
This review is based to a substantial degree on our priority developments and research at the IR-8 reactor of the Russian Research Centre Kurchatov Institute. New theoretical and experimental methods of neutron-capture therapy are developed and applied in practice; these are: a general analytical and semi-empiric theory of neutron-capture therapy (NCT) based on classical neutron physics and its main sections (elementary theories of moderation, diffusion, reflection, and absorption of neutrons) rather than on methods of mathematical simulation. The theory is, first of all, intended for practical application by physicists, engineers, biologists, and physicians. This theory can be mastered by anyone with a higher education of almost any kind and minimal experience in operating a personal computer.
The interdependence between screening methods and screening libraries.
Shelat, Anang A; Guy, R Kiplin
2007-06-01
The most common methods for discovery of chemical compounds capable of manipulating biological function involve some form of screening. The success of such screens is highly dependent on the chemical materials - commonly referred to as libraries - that are assayed. Classic methods for the design of screening libraries have depended on knowledge of target structure and relevant pharmacophores for target focus, and on simple count-based measures to assess other properties. The recent proliferation of two novel screening paradigms, structure-based screening and high-content screening, prompts a profound rethink about the ideal composition of small-molecule screening libraries. We suggest that currently utilized libraries are not optimal for addressing new targets by high-throughput screening, or complex phenotypes by high-content screening.
A consensus algorithm for approximate string matching and its application to QRS complex detection
NASA Astrophysics Data System (ADS)
Alba, Alfonso; Mendez, Martin O.; Rubio-Rincon, Miguel E.; Arce-Santana, Edgar R.
2016-08-01
In this paper, a novel algorithm for approximate string matching (ASM) is proposed. The novelty resides in the fact that, unlike most other methods, the proposed algorithm is not based on the Hamming or Levenshtein distances, but instead computes a score for each symbol in the search text based on a consensus measure. Those symbols with sufficiently high scores will likely correspond to approximate instances of the pattern string. To demonstrate the usefulness of the proposed method, it has been applied to the detection of QRS complexes in electrocardiographic signals with competitive results when compared against the classic Pan-Tompkins (PT) algorithm. The proposed method outperformed PT in 72% of the test cases, with no extra computational cost.
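The abstract does not spell out the consensus measure, so the following is only a naive voting-style baseline in the same spirit: every position where a pattern symbol matches the text casts a vote for the implied match start, and starts with enough votes are reported as approximate occurrences. Note that this naive form reduces to Hamming similarity, whereas the paper's measure is explicitly more elaborate; the threshold and test strings are illustrative assumptions.

```python
# A simple per-symbol voting sketch for approximate string matching.
def consensus_matches(text, pattern, min_score=0.75):
    m = len(pattern)
    votes = [0] * (len(text) - m + 1)
    for j, p in enumerate(pattern):
        for i, t in enumerate(text):
            start = i - j                 # candidate match start implied by (i, j)
            if t == p and 0 <= start < len(votes):
                votes[start] += 1
    # normalize by pattern length; high scores = approximate instances
    return [(s, v / m) for s, v in enumerate(votes) if v / m >= min_score]

print(consensus_matches("abxcdabcdzabcc", "abcd"))
# [(5, 1.0), (10, 0.75)] - the exact occurrence and a one-mismatch near-miss
```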
Classical and Quantum-Mechanical State Reconstruction
ERIC Educational Resources Information Center
Khanna, F. C.; Mello, P. A.; Revzen, M.
2012-01-01
The aim of this paper is to present the subject of state reconstruction in classical and in quantum physics, a subject that deals with the experimentally acquired information that allows the determination of the physical state of a system. Our first purpose is to explain a method for retrieving a classical state in phase space, similar to that…
Continuous-Time Classical and Quantum Random Walk on Direct Product of Cayley Graphs
NASA Astrophysics Data System (ADS)
Salimi, S.; Jafarizadeh, M. A.
2009-06-01
In this paper we define the direct product of graphs and give a recipe for obtaining the probability of observing the particle on the vertices in continuous-time classical and quantum random walks. In the recipe, the probability of observing the particle on the direct product of graphs is obtained by multiplying the probabilities on the corresponding sub-graphs; this method is useful for determining the probability of walks on complicated graphs. Using this method, we calculate the probabilities of continuous-time classical and quantum random walks on many finite direct-product Cayley graphs (complete cycle, complete Kn, charter and n-cube). Also, we show that for the classical state the stationary uniform distribution is reached as t → ∞, but for the quantum state this is not always the case.
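A small sketch of the factorization rule for the continuous-time classical walk follows; here the Cartesian (Kronecker-sum) product of two cycles is used as the graph product, for which the occupation probability factorizes into the factor-graph probabilities. Graph sizes and the time value are illustrative assumptions.

```python
# Continuous-time classical random walk: probability on a product graph
# factorizes into the probabilities on the factor graphs.
import numpy as np
from scipy.linalg import expm

def cycle_generator(n):
    """Generator Q = A - D of a continuous-time walk on the n-cycle."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
    return A - np.diag(A.sum(axis=1))

def ctrw_prob(Q, t):
    """P[i, j] = probability of being at j at time t, starting from i."""
    return expm(t * Q)

n1, n2, t = 4, 5, 0.7
Q1, Q2 = cycle_generator(n1), cycle_generator(n2)
P1, P2 = ctrw_prob(Q1, t), ctrw_prob(Q2, t)

# Walk on the Kronecker-sum (Cartesian) product of the two cycles:
Qprod = np.kron(Q1, np.eye(n2)) + np.kron(np.eye(n1), Q2)
Pprod = ctrw_prob(Qprod, t)

# probability of vertex (a, b) equals the product of factor probabilities
a, b = 2, 3
print(np.isclose(Pprod[0, a * n2 + b], P1[0, a] * P2[0, b]))   # True
```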
Simulation of 2D rarefied gas flows based on the numerical solution of the Boltzmann equation
NASA Astrophysics Data System (ADS)
Poleshkin, Sergey O.; Malkov, Ewgenij A.; Kudryavtsev, Alexey N.; Shershnev, Anton A.; Bondar, Yevgeniy A.; Kohanchik, A. A.
2017-10-01
There are various methods for calculating rarefied gas flows, in particular, statistical methods and deterministic methods based on the finite-difference solutions of the Boltzmann nonlinear kinetic equation and on the solutions of model kinetic equations. There is no universal method; each has its disadvantages in terms of efficiency or accuracy. The choice of the method depends on the problem to be solved and on parameters of calculated flows. Qualitative theoretical arguments help to determine the range of parameters of effectively solved problems for each method; however, it is advisable to perform comparative tests of calculations of the classical problems performed by different methods and with different parameters to have quantitative confirmation of this reasoning. The paper provides the results of the calculations performed by the authors with the help of the Direct Simulation Monte Carlo method and finite-difference methods of solving the Boltzmann equation and model kinetic equations. Based on this comparison, conclusions are made on selecting a particular method for flow simulations in various ranges of flow parameters.
Approximation of the ruin probability using the scaled Laplace transform inversion
Mnatsakanov, Robert M.; Sarkisian, Khachatur; Hakobyan, Artak
2015-01-01
The problem of recovering the ruin probability in the classical risk model based on the scaled Laplace transform inversion is studied. It is shown how to overcome the problem of evaluating the ruin probability at large values of an initial surplus process. Comparisons of proposed approximations with the ones based on the Laplace transform inversions using a fixed Talbot algorithm as well as on the ones using the Trefethen–Weideman–Schmelzer and maximum entropy methods are presented via a simulation study. PMID:26752796
[Study on expression styles of meridian diseases in the Internal Classic].
Jia-Jie; Zhao, Jing-sheng
2007-01-01
To probe the expression styles of meridian diseases in the Internal Classic. The expression styles of meridian diseases in the Internal Classic were classified using literature study methods. They include four types: the twelve meridians, the six channels of the foot, indications of acupoints, and diseases of the zang and fu organs. The understanding by later generations of the meridian diseases in the Lingshu Channels has certain historical limitations.
A quantum-classical theory with nonlinear and stochastic dynamics
NASA Astrophysics Data System (ADS)
Burić, N.; Popović, D. B.; Radonjić, M.; Prvanović, S.
2014-12-01
The method of constrained dynamical systems on the quantum-classical phase space is utilized to develop a theory of quantum-classical hybrid systems. Effects of the classical degrees of freedom on the quantum part are modeled using an appropriate constraint, and the interaction also includes the effects of neglected degrees of freedom. Dynamical law of the theory is given in terms of nonlinear stochastic differential equations with Hamiltonian and gradient terms. The theory provides a successful dynamical description of the collapse during quantum measurement.
Moreno, Inmaculada; Cicinelli, Ettore; Garcia-Grau, Iolanda; Gonzalez-Monfort, Marta; Bau, Davide; Vilella, Felipe; De Ziegler, Dominique; Resta, Leonardo; Valbuena, Diana; Simon, Carlos
2018-06-01
Chronic endometritis is a persistent inflammation of the endometrial mucosa caused by bacterial pathogens such as Enterobacteriaceae, Enterococcus, Streptococcus, Staphylococcus, Mycoplasma, and Ureaplasma. Although chronic endometritis can be asymptomatic, it is found in up to 40% of infertile patients and is responsible for repeated implantation failure and recurrent miscarriage. Diagnosis of chronic endometritis is based on hysteroscopy of the uterine cavity and endometrial biopsy with plasma cells identified histologically, while specific treatment is determined based on microbial culture. However, not all implicated microorganisms are easily or readily culturable, and culture requires a turnaround time of up to 1 week. We sought to develop a molecular diagnostic tool for chronic endometritis based on real-time polymerase chain reaction, equivalent to using the 3 classic methods together and overcoming the bias of using any of them alone. Endometrial samples from patients assessed for chronic endometritis (n = 113) using at least 1 or several conventional diagnostic methods, namely histology, hysteroscopy, and/or microbial culture, were blindly evaluated by real-time polymerase chain reaction for the presence of 9 chronic endometritis pathogens: Chlamydia trachomatis, Enterococcus, Escherichia coli, Gardnerella vaginalis, Klebsiella pneumoniae, Mycoplasma hominis, Neisseria gonorrhoeae, Staphylococcus, and Streptococcus. The sensitivity and specificity of the molecular analysis vs the classic diagnostic techniques were compared in the 65 patients assessed by all 3 recognized classic methods. The molecular method showed concordant results with histological diagnosis in 30 samples (14 double positive and 16 double negative), a matching accuracy of 46.15%. Concordance of molecular and hysteroscopic diagnosis was observed in 38 samples (37 double positive and 1 double negative), an accuracy of 58.46%. When the molecular method was compared to microbial culture, concordance was present in 37 samples (22 double positive and 15 double negative), a matching rate of 56.92%. When cases of potential contamination and/or noncultivable bacteria were considered, the accuracy increased to 66.15%. Of these 65 patients, only 27 had consistent histological + hysteroscopic diagnoses, leaving 58.46% nonconcordant results. Only 13 of 65 patients (20%) had consistent histology + hysteroscopy + microbial culture results. In these cases, the molecular microbiology matched in 10 cases, a diagnostic accuracy of 76.92%. Interestingly, the molecular microbiology confirmed over half of the isolated pathogens and provided additional detection of nonculturable microorganisms. These results were confirmed by the microbiome assessed by next-generation sequencing. In the endometrial samples with concordant histology + hysteroscopy + microbial culture results, the molecular microbiology diagnosis demonstrates 75% sensitivity, 100% specificity, 100% positive and 25% negative predictive values, and 0% false-positive and 25% false-negative rates. The molecular microbiology method described herein is a fast and inexpensive diagnostic tool that allows for the identification of culturable and nonculturable endometrial pathogens associated with chronic endometritis.
The results obtained were comparable to those of all 3 classic diagnostic methods together, with a degree of concordance of 76.92%, providing an opportunity to improve the clinical management of infertile patients at risk of experiencing this ghost endometrial pathology. Copyright © 2018 Elsevier Inc. All rights reserved.
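As a quick check, the rates reported above for the 13 fully concordant patients are mutually consistent; the confusion counts in the following sketch are inferred from those rates (an assumption, not data published in the paper).

```python
# Diagnostic metrics from confusion counts inferred from the reported rates
# (75% sensitivity, 100% specificity, 25% NPV on 13 patients).
TP, FN, TN, FP = 9, 3, 1, 0

sensitivity = TP / (TP + FN)                   # 0.75
specificity = TN / (TN + FP)                   # 1.00
ppv = TP / (TP + FP)                           # 1.00
npv = TN / (TN + FN)                           # 0.25
accuracy = (TP + TN) / (TP + TN + FP + FN)     # 10/13 = 76.92%

print(f"Se={sensitivity:.0%} Sp={specificity:.0%} "
      f"PPV={ppv:.0%} NPV={npv:.0%} accuracy={accuracy:.2%}")
```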
Gómez-Carrasco, Susana; González-Sánchez, Lola; Aguado, Alfredo; Sanz-Sanz, Cristina; Zanchet, Alexandre; Roncero, Octavio
2012-09-07
In this work we present a dynamically biased statistical model to describe the evolution of the title reaction from a statistical to a more direct mechanism, using quasi-classical trajectories (QCT). The method is based on the one previously proposed by Park and Light [J. Chem. Phys. 126, 044305 (2007)]. A recent global potential energy surface is used here to calculate the capture probabilities, instead of the long-range ion-induced dipole interactions. The dynamical constraints are introduced by considering a scrambling matrix which depends on energy and determines the probability of the identity/hop/exchange mechanisms. These probabilities are calculated using QCT. It is found that the high zero-point energy of the fragments is transferred to the rest of the degrees of freedom, which shortens the lifetime of H(5)(+) complexes and, as a consequence, the exchange mechanism occurs in a lower proportion. The zero-point energy (ZPE) is not properly described in quasi-classical trajectory calculations, and an approximation is made in which the initial ZPE of the reactants is reduced in the QCT calculations to obtain a new ZPE-biased scrambling matrix. This reduction of the ZPE is explained by the need to correct the purely classical level number of the H(5)(+) complex, as done in classical simulations of unimolecular processes and to obtain equivalent quantum and classical rate constants using Rice-Ramsperger-Kassel-Marcus theory. This matrix allows us to obtain a ratio of hop/exchange mechanisms, α(T), in rather good agreement with recent experimental results by Crabtree et al. [J. Chem. Phys. 134, 194311 (2011)] at room temperature. At lower temperatures, however, the present simulations predict too high ratios because the biased scrambling matrix is not statistical enough. This demonstrates the importance of applying quantum methods to simulate this reaction at the low temperatures of astrophysical interest.
NASA Astrophysics Data System (ADS)
Gómez-Carrasco, Susana; González-Sánchez, Lola; Aguado, Alfredo; Sanz-Sanz, Cristina; Zanchet, Alexandre; Roncero, Octavio
2012-09-01
In this work we present a dynamically biased statistical model to describe the evolution of the title reaction from a statistical to a more direct mechanism, using quasi-classical trajectories (QCT). The method is based on the one previously proposed by Park and Light [J. Chem. Phys. 126, 044305 (2007), 10.1063/1.2430711]. A recent global potential energy surface is used here to calculate the capture probabilities, instead of the long-range ion-induced dipole interactions. The dynamical constraints are introduced by considering a scrambling matrix which depends on energy and determines the probability of the identity/hop/exchange mechanisms. These probabilities are calculated using QCT. It is found that the high zero-point energy of the fragments is transferred to the rest of the degrees of freedom, which shortens the lifetime of H_5^+ complexes and, as a consequence, the exchange mechanism occurs in a lower proportion. The zero-point energy (ZPE) is not properly described in quasi-classical trajectory calculations, and an approximation is made in which the initial ZPE of the reactants is reduced in the QCT calculations to obtain a new ZPE-biased scrambling matrix. This reduction of the ZPE is explained by the need to correct the purely classical level number of the H_5^+ complex, as done in classical simulations of unimolecular processes and to obtain equivalent quantum and classical rate constants using Rice-Ramsperger-Kassel-Marcus theory. This matrix allows us to obtain a ratio of hop/exchange mechanisms, α(T), in rather good agreement with recent experimental results by Crabtree et al. [J. Chem. Phys. 134, 194311 (2011), 10.1063/1.3587246] at room temperature. At lower temperatures, however, the present simulations predict too high ratios because the biased scrambling matrix is not statistical enough. This demonstrates the importance of applying quantum methods to simulate this reaction at the low temperatures of astrophysical interest.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donangelo, R.J.
An integral representation for the classical limit of the quantum mechanical S-matrix is developed and applied to heavy-ion Coulomb excitation and Coulomb-nuclear interference. The method combines the quantum principle of superposition with exact classical dynamics to describe the projectile-target system. A detailed consideration of the classical trajectories and of the dimensionless parameters that characterize the system is carried out. The results are compared, where possible, to exact quantum mechanical calculations and to conventional semiclassical calculations. It is found that in the case of backscattering the classical limit S-matrix method is able to almost exactly reproduce the quantum-mechanical S-matrix elements, and therefore the transition probabilities, even for projectiles as light as protons. The results also suggest that this approach should be a better approximation for heavy-ion multiple Coulomb excitation than earlier semiclassical methods, due to a more accurate description of the classical orbits in the electromagnetic field of the target nucleus. Calculations using this method indicate that the rotational excitation probabilities in the Coulomb-nuclear interference region should be very sensitive to the details of the potential at the surface of the nucleus, suggesting that heavy-ion rotational excitation could constitute a sensitive probe of the nuclear potential in this region. The application to other problems as well as the present limits of applicability of the formalism are also discussed.
A Fully Automated Method to Detect and Segment a Manufactured Object in an Underwater Color Image
NASA Astrophysics Data System (ADS)
Barat, Christian; Phlypo, Ronald
2010-12-01
We propose a fully automated active contours-based method for the detection and the segmentation of a moored manufactured object in an underwater image. Detection of objects in underwater images is difficult due to the variable lighting conditions and shadows on the object. The proposed technique is based on the information contained in the color maps and uses the visual attention method, combined with a statistical approach for the detection and an active contour for the segmentation of the object to overcome the above problems. In the classical active contour method the region descriptor is fixed and the convergence of the method depends on the initialization. With our approach, this dependence is overcome with an initialization using the visual attention results and a criterion to select the best region descriptor. This approach improves the convergence and the processing time while providing the advantages of a fully automated method.
A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method
NASA Astrophysics Data System (ADS)
Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang
2016-01-01
Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results were obtained in several experiments. However, this method is fragile in noise, because the proper poles are not easy to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with mutual incoherence between subband signals, the incoherent parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm that aims to minimize the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method obtains better fusion results at low SNR.
A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method
Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang
2016-01-01
Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results were obtained in several experiments. However, this method is fragile in noise, because the proper poles are not easy to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with mutual incoherence between subband signals, the incoherent parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm that aims to minimize the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method obtains better fusion results at low SNR. PMID:26781194
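A minimal numpy sketch of the core matrix pencil step for estimating signal poles from noisy samples of a sum of exponentials follows; the pole values, pencil parameter, model order, and noise level are illustrative assumptions, and the multiband/ICP machinery of the paper is not reproduced.

```python
# Matrix pencil pole estimation: Hankel pencil + SVD rank truncation.
import numpy as np

rng = np.random.default_rng(1)
N, L, M = 64, 24, 2                 # samples, pencil parameter, model order
true_poles = np.exp(1j * np.array([0.4, 1.1]))
n = np.arange(N)
y = 1.0 * true_poles[0]**n + 0.7 * true_poles[1]**n
y = y + 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Hankel data matrices shifted by one sample
Y = np.array([y[i:i + L + 1] for i in range(N - L)])
Y0, Y1 = Y[:, :-1], Y[:, 1:]

# Truncate Y0 to rank M via SVD to suppress noise, then solve the pencil
U, s, Vh = np.linalg.svd(Y0, full_matrices=False)
Y0t = U[:, :M] * s[:M] @ Vh[:M]
eigs = np.linalg.eigvals(np.linalg.pinv(Y0t) @ Y1)
poles = eigs[np.argsort(-np.abs(eigs))][:M]   # M nonzero eigenvalues = poles

print(np.sort(np.angle(true_poles)), np.sort(np.angle(poles)))
```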
Guided SAR image despeckling with probabilistic non local weights
NASA Astrophysics Data System (ADS)
Gokul, Jithin; Nair, Madhu S.; Rajan, Jeny
2017-12-01
SAR images are generally corrupted by granular disturbances called speckle, which makes visual analysis and detail extraction a difficult task. Non-local despeckling techniques with probabilistic similarity have been a recent trend in SAR despeckling. To achieve effective speckle suppression without compromising detail preservation, we propose an improvement to the existing Generalized Guided Filter with Bayesian Non-Local Means (GGF-BNLM) method. The proposed method (Guided SAR Image Despeckling with Probabilistic Non Local Weights) replaces parametric constants based on heuristics in the GGF-BNLM method with dynamically derived values based on the image statistics for weight computation. The proposed changes make the GGF-BNLM method adaptive and, as a result, a significant improvement in performance is achieved. Experimental analysis on SAR images shows excellent speckle reduction without compromising feature preservation when compared to the GGF-BNLM method. Results are also compared with other state-of-the-art and classic SAR despeckling techniques to demonstrate the effectiveness of the proposed method.
Tug-of-war between classical and multicenter bonds in H-(Be)n-H species
NASA Astrophysics Data System (ADS)
Lundell, Katie A.; Boldyrev, Alexander I.
2018-05-01
Quantum chemical calculations were performed for beryllium homocatenated compounds [H-(Be)n-H]. Global minimum structures were found using machine searches (Coalescence Kick method) with density functional theory. Chemical bonding analysis was performed with the Adaptive Natural Density Partitioning method. It was found that H-(Be)2-H and H-(Be)3-H clusters are linear with classical two-center two-electron bonds, while for n > 3, three-dimensional structures are more stable with multicenter bonding. Thus, at n = 4, multicenter bonding wins the tug-of-war vs. the classical bonding.
On the semi-classical limit of scalar products of the XXZ spin chain
NASA Astrophysics Data System (ADS)
Jiang, Yunfeng; Brunekreef, Joren
2017-03-01
We study the scalar products between Bethe states in the XXZ spin chain with anisotropy |Δ| > 1 in the semi-classical limit where the length of the spin chain and the number of magnons tend to infinity with their ratio kept finite and fixed. Our method is a natural yet non-trivial generalization of similar methods developed for the XXX spin chain. The final result can be written in a compact form as a contour integral in terms of Faddeev's quantum dilogarithm function, which in the isotropic limit reduces to the classical dilogarithm function.
Fluctuating local field method probed for a description of small classical correlated lattices
NASA Astrophysics Data System (ADS)
Rubtsov, Alexey N.
2018-05-01
Thermally equilibrated finite classical lattices are considered as a minimal model of systems showing an interplay between low-energy collective fluctuations and single-site degrees of freedom. The standard local field approach, as well as the classical limit of the bosonic DMFT method, does not provide a satisfactory description of small Ising and Heisenberg lattices subjected to an external polarizing field. We show that a dramatic improvement can be achieved within a simple approach in which the local field appears as a fluctuating quantity related to the low-energy degree(s) of freedom.
Three-step semiquantum secure direct communication protocol
NASA Astrophysics Data System (ADS)
Zou, XiangFu; Qiu, DaoWen
2014-09-01
Quantum secure direct communication is the direct communication of secret messages without the need to first establish a shared secret key. In the existing schemes, quantum secure direct communication is possible only when both parties are quantum. In this paper, we construct a three-step semiquantum secure direct communication (SQSDC) protocol based on single-photon sources in which the sender Alice is classical. In a semiquantum protocol, a person is termed classical if he (she) can measure, prepare and send quantum states only in the fixed orthogonal quantum basis {|0>, |1>}. The security of the proposed SQSDC protocol is guaranteed by the complete robustness of semiquantum key distribution protocols and the unconditional security of classical one-time pad encryption. Therefore, the proposed SQSDC protocol is also completely robust. Complete robustness indicates that nonzero information acquired by an eavesdropper Eve on the secret message implies a nonzero probability that the legitimate participants can find errors on the bits tested by this protocol. In the proposed protocol, we suggest a method to check Eve's disturbance in the doves' returning phase, such that Alice does not need to publicly announce any positions or their coded bit values after the photon transmission is completed. Moreover, the proposed SQSDC protocol can be implemented with existing techniques. Compared with many quantum secure direct communication protocols, the proposed SQSDC protocol has two merits: firstly, the sender only needs classical capabilities; secondly, no additional classical information is needed to check Eve's disturbance after the transmission of the quantum states.
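Since the protocol's security leans on the classical one-time pad, here is a tiny sketch of that primitive (illustrative only; it does not reproduce the protocol's quantum encoding): XOR with a truly random, single-use key of the same length as the message gives unconditional secrecy.

```python
# Classical one-time pad: XOR with a random single-use key.
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    assert len(key) == len(data)          # key must be as long as the message
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"secret message"
key = secrets.token_bytes(len(msg))       # use once, never reuse
cipher = otp(msg, key)
assert otp(cipher, key) == msg            # XOR is its own inverse
```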
Fornari, Alexandre; Carboni, Cristiane
2018-02-13
Pelvic floor physiotherapy has been utilized extensively over the past decades for the treatment of pelvic floor dysfunctions. The aim of this study was to identify and characterize the most frequently cited articles on pelvic floor physiotherapy published in the last 30 years. A PubMed search of all articles published between 1983 and 2013 was performed. Articles with more than 100 citations were identified as "classic," and were further analyzed based on author names, year of publication, journal of publication, subject, study design, country of research, and number of citations. In 2017, a new search for papers on pelvic floor physiotherapy was conducted using the same methods to compare them with the 2013 data. Of 1,285 articles published between 1983 and 2013, only 20 articles were cited more than 100 times. Among them, we found 12 randomized clinical trials (RCTs) and only 4 reviews. The most common topics among the classic articles were behavior therapy, pelvic floor muscle training (PFMT), biofeedback-assisted PFMT, and neuromuscular electrical stimulation. In 2017, we found 1,745 papers containing the term "pelvic floor physiotherapy," indicating an increase of around 35% in 4 years. Although there is a fast-growing number of publications, we still have few classic papers on pelvic floor physiotherapy, concentrated in a few research centers. However, the large number of RCTs shows that these papers have a high scientific level, confirming that they can be classified as classic papers.
Maruyama, Hiroki; Miyata, Kaori; Mikame, Mariko; Taguchi, Atsumi; Guili, Chu; Shimura, Masaru; Murayama, Kei; Inoue, Takeshi; Yamamoto, Saori; Sugimura, Koichiro; Tamita, Koichi; Kawasaki, Toshihiro; Kajihara, Jun; Onishi, Akifumi; Sugiyama, Hitoshi; Sakai, Teiko; Murata, Ichijiro; Oda, Takamasa; Toyoda, Shigeru; Hanawa, Kenichiro; Fujimura, Takeo; Ura, Shigehisa; Matsumura, Mimiko; Takano, Hideki; Yamashita, Satoshi; Matsukura, Gaku; Tazawa, Ryushi; Shiga, Tsuyoshi; Ebato, Mio; Satoh, Hiroshi; Ishii, Satoshi
2018-03-15
Purpose: Plasma globotriaosylsphingosine (lyso-Gb3) is a promising secondary screening biomarker for Fabry disease. Here, we examined its applicability as a primary screening biomarker for classic and late-onset Fabry disease in males and females. Methods: Between 1 July 2014 and 31 December 2015, we screened 2,360 patients (1,324 males) referred from 169 Japanese specialty clinics (cardiology, nephrology, neurology, and pediatrics), based on clinical symptoms suggestive of Fabry disease. We used the plasma lyso-Gb3 concentration for the primary screen, and α-galactosidase A (α-Gal A) activity and analysis of the α-Gal A gene (GLA) for the secondary screen. Results: Of 8 males with elevated lyso-Gb3 levels (≥2.0 ng/ml) and low α-Gal A activity (≤4.0 nmol/h/ml), 7 presented a GLA mutation (2 classic and 5 late-onset). Of 15 females with elevated lyso-Gb3, 7 displayed low α-Gal A activity (5 with GLA mutations; 4 classic and 1 late-onset) and 8 exhibited normal α-Gal A activity (1 with a classic GLA mutation and 3 with genetic variants of uncertain significance). Conclusion: Plasma lyso-Gb3 is a potential primary screening biomarker for classic and late-onset Fabry disease probands. Genet Med advance online publication, 15 March 2018; doi:10.1038/gim.2018.31.
Monastic incorporation of classical botanic medicines into the Renaissance pharmacopeia.
Petrucelli, R J
1994-01-01
Ancient Greek physicians believed that health resulted from a balance of natural forces. Many, including Dioscorides, made compilations of plants and medicines derived from them, giving prominence to diuretics, cathartics and emetics. During the Roman Empire, although Greek physicians were highly valued, the Roman matron performed many medical functions and magic and astrology were increasingly used. In Judaic and later Christian societies disease was equated with divine disfavor. After the fall of Rome, the classical Greek medical texts were mainly preserved in Latin translation by the Benedictine monasteries, which were based around a patient infirmary, a herb garden and a library. Local plants were often substituted for the classical ones, however, and the compilations became confused and inaccurate. Greek medicine survived better in the remains of the Eastern Roman Empire, and benefitted from the influence of Arab medicine. Intellectual revival, when it came to Europe, did so on the fringes of the Moslem world, and Montpellier and Salerno were among the first of the new medical centers. Rather than relying on ancient experts, the new experimental method reported the tested effects of substances from identified plants. This advance was fostered by the foundation of universities and greatly aided by the later invention of the printing press, which also allowed wider dissemination of the classical texts.
An Introduction to Item Response Theory for Patient-Reported Outcome Measurement
Nguyen, Tam H.; Han, Hae-Ra; Kim, Miyong T.
2015-01-01
The growing emphasis on patient-centered care has accelerated the demand for high-quality data from patient-reported outcome (PRO) measures. Traditionally, the development and validation of these measures has been guided by classical test theory. However, item response theory (IRT), an alternate measurement framework, offers promise for addressing practical measurement problems found in health-related research that have been difficult to solve through classical methods. This paper introduces foundational concepts in IRT, as well as commonly used models and their assumptions. Existing data on a combined sample (n = 636) of Korean American and Vietnamese American adults who responded to the High Blood Pressure Health Literacy Scale and the Patient Health Questionnaire-9 are used to exemplify typical applications of IRT. These examples illustrate how IRT can be used to improve the development, refinement, and evaluation of PRO measures. Greater use of methods based on this framework can increase the accuracy and efficiency with which PROs are measured. PMID:24403095
Benefits of rotational ground motions for planetary seismology
NASA Astrophysics Data System (ADS)
Donner, S.; Joshi, R.; Hadziioannou, C.; Nunn, C.; van Driel, M.; Schmelzbach, C.; Wassermann, J. M.; Igel, H.
2017-12-01
Exploring the internal structure of planetary objects is fundamental to understand the evolution of our solar system. In contrast to Earth, planetary seismology is hampered by the limited number of stations available, often just a single one. Classic seismology is based on the measurement of three components of translational ground motion. Its methods are mainly developed for a larger number of available stations. Therefore, the application of classical seismological methods to other planets is very limited. Here, we show that the additional measurement of three components of rotational ground motion could substantially improve the situation. From sparse or single station networks measuring translational and rotational ground motions it is possible to obtain additional information on structure and source. This includes direct information on local subsurface seismic velocities, separation of seismic phases, propagation direction of seismic energy, crustal scattering properties, as well as moment tensor source parameters for regional sources. The potential of this methodology will be highlighted through synthetic forward and inverse modeling experiments.
Raskin, Cody; Owen, J. Michael
2016-10-24
Here, we discuss a generalization of the classic Keplerian disk test problem allowing for both pressure and rotational support, as a method of testing astrophysical codes incorporating both gravitation and hydrodynamics. We argue for the inclusion of pressure in rotating disk simulations on the grounds that realistic, astrophysical disks exhibit non-negligible pressure support. We then apply this test problem to examine the performance of various smoothed particle hydrodynamics (SPH) methods incorporating a number of improvements proposed over the years to address problems noted in modeling the classical gravitation-only Keplerian disk. We also apply this test to a newly developed extension of SPH based on reproducing kernels called CRKSPH. Counterintuitively, we find that pressure support worsens the performance of traditional SPH on this problem, causing unphysical collapse away from the steady-state disk solution even more rapidly than the purely gravitational problem, whereas CRKSPH greatly reduces this error.
Research on Palmprint Identification Method Based on Quantum Algorithms
Zhang, Zhanzhan
2014-01-01
Quantum image recognition is a technology that uses quantum algorithms to process image information. It can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that it achieves a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation, owing to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and the Grover algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%. PMID:25105165
An introduction to item response theory for patient-reported outcome measurement.
Nguyen, Tam H; Han, Hae-Ra; Kim, Miyong T; Chan, Kitty S
2014-01-01
The growing emphasis on patient-centered care has accelerated the demand for high-quality data from patient-reported outcome (PRO) measures. Traditionally, the development and validation of these measures has been guided by classical test theory. However, item response theory (IRT), an alternate measurement framework, offers promise for addressing practical measurement problems found in health-related research that have been difficult to solve through classical methods. This paper introduces foundational concepts in IRT, as well as commonly used models and their assumptions. Existing data on a combined sample (n = 636) of Korean American and Vietnamese American adults who responded to the High Blood Pressure Health Literacy Scale and the Patient Health Questionnaire-9 are used to exemplify typical applications of IRT. These examples illustrate how IRT can be used to improve the development, refinement, and evaluation of PRO measures. Greater use of methods based on this framework can increase the accuracy and efficiency with which PROs are measured.
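As a concrete illustration of the IRT models referred to above, the following is a minimal sketch of the two-parameter logistic (2PL) item response function and the item information it implies; the item parameters and ability grid are illustrative assumptions, not values estimated from the cited scales.

```python
# 2PL item response function and item information.
import numpy as np

def p_correct(theta, a, b):
    """2PL: probability of endorsing the item given ability theta,
    discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)             # latent trait (ability) grid
a, b = 1.5, 0.0                           # assumed item parameters
p = p_correct(theta, a, b)
print(p)                                  # item characteristic curve values

# Item information I(theta) = a^2 * P * (1 - P): where the item measures best
info = a**2 * p * (1 - p)
print(info)                               # peaks at theta = b
```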
NASA Astrophysics Data System (ADS)
Dumitrica, Traian; Hourahine, Ben; Aradi, Balint; Frauenheim, Thomas
We discuss the coupling of objective boundary conditions into the SCC density-functional-based tight-binding code DFTB+. The implementation is enabled by a generalization of the classical Ewald method to the helical case, specifically by Ewald-like formulas that do not rely on a unit cell with translational symmetry. The robustness of the method in addressing complex hetero-nuclear nano- and bio-fibrous systems is demonstrated with illustrative simulations on a helical boron nitride nanotube, a screw-dislocated zinc oxide nanowire, and an ideal double-strand DNA. Work supported by NSF CMMI 1332228.
Full statistical mode reconstruction of a light field via a photon-number-resolved measurement
NASA Astrophysics Data System (ADS)
Burenkov, I. A.; Sharma, A. K.; Gerrits, T.; Harder, G.; Bartley, T. J.; Silberhorn, C.; Goldschmidt, E. A.; Polyakov, S. V.
2017-05-01
We present a method to reconstruct the complete statistical mode structure and optical losses of multimode conjugated optical fields using an experimentally measured joint photon-number probability distribution. We demonstrate that this method evaluates classical and nonclassical properties using a single measurement technique and is well suited for quantum mesoscopic state characterization. We obtain a nearly perfect reconstruction of a field comprised of up to ten modes based on a minimal set of assumptions. To show the utility of this method, we use it to reconstruct the mode structure of an unknown bright parametric down-conversion source.
Domain Decomposition Algorithms for First-Order System Least Squares Methods
NASA Technical Reports Server (NTRS)
Pavarino, Luca F.
1996-01-01
Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.
Ablative Thermal Response Analysis Using the Finite Element Method
NASA Technical Reports Server (NTRS)
Dec, John A.; Braun, Robert D.
2009-01-01
A review of the classic techniques used to solve ablative thermal response problems is presented. The advantages and disadvantages of both the finite element and finite difference methods are described. As a first step in developing a three dimensional finite element based ablative thermal response capability, a one dimensional computer tool has been developed. The finite element method is used to discretize the governing differential equations and Galerkin's method of weighted residuals is used to derive the element equations. A code to code comparison between the current 1-D tool and the 1-D Fully Implicit Ablation and Thermal Response Program (FIAT) has been performed.
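To make the discretization step concrete, here is a minimal sketch of Galerkin-based linear finite elements for 1D transient heat conduction (no ablation, pyrolysis, or surface-recession physics); the material constants, grid, time step, and boundary flux are illustrative assumptions, not values from FIAT or the authors' tool.

```python
# Linear FEM (Galerkin weighted residuals) for 1D transient heat conduction.
import numpy as np

ne, Lslab = 20, 0.05              # elements, slab thickness [m]
k, rho, cp = 0.5, 1400.0, 1200.0  # assumed conductivity, density, heat capacity
h = Lslab / ne
nn = ne + 1

# Assemble global conductance K and capacitance C from 2x2 element matrices
K = np.zeros((nn, nn)); C = np.zeros((nn, nn))
Ke = (k / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
Ce = (rho * cp * h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])  # consistent mass
for e in range(ne):
    K[e:e+2, e:e+2] += Ke
    C[e:e+2, e:e+2] += Ce

T = np.full(nn, 300.0)            # initial temperature [K]
dt, q0 = 0.1, 5e4                 # time step [s], surface heat flux [W/m^2]
A = C / dt + K                    # backward Euler system matrix
for step in range(600):           # 60 s of heating
    b = (C / dt) @ T
    b[0] += q0                    # natural (flux) boundary at the heated face
    T = np.linalg.solve(A, b)

print(T[0], T[-1])                # front and back face temperatures
```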
Mancier, Valérie; Leclercq, Didier
2007-02-01
Two new methods for determining the power dissipated in an aqueous medium by an ultrasound generator were developed. They are based on the use of a heat flow sensor inserted between a tank and a heat sink, which allows direct measurement of the power passing through the sensor. To be exploitable, the first method requires waiting for a stationary flow. The second, extrapolated from the first, makes it possible to determine the dissipated power in only five minutes. Finally, the results obtained with the flowmetric method are compared to the classical calorimetric ones.
Zapater, E; Moreno, S; Fortea, M A; Campos, A; Armengot, M; Basterra, J
2000-11-01
Many studies have investigated prognostic factors in laryngeal carcinoma, with sometimes conflicting results. Apart from the importance of environmental factors, the different statistical methods employed may have influenced such discrepancies. A program based on artificial intelligence techniques is designed to determine the prognostic factors in a series of 122 laryngeal carcinomas. The results obtained are compared with those derived from two classical statistical methods (Cox regression and mortality tables). Tumor location was found to be the most important prognostic factor by all methods. The proposed intelligent system is found to be a sound method capable of detecting exceptional cases.
Teaching Semantic Tableaux Method for Propositional Classical Logic with a CAS
ERIC Educational Resources Information Center
Aguilera-Venegas, Gabriel; Galán-García, José Luis; Galán-García, María Ángeles; Rodríguez-Cielos, Pedro
2015-01-01
Automated theorem proving (ATP) for Propositional Classical Logic is an algorithm to check the validity of a formula. It is a very well-known problem which is decidable but co-NP-complete. There are many algorithms for this problem. In this paper, an educationally oriented implementation of Semantic Tableaux method is described. The program has…
DeVore, Matthew S.; Gull, Stephen F.; Johnson, Carey K.
2012-01-01
We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions. PMID:22338694
A multiscale quantum mechanics/electromagnetics method for device simulations.
Yam, ChiYung; Meng, Lingyi; Zhang, Yu; Chen, GuanHua
2015-04-07
Multiscale modeling has become a popular tool for research applying to different areas including materials science, microelectronics, biology, chemistry, etc. In this tutorial review, we describe a newly developed multiscale computational method, incorporating quantum mechanics into electronic device modeling with the electromagnetic environment included through classical electrodynamics. In the quantum mechanics/electromagnetics (QM/EM) method, the regions of the system where active electron scattering processes take place are treated quantum mechanically, while the surroundings are described by Maxwell's equations and a semiclassical drift-diffusion model. The QM model and the EM model are solved, respectively, in different regions of the system in a self-consistent manner. Potential distributions and current densities at the interface between QM and EM regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. The method is illustrated in the simulation of several realistic systems. In the case of junctionless field-effect transistors, transfer characteristics are obtained and a good agreement between experiments and simulations is achieved. Optical properties of a tandem photovoltaic cell are studied and the simulations demonstrate that multiple QM regions are coupled through the classical EM model. Finally, the study of a carbon nanotube-based molecular device shows the accuracy and efficiency of the QM/EM method.
Non-classicality criteria: Glauber-Sudarshan P function and Mandel Q parameter
NASA Astrophysics Data System (ADS)
Alexanian, Moorad
2018-01-01
We calculate exactly the quantum mechanical, temporal Wigner quasiprobability density for a single-mode, degenerate parametric amplifier for a system in a Gaussian state, viz., a displaced-squeezed thermal state. The Wigner function allows us to calculate the fluctuations in photon number and the quadrature variance. We contrast the non-classicality criterion based on the Glauber-Sudarshan quasiprobability distribution P, which is independent of the displacement parameter, with the classical/non-classical behaviour of the Mandel Q parameter, which depends strongly on it. We find a phase transition as a function of the displacement parameter such that at the critical point the Mandel Q parameter goes from strictly classical behaviour on one side to a mixed classical/non-classical behaviour on the other.
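For reference, a tiny sketch of the Mandel Q parameter mentioned above, Q = (⟨n²⟩ − ⟨n⟩²)/⟨n⟩ − 1, which is negative only for non-classical (sub-Poissonian) photon statistics; the distributions and mean photon number below are illustrative assumptions.

```python
# Mandel Q from a photon-number distribution: Q = Var(n)/<n> - 1.
import numpy as np
from math import factorial

def mandel_q(pn):
    n = np.arange(len(pn))
    mean = np.sum(n * pn)
    var = np.sum(n**2 * pn) - mean**2
    return var / mean - 1.0

nmax, nbar = 60, 2.0
coherent = np.array([np.exp(-nbar) * nbar**k / factorial(k) for k in range(nmax)])
thermal = np.array([nbar**k / (1 + nbar)**(k + 1) for k in range(nmax)])

print(mandel_q(coherent))   # ~0: Poissonian boundary case
print(mandel_q(thermal))    # ~nbar > 0: super-Poissonian, classical-like
```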
NASA Astrophysics Data System (ADS)
Nehar, K. C.; Hachi, B. E.; Cazes, F.; Haboussi, M.
2017-12-01
The aim of the present work is to investigate the numerical modeling of interfacial cracks that may appear at the interface between two isotropic elastic materials. The extended finite element method is employed to analyze brittle and bi-material interfacial fatigue crack growth by computing the mixed-mode stress intensity factors (SIF). Three different approaches are introduced to compute the SIFs. In the first one, the mixed-mode SIF is deduced from the computation of the contour integral as per the classical J-integral method, whereas a displacement method is used to evaluate the SIF by using either one or two displacement jumps located along the crack path in the second and third approaches. The displacement jump method is rather classical for mono-materials but has, to our knowledge, not been used up to now for a bi-material. Hence, the use of displacement jumps for characterizing bi-material cracks constitutes the main contribution of the present study. Several benchmark tests including parametric studies are performed to show the effectiveness of these computational methodologies for the SIF, considering static and fatigue problems of bi-material structures. It is found that results based on the displacement jump methods are in very good agreement with exact solutions, as is the case for the J-integral method, but with a larger domain of applicability and better numerical efficiency (less time consuming and less prone to spurious boundary effects).
Nonlinear damage detection in composite structures using bispectral analysis
NASA Astrophysics Data System (ADS)
Ciampa, Francesco; Pickering, Simon; Scarselli, Gennaro; Meo, Michele
2014-03-01
The literature offers a considerable number of diagnostic methods that can continuously provide detailed information on material defects and damage in aerospace and civil engineering applications. Indeed, low-velocity impact damage can considerably degrade the integrity of structural components and, if not detected, can result in catastrophic failure conditions. This paper presents a nonlinear Structural Health Monitoring (SHM) method, based on ultrasonic guided waves (GW), for the detection of the nonlinear signature in a damaged composite structure. The proposed technique, based on a bispectral analysis of ultrasonic input waveforms, allows for the evaluation of the nonlinear response due to the presence of cracks and delaminations. Such a methodology was used to characterize the nonlinear behaviour of the structure by exploiting the frequency mixing of the original waveform acquired from a sparse array of sensors. The robustness of bispectral analysis was experimentally demonstrated on a damaged carbon fibre reinforced plastic (CFRP) composite panel, and the nonlinear source was retrieved with a high level of accuracy. Unlike other linear and nonlinear ultrasonic methods for damage detection, this methodology requires neither a baseline with the undamaged structure for the evaluation of the nonlinear source, nor a priori knowledge of the mechanical properties of the specimen. Moreover, bispectral analysis can be considered a nonlinear elastic wave spectroscopy (NEWS) technique for materials showing either classical or non-classical nonlinear behaviour.
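A brief sketch of the bispectral estimate underlying the method above, B(f1, f2) = E[X(f1) X(f2) X*(f1+f2)], which accumulates coherently only when the phases at f1, f2, and f1+f2 are coupled (frequency mixing); the signal model, frequencies, and averaging scheme are illustrative assumptions.

```python
# Segment-averaged bispectrum: detects quadratic phase coupling.
import numpy as np

rng = np.random.default_rng(2)
fs, nseg, seglen = 1000, 64, 256
t = np.arange(seglen) / fs
f1, f2 = 62.5, 93.75                      # on-bin frequencies (bins 16 and 24)

half = seglen // 2
idx = np.arange(half)
K = idx[:, None] + idx[None, :]           # frequency-sum index for f1 + f2
B = np.zeros((half, half), dtype=complex)
for _ in range(nseg):
    p1, p2 = rng.uniform(0, 2 * np.pi, 2)
    x = np.sin(2*np.pi*f1*t + p1) + np.sin(2*np.pi*f2*t + p2)
    x += 0.5 * np.sin(2*np.pi*(f1 + f2)*t + p1 + p2)   # quadratically coupled
    x += 0.2 * rng.normal(size=seglen)
    X = np.fft.fft(x)
    Xh = X[:half]
    B += np.outer(Xh, Xh) * np.conj(X[K])
B /= nseg

k1, k2 = 16, 24
print(abs(B[k1, k2]))     # large: phase-coupled triplet (f1, f2, f1+f2)
print(abs(B[k1, k1]))     # small: no coupling at (f1, f1)
```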
Parameter estimation for stiff deterministic dynamical systems via ensemble Kalman filter
NASA Astrophysics Data System (ADS)
Arnold, Andrea; Calvetti, Daniela; Somersalo, Erkki
2014-10-01
A commonly encountered problem in numerous areas of applications is to estimate the unknown coefficients of a dynamical system from direct or indirect observations at discrete times of some of the components of the state vector. A related problem is to estimate unobserved components of the state. An egregious example of such a problem is provided by metabolic models, in which the numerous model parameters and the concentrations of the metabolites in tissue are to be estimated from concentration data in the blood. A popular method for addressing similar questions in stochastic and turbulent dynamics is the ensemble Kalman filter (EnKF), a particle-based filtering method that generalizes classical Kalman filtering. In this work, we adapt the EnKF algorithm for deterministic systems in which the numerical approximation error is interpreted as a stochastic drift with variance based on classical error estimates of numerical integrators. This approach, which is particularly suitable for stiff systems where the stiffness may depend on the parameters, allows us to effectively exploit the parallel nature of particle methods. Moreover, we demonstrate how spatial prior information about the state vector, which helps the stability of the computed solution, can be incorporated into the filter. The viability of the approach is shown by computed examples, including a metabolic system modeling an ischemic episode in skeletal muscle, with a high number of unknown parameters.
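The following is a minimal sketch of a single EnKF analysis step with a parameter-augmented state, in the spirit of the approach described above; the scalar toy model, the noise levels (standing in for the numerical-error-based drift), and the ensemble size are illustrative assumptions.

```python
# One perturbed-observation EnKF update with state-parameter augmentation.
import numpy as np

rng = np.random.default_rng(3)
Ne, theta_true = 200, 0.5

# Forecast ensemble: state x and parameter theta pushed through a toy model
theta = rng.normal(1.0, 0.5, Ne)              # prior parameter ensemble
x = 10.0 * np.exp(-theta * 1.0)               # model: x(t=1) = 10*exp(-theta)
x += rng.normal(0, 0.05, Ne)                  # stochastic drift (model error)

y_obs = 10.0 * np.exp(-theta_true)            # observation of x at t = 1
sig_obs = 0.1

Z = np.vstack([x, theta])                     # augmented state, shape (2, Ne)
H = np.array([[1.0, 0.0]])                    # we observe x only
P = np.cov(Z)                                 # ensemble covariance (2, 2)
K = P @ H.T / (H @ P @ H.T + sig_obs**2)      # Kalman gain (2, 1)

# Update every ensemble member with perturbed observations
d = y_obs + rng.normal(0, sig_obs, Ne) - Z[0]
Z = Z + K * d                                  # broadcast over members

print(Z[1].mean(), Z[1].std())                # posterior estimate of theta
```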
NASA Technical Reports Server (NTRS)
Kraft, Ralph P.; Burrows, David N.; Nousek, John A.
1991-01-01
Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference.
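A short numerical sketch of the Bayesian construction for Poisson data with a known mean background follows, assuming a flat prior on the non-negative source rate s, so that p(s | n) ∝ exp(-(s + b)) (s + b)^n; the particular counts, background, and grid are illustrative assumptions.

```python
# Bayesian credible interval for a Poisson source rate with known background.
import numpy as np

n, b = 3, 2.5                          # few counts, comparable background
s = np.linspace(0, 20, 20001)
log_post = -(s + b) + n * np.log(s + b)
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, s)              # normalize numerically

cdf = np.cumsum(post) * (s[1] - s[0])
lo = s[np.searchsorted(cdf, 0.05)]
hi = s[np.searchsorted(cdf, 0.95)]
print(f"90% credible interval for the source rate: [{lo:.2f}, {hi:.2f}]")
```

Unlike the classical construction, the interval is never empty and respects s ≥ 0 even when the observed counts fall below the expected background.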
Creating Very True Quantum Algorithms for Quantum Energy Based Computing
NASA Astrophysics Data System (ADS)
Nagata, Koji; Nakamura, Tadao; Geurdes, Han; Batle, Josep; Abdalla, Soliman; Farouk, Ahmed; Diep, Do Ngoc
2018-04-01
An interpretation of quantum mechanics is discussed. It is assumed that quantum is energy. An algorithm by means of the energy interpretation is discussed. An algorithm, based on the energy interpretation, for fast determination of a homogeneous linear function f(x) := s·x = s_1 x_1 + s_2 x_2 + ⋯ + s_N x_N is proposed. Here x = (x_1, …, x_N), x_j ∈ R, and the coefficients s = (s_1, …, s_N), s_j ∈ N. Given the interpolation values y = (f(1), f(2), …, f(N)), the unknown coefficients s = (s_1(y), …, s_N(y)) of the linear function shall be determined simultaneously. The speed of determining the values is shown to outperform the classical case by a factor of N. Our method is based on the generalized Bernstein-Vazirani algorithm for qudit systems. Next, by using M parallel quantum systems, M homogeneous linear functions are determined simultaneously. The speed of obtaining the set of M homogeneous linear functions is shown to outperform the classical case by a factor of N × M.
Creating Very True Quantum Algorithms for Quantum Energy Based Computing
NASA Astrophysics Data System (ADS)
Nagata, Koji; Nakamura, Tadao; Geurdes, Han; Batle, Josep; Abdalla, Soliman; Farouk, Ahmed; Diep, Do Ngoc
2017-12-01
An interpretation of quantum mechanics is discussed. It is assumed that quantum is energy. An algorithm by means of the energy interpretation is discussed. An algorithm, based on the energy interpretation, for fast determination of a homogeneous linear function f(x) := s·x = s_1 x_1 + s_2 x_2 + ⋯ + s_N x_N is proposed. Here x = (x_1, …, x_N), x_j ∈ R, and the coefficients s = (s_1, …, s_N), s_j ∈ N. Given the interpolation values y = (f(1), f(2), …, f(N)), the unknown coefficients s = (s_1(y), …, s_N(y)) of the linear function shall be determined simultaneously. The speed of determining the values is shown to outperform the classical case by a factor of N. Our method is based on the generalized Bernstein-Vazirani algorithm for qudit systems. Next, by using M parallel quantum systems, M homogeneous linear functions are determined simultaneously. The speed of obtaining the set of M homogeneous linear functions is shown to outperform the classical case by a factor of N × M.
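For contrast with the factor-N quantum speed-up claimed above, a tiny sketch of the classical baseline: recovering all N coefficients of f(x) = s·x requires N separate oracle evaluations, one per unit vector (the hidden coefficients below are illustrative).

```python
# Classical baseline: N oracle queries to recover the coefficients of s . x.
import numpy as np

s_hidden = np.array([3, 1, 4, 1, 5])            # unknown coefficients
f = lambda x: int(s_hidden @ x)                 # oracle for f(x) = s . x

N = len(s_hidden)
recovered = [f(np.eye(N, dtype=int)[j]) for j in range(N)]   # N queries
print(recovered)                                 # [3, 1, 4, 1, 5]
```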
ERIC Educational Resources Information Center
Billstedt, Eva; Gillberg, I. Carina; Gillberg, Christopher
2007-01-01
Background: Few studies have looked at the very long-term outcome of individuals with autism who were diagnosed in childhood. Methods: A longitudinal, prospective, community-based follow-up study of adults who had received the diagnosis of autism (classic and atypical) in childhood (n = 105) was conducted. A structured interview (the Diagnostic…
ERIC Educational Resources Information Center
Sinharay, Sandip
2010-01-01
Recently, there has been an increasing level of interest in subscores for their potential diagnostic value. Haberman (2008) suggested a method based on classical test theory to determine whether subscores have added value over total scores. This paper provides a literature review and reports when subscores were found to have added value for…
Rosetta Phase II: Measuring and Interpreting Cultural Differences in Cognition
2008-07-31
approaches are used to capture culture. First, anthropology and psychiatry adopt research methods that focus on specific groups or individuals... Classical anthropology provides information about behaviors, customs, social roles, and social rules based on extended and intense observation of single... This training goes beyond rules and procedures so that military personnel can see events through the eyes of adversaries or host nationals. They must